Sample records for low-dimensional statistical models

  1. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a strikingly low dimensionality: only two movement primitives are sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting a very low-dimensional parametrization of the associated facial expressions.

  2. Dynamic colloidal assembly pathways via low dimensional models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yuguang; Bevan, Michael A., E-mail: mabevan@jhu.edu; Thyagarajan, Raghuram

    2016-05-28

    Here we construct a low-dimensional Smoluchowski model for electric field mediated colloidal crystallization using Brownian dynamics simulations, which were previously matched to experiments. Diffusion mapping is used to infer dimensionality and confirm the use of two order parameters, one for degree of condensation and one for global crystallinity. Free energy and diffusivity landscapes are obtained as the coefficients of a low-dimensional Smoluchowski equation to capture the thermodynamics and kinetics of microstructure evolution. The resulting low-dimensional model quantitatively captures the dynamics of different assembly pathways between fluid, polycrystal, and single-crystal states, in agreement with the full N-dimensional data as characterized by first passage time distributions. Numerical solution of the low-dimensional Smoluchowski equation reveals statistical properties of the dynamic evolution of states vs. applied field amplitude and system size. The low-dimensional Smoluchowski equation and associated landscapes calculated here can serve as models for predictive control of electric field mediated assembly of colloidal ensembles into two-dimensional crystalline objects.
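
    The diffusion-mapping step used above to infer dimensionality can be illustrated generically: build a Gaussian kernel over sampled configurations, row-normalize it into a Markov transition matrix, and read the effective dimensionality off the decay of its eigenvalue spectrum. A minimal Python sketch, with toy data and an illustrative bandwidth eps rather than anything from the paper:

        import numpy as np

        def diffusion_map(X, eps, n_coords=2):
            """Basic diffusion map on samples X of shape (n_samples, n_features)."""
            # Pairwise squared distances and Gaussian kernel.
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / eps)
            # Row-normalize into a Markov transition matrix and diagonalize.
            P = K / K.sum(axis=1, keepdims=True)
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            # Skip the trivial eigenvalue 1; a spectral gap signals low dimensionality.
            return vals.real[order], vecs.real[:, order][:, 1:n_coords + 1]

        # Toy data: noisy samples on a circle, intrinsically one-dimensional.
        t = 2 * np.pi * np.random.rand(300)
        X = np.c_[np.cos(t), np.sin(t)] + 0.05 * np.random.randn(300, 2)
        vals, coords = diffusion_map(X, eps=0.5)
        print(vals[:5])  # fast decay after the leading eigenvalues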

  3. Low-Dimensional Statistics of Anatomical Variability via Compact Representation of Image Deformations.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2016-10-01

    Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of the image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).

  4. Two-dimensional random surface model for asperity-contact in elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Sidik, S. M.

    1979-01-01

    Relations for the asperity-contact time function during elastohydrodynamic lubrication of a ball bearing are presented. The analysis is based on a two-dimensional random surface model, and actual profile traces of the bearing surfaces are used as statistical sample records. The results of the analysis show that the transition from 90 percent contact to 1 percent contact occurs within a dimensionless film thickness range of approximately four to five. This thickness ratio is several times larger than that reported in the literature, where one-dimensional random surface models were used. It is shown that low-pass filtering of the statistical records brings the present results into agreement with those in the literature.

  5. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  6. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  7. Blended particle filters for large-dimensional chaotic dynamical systems

    PubMed Central

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886

  8. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
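
    The isometric mapping F described here is built from graph-theoretic shortest-path arguments, in the spirit of the Isomap algorithm. The sketch below uses scikit-learn's off-the-shelf Isomap rather than the authors' own construction, and the flattened random "microstructures" are placeholders:

        import numpy as np
        from sklearn.manifold import Isomap

        # Hypothetical stand-in for the set M: each row is one microstructure
        # realization flattened into R^n (here n = 64*64 pixels).
        samples = np.random.rand(500, 64 * 64)

        # Map M to a low-dimensional set A in R^d via a k-nearest-neighbor graph
        # and geodesic (graph shortest-path) distances.
        coords_A = Isomap(n_neighbors=10, n_components=3).fit_transform(samples)
        print(coords_A.shape)  # (500, 3): reduced coordinates on A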

  9. Extraction of process zones and low-dimensional attractive subspaces in stochastic fracture mechanics

    PubMed Central

    Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423

  10. Development of a Localized Low-Dimensional Approach to Turbulence Simulation

    NASA Astrophysics Data System (ADS)

    Juttijudata, Vejapong; Rempfer, Dietmar; Lumley, John

    2000-11-01

    Our previous study has shown that the localized low-dimensional model derived from a projection of the Navier-Stokes equations onto a set of one-dimensional scalar POD modes, with boundary conditions at y^+=40, can predict wall turbulence accurately for short times while failing to give a stable long-term solution. The structures obtained from the model, together with later studies, suggest that our boundary conditions from DNS are not consistent with the solution of the localized model, resulting in an injection of energy at the top boundary. In the current study, we develop low-dimensional models using one-dimensional scalar POD modes derived from an explicitly filtered DNS. This model problem has exact no-slip boundary conditions at both walls while the locality of the wall layer is still retained. Furthermore, the interaction between the wall and core regions is attenuated via an explicit filter, which allows us to investigate the quality of the model without requiring complicated modeling of the top boundary conditions. The full-channel model gives reasonable wall turbulence structures as well as long-term turbulent statistics, while still having difficulty with the prediction of the mean velocity profile farther from the wall. We also consider a localized model with modified boundary conditions in the last part of our study.
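
    The POD modes referred to above are, in general, obtained from a singular value decomposition of a mean-subtracted snapshot matrix. A generic sketch of that step (the random snapshot data and truncation rank are placeholders, not the channel-flow DNS):

        import numpy as np

        # Columns are snapshots in time; rows are spatial collocation points.
        snapshots = np.random.randn(128, 2000)            # hypothetical velocity data
        fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

        r = 10                                            # truncation rank
        pod_modes = U[:, :r]                              # dominant spatial structures
        energy = (s[:r] ** 2).sum() / (s ** 2).sum()
        print(f"first {r} modes capture {100 * energy:.1f}% of fluctuation energy")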

  11. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed.

  12. XCOM intrinsic dimensionality for low-Z elements at diagnostic energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornefalk, Hans

    2012-02-15

    Purpose: To determine the intrinsic dimensionality of linear attenuation coefficients (LACs) from XCOM for elements with low atomic number (Z = 1-20) at diagnostic x-ray energies (25-120 keV). H_0^q, the hypothesis that the space of LACs is spanned by q bases, is tested for various q-values. Methods: Principal component analysis is first applied and the LACs are projected onto the first q principal component bases. The residuals of the model values vs XCOM data are determined for all energies and atomic numbers. Heteroscedasticity invalidates the prerequisite of i.i.d. errors necessary for bootstrapping residuals. Instead wild bootstrap is applied, which, by not mixing residuals, allows the effect of the non-i.i.d. residuals to be reflected in the result. Credible regions for the eigenvalues of the correlation matrix for the bootstrapped LAC data are determined. If subsequent credible regions for the eigenvalues overlap, the corresponding principal component is not considered to represent true data structure but noise. If this happens for eigenvalues l and l + 1, for any l ≤ q, H_0^q is rejected. Results: The largest value of q for which H_0^q is nonrejectable at the 5% level is q = 4. This indicates that the statistically significant intrinsic dimensionality of low-Z XCOM data at diagnostic energies is four. Conclusions: The method presented allows determination of the statistically significant dimensionality of any noisy linear subspace. Knowledge of such significant dimensionality is of interest for any method making assumptions on intrinsic dimensionality and evaluating results on noisy reference data. For LACs, knowledge of the low-Z dimensionality might be relevant when parametrization schemes are tuned to XCOM data. For x-ray imaging techniques based on the basis decomposition method (Alvarez and Macovski, Phys. Med. Biol. 21, 733-744, 1976), an underlying dimensionality of two is commonly assigned to the LAC of human tissue at diagnostic energies. The finding of a higher statistically significant dimensionality thus raises the question whether a higher assumed model dimensionality (now feasible with the advent of multibin x-ray systems) might also be practically relevant, i.e., whether better tissue characterization results can be obtained.
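
    The testing logic, stripped to its essentials: fit a rank-q PCA model, form residuals against the reference data, and wild-bootstrap those residuals (each multiplied by an independent random sign, in place, so heteroscedasticity is preserved) to obtain intervals for the correlation-matrix eigenvalues. A schematic Python version, not Bornefalk's implementation:

        import numpy as np

        def eigenvalue_wild_bootstrap(X, q=4, n_boot=1000):
            """Wild bootstrap of correlation-matrix eigenvalues around a rank-q PCA fit."""
            Xc = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            fit = U[:, :q] * s[:q] @ Vt[:q]              # rank-q reconstruction
            resid = Xc - fit
            eigs = np.empty((n_boot, X.shape[1]))
            for b in range(n_boot):
                signs = np.random.choice([-1.0, 1.0], size=resid.shape)
                Xb = fit + signs * resid                 # residuals stay in place, only flipped
                eigs[b] = np.sort(np.linalg.eigvalsh(np.corrcoef(Xb, rowvar=False)))[::-1]
            return np.percentile(eigs, [2.5, 97.5], axis=0)

        # If the intervals for eigenvalues l and l + 1 overlap for some l <= q, H_0^q is rejected.
        lo, hi = eigenvalue_wild_bootstrap(np.random.randn(96, 20))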

  13. Selected topics in high energy physics: Flavon, neutrino and extra-dimensional models

    NASA Astrophysics Data System (ADS)

    Dorsner, Ilja

    There is already significant evidence, both experimental and theoretical, that the Standard Model of elementary particle physics is just another effective physical theory. Thus, it is crucial (a) to anticipate the experiments in the search for signatures of physics beyond the Standard Model, and (b) to determine whether some theoretically preferred structure can reproduce the low-energy signature of the Standard Model. This work pursues these two directions by investigating various extensions of the Standard Model. One of them is a simple flavon model that accommodates the observed hierarchy of the charged fermion masses and mixings. We show that flavor changing and CP violating signatures of this model are equally near the present experimental limits. We find that, for a significant range of parameters, mu-e conversion can be the most sensitive place to look for such signatures. We then propose two variants of an SO(10) model in a five-dimensional framework. The first variant demonstrates that one can embed a four-dimensional flipped SU(5) model into a five-dimensional SO(10) model. This allows one to maintain the advantages of flipped SU(5) while avoiding its well-known drawbacks. The second variant shows that exact unification of the gauge couplings is possible even in the higher-dimensional setting. This unification yields low-energy values of the gauge couplings that are in perfect agreement with experimental values. We show that the corrections to the usual four-dimensional running, due to the Kaluza-Klein towers of states, can be unambiguously and systematically evaluated. We also consider the various main types of models of neutrino masses and mixings from the point of view of how naturally they give the large mixing angle MSW solution to the solar neutrino problem. Special attention is given to one particular "lopsided" SU(5) model, which is then analyzed in a completely statistical manner. We suggest that this sort of statistical analysis should be applicable to other models of neutrino mixing.

  14. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to the NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
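
    The IPCW idea itself is compact: each comparable pair of subjects is weighted by the inverse of a Kaplan-Meier estimate of the censoring survival function, undoing censoring-induced selection in expectation. A simplified sketch of one common weighting variant (an illustration ignoring ties, not the authors' estimator):

        import numpy as np

        def km_censoring_survival(time, event):
            """Kaplan-Meier estimate G(t) of the censoring survival function."""
            order = np.argsort(time)
            t, e = time[order], event[order]
            at_risk = len(t) - np.arange(len(t))
            G = np.cumprod(1.0 - (1.0 - e) / at_risk)     # censoring "events" are 1 - e
            def G_of(s):
                idx = np.searchsorted(t, s, side="right") - 1
                return 1.0 if idx < 0 else G[idx]
            return G_of

        def ipcw_cindex(time, event, risk):
            G = km_censoring_survival(time, event)
            num = den = 0.0
            for i in range(len(time)):
                if event[i] != 1:
                    continue                              # only observed events anchor pairs
                w = 1.0 / G(time[i]) ** 2                 # inverse-probability-of-censoring weight
                for j in range(len(time)):
                    if time[j] > time[i]:                 # comparable pair: j outlives i
                        den += w
                        num += w * (risk[i] > risk[j])
            return num / den

        # Toy usage with exponential event/censoring times and noisy risk scores.
        n = 100
        T, C = np.random.exponential(1.0, n), np.random.exponential(1.5, n)
        time, event = np.minimum(T, C), (T <= C).astype(float)
        print(ipcw_cindex(time, event, -T + 0.2 * np.random.randn(n)))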

  15. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when the response is artificially transformed into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) sampling often remains the method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
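
    Inverse regression estimates the SDR subspace from how the input means vary across slices of the output. A bare-bones sliced inverse regression (SIR) sketch, with illustrative toy data:

        import numpy as np

        def sir_directions(X, y, n_slices=10, n_dirs=1):
            """Sliced inverse regression: estimate leading SDR directions."""
            n, p = X.shape
            L = np.linalg.cholesky(np.cov(X, rowvar=False))
            W = np.linalg.inv(L).T                        # whitening: Z has identity covariance
            Z = (X - X.mean(axis=0)) @ W
            chunks = np.array_split(np.argsort(y), n_slices)
            # Weighted covariance of the slice means of Z.
            M = sum((len(c) / n) * np.outer(Z[c].mean(0), Z[c].mean(0)) for c in chunks)
            vals, vecs = np.linalg.eigh(M)
            return W @ vecs[:, ::-1][:, :n_dirs]          # back in original coordinates

        # Toy model: y depends on the 8-dimensional input only through one direction.
        X = np.random.randn(2000, 8)
        beta = np.array([1.0, 2.0, 0, 0, 0, 0, 0, 0])
        y = np.tanh(X @ beta) + 0.1 * np.random.randn(2000)
        print(sir_directions(X, y).ravel())               # roughly proportional to beta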

  17. A statistical mechanical model of economics

    NASA Astrophysics Data System (ADS)

    Lubbers, Nicholas Edward Williams

    Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex microstructure effects to develop high-performance materials, but general microstructure quantification is an unsolved problem. Motivated by statistical physics, I envision microstructure as a low-dimensional manifold, and construct this manifold by leveraging multiple machine learning approaches including transfer learning, dimensionality reduction, and computer vision breakthroughs with convolutional neural networks.
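
    The YSM transaction rule is short enough to state in code: pick two agents at random, stake a fixed fraction of the poorer agent's wealth, and settle with a fair coin flip. A minimal simulation (all parameters illustrative) showing the drift toward condensation:

        import numpy as np

        def yard_sale(n_agents=1000, n_steps=200_000, frac=0.1, seed=0):
            rng = np.random.default_rng(seed)
            w = np.ones(n_agents)                      # equal initial wealth
            for _ in range(n_steps):
                i, j = rng.choice(n_agents, size=2, replace=False)
                stake = frac * min(w[i], w[j])         # bet a fraction of the poorer wealth
                if rng.random() < 0.5:                 # fair coin flip
                    w[i] += stake; w[j] -= stake
                else:
                    w[i] -= stake; w[j] += stake
            return w

        w = yard_sale()
        print("top 1% share:", np.sort(w)[-10:].sum() / w.sum())  # grows toward 1 over time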

  18. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)} ≃ 1.9% and p_{3DCC}^{(2)} ≃ 27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  19. The semantic representation of prejudice and stereotypes.

    PubMed

    Bhatia, Sudeep

    2017-07-01

    We use a theory of semantic representation to study prejudice and stereotyping. Particularly, we consider large datasets of newspaper articles published in the United States, and apply latent semantic analysis (LSA), a prominent model of human semantic memory, to these datasets to learn representations for common male and female, White, African American, and Latino names. LSA performs a singular value decomposition on word distribution statistics in order to recover word vector representations, and we find that our recovered representations display the types of biases observed in human participants using tasks such as the implicit association test. Importantly, these biases are strongest for vector representations with moderate dimensionality, and weaken or disappear for representations with very high or very low dimensionality. Moderate dimensional LSA models are also the best at learning race, ethnicity, and gender-based categories, suggesting that social category knowledge, acquired through dimensionality reduction on word distribution statistics, can facilitate prejudiced and stereotyped associations. Copyright © 2017 Elsevier B.V. All rights reserved.
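
    LSA's core computation is a truncated SVD of a (weighted) word-by-document count matrix, with the retained rank acting as the dimensionality knob studied above. A schematic version with placeholder data:

        import numpy as np

        # Hypothetical word-by-document count matrix (vocabulary x documents).
        counts = np.random.poisson(0.3, size=(2000, 500)).astype(float)
        weighted = np.log1p(counts)                  # one common LSA weighting choice

        # Truncated SVD: rows of U_k * s_k serve as k-dimensional word vectors.
        U, s, Vt = np.linalg.svd(weighted, full_matrices=False)
        k = 300                                      # moderate dimensionality
        word_vecs = U[:, :k] * s[:k]

        def cosine(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        # Association probes then compare cosines between name and attribute vectors,
        # e.g. cosine(word_vecs[i_name], word_vecs[i_attribute]).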

  20. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. The reduced-order models are also used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat tails in the tracer probability density functions at the most important large scales can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  21. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  22. Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach

    DTIC Science & Technology

    2013-09-30

    statistically extratropical storms and extremes, and link these to LFV modes. Mingfang Ting, Yochanan Kushnir, Andrew W. Robertson, Lei Wang...forecast models, as well as in the understanding they have generated. Adam Sobel, Daehyun Kim and Shuguang Wang. Extratropical variability and...predictability. Determine the extent to which extratropical monthly and seasonal low-frequency variability (LFV, i.e. PNA, NAO, as well as other regional

  23. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
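
    The coupling described here, reduce with kernel PCA and then run gradient-based sampling in the reduced coordinates, can be outlined with scikit-learn's KernelPCA and a plain unadjusted Langevin step. Everything below (the prior samples, the placeholder log-posterior gradient, the step size) is an illustrative stand-in, not the authors' implementation:

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # Hypothetical prior samples of the high-dimensional uncertain field.
        prior_fields = np.random.randn(500, 4096)
        kpca = KernelPCA(n_components=10, kernel="rbf", fit_inverse_transform=True)
        z_prior = kpca.fit_transform(prior_fields)       # low-dimensional feature space

        def grad_log_post(z):
            return -z                                    # placeholder: standard-normal posterior

        def langevin_chain(z, n_steps=1000, eps=1e-2):
            samples = []
            for _ in range(n_steps):
                z = z + 0.5 * eps * grad_log_post(z) + np.sqrt(eps) * np.random.randn(*z.shape)
                samples.append(z.copy())
            return np.array(samples)

        chain = langevin_chain(z_prior[0])
        fields = kpca.inverse_transform(chain)           # back to the full parameter space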

  24. Power Enhancement in High Dimensional Cross-Sectional Tests

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Yao, Jiawei

    2016-01-01

    We propose a novel technique to boost the power of testing a high-dimensional vector H_0: θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a "power enhancement component", which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
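
    The construction reduces to a simple recipe: take a standard quadratic-form statistic and add a screening term that sums only those components exceeding a high threshold, so the term is zero with high probability under the null but diverges under sparse alternatives. A toy numerical version (the threshold and scaling below are one simple choice, not the paper's exact tuning):

        import numpy as np

        def power_enhanced_stat(theta_hat, se, n):
            t = theta_hat / se                                   # standardized components
            p = len(t)
            delta = np.sqrt(2 * np.log(p) * np.log(np.log(n)))   # high screening threshold
            J0 = (t ** 2).sum()                                  # quadratic-form statistic
            J1 = np.sqrt(p) * (t[np.abs(t) > delta] ** 2).sum()  # zero w.h.p. under the null
            return J0 + J1

        # Sparse alternative: 2 of 400 components are nonzero.
        p, n = 400, 500
        theta = np.zeros(p); theta[:2] = 0.5
        theta_hat = theta + np.random.randn(p) / np.sqrt(n)
        print(power_enhanced_stat(theta_hat, np.full(p, 1 / np.sqrt(n)), n))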

  25. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  26. A weighted U-statistic for genetic association analyses of sequencing data.

    PubMed

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very-low-density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.
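
    The backbone of such a test is a second-order U-statistic that scores phenotype similarity against weighted genotype similarity over all subject pairs. A generic nonparametric sketch (the rank-based phenotype kernel and inverse-frequency weights are illustrative choices, not WU-SEQ itself):

        import numpy as np

        def weighted_u(pheno, geno, weights):
            """Weighted U-statistic over subject pairs:
            (phenotype similarity) x (weighted genotype similarity)."""
            n = len(pheno)
            r = (np.argsort(np.argsort(pheno)) + 1) / (n + 1)   # ranks: robust to heavy tails
            K = geno @ np.diag(weights) @ geno.T                # weighted genotype kernel
            U = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    U += K[i, j] * (1 - abs(r[i] - r[j]))
            return 2 * U / (n * (n - 1))

        # Toy data: 200 subjects, 30 rare variants weighted by inverse frequency.
        G = (np.random.rand(200, 30) < 0.02).astype(float)
        freq = G.mean(axis=0).clip(1e-3)
        print(weighted_u(np.random.standard_cauchy(200), G, 1 / np.sqrt(freq)))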

  27. Fractional exclusion and braid statistics in one dimension: a study via dimensional reduction of Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Ye, Fei; Marchetti, P. A.; Su, Z. B.; Yu, L.

    2017-09-01

    The relation between braid and exclusion statistics is examined in one-dimensional systems, within the framework of Chern-Simons statistical transmutation in gauge invariant form with an appropriate dimensional reduction. If the matter action is anomalous, as for chiral fermions, a relation between braid and exclusion statistics can be established explicitly for both mutual and nonmutual cases. However, if it is not anomalous, the exclusion statistics of emergent low energy excitations is not necessarily connected to the braid statistics of the physical charged fields of the system. Finally, we also discuss the bosonization of one-dimensional anyonic systems through T-duality. Dedicated to the memory of Mario Tonin.

  28. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation.

    PubMed

    Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus

    2017-06-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also faithfully reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made available, as open source software, implementations that numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
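
    In schematic form, the cascade variant convolves the input moments with linear filters and passes the result through a static nonlinearity to produce the population rate. A toy forward-Euler integration of such a model (the filter time constants and sigmoid below are generic placeholders, not the semi-analytically fitted components from the paper):

        import numpy as np

        def cascade_rate_model(mu_in, dt=1e-4, tau1=5e-3, tau2=20e-3):
            """Two exponential filters in series plus a static nonlinearity."""
            x1 = x2 = 0.0
            rate = np.empty_like(mu_in)
            for k, mu in enumerate(mu_in):
                x1 += dt / tau1 * (mu - x1)                  # first linear filter
                x2 += dt / tau2 * (x1 - x2)                  # second linear filter
                rate[k] = 40.0 / (1 + np.exp(-(x2 - 1.0)))   # placeholder sigmoid (rate in Hz)
            return rate

        t = np.arange(0, 1, 1e-4)
        mu = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)           # slowly modulated mean input
        r = cascade_rate_model(mu)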

  29. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation

    PubMed Central

    Baumann, Fabian; Obermayer, Klaus

    2017-01-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also faithfully reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made available, as open source software, implementations that numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models. PMID:28644841

  30. On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Bouchaud, J.-P.

    2007-12-01

    An N-dimensional Gaussian landscape with multiscale translation-invariant logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the high-dimensional limit N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida's generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimension N ≥ 1.

  31. Viscous Dissipation in One-Dimensional Quantum Liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matveev, K. A.; Pustilnik, M.

    We develop a theory of viscous dissipation in one-dimensional single-component quantum liquids at low temperatures. Such liquids are characterized by a single viscosity coefficient, the bulk viscosity. We show that for a generic interaction between the constituent particles this viscosity diverges in the zero-temperature limit. In the special case of integrable models, the viscosity is infinite at any temperature, which can be interpreted as a breakdown of the hydrodynamic description. In conclusion, our consideration is applicable to all single-component Galilean-invariant one-dimensional quantum liquids, regardless of the statistics of the constituent particles and the interaction strength.

  32. Viscous Dissipation in One-Dimensional Quantum Liquids

    DOE PAGES

    Matveev, K. A.; Pustilnik, M.

    2017-07-20

    We develop a theory of viscous dissipation in one-dimensional single-component quantum liquids at low temperatures. Such liquids are characterized by a single viscosity coefficient, the bulk viscosity. We show that for a generic interaction between the constituent particles this viscosity diverges in the zero-temperature limit. In the special case of integrable models, the viscosity is infinite at any temperature, which can be interpreted as a breakdown of the hydrodynamic description. In conclusion, our consideration is applicable to all single-component Galilean-invariant one-dimensional quantum liquids, regardless of the statistics of the constituent particles and the interaction strength.

  33. SPReM: Sparse Projection Regression Model for High-dimensional Linear Regression

    PubMed Central

    Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.

    2014-01-01

    The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as Hotelling's T^2 test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPReM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM outperforms other state-of-the-art methods. PMID:26527844

  34. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems.

    PubMed

    Sapsis, Themistoklis P; Majda, Andrew J

    2013-08-20

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.

  35. A Three-Dimensional Statistical Average Skull: Application of Biometric Morphing in Generating Missing Anatomy.

    PubMed

    Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M

    2015-07-01

    The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using a thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show that the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and the worst match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.

  36. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multiple dimensions of wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as the predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach under perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.
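
    The analog step itself is nearest-neighbor regression: find the historical SLP fields closest to the current one and carry over their observed wave-climate statistics. A compact sketch with hypothetical arrays:

        import numpy as np

        def analog_forecast(slp_history, predictand_history, slp_now, k=10):
            """Average the predictand over the k historical months whose SLP
            fields best match the current field (Euclidean distance)."""
            d = np.linalg.norm(slp_history - slp_now, axis=1)
            nearest = np.argsort(d)[:k]
            return predictand_history[nearest].mean(axis=0)

        # Hypothetical monthly data: flattened SLP fields and SOM-lattice wave-climate PDFs.
        slp_hist = np.random.randn(600, 1000)                     # 50 years x 12 months
        wave_pdfs = np.random.dirichlet(np.ones(100), size=600)   # PDFs over a 10x10 SOM lattice
        forecast = analog_forecast(slp_hist, wave_pdfs, slp_hist[-1])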

  17. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to generate spatio-temporal outputs of these models efficiently, whether for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  18. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    PubMed

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain largely unexplored. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulations show that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Supplementary data are available at Bioinformatics online.
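
    The core of the design, fitting a joint sparse model to the trait tails only, can be illustrated with an ordinary LASSO. Note that the decorrelated-score test that makes EPS-LASSO's inference valid is not part of standard libraries, so this sketch covers selection, not testing; the data are simulated:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 2000))                   # genotype-like predictor matrix
y = X[:, :5] @ np.ones(5) + rng.standard_normal(600)   # trait with 5 causal factors

# Extreme phenotype sampling: keep only the lower and upper trait tails.
lo, hi = np.quantile(y, [0.1, 0.9])
extreme = (y <= lo) | (y >= hi)

# Joint sparse model fitted on the extreme samples only.
fit = LassoCV(cv=5).fit(X[extreme], y[extreme])
selected = np.flatnonzero(fit.coef_)                   # candidate loci
```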

  19. Understanding and Predicting Geomagnetic Dipole Reversals Via Low Dimensional Models and Data Assimilation

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Fournier, A.; Hulot, G.

    2014-12-01

    We investigate the geophysical relevance of low-dimensional models of the geomagnetic dipole field by comparing these models to the signed relative paleomagnetic intensity over the past 2 Myr. The comparison is done via Bayesian statistics, implemented numerically by Monte Carlo (MC) sampling. We consider several MC schemes, as well as two data sets, to show the robustness of our approach with respect to its numerical implementation and to the details of how the data are collected. The data we consider are the Sint-2000 [1] and PADM2M [2] data sets. We consider three stochastic differential equation (SDE) models and one deterministic model. Experiments with synthetic data show that it is feasible for a low-dimensional model to learn the geophysical state from data of only the dipole field, and they reveal the limitations of the low-dimensional models. For example, the G12 model [3] (a deterministic model that generates dipole reversals by crisis-induced intermittency) can only match one of the two important time scales we find in the data. The MC sampling approach also allows us to use the models to make predictions of the dipole field. We assess how reliably dipole reversals can be predicted with our approach by hind-casting five reversals documented over the past 2 Myr. We find that, despite its limitations, G12 can be used to predict reversals reliably, though only with short lead times and over short horizons. The scalar SDE models, on the other hand, are not useful for prediction of dipole reversals. References: [1] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802-805. [2] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089. [3] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137.
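
    A scalar SDE of the kind considered here can be simulated with a few lines of Euler-Maruyama integration. The sketch below uses a generic symmetric double-well drift as a stand-in for the calibrated models in the paper, so the time scales and noise level are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, sigma = 0.01, 200_000, 0.5
x = np.empty(n)
x[0] = 1.0                                   # start in the "normal polarity" well
for k in range(n - 1):
    drift = x[k] - x[k] ** 3                 # -V'(x) for V(x) = x**4/4 - x**2/2
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

reversals = int(np.sum(np.diff(np.sign(x)) != 0))  # crude count of polarity flips
```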

  20. Risk patterns and correlated brain activities. Multidimensional statistical analysis of FMRI data in economic decision making study.

    PubMed

    van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K

    2014-07-01

    Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common to all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without a large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.

  1. Strongly magnetized classical plasma models

    NASA Technical Reports Server (NTRS)

    Montgomery, D. C.

    1972-01-01

    The class of plasma processes for which the so-called Vlasov approximation is inadequate is investigated. Results from the equilibrium statistical mechanics of two-dimensional plasmas are derived. These results are independent of the presence of an external dc magnetic field. The nonequilibrium statistical mechanics of the electrostatic guiding-center plasma, a two-dimensional plasma model, is discussed. This model is then generalized to three dimensions. The guiding-center model is relaxed to include finite Larmor radius effects for a two-dimensional plasma.

  2. Beating the curse of dimension with accurate statistics for the Fokker-Planck equation in complex turbulent systems.

    PubMed

    Chen, Nan; Majda, Andrew J

    2017-12-05

    Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text]. The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions, even with dimensions in the millions, and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples.

  3. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  4. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  5. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied to simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of recent developments in the theory, methods, and implementations of high dimensional variable selection. Questions such as what limits of dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties of these methods are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976

  6. A Maximum Entropy Method for Particle Filtering

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory L.; Kim, Sangil

    2006-06-01

    Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
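
    In the simplest case the maximum-entropy distribution consistent with the ensemble's first two moments is a single Gaussian, and the resampling step reduces to the sketch below; the paper's Gaussian-mixture version generalizes this in the same spirit. A minimal numpy sketch (assuming particles of shape (N, d) and normalized weights):

```python
import numpy as np

def maxent_gaussian_resample(particles, weights, rng):
    """Replace a weighted ensemble by samples from the maximum-entropy
    distribution matching its first two moments (a Gaussian)."""
    mean = weights @ particles                    # weighted ensemble mean
    diff = particles - mean
    cov = (weights[:, None] * diff).T @ diff      # weighted ensemble covariance
    return rng.multivariate_normal(mean, cov, size=len(particles))
```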

  7. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.

  8. Practical limits on muscle synergy identification by non-negative matrix factorization in systems with mechanical constraints.

    PubMed

    Burkholder, Thomas J; van Antwerp, Keith W

    2013-02-01

    Statistical decomposition, including non-negative matrix factorization (NMF), is a convenient tool for identifying patterns of structured variability within behavioral motor programs, but it is unclear how the resolved factors relate to actual neural structures. Factors can be extracted from a uniformly sampled, low-dimensional command space. In practical applications, the command space is limited, either to those activations that perform some task(s) successfully or to activations induced in response to specific perturbations. NMF was applied to muscle activation patterns synthesized from low-dimensional, synergy-like control modules mimicking simple task performance or feedback activation from proprioceptive signals. In the task-constrained paradigm, the accuracy of control module recovery was highly dependent on the sampled volume of control space, such that sampling even 50% of control space produced a substantial degradation in factor accuracy. In the feedback paradigm, NMF was not capable of extracting more than four control modules, even in a mechanical model with seven internal degrees of freedom. Reduced access to the low-dimensional control space imposed by physical constraints may result in substantial distortion of an existing low-dimensional controller, such that neither the dimensionality nor the composition of the recovered/extracted factors match the original controller.
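
    The basic extraction step, factoring a muscle-activation matrix into non-negative activations and synergies, is straightforward with scikit-learn; the abstract's point is that recovery quality degrades when the sampled command space is restricted. A minimal sketch on synthetic data (module counts and dimensions are illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
W_true = rng.random((1000, 3))      # time-varying activations of 3 modules
H_true = rng.random((3, 12))        # 3 synergies across 12 muscles
emg = W_true @ H_true + 0.01 * rng.random((1000, 12))  # non-negative "EMG"

nmf = NMF(n_components=3, init="nndsvd", max_iter=1000)
W = nmf.fit_transform(emg)          # recovered activations
H = nmf.components_                 # recovered synergies
# Restricting the rows of `emg` (i.e., the sampled command space) before
# fitting is the manipulation whose effect the study quantifies.
```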

  9. Statistical mechanics of two-dimensional shuffled foams: Geometry-topology correlation in small or large disorder limits

    NASA Astrophysics Data System (ADS)

    Durand, Marc; Kraynik, Andrew M.; van Swol, Frank; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Ataei Talebi, Shirin; Graner, François

    2014-06-01

    Bubble monolayers are model systems for experiments and simulations of two-dimensional packing problems of deformable objects. We explore the relation between the distributions of the number of bubble sides (topology) and the bubble areas (geometry) in the low liquid fraction limit. We use a statistical model [M. Durand, Europhys. Lett. 90, 60002 (2010), 10.1209/0295-5075/90/60002] which takes into account Plateau's laws. We predict the correlation between geometrical disorder (bubble size dispersity) and topological disorder (width of bubble side number distribution) over an extended range of bubble size dispersities. Extensive data sets arising from shuffled foam experiments, Surface Evolver simulations, and cellular Potts model simulations all collapse surprisingly well and coincide with the model predictions, even at extremely high size dispersity. At moderate size dispersity, we recover our earlier approximate predictions [M. Durand, J. Käfer, C. Quilliet, S. Cox, S. A. Talebi, and F. Graner, Phys. Rev. Lett. 107, 168304 (2011), 10.1103/PhysRevLett.107.168304]. At extremely low dispersity, when approaching the perfectly regular honeycomb pattern, we study how both geometrical and topological disorders vanish. We identify a crystallization mechanism and explore it quantitatively in the case of bidisperse foams. Due to the deformability of the bubbles, foams can crystallize over a larger range of size dispersities than hard disks. The model predicts that the crystallization transition occurs when the ratio of largest to smallest bubble radii is 1.4.

  10. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction with the simulations. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  11. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  12. Statistical investigation of avalanches of three-dimensional small-world networks and their boundary and bulk cross-sections

    NASA Astrophysics Data System (ADS)

    Najafi, M. N.; Dashti-Naserabadi, H.

    2018-03-01

    In many situations we are interested in the propagation of energy in some portions of a three-dimensional system with dilute long-range links. In this paper, a sandpile model is defined on the three-dimensional small-world network with real dissipative boundaries, and the energy propagation is studied in three dimensions as well as in two-dimensional cross-sections. Two types of cross-sections are defined in the system, one in the bulk and another on the system boundary. The motivation is to make clear how the statistics of the avalanches in the bulk cross-section tend to the statistics of the dissipative avalanches, defined in the boundaries, as the concentration of long-range links (α) increases. This trend is numerically shown to be a power law in a manner described in the paper. Two regimes of α are considered in this work. For sufficiently small α the dominant behavior of the system is just like that of the regular BTW model, whereas for intermediate values the behavior is nontrivial, with some exponents that are reported in the paper. It is shown that the spatial extent up to which the statistics are similar to the regular BTW model scales with α just like the dissipative BTW model with the dissipation factor (mass in the corresponding ghost model) m² ∼ α, for the three-dimensional system as well as its two-dimensional cross-sections.

  13. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
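
    The contrast drawn above between the two fit statistics can be made concrete. A sketch of both objectives in numpy (the Poisson form below is a Cash-like statistic with data-only terms dropped; IMFIT's exact conventions may differ):

```python
import numpy as np

def chi2_data_weighted(data, model):
    """Chi-square with per-pixel Gaussian sigma estimated from the data counts."""
    return np.sum((data - model) ** 2 / np.maximum(data, 1.0))

def poisson_mle_stat(data, model):
    """Poisson maximum-likelihood (Cash-like) statistic, suited to low counts."""
    return 2.0 * np.sum(model - data * np.log(model))
```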

  14. Collision efficiency of water in the unimolecular reaction CH4 (+H2O) ⇆ CH3 + H (+H2O): one-dimensional and two-dimensional solutions of the low-pressure-limit master equation.

    PubMed

    Jasper, Ahren W; Miller, James A; Klippenstein, Stephen J

    2013-11-27

    The low-pressure-limit unimolecular decomposition of methane, CH4 (+M) ⇆ CH3 + H (+M), is characterized via low-order moments of the total energy, E, and angular momentum, J, transferred due to collisions. The low-order moments are calculated using ensembles of classical trajectories, with new direct dynamics results for M = H2O and new results for M = O2 compared with previous results for several typical atomic (M = He, Ne, Ar, Kr) and diatomic (M = H2 and N2) bath gases and one polyatomic bath gas, M = CH4. The calculated moments are used to parametrize three different models of the energy transfer function, from which low-pressure-limit rate coefficients for dissociation, k0, are calculated. Both one-dimensional and two-dimensional collisional energy transfer models are considered. The collision efficiency for M = H2O relative to the other bath gases (defined as the ratio of low-pressure limit rate coefficients) is found to depend on temperature, with, e.g., k0(H2O)/k0(Ar) = 7 at 2000 K but only 3 at 300 K. We also consider the rotational collision efficiency of the various baths. Water is the only bath gas found to fully equilibrate rotations, and only at temperatures below 1000 K. At elevated temperatures, the kinetic effect of "weak-collider-in-J" collisions is found to be small. At room temperature, however, the use of an explicitly two-dimensional master equation model that includes weak-collider-in-J effects predicts smaller rate coefficients by 50% relative to the use of a statistical model for rotations. The accuracies of several methods for predicting relative collision efficiencies that do not require solving the master equation and that are based on the calculated low-order moments are tested. Troe's weak collider efficiency, βc, includes the effect of saturation of collision outcomes above threshold and accurately predicts the relative collision efficiencies of the nine baths. Finally, a brief discussion is presented of mechanistic details of the energy transfer process, as inferred from the trajectories.

  15. From Principal Component to Direct Coupling Analysis of Coevolution in Proteins: Low-Eigenvalue Modes are Needed for Structure Prediction

    PubMed Central

    Cocco, Simona; Monasson, Remi; Weigt, Martin

    2013-01-01

    Various approaches have explored the covariation of residues in multiple-sequence alignments of homologous proteins to extract functional and structural information. Among those are principal component analysis (PCA), which identifies the most correlated groups of residues, and direct coupling analysis (DCA), a global inference method based on the maximum entropy principle, which aims at predicting residue-residue contacts. In this paper, inspired by the statistical physics of disordered systems, we introduce the Hopfield-Potts model to naturally interpolate between these two approaches. The Hopfield-Potts model allows us to identify relevant ‘patterns’ of residues from the knowledge of the eigenmodes and eigenvalues of the residue-residue correlation matrix. We show how the computation of such statistical patterns makes it possible to accurately predict residue-residue contacts with a much smaller number of parameters than DCA. This dimensional reduction allows us to avoid overfitting and to extract contact information from multiple-sequence alignments of reduced size. In addition, we show that low-eigenvalue correlation modes, discarded by PCA, are important to recover structural information: the corresponding patterns are highly localized, that is, they are concentrated in few sites, which we find to be in close contact in the three-dimensional protein fold. PMID:23990764

  16. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    NASA Astrophysics Data System (ADS)

    Harvey, David Benjamin Paul

    A one-dimensional, multi-scale, coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model, and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically based experimental performance data; it represents the first stochastic-input-driven unit cell performance model. The stochastic-input-driven performance model was used to identify the optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potentially low-performing MEA materials, explain the performance of low-Pt-loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.

  17. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation-invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive, statistically exactly solvable test model with direct relevance to climate change science, including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low-frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
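
    For Gaussian statistics, an information metric of this kind, combining coarse-grained mean errors with covariance ratios, has the standard relative-entropy closed form below. The notation here (\bar{u}, R for the true climate mean and covariance, \bar{u}^M, R^M for the model's, N the state dimension) is ours for illustration and not necessarily the paper's:

```latex
% Relative entropy between the true climate, p ~ N(\bar{u}, R), and the model
% climate, p^M ~ N(\bar{u}^M, R^M), in a state space of dimension N:
\mathcal{P}\!\left(p, p^M\right)
  = \underbrace{\tfrac{1}{2}\,(\bar{u} - \bar{u}^M)^{\top} (R^M)^{-1} (\bar{u} - \bar{u}^M)}_{\text{signal (mean error)}}
  \;+\; \underbrace{\tfrac{1}{2}\!\left[\operatorname{tr}\!\left(R (R^M)^{-1}\right) - N - \ln\det\!\left(R (R^M)^{-1}\right)\right]}_{\text{dispersion (covariance ratio)}}
```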

  18. Strategies for Reduced-Order Models in Uncertainty Quantification of Complex Turbulent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Qi, Di

    Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.

  19. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
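
    The equivalence rests on the Poisson log-likelihood of an LNP model, which is short enough to state in code. A minimal numpy sketch (names are illustrative; normalizing this quantity, relative to a constant-rate model and per spike, yields the empirical single-spike information per the paper's argument):

```python
import numpy as np

def lnp_loglik(stimuli, spikes, w, f, dt):
    """Poisson log-likelihood of an LNP model with filter w and nonlinearity f."""
    rate = f(stimuli @ w)                 # nonlinearity applied to the projection
    return np.sum(spikes * np.log(rate * dt)) - np.sum(rate * dt)

# e.g. lnp_loglik(S, y, w, np.exp, dt=0.01); maximizing over w is the MID step.
```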

  20. Statistical thermodynamics of a two-dimensional relativistic gas.

    PubMed

    Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood

    2009-03-01

    In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system for studying relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).

  1. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
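
    The property exploited for inversion, that decoding standard-normal draws yields geologically plausible realizations, comes from the VAE training objective. A minimal PyTorch sketch of such a network (the architecture and layer sizes are illustrative, not the authors' network):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal variational autoencoder for flattened binary facies images."""
    def __init__(self, n_pixels, n_latent=20):
        super().__init__()
        self.enc = nn.Linear(n_pixels, 256)
        self.mu = nn.Linear(256, n_latent)
        self.logvar = nn.Linear(256, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                 nn.Linear(256, n_pixels), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    bce = F.binary_cross_entropy(recon, x, reduction="sum")       # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) # KL toward N(0, I)
    return bce + kld
```

    After training, inversion operates on the low-dimensional latent vector z: proposals are made in z-space, decoded to the binary medium, and scored against the hydraulic data.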

  2. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear-wave velocity (v_s) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the v_s profile) and of the data-error statistics in the resulting v_s parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, v_s uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of v_s-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.

  3. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.

  4. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  5. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    NASA Astrophysics Data System (ADS)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D - 2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  6. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  7. A latent class distance association model for cross-classified data with a categorical response variable.

    PubMed

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented.

  8. Directional Statistics for Polarization Observations of Individual Pulses from Radio Pulsars

    NASA Astrophysics Data System (ADS)

    McKinnon, M. M.

    2010-10-01

    Radio polarimetry is a three-dimensional statistical problem. The three-dimensional aspect of the problem arises from the Stokes parameters Q, U, and V, which completely describe the polarization of electromagnetic radiation and conceptually define the orientation of a polarization vector in the Poincaré sphere. The statistical aspect of the problem arises from the random fluctuations in the source-intrinsic polarization and the instrumental noise. A simple model for the polarization of pulsar radio emission has been used to derive the three-dimensional statistics of radio polarimetry. The model is based upon the proposition that the observed polarization is due to the incoherent superposition of two, highly polarized, orthogonal modes. The directional statistics derived from the model follow the Bingham-Mardia and Fisher family of distributions. The model assumptions are supported by the qualitative agreement between the statistics derived from it and those measured with polarization observations of the individual pulses from pulsars. The orthogonal modes are thought to be the natural modes of radio wave propagation in the pulsar magnetosphere. The intensities of the modes become statistically independent when generalized Faraday rotation (GFR) in the magnetosphere causes the difference in their phases to be large. A stochastic version of GFR occurs when fluctuations in the phase difference are also large, and may be responsible for the more complicated polarization patterns observed in pulsar radio emission.

  9. Nonlinear dynamic mechanism of vocal tremor from voice analysis and model simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Jiang, Jack J.

    2008-09-01

    Nonlinear dynamic analysis and model simulations are used to study the nonlinear dynamic characteristics of vocal folds with vocal tremor, which can typically be characterized by low-frequency modulation and aperiodicity. Tremor voices from patients with disorders such as paresis, Parkinson's disease, hyperfunction, and adductor spasmodic dysphonia show low-dimensional characteristics, differing from random noise. Correlation dimension analysis statistically distinguishes tremor voices from normal voices. Furthermore, a nonlinear tremor model is proposed to study the vibrations of the vocal folds with vocal tremor. Fractal dimensions and positive Lyapunov exponents demonstrate the evidence of chaos in the tremor model, where amplitude and frequency play important roles in governing vocal fold dynamics. Nonlinear dynamic voice analysis and vocal fold modeling may provide a useful set of tools for understanding the dynamic mechanism of vocal tremor in patients with laryngeal diseases.

  10. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  11. Loop models, modular invariance, and three-dimensional bosonization

    NASA Astrophysics Data System (ADS)

    Goldman, Hart; Fradkin, Eduardo

    2018-05-01

    We consider a family of quantum loop models in 2+1 spacetime dimensions with marginally long-ranged and statistical interactions mediated by a U(1) gauge field, both purely in 2+1 dimensions and on a surface in a (3+1)-dimensional bulk system. In the absence of fractional spin, these theories have been shown to be self-dual under particle-vortex duality and shifts of the statistical angle of the loops by 2π, which form a subgroup of the modular group, PSL(2, Z). We show that careful consideration of fractional spin in these theories completely breaks their statistical periodicity and describe how this occurs, resolving a disagreement with the conformal field theories they appear to approach at criticality. We show explicitly that incorporation of fractional spin leads to loop model dualities which parallel the recent web of (2+1)-dimensional field theory dualities, providing a nontrivial check on its validity.

  12. Low-dimensional representation of near-wall dynamics in shear flows, with implications to wall-models.

    PubMed

    Schmid, P J; Sayadi, T

    2017-03-13

    The dynamics of coherent structures near the wall of a turbulent boundary layer is investigated with the aim of a low-dimensional representation of its essential features. Based on a triple decomposition into mean, coherent and incoherent motion and a dynamic mode decomposition to recover statistical information about the incoherent part of the flow field, a driven linear system coupling first- and second-order moments of the coherent structures is derived and analysed. The transfer function for this system, evaluated for a wall-parallel plane, confirms a strong bias towards streamwise elongated structures, and is proposed as an 'impedance' boundary condition which replaces the bulk of the transport between the coherent velocity field and the coherent Reynolds stresses, thus acting as a wall model for large-eddy simulations (LES). It is interesting to note that the boundary condition is non-local in space and time.
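
    The dynamic mode decomposition used here to recover statistics of the incoherent motion can be sketched in its standard "exact DMD" form (snapshot matrix X with one column per time step; the rank r and all names are illustrative, and the paper's triple-decomposition pipeline is not reproduced):

```python
import numpy as np

def dmd(X, r=10):
    """Exact DMD: eigenvalues and modes of the best-fit linear map X2 ~ A X1."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # operator projected onto U
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / s @ W                # exact DMD modes
    return eigvals, modes
```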

  13. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications arise when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework for high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
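
    The authors' implementation lives in the R packages RPANDA and mvMORPH. Purely to illustrate the idea of penalizing a trait covariance estimate when p approaches n, here is a hedged Python sketch of a ridge-type shrinkage estimator with a leave-one-out Gaussian score for picking the penalty; the penalty form and tuning grid are assumptions, not the paper's method.

```python
import numpy as np

def shrunk_covariance(X, lam):
    """Convex shrinkage of the sample covariance toward its diagonal."""
    S = np.cov(X, rowvar=False)
    return (1.0 - lam) * S + lam * np.diag(np.diag(S))

def loo_gaussian_score(X, lam):
    """Leave-one-out Gaussian log-likelihood, used here to pick the penalty."""
    total = 0.0
    for i in range(X.shape[0]):
        rest = np.delete(X, i, axis=0)
        Si = shrunk_covariance(rest, lam)
        xi = X[i] - rest.mean(axis=0)
        _, logdet = np.linalg.slogdet(Si)
        total += -0.5 * (logdet + xi @ np.linalg.solve(Si, xi))
    return total

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 25))              # n = 30 "species", p = 25 "traits"
grid = np.linspace(0.05, 0.95, 10)
best = max(grid, key=lambda lam: loo_gaussian_score(X, lam))
print("selected shrinkage intensity:", round(best, 2))
```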

  14. Steganalysis of recorded speech

    NASA Astrophysics Data System (ADS)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
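
    To make the embedding side concrete, the sketch below performs naive LSB replacement in synthetic 16-bit audio, the kind of payload (cf. LSB embedding and Hide4PGP) the steganalysis targets. The detector itself, which learns a low-dimensional feature vector and classifies it with a support vector machine, is not reproduced here; the synthetic "speech" is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
audio = rng.normal(scale=3000, size=44100).astype(np.int16)   # 1 s of "speech"

bits = rng.integers(0, 2, audio.size).astype(np.int16)        # message payload
stego = (audio & ~1) | bits                                   # overwrite LSBs

# Roughly half the samples change by +/-1 -- inaudible, but it perturbs the
# fine-scale statistics that a trained classifier can detect.
print("fraction of samples altered:", (stego != audio).mean())
```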

  15. Quantum stream instability in coupled two-dimensional plasmas

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2014-08-01

    In this paper the quantum counter-streaming instability problem is studied in planar two-dimensional (2D) quantum plasmas using the coupled quantum hydrodynamic (CQHD) model, which incorporates the most important quantum features such as the statistical Fermi-Dirac electron pressure, the electron-exchange potential and the quantum diffraction effect. The instability is investigated for different 2D quantum electron systems using the dynamics of Coulomb-coupled carriers on each plasma sheet, for the cases where both plasmas are monolayer doped graphene or both are metal films (corresponding to 2D Dirac or Fermi electron fluids, respectively). It is revealed that there are fundamental differences between these two cases regarding the effects of Bohm's quantum potential and the electron exchange on the instability criteria. These differences mark yet another interesting feature of the effect of the energy band dispersion of Dirac electrons in graphene. Moreover, the effects of plasma number density and coupling parameter on the instability criteria are shown to be significant. This study is most relevant to low-dimensional graphene-based field-effect-transistor (FET) devices, and it helps in understanding the collective interactions of low-dimensional coupled ballistic conductors and the nanofabrication of future graphene-based integrated circuits.

  16. High-dimensional statistical inference: From vector to matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors; the technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t \ge 4/3$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization; similarly, in affine rank minimization, $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, or $\delta_{tr}^M < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, the conditions $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, and $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ are not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$; a similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, $\delta_{tr}^M < \sqrt{(t-1)/t}$ are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. In the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.
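
    For the compressed-sensing portion, the recovery program in question is constrained ℓ1 minimization (basis pursuit). Below is a minimal sketch posed as a linear program; the problem sizes and the Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 50, 200, 5                      # measurements, ambient dim, sparsity
A = rng.normal(size=(n, m)) / np.sqrt(n)  # Gaussian matrices satisfy RIP w.h.p.
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.normal(size=k)
y = A @ x0

# min ||x||_1  s.t.  A x = y, written as an LP with x = u - v, u, v >= 0
res = linprog(c=np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```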

  17. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  18. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

    To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled in the study and underwent STD and LD CTC reconstructed with FBP, ASIR, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image quality, and polyp detection were assessed. Objective image noise analysis demonstrated significant noise reduction with the MBIR technique (P < .05) despite acquisition at lower dose. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7 mGy·cm), STD (dose-length product, 483.6 mGy·cm). LD MBIR CTC objectively showed improved image noise with the parameters used in our study, while subjective image quality was maintained. Polyp detection showed no significant difference, but the small numbers require further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  19. The physicist's companion to current fluctuations: one-dimensional bulk-driven lattice gases

    NASA Astrophysics Data System (ADS)

    Lazarescu, Alexandre

    2015-12-01

    One of the main features of statistical systems out of equilibrium is the currents they exhibit in their stationary state: microscopic currents of probability between configurations, which translate into macroscopic currents of mass, charge, etc. Understanding the general behaviour of these currents is an important step towards building a universal framework for non-equilibrium steady states akin to the Gibbs-Boltzmann distribution for equilibrium systems. In this review, we consider one-dimensional bulk-driven particle gases, and in particular the asymmetric simple exclusion process (ASEP) with open boundaries, which is one of the most popular models of one-dimensional transport. We focus, in particular, on the current of particles flowing through the system in its steady state, and on its fluctuations. We show how one can obtain the complete statistics of that current, through its large deviation function, by combining results from various methods: exact calculation of the cumulants of the current, using the integrability of the model; direct diagonalization of a biased process in the limits of very high or low current; hydrodynamic description of the model in the continuous limit using the macroscopic fluctuation theory. We give a pedagogical account of these techniques, starting with a quick introduction to the necessary mathematical tools, as well as a short overview of the existing works relating to the ASEP. We conclude by drawing the complete dynamical phase diagram of the current. We also remark on a few possible generalizations of these results.
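
    A minimal sketch of how such current measurements look in practice: a random-sequential-update Monte Carlo simulation of the open-boundary ASEP that estimates the mean steady-state current. The rates, system size, and the neglect of transients are illustrative assumptions.

```python
import numpy as np

def asep_current(L=50, alpha=0.6, beta=0.6, p=1.0, steps=200_000, seed=4):
    """Mean bulk hops per attempted move, an estimate of the steady current."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(L, dtype=int)          # site occupations
    hops = 0
    for _ in range(steps):
        i = rng.integers(-1, L)           # -1: injection site; L-1: extraction
        if i == -1:
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif i == L - 1:
            if tau[i] == 1 and rng.random() < beta:
                tau[i] = 0
        elif tau[i] == 1 and tau[i + 1] == 0 and rng.random() < p:
            tau[i], tau[i + 1] = 0, 1
            hops += 1
    return hops / steps

# alpha, beta > 1/2: maximal-current phase, J -> 1/4 up to finite-size effects
print(asep_current())
```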

  20. Theoretical approaches to the steady-state statistical physics of interacting dissipative units

    NASA Astrophysics Data System (ADS)

    Bertin, Eric

    2017-02-01

    The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low-velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low-velocity impact in structures of interest.

  2. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. Notably, no statistical models are required as priors; hence, EIG may be applied to a wide range of real signals that lack definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  3. Supersymmetric dS/CFT

    NASA Astrophysics Data System (ADS)

    Hertog, Thomas; Tartaglino-Mazzucchelli, Gabriele; Van Riet, Thomas; Venken, Gerben

    2018-02-01

    We put forward new explicit realisations of dS/CFT that relate N = 2 supersymmetric Euclidean vector models with reversed spin-statistics in three dimensions to specific supersymmetric Vasiliev theories in four-dimensional de Sitter space. The partition function of the free supersymmetric vector model deformed by a range of low spin deformations that preserve supersymmetry appears to specify a well-defined wave function with asymptotic de Sitter boundary conditions in the bulk. In particular we find the wave function is globally peaked at undeformed de Sitter space, with a low amplitude for strong deformations. This suggests that supersymmetric de Sitter space is stable in higher-spin gravity and in particular free from ghosts. We speculate this is a limiting case of the de Sitter realizations in exotic string theories.

  4. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that uses low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the non-uniform embedding search for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of the proposed method and to demonstrate its advantage over previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield better performance than other methods. The proposed method is especially efficient as the data length grows. PMID:29547669
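
    The low-dimensional building block here is a conditional mutual information estimate. Below is a minimal sketch using a plug-in histogram estimator; the binning and test data are assumptions, and real implementations typically add nearest-neighbor estimators and significance testing, which are omitted here.

```python
import numpy as np

def cmi_binned(x, y, z, bins=8):
    """Plug-in estimate of I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)."""
    def entropy(*cols):
        counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        pr = counts.ravel() / counts.sum()
        pr = pr[pr > 0]
        return -(pr * np.log(pr)).sum()
    return entropy(x, z) + entropy(y, z) - entropy(z) - entropy(x, y, z)

rng = np.random.default_rng(5)
z = rng.normal(size=5000)
x = z + 0.5 * rng.normal(size=5000)
y = z + 0.5 * rng.normal(size=5000)    # x and y are linked only through z
print(cmi_binned(x, y, z))             # near zero, up to estimator bias
```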

  5. A plasma source driven predator-prey like mechanism as a potential cause of spiraling intermittencies in linear plasma devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, D.; Ohno, N.; Tanaka, H.

    2014-03-15

    Three-dimensional global drift fluid simulations are carried out to analyze coherent plasma structures appearing in the NAGDIS-II linear device (Nagoya Divertor Plasma Simulator-II). The numerical simulations reproduce several features of the observed intermittent spiraling structures, for instance, their statistical properties, rotation frequency, and the frequency of plasma expulsion. Detailed inspection of the three-dimensional plasma dynamics allows identification of the key mechanism behind the formation of these intermittent events. The resistive coupling between electron pressure and parallel electric field in the plasma source region gives rise to a quasilinear predator-prey like dynamics where the axisymmetric mode represents the prey and the spiraling structure with low azimuthal mode number represents the predator. This interpretation is confirmed by a reduced one-dimensional quasilinear model derived on the basis of the findings in the full three-dimensional simulations. The dominant dynamics reveals certain similarities to the classical Lotka-Volterra cycle.
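
    For reference, the classical Lotka-Volterra cycle that the reduced model parallels, integrated numerically; the coefficients and initial conditions are illustrative assumptions, not values fitted to the plasma simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, u, a=1.0, b=0.4, c=0.3, d=1.2):
    prey, pred = u              # cf. axisymmetric mode vs. spiraling structure
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

sol = solve_ivp(lotka_volterra, (0, 50), [2.0, 1.0], max_step=0.01)
print("prey oscillates between",
      sol.y[0].min().round(2), "and", sol.y[0].max().round(2))
```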

  6. Tooth-size discrepancy: A comparison between manual and digital methods

    PubMed Central

    Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge

    2014-01-01

    Introduction: Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective: This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and to compare these measurements with those obtained from plaster models. Material and Methods: To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results: Data were statistically assessed using the Friedman test, and no statistically significant differences were found between the methods (P > 0.05); only the linear digital method showed a slight difference, which did not reach statistical significance. Conclusions: Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529

  7. Taxometric Analysis as a General Strategy for Distinguishing Categorical from Dimensional Latent Structure

    ERIC Educational Resources Information Center

    McGrath, Robert E.; Walters, Glenn D.

    2012-01-01

    Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to…

  8. Ising model of cardiac thin filament activation with nearest-neighbor cooperative interactions

    NASA Technical Reports Server (NTRS)

    Rice, John Jeremy; Stolovitzky, Gustavo; Tu, Yuhai; de Tombe, Pieter P.; Bers, D. M. (Principal Investigator)

    2003-01-01

    We have developed a model of cardiac thin filament activation using an Ising model approach from equilibrium statistical physics. This model explicitly represents nearest-neighbor interactions between 26 troponin/tropomyosin units along a one-dimensional array that represents the cardiac thin filament. With transition rates chosen to match experimental data, the results show that the resulting force-pCa (F-pCa) relations are similar to Hill functions with asymmetries, as seen in experimental data. Specifically, Hill plots (log(F/(1-F)) vs. log[Ca]) reveal a steeper slope below the half activation point (Ca(50)) compared with above. Parameter variation studies show interplay of parameters that affect the apparent cooperativity and asymmetry in the F-pCa relations. The model also predicts that Ca binding is uncooperative for low [Ca], becomes steeper near Ca(50), and becomes uncooperative again at higher [Ca]. The steepness near Ca(50) mirrors the steep F-pCa as a result of thermodynamic considerations. The model also predicts that the correlation between troponin/tropomyosin units along the one-dimensional array quickly decays at high and low [Ca], but near Ca(50), high correlation occurs across the whole array. This work provides a simple model that can account for the steepness and shape of F-pCa relations that other models fail to reproduce.
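
    A hedged cartoon of the ingredients: Metropolis sampling of a short one-dimensional Ising chain whose external field plays the role of calcium activation. The paper works with kinetic transition rates matched to experiment; the equilibrium sampling, the coupling J, and the field values below are assumptions for illustration only.

```python
import numpy as np

def steady_activation(h, J=1.0, n=26, sweeps=4000, seed=6):
    """Mean fraction of 'on' units in a 1-D Ising chain with field h."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)        # -1: blocked, +1: permissive
    acc = []
    for sweep in range(sweeps):
        for _ in range(n):                 # one Metropolis sweep
            i = rng.integers(n)
            nb = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            dE = 2 * s[i] * (J * nb + h)   # energy cost of flipping unit i
            if dE <= 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep > sweeps // 2:            # average after burn-in
            acc.append((s == 1).mean())
    return np.mean(acc)

# Activation rises steeply with the field (a stand-in for log [Ca]),
# echoing the steep F-pCa relations the model reproduces.
for h in (-1.0, -0.3, 0.0, 0.3, 1.0):
    print(h, round(steady_activation(h), 3))
```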

  9. Network Data: Statistical Theory and New Models

    DTIC Science & Technology

    2016-02-17

    During this period of review, Bin Yu worked on many thrusts of high-dimensional statistical theory and methodology. Her research covered a wide range of topics in statistics, including analysis and methods for spectral clustering of sparse and structured networks [2,7,8,21], sparse modeling (e.g., the Lasso) [4,10,11,17,18,19], statistical guarantees for the EM algorithm [3], and statistical analysis of algorithmic leveraging.

  10. Statistical mechanics of shell models for two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Boffetta, G.; Crisanti, A.; Frick, P.; Paladin, G.; Vulpiani, A.

    1994-12-01

    We study shell models that conserve the analogs of energy and enstrophy and hence are designed to mimic fluid turbulence in two dimensions (2D). The main result is that the observed state is well described as a formal statistical equilibrium, closely analogous to the approach to two-dimensional ideal hydrodynamics of Onsager [Nuovo Cimento Suppl. 6, 279 (1949)], Hopf [J. Rat. Mech. Anal. 1, 87 (1952)], and Lee [Q. Appl. Math. 10, 69 (1952)]. In the presence of forcing and dissipation we observe a forward flux of enstrophy and a backward flux of energy. These fluxes can be understood as mean diffusive drifts from a source to two sinks in a system which is close to local equilibrium with Lagrange multipliers ("shell temperatures") changing slowly with scale. This is clear evidence that the simplest shell models are not adequate to reproduce the main features of two-dimensional turbulence. The dimensional predictions on the power spectra from a supposed forward cascade of enstrophy and from one branch of the formal statistical equilibrium coincide in these shell models, in contrast to the corresponding predictions for the Navier-Stokes and Euler equations in 2D. This coincidence has previously led to the mistaken conclusion that shell models exhibit a forward cascade of enstrophy. We also study the dynamical properties of the models and the growth of perturbations.

  11. Three Dimensional CFD Analysis of the GTX Combustor

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.

    2002-01-01

    The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Results from two cold-flow simulations and one hot-flow simulation are presented. The simulations compare favorably with the test data for the two cold-flow calculations; hot-flow data were not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen-chemistry model. A five-parameter test matrix was specified according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel massflow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.

  12. Probing the exchange statistics of one-dimensional anyon models

    NASA Astrophysics Data System (ADS)

    Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis

    2018-05-01

    We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices, based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we also show that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.

  13. Two-dimensional models as testing ground for principles and concepts of local quantum physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroer, Bert

    In the past, two-dimensional models of QFT have served as theoretical laboratories for testing new concepts under mathematically controllable conditions. In more recent times, low-dimensional models (e.g., chiral models, factorizing models) often have been treated by special recipes in a way which sometimes led to a loss of unity of QFT. In the present work, I try to counteract this apartheid tendency by reviewing past results within the setting of the general principles of QFT. To this I add two new ideas: (1) a modular interpretation of the chiral model Diff(S)-covariance with a close connection to the recently formulated local covariance principle for QFT in curved spacetime and (2) a derivation of the chiral model temperature duality from a suitable operator formulation of the angular Wick rotation (in analogy to the Nelson-Symanzik duality in the Osterwalder-Schrader setting) for rational chiral theories. The SL(2,Z) modular Verlinde relation is a special case of this thermal duality and (within the family of rational models) the matrix S appearing in the thermal duality relation becomes identified with the statistics character matrix S. The relevant angular 'Euclideanization' is done in the setting of the Tomita-Takesaki modular formalism of operator algebras. I find it appropriate to dedicate this work to the memory of J.A. Swieca, with whom I shared the interest in two-dimensional models as a testing ground for QFT for more than one decade. This is a significantly extended version of an 'Encyclopedia of Mathematical Physics' contribution, hep-th/0502125.

  14. Two-dimensional models as testing ground for principles and concepts of local quantum physics

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2006-02-01

    In the past, two-dimensional models of QFT have served as theoretical laboratories for testing new concepts under mathematically controllable conditions. In more recent times, low-dimensional models (e.g., chiral models, factorizing models) often have been treated by special recipes in a way which sometimes led to a loss of unity of QFT. In the present work, I try to counteract this apartheid tendency by reviewing past results within the setting of the general principles of QFT. To this I add two new ideas: (1) a modular interpretation of the chiral model Diff(S)-covariance with a close connection to the recently formulated local covariance principle for QFT in curved spacetime and (2) a derivation of the chiral model temperature duality from a suitable operator formulation of the angular Wick rotation (in analogy to the Nelson-Symanzik duality in the Osterwalder-Schrader setting) for rational chiral theories. The SL(2,Z) modular Verlinde relation is a special case of this thermal duality and (within the family of rational models) the matrix S appearing in the thermal duality relation becomes identified with the statistics character matrix S. The relevant angular "Euclideanization" is done in the setting of the Tomita-Takesaki modular formalism of operator algebras. I find it appropriate to dedicate this work to the memory of J.A. Swieca, with whom I shared the interest in two-dimensional models as a testing ground for QFT for more than one decade. This is a significantly extended version of an "Encyclopedia of Mathematical Physics" contribution, hep-th/0502125.

  15. Multidimensional effects in nonadiabatic statistical theories of spin-forbidden kinetics. A case study of 3O + CO → CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Ahren

    2015-04-14

    The appropriateness of treating crossing seams of electronic states of different spins as nonadiabatic transition states in statistical calculations of spin-forbidden reaction rates is considered. We show that the spin-forbidden reaction coordinate, the nuclear coordinate perpendicular to the crossing seam, is coupled to the remaining nuclear degrees of freedom, and that this coupling gives rise to multidimensional effects that are not typically included in statistical treatments of spin-forbidden kinetics. Three qualitative categories of multidimensional effects may be identified: static multidimensional effects due to the geometry dependence of the local shape of the crossing seam and of the spin-orbit coupling, dynamical multidimensional effects due to energy exchange with the reaction coordinate during the seam crossing, and nonlocal (history-dependent) multidimensional effects due to interference of the electronic variables at second, third, and later seam crossings. Nonlocal multidimensional effects are intimately related to electronic decoherence, where electronic dephasing acts to erase the history of the system. A semiclassical model based on short-time full-dimensional trajectories, which includes all three multidimensional effects as well as a model for electronic decoherence, is presented. The results of this multidimensional nonadiabatic statistical theory (MNST) for the 3O + CO → CO2 reaction are compared with the results of statistical theories employing one-dimensional (Landau-Zener and weak coupling) models for the transition probability, and with those calculated previously using multistate trajectories. The MNST method is shown to accurately reproduce the multistate decay-of-mixing trajectory results, so long as consistent thresholds are used. Furthermore, the MNST approach has several advantages over multistate trajectory approaches and is more suitable for chemical kinetics calculations at low temperatures and for complex systems. The error in statistical calculations that neglect multidimensional effects is shown to be as large as a factor of 2 for this system, with static multidimensional effects identified as the largest source of error.
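
    For reference, the one-dimensional Landau-Zener expression that such statistical treatments employ for the single-passage hopping probability (standard textbook form; the notation here is an assumption, not necessarily the paper's):

```latex
% Single-passage Landau-Zener probability of a spin change at the crossing
% seam: H_{12} is the spin-orbit coupling matrix element, v the velocity
% along the reaction coordinate, and |\Delta F| the difference of the slopes
% of the two diabatic potentials at the crossing.
\[
  P_{\mathrm{hop}} \;=\; 1 - \exp\!\left(-\,\frac{2\pi H_{12}^{2}}{\hbar\, v\,\lvert \Delta F\rvert}\right)
\]
```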

  16. Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications

    NASA Astrophysics Data System (ADS)

    Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka

    2017-07-01

    This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems previously considered in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, which is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results, we study in detail phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.

  17. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting fault class label information. The approach first extracts intrinsic manifold features from the high-dimensional feature vectors obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
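
    A minimal sketch of the overall pipeline (manifold embedding followed by a classifier), using scikit-learn's standard unsupervised LLE in place of S-LLE, which additionally exploits class labels; the toy "vibration features" below are an assumption.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
theta = rng.uniform(0, 2 * np.pi, 400)          # position along a 1-D manifold
labels = (theta > np.pi).astype(int)            # two "fault classes"
X = (np.column_stack([np.cos(theta), np.sin(theta), 0.3 * theta])
     + 0.05 * rng.normal(size=(400, 3)))        # toy "vibration features"

emb = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(emb, labels, random_state=0)
print("accuracy:", KNeighborsClassifier().fit(Xtr, ytr).score(Xte, yte))
```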

  18. A statistical mechanical theory for a two-dimensional model of water

    PubMed Central

    Urbic, Tomaz; Dill, Ken A.

    2010-01-01

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the “Mercedes-Benz” (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water’s heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water’s large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state. PMID:20550408

  19. A statistical mechanical theory for a two-dimensional model of water

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz; Dill, Ken A.

    2010-06-01

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.

  20. A statistical mechanical theory for a two-dimensional model of water.

    PubMed

    Urbic, Tomaz; Dill, Ken A

    2010-06-14

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.

  1. Robust hypothesis tests for detecting statistical evidence of two-dimensional and three-dimensional interactions in single-molecule measurements

    NASA Astrophysics Data System (ADS)

    Calderon, Christopher P.; Weiss, Lucien E.; Moerner, W. E.

    2014-05-01

    Experimental advances have improved the two- (2D) and three-dimensional (3D) spatial resolution that can be extracted from in vivo single-molecule measurements. This enables researchers to quantitatively infer the magnitude and directionality of forces experienced by biomolecules in their native environment. Situations where such force information is relevant range from mitosis to directed transport of protein cargo along cytoskeletal structures. Models commonly applied to quantify single-molecule dynamics assume that effective forces and velocity in the x, y (or x, y, z) directions are statistically independent, but this assumption is physically unrealistic in many situations. We present a hypothesis testing approach capable of determining if there is evidence of statistical dependence between positional coordinates in experimentally measured trajectories; if the hypothesis of independence between spatial coordinates is rejected, then a new model accounting for 2D (3D) interactions can and should be considered. Our hypothesis testing technique is robust, meaning it can detect interactions, even if the noise statistics are not well captured by the model. The approach is demonstrated on control simulations and on experimental data (directed transport of intraflagellar transport protein 88 homolog in the primary cilium).
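
    In the simplest Gaussian setting, the idea reduces to testing whether displacement increments along the coordinate axes are correlated. The sketch below is that simplified version, not the paper's robust test; the coupling strength and trajectory length are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
cov = [[1.0, 0.4], [0.4, 1.0]]                     # coupled x-y dynamics
steps = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
traj = steps.cumsum(axis=0)                        # simulated 2-D trajectory

dx, dy = np.diff(traj[:, 0]), np.diff(traj[:, 1])  # displacement increments
r, p = stats.pearsonr(dx, dy)
print(f"increment correlation {r:.2f}, p = {p:.1e}")
# small p: reject independence, so a coupled 2-D model should be considered
```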

  2. Bayesian analysis of spatially-dependent functional responses with spatially-dependent multi-dimensional functional predictors

    USDA-ARS?s Scientific Manuscript database

    Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...

  3. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution and for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths, and the strength of the correlations is determined by means of coefficients. In the "plain vanilla" version, the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the spatial distribution of the sampling [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross validation measures. References: [1] T. Hofmann, B. Schölkopf, A.J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton (2012). Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77. Springer, Berlin-Heidelberg.

  4. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.

  5. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  6. Prediction of the low-velocity distribution from the pore structure in simple porous media

    NASA Astrophysics Data System (ADS)

    de Anna, Pietro; Quaife, Bryan; Biros, George; Juanes, Ruben

    2017-12-01

    The macroscopic properties of fluid flow and transport through porous media are a direct consequence of the underlying pore structure. However, precise relations that characterize flow and transport from the statistics of pore-scale disorder have remained elusive. Here we investigate the relationship between pore structure and the resulting fluid flow and asymptotic transport behavior in two-dimensional geometries of nonoverlapping circular posts. We derive an analytical relationship between the pore throat size distribution $f_\lambda \sim \lambda^{-\beta}$ and the distribution of the low fluid velocities $f_u \sim u^{-\beta/2}$, based on a conceptual model of porelets (the flow established within each pore throat, here a Hagen-Poiseuille flow). Our model allows us to make predictions, within a continuous-time random-walk framework, for the asymptotic statistics of the spreading of fluid particles along their own trajectories. These predictions are confirmed by high-fidelity simulations of Stokes flow and advective transport. The proposed framework can be extended to other configurations which can be represented as a collection of known flow distributions.

  7. Statistical field theory of futures commodity prices

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao

    2018-02-01

    The statistical theory of commodity prices was formulated by Baaquie (2013). Further empirical studies of single (Baaquie et al., 2015) and multiple commodity prices (Baaquie et al., 2016) have provided strong evidence in support of the primary assumptions of the statistical formulation. In this paper, the model for spot prices (Baaquie, 2013) is extended to futures commodity prices using a statistical field theory. The futures prices are modeled as a two-dimensional statistical field and a nonlinear Lagrangian is postulated. Empirical studies provide clear evidence in support of the model, with many nontrivial features of the model finding unexpected support from market data.

  8. A unifying perspective on personality pathology across the life span: Developmental considerations for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders

    PubMed Central

    TACKETT, JENNIFER L.; BALSIS, STEVE; OLTMANNS, THOMAS F.; KRUEGER, ROBERT F.

    2010-01-01

    Proposed changes in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) include replacing current personality disorder (PD) categories on Axis II with a taxonomy of dimensional maladaptive personality traits. Most of the work on dimensional models of personality pathology, and on personality disorders per se, has been conducted on young and middle-aged adult populations. Numerous questions remain regarding the applicability and limitations of applying various PD models to early and later life. In the present paper, we provide an overview of such dimensional models and review current proposals for conceptualizing PDs in DSM-V. Next, we extensively review existing evidence on the development, measurement, and manifestation of personality pathology in early and later life focusing on those issues deemed most relevant for informing DSM-V. Finally, we present overall conclusions regarding the need to incorporate developmental issues in conceptualizing PDs in DSM-V and highlight the advantages of a dimensional model in unifying PD perspectives across the life span. PMID:19583880

  9. Statistics of Smoothed Cosmic Fields in Perturbation Theory. I. Formulation and Useful Formulae in Second-Order Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Matsubara, Takahiko

    2003-02-01

    We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including the three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for the second-order theory of the formalism. The effect of bias is discussed. The statistics of smoothed cosmic fields as functions of the threshold rescaled by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than those plotted against the direct threshold. There is still a slight meatball shift against the rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.

  10. Probabilistic Signal Recovery and Random Matrices

    DTIC Science & Technology

    2016-12-08

    applications in statistics, biomedical data analysis, quantization, dimension reduction, and network science. 1. High-dimensional inference and geometry. Our... low-rank approximation, with applications to community detection in networks, Annals of Statistics 44 (2016), 373–400. C. Le, E. Levina, R. Vershynin, Concentration

  11. Low-dimensional representations of exact coherent states of the Navier-Stokes equations from the resolvent model of wall turbulence.

    PubMed

    Sharma, Ati S; Moarref, Rashad; McKeon, Beverley J; Park, Jae Sung; Graham, Michael D; Willis, Ashley P

    2016-02-01

    We report that many exact invariant solutions of the Navier-Stokes equations for both pipe and channel flows are well represented by just a few modes of the model of McKeon and Sharma [J. Fluid Mech. 658, 336 (2010)]. This model provides modes that act as a basis to decompose the velocity field, ordered by their amplitude of response to forcing arising from the interaction between scales. The model was originally derived from the Navier-Stokes equations to represent turbulent flows and has been used to explain coherent structure and to predict turbulent statistics. This establishes a surprising new link between the two distinct approaches to understanding turbulence.

  12. Low-dimensional representations of exact coherent states of the Navier-Stokes equations from the resolvent model of wall turbulence

    NASA Astrophysics Data System (ADS)

    Sharma, Ati S.; Moarref, Rashad; McKeon, Beverley J.; Park, Jae Sung; Graham, Michael D.; Willis, Ashley P.

    2016-02-01

    We report that many exact invariant solutions of the Navier-Stokes equations for both pipe and channel flows are well represented by just a few modes of the model of McKeon and Sharma [J. Fluid Mech. 658, 336 (2010), 10.1017/S002211201000176X]. This model provides modes that act as a basis to decompose the velocity field, ordered by their amplitude of response to forcing arising from the interaction between scales. The model was originally derived from the Navier-Stokes equations to represent turbulent flows and has been used to explain coherent structure and to predict turbulent statistics. This establishes a surprising new link between the two distinct approaches to understanding turbulence.

  13. Three Dimensional Object Recognition Using an Unsupervised Neural Network: Understanding the Distinguishing Features

    DTIC Science & Technology

    1992-12-23

    predominance of structural models of recognition, of which a recent example is the Recognition By Components (RBC) theory (Biederman, 1987). Structural... related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived from a biologically motivated computational theory (Bienenstock et... dimensional object recognition (Intrator and Gold, 1991). The method is related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived

  14. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564
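
    The core scoring step described above, projecting a test sample onto a PCA subspace learned from a normative set and flagging deviations, can be sketched in a few lines. This is a single-subspace simplification of the iterative, target-specific method in the paper, using scikit-learn and synthetic stand-in data; the dimensionality cap mimics the "estimability" criterion.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      normals = rng.standard_normal((200, 500))  # normative set: 200 subjects, 500 voxels (synthetic)
      test = rng.standard_normal(500) + 4.0 * (np.arange(500) > 480)  # deviates in a few voxels

      pca = PCA(n_components=20).fit(normals)    # cap dimensionality relative to sample size
      z = pca.transform(test[None, :])[0]

      # Mahalanobis-like score inside the subspace plus the out-of-subspace residual
      in_plane = np.sum(z**2 / pca.explained_variance_)
      residual = np.sum((test - pca.inverse_transform(z[None, :])[0])**2)
      print(f"in-subspace score={in_plane:.1f}, reconstruction residual={residual:.1f}")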

  16. Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.

    PubMed

    Venturi, D; Karniadakis, G E

    2014-06-08

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.

  17. Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems

    PubMed Central

    Venturi, D.; Karniadakis, G. E.

    2014-01-01

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima–Zwanzig–Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection–reaction problems. PMID:24910519
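
    Reduced-order PDF equations like those derived above are typically validated against brute-force Monte Carlo. A minimal baseline of that kind is sketched below: Euler-Maruyama paths of a bistable, periodically forced oscillator (a standard stochastic-resonance setup), with the PDF of the quantity of interest estimated by a histogram. All parameters are illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n_paths, n_steps, dt = 20_000, 4000, 1e-3
      A, omega, noise = 0.3, 1.0, 0.5

      x = rng.standard_normal(n_paths) * 0.1
      for k in range(n_steps):
          t = k * dt
          drift = x - x**3 + A * np.cos(omega * t)  # bistable double well with periodic forcing
          x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_paths)

      # goal-oriented quantity of interest: here simply the state; estimate its PDF
      hist, edges = np.histogram(x, bins=80, density=True)
      print("P(x > 0) =", np.mean(x > 0))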

  18. Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?

    NASA Astrophysics Data System (ADS)

    Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.

    2013-09-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach." This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observations to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy that occurs is due to the method. Focusing on the total water mixing ratio, we find that the mean can be evaluated reasonably well, but whether the variance and skewness are reliable depends strongly on the meteorological conditions. Using a simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation are necessary to judge whether we can use the data for an evaluation of the higher moments of the humidity distribution used by a statistical cloud scheme.

  19. Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Hogan, Craig J.; Woods, Jorden

    1992-01-01

    The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.

  20. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

    The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
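
    The shape-model pipeline, PCA on aligned surface meshes followed by Mahalanobis distances in the principal-component shape space, reduces to a few lines once the meshes are flattened to coordinate vectors. A hedged sketch with synthetic stand-in meshes (the real study used 20 aligned ultrasound meshes):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      # 20 aligned meshes, 100 3-D points each, flattened to 300-vectors (synthetic)
      meshes = rng.standard_normal((20, 300)) * 0.1 + np.linspace(0, 1, 300)

      pca = PCA(n_components=10).fit(meshes)  # the study found 10 modes explain >90%; data here are synthetic
      print("cumulative explained variance:", np.cumsum(pca.explained_variance_ratio_)[-1])

      coeffs = pca.transform(meshes)
      # Mahalanobis distance in PCA shape space, as used to compare normal vs abnormal
      maha = np.sqrt(np.sum(coeffs**2 / pca.explained_variance_, axis=1))
      print("median shape-space distance:", np.median(maha))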

  1. Dynamo transition in low-dimensional models.

    PubMed

    Verma, Mahendra K; Lessinnes, Thomas; Carati, Daniele; Sarris, Ioannis; Kumar, Krishna; Singh, Meenakshi

    2008-09-01

    Two low-dimensional magnetohydrodynamic models containing three velocity and three magnetic modes are described. One of them (the nonhelical model) has zero kinetic and current helicity, while the other (the helical model) has nonzero kinetic and current helicity. The velocity modes are forced in both models. These low-dimensional models exhibit a dynamo transition at a critical forcing amplitude that depends on the Prandtl number. In the nonhelical model, a dynamo exists only for magnetic Prandtl numbers beyond 1, while the helical model exhibits a dynamo for all magnetic Prandtl numbers. Although the model is far from reproducing all the possible features of dynamo mechanisms, its simplicity allows a very detailed study, and the observed dynamo transition is shown to bear similarities with recent numerical and experimental results.

  2. Entropic multirelaxation lattice Boltzmann models for turbulent flows

    NASA Astrophysics Data System (ADS)

    Bösch, Fabian; Chikatamarla, Shyam S.; Karlin, Ilya V.

    2015-10-01

    We present three-dimensional realizations of a class of lattice Boltzmann models introduced recently by the authors [I. V. Karlin, F. Bösch, and S. S. Chikatamarla, Phys. Rev. E 90, 031302(R) (2014), 10.1103/PhysRevE.90.031302] and review the role of the entropic stabilizer. Both coarse- and fine-grid simulations are addressed for the Kida vortex flow benchmark. We show that the outstanding numerical stability and performance is independent of a particular choice of the moment representation for high-Reynolds-number flows. We report accurate results for low-order moments for homogeneous isotropic decaying turbulence and second-order grid convergence for most assessed statistical quantities. It is demonstrated that all the three-dimensional lattice Boltzmann realizations considered herein converge to the familiar lattice Bhatnagar-Gross-Krook model when the resolution is increased. Moreover, thanks to the dynamic nature of the entropic stabilizer, the present model features less compressibility effects and maintains correct energy and enstrophy dissipation. The explicit and efficient nature of the present lattice Boltzmann method renders it a promising candidate for both engineering and scientific purposes for highly turbulent flows.

  3. One-dimensional turbulence modeling of a turbulent counterflow flame with comparison to DNS

    DOE PAGES

    Jozefik, Zoltan; Kerstein, Alan R.; Schmidt, Heiko; ...

    2015-06-01

    The one-dimensional turbulence (ODT) model is applied to a reactant-to-product counterflow configuration and results are compared with DNS data. The model employed herein solves conservation equations for momentum, energy, and species on a one-dimensional (1D) domain corresponding to the line spanning the domain between nozzle orifice centers. The effects of turbulent mixing are modeled via a stochastic process, while the Kolmogorov and reactive length and time scales are explicitly resolved and a detailed chemical kinetic mechanism is used. Comparisons between model and DNS results for spatial mean and root-mean-square (RMS) velocity, temperature, and major and minor species profiles are shown. The ODT approach shows qualitatively and quantitatively reasonable agreement with the DNS data. Scatter plots and statistics conditioned on temperature are also compared for heat release rate and all species. ODT is able to capture the range of results depicted by DNS. As a result, conditional statistics show signs of underignition.

  4. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  5. Statistical mechanics of two-dimensional shuffled foams: prediction of the correlation between geometry and topology.

    PubMed

    Durand, Marc; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Talebi, Shirin Ataei; Graner, François

    2011-10-14

    We propose an analytical model for the statistical mechanics of shuffled two-dimensional foams with moderate bubble size polydispersity. It predicts without any adjustable parameters the correlations between the number of sides n of the bubbles (topology) and their areas A (geometry) observed in experiments and numerical simulations of shuffled foams. Detailed statistics show that in shuffled cellular patterns n correlates better with √A (as claimed by Desch and Feltham) than with A (as claimed by Lewis and widely assumed in the literature). At the level of the whole foam, standard deviations Δn and ΔA are in proportion. Possible applications include correlations of the detailed distributions of n and A, three-dimensional foams, and biological tissues.

  6. Initial Systematic Investigations of the Weakly Coupled Free Fermionic Heterotic String Landscape Statistics

    NASA Astrophysics Data System (ADS)

    Renner, Timothy

    2011-12-01

    A C++ framework was constructed with the explicit purpose of systematically generating string models using the Weakly Coupled Free Fermionic Heterotic String (WCFFHS) method. The software, optimized for speed, generality, and ease of use, has been used to conduct preliminary systematic investigations of WCFFHS vacua. Documentation for this framework is provided in the Appendix. After an introduction to theoretical and computational aspects of WCFFHS model building, a study of ten-dimensional WCFFHS models is presented. Degeneracies among equivalent expressions of each of the known models are investigated and classified. A study of more phenomenologically realistic four-dimensional models based on the well known "NAHE" set is then presented, with statistics being reported on gauge content, matter representations, and space-time supersymmetries. The final study is a parallel to the NAHE study in which a variation of the NAHE set is systematically extended and examined statistically. Special attention is paid to models with "mirroring"---identical observable and hidden sector gauge groups and matter representations.

  7. Statistical mechanics of a single particle in a multiscale random potential: Parisi landscapes in finite-dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Bouchaud, Jean-Philippe

    2008-08-01

    We construct an N-dimensional Gaussian landscape with multiscale, translation invariant, logarithmic correlations and investigate the statistical mechanics of a single particle in this environment. In the limit of high dimension N → ∞ the free energy of the system and the overlap function are calculated exactly using the replica trick and Parisi's hierarchical ansatz. In the thermodynamic limit, we recover the most general version of Derrida's generalized random energy model (GREM). The low-temperature behaviour depends essentially on the spectrum of length scales involved in the construction of the landscape. If the latter consists of K discrete values, the system is characterized by a K-step replica symmetry breaking solution. We argue that our construction is in fact valid in any finite spatial dimension N >= 1. We discuss the implications of our results for the singularity spectrum describing multifractality of the associated Boltzmann-Gibbs measure. Finally we discuss several generalizations and open problems, such as the dynamics in such a landscape and the construction of a generalized multifractal random walk.

  8. Statistics for characterizing data on the periphery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, James P; Hush, Donald R

    2010-01-01

    We introduce a class of statistics for characterizing the periphery of a distribution, and show that these statistics are particularly valuable for problems in target detection. Because so many detection algorithms are rooted in Gaussian statistics, we concentrate on ellipsoidal models of high-dimensional data distributions (that is to say: covariance matrices), but we recommend several alternatives to the sample covariance matrix that more efficiently model the periphery of a distribution, and can more effectively detect anomalous data samples.
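
    One practical reading of the abstract is that the choice of covariance estimator matters when scoring samples on the periphery. The sketch below compares the sample covariance with a shrinkage alternative (Ledoit-Wolf, one standard option, not necessarily the estimators proposed in the report) on a synthetic anomaly; both scikit-learn estimators return squared Mahalanobis distances.

      import numpy as np
      from sklearn.covariance import EmpiricalCovariance, LedoitWolf

      rng = np.random.default_rng(4)
      background = rng.standard_normal((500, 50))  # high-dimensional clutter (synthetic)
      target = 3.0 * np.ones((1, 50))              # anomalous sample on the periphery

      for est in (EmpiricalCovariance(), LedoitWolf()):
          est.fit(background)
          print(type(est).__name__, "squared Mahalanobis of target:", est.mahalanobis(target)[0])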

  9. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of a three-dimensional human vertebrae model is presented. First, a statistical model of vertebrae feature points is established from the results of manual feature-point extraction. Anatomical axial analysis of the vertebrae model is then performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from this analysis, a projection relationship between the statistical model and the vertebrae model to be processed is established. Through this projection relationship, the statistical model is matched with the vertebrae model to obtain estimated positions of the feature points. Finally, by analyzing the curvature in a spherical neighborhood around each estimated position, the final positions of the feature points are obtained. According to benchmark results on multiple test models, the mean relative errors of the feature point positions are less than 5.98%; at more than half of the positions the error is less than 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  10. LES-based filter-matrix lattice Boltzmann model for simulating fully developed turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Zhuo, Congshan; Zhong, Chengwen

    2016-11-01

    In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which had been proved capable of predicting the turbulent near-wall region accurately. Fully developed turbulent channel flows were simulated at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, Reynolds stress profile, and root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model; some discrepancies in comparison with the direct numerical simulation (DNS) data of Kim et al. were also observed, due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS simulation on a finer grid was also implemented with the present FMLB-D3Q19 model. Comparisons of various computed turbulence statistics with available DNS benchmark data showed quite good agreement.

  11. Evaluation of Deep Learning Representations of Spatial Storm Data

    NASA Astrophysics Data System (ADS)

    Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.

    2017-12-01

    The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. Further evaluations are being performed to determine how the choice of input variables affects the results.
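
    The PCA-plus-classifier baseline in the comparison can be sketched directly; the GAN and convolutional branches require far more machinery. Synthetic "storm patches" and a constructed label stand in for the NCAR ensemble data, so the numbers below are illustrative only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      labels = rng.integers(0, 2, size=2000)         # stand-in for "severe hail" yes/no
      patches = rng.standard_normal((2000, 32 * 32)) # flattened 32x32 storm patches (synthetic)
      patches[labels == 1, :64] += 0.5               # class-dependent structure in part of the patch

      X_tr, X_te, y_tr, y_te = train_test_split(patches, labels, random_state=0)
      enc = PCA(n_components=30).fit(X_tr)           # low-dimensional encoding of the patches
      clf = LogisticRegression(max_iter=1000).fit(enc.transform(X_tr), y_tr)
      print("held-out accuracy:", clf.score(enc.transform(X_te), y_te))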

  12. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE PAGES

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...

    2017-10-10

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  13. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
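
    The two stages, a diffusion-map embedding followed by Gaussian process regression on the diffusion coordinates, fit in a short sketch. The diffusion map below is the textbook construction (Gaussian affinities, row normalization, leading nontrivial eigenvectors), not necessarily the authors' exact variant, and the data are synthetic stand-ins for well logs.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(6)
      X = rng.standard_normal((300, 40))                    # proxy measurements (synthetic "well logs")
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)  # property of interest

      # diffusion map: Gaussian affinity, row-normalized transition matrix, leading eigenvectors
      d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
      K = np.exp(-d2 / np.median(d2))
      P = K / K.sum(axis=1, keepdims=True)
      vals, vecs = np.linalg.eig(P)
      order = np.argsort(-vals.real)
      embed = (vecs.real * vals.real)[:, order[1:4]]        # three nontrivial diffusion coordinates

      gpr = GaussianProcessRegressor().fit(embed, y)        # GP regression in diffusion space
      print("training R^2 in diffusion space:", gpr.score(embed, y))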

  14. Markov-switching multifractal models as another class of random-energy-like models in one-dimensional space

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2012-03-01

    We map the Markov-switching multifractal model (MSM) onto the random energy model (REM). The MSM is, like the REM, an exactly solvable model in one-dimensional space with nontrivial correlation functions. According to our results, four different statistical physics phases are possible in random walks with multifractal behavior. We also introduce the continuous branching version of the model, calculate the moments, and prove multiscaling behavior. Different phases have different multiscaling properties.
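
    A Markov-switching multifractal cascade in the Calvet-Fisher style is easy to simulate, which helps make the mapping concrete: volatility is a product of K binary multipliers, each renewed with its own switching probability. The sketch below uses illustrative parameters and the standard switching-probability specification, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      K, T = 8, 100_000
      b, gamma1, m0, sigma = 2.0, 0.1, 1.4, 0.01

      gammas = 1 - (1 - gamma1) ** (b ** np.arange(K))  # per-component switching probabilities
      M = rng.choice([m0, 2 - m0], size=K)              # binary multipliers with mean 1
      returns = np.empty(T)
      for t in range(T):
          switch = rng.random(K) < gammas
          M[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
          vol = sigma * np.sqrt(np.prod(M))             # multifractal volatility cascade
          returns[t] = vol * rng.standard_normal()

      print("excess kurtosis:", (returns**4).mean() / (returns**2).mean()**2 - 3)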

  15. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent, uncorrelated subspaces (statistical sources) of low dimension, exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases, in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemisphere winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables in separating the four centroid regimes of the model (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model, but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.

  16. Teaching Classical Statistical Mechanics: A Simulation Approach.

    ERIC Educational Resources Information Center

    Sauer, G.

    1981-01-01

    Describes a one-dimensional model for an ideal gas to study development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)
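
    A standard microcanonical Monte Carlo demonstration in this spirit: random pair "collisions" that redistribute kinetic energy at fixed total energy drive an ordered one-dimensional gas toward the Maxwell-Boltzmann (Gaussian) velocity marginal. A minimal sketch, not the article's exact procedure:

      import numpy as np

      rng = np.random.default_rng(8)
      N, sweeps = 2000, 50
      v = np.ones(N)  # ordered start: identical particles, fixed total energy

      for _ in range(sweeps * N):
          i, j = rng.choice(N, size=2, replace=False)
          e = v[i]**2 + v[j]**2                  # pair kinetic energy (unit mass, factor 1/2 dropped)
          theta = rng.uniform(0, 2 * np.pi)      # redistribute it at a random angle, conserving e
          v[i], v[j] = np.sqrt(e) * np.cos(theta), np.sqrt(e) * np.sin(theta)

      # the 1-D Maxwell-Boltzmann marginal is Gaussian: kurtosis should approach 3
      print("mean:", v.mean(), "kurtosis:", (v**4).mean() / (v**2).mean()**2)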

  17. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations for seismic sources and structures in the Northeast Asia. We use Bayesian inversion methods, which enable statistical estimations of models and their uncertainties based on data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence reliable estimation of model parameters and their uncertainties is possible, thus avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. By the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  18. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are estimated from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.

  19. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are estimated from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA. PMID:24892059
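
    The skeleton of a Gaussian EDA with elitism is compact: fit a Gaussian to the best individuals, sample the next population, and carry over the incumbent best. The sketch below omits the paper's fast learning rule and uses the sphere benchmark; it illustrates the algorithm class, not the authors' FEGEDA implementation.

      import numpy as np

      def sphere(x):  # benchmark: minimum 0 at the origin
          return np.sum(x**2, axis=1)

      rng = np.random.default_rng(9)
      dim, pop, elite, gens = 20, 100, 20, 200
      mean, std = np.zeros(dim), np.ones(dim) * 5.0
      best_x, best_f = None, np.inf

      for _ in range(gens):
          X = rng.normal(mean, std, size=(pop, dim))  # sample from the Gaussian model
          if best_x is not None:
              X[0] = best_x                           # elitism: keep the best individual found so far
          f = sphere(X)
          if f.min() < best_f:
              best_f, best_x = f.min(), X[f.argmin()].copy()
          top = X[np.argsort(f)[:elite]]              # estimate the Gaussian from the best individuals
          mean, std = top.mean(axis=0), top.std(axis=0) + 1e-12
      print("best value found:", best_f)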

  20. Spectra, current flow, and wave-function morphology in a model PT -symmetric quantum dot with external interactions

    NASA Astrophysics Data System (ADS)

    Tellander, Felix; Berggren, Karl-Fredrik

    2017-04-01

    In this paper we use numerical simulations to study a two-dimensional (2D) quantum dot (cavity) with two leads for passing currents (electrons, photons, etc.) through the system. By introducing an imaginary potential in each lead, the system is made symmetric under parity-time inversion (PT symmetric). This system is experimentally realizable in the form of, e.g., quantum dots in low-dimensional semiconductors, optical and electromagnetic cavities, and other classical wave analogs. The computational model introduced here for studying spectra, exceptional points (EPs), wave-function symmetries and morphology, and current flow includes thousands of interacting states. This supplements previous analytic studies of few interacting states by providing more detail and higher resolution. The Hamiltonian describing the system is non-Hermitian; thus, the eigenvalues are, in general, complex. The structure of the wave functions and probability current densities is studied in detail at and in between EPs. The statistics of EPs are evaluated, and reasons for a gradual dynamical crossover are identified.
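
    The full cavity model involves thousands of interacting states, but the mechanism behind exceptional points already appears in the textbook two-state PT-symmetric Hamiltonian, where a balanced gain/loss strength plays the role of the imaginary lead potentials. A minimal sketch (parameters illustrative):

      import numpy as np

      g = 1.0                              # inter-state coupling
      for gamma in (0.5, 1.0, 1.5):        # balanced gain/loss strength
          H = np.array([[1j * gamma, g], [g, -1j * gamma]])  # PT-symmetric, non-Hermitian
          print(f"gamma={gamma}: eigenvalues {np.linalg.eigvals(H)}")
      # analytically the eigenvalues are +/- sqrt(g^2 - gamma^2): real (PT-unbroken) below
      # gamma = g, coalescing at the exceptional point gamma = g, and forming a
      # complex-conjugate pair (PT-broken) beyond it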

  1. Spherical-shell boundaries for two-dimensional compressible convection in a star

    NASA Astrophysics Data System (ADS)

    Pratt, J.; Baraffe, I.; Goffrey, T.; Geroux, C.; Viallet, M.; Folini, D.; Constantino, T.; Popov, M.; Walder, R.

    2016-10-01

    Context. Studies of stellar convection typically use a spherical-shell geometry. The radial extent of the shell and the boundary conditions applied are based on the model of the star investigated. We study the impact of different two-dimensional spherical shells on compressible convection. Realistic profiles for density and temperature from an established one-dimensional stellar evolution code are used to produce a model of a large stellar convection zone representative of a young low-mass star, like our sun at 10^6 years of age. Aims: We analyze how the radial extent of the spherical shell changes the convective dynamics that result in the deep interior of the young sun model, far from the surface. In the near-surface layers, simple small-scale convection develops from the profiles of temperature and density. A central radiative zone below the convection zone provides a lower boundary on the convection zone. The inclusion of either of these physically distinct layers in the spherical shell can potentially affect the characteristics of deep convection. Methods: We perform hydrodynamic implicit large eddy simulations of compressible convection using the MUltidimensional Stellar Implicit Code (MUSIC). Because MUSIC has been designed to use realistic stellar models produced from one-dimensional stellar evolution calculations, MUSIC simulations are capable of seamlessly modeling a whole star. Simulations in two-dimensional spherical shells that have different radial extents are performed over tens or even hundreds of convective turnover times, permitting the collection of well-converged statistics. Results: To measure the impact of the spherical-shell geometry and our treatment of boundaries, we evaluate basic statistics of the convective turnover time, the convective velocity, and the overshooting layer. These quantities are selected for their relevance to one-dimensional stellar evolution calculations, so that our results are focused toward studies exploiting the so-called 321D link. We find that the inclusion in the spherical shell of the boundary between the radiative and convection zones decreases the amplitude of convective velocities in the convection zone. The inclusion of near-surface layers in the spherical shell can increase the amplitude of convective velocities, although the radial structure of the velocity profile established by deep convection is unchanged. The impact of including the near-surface layers depends on the speed and structure of small-scale convection in the near-surface layers. Larger convective velocities in the convection zone result in a commensurate increase in the overshooting layer width and a decrease in the convective turnover time. These results provide support for non-local aspects of convection.

  2. Alignment dynamics of diffusive scalar gradient in a two-dimensional model flow

    NASA Astrophysics Data System (ADS)

    Gonzalez, M.

    2018-04-01

    The Lagrangian two-dimensional approach of scalar gradient kinematics is revisited accounting for molecular diffusion. Numerical simulations are performed in an analytic, parameterized model flow, which enables considering different regimes of scalar gradient dynamics. Attention is especially focused on the influence of molecular diffusion on Lagrangian statistical orientations and on the dynamics of scalar gradient alignment.

  3. Upon Generating Discrete Expanding Integrable Models of the Toda Lattice Systems and Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Zhang, Xiangzhi; Wang, Yan; Liu, Jiangen

    2017-01-01

    With the help of the R-matrix approach, we present the Toda lattice systems, which have extensive applications in statistical physics and quantum physics. By constructing a new discrete integrable formula via the R-matrix, the discrete expanding integrable models of the Toda lattice systems and their Lax pairs are generated. By applying the construction formula again, we obtain the corresponding (2+1)-dimensional Toda lattice systems and their Lax pairs, as well as their (2+1)-dimensional discrete expanding integrable models. Finally, some conservation laws of a (1+1)-dimensional generalised Toda lattice system and a new (2+1)-dimensional lattice system are generated.

  4. Loopless nontrapping invasion-percolation model for fracking.

    PubMed

    Norris, J Quinn; Turcotte, Donald L; Rundle, John B

    2014-02-01

    Recent developments in hydraulic fracturing (fracking) have enabled the recovery of large quantities of natural gas and oil from old, low-permeability shales. These developments include a change from low-volume, high-viscosity fluid injection to high-volume, low-viscosity injection. The injected fluid introduces distributed damage that provides fracture permeability for the extraction of the gas and oil. In order to model this process, we utilize a loopless nontrapping invasion-percolation model previously introduced to model optimal polymers in a strongly disordered medium and to determine minimum energy spanning trees on a lattice. We perform numerical simulations on a two-dimensional square lattice and find significant differences from other percolation models. Additionally, we find that the growing fracture network satisfies both Horton-Strahler and Tokunaga network statistics. As with other invasion percolation models, our model displays burst dynamics, in which the cluster extends rapidly into a connected region. We introduce an alternative definition of bursts as a consecutive series of opened bonds whose strengths are all below a specified value. Using this definition, we find good agreement with a power-law frequency-area distribution. These results are generally consistent with the observed distribution of microseismicity during high-volume fracking.
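
    A site-based sketch of invasion percolation with the burst definition used above (consecutive invasions whose strengths all fall below a set value). The paper's model is a loopless, nontrapping bond version; this simplified site variant only illustrates the weakest-element growth rule and the burst bookkeeping.

      import heapq
      import numpy as np

      rng = np.random.default_rng(10)
      L = 200
      strength = rng.random((L, L))        # random breaking strengths (site variant for brevity)
      invaded = np.zeros((L, L), dtype=bool)
      frontier = [(strength[L // 2, L // 2], L // 2, L // 2)]
      threshold, bursts, run, grown = 0.5, [], 0, 0

      while grown < 5000 and frontier:     # invade the weakest frontier site each step
          s, i, j = heapq.heappop(frontier)
          if invaded[i, j]:
              continue
          invaded[i, j] = True
          grown += 1
          if s < threshold:                # burst: consecutive invasions below the set strength
              run += 1
          else:
              if run:
                  bursts.append(run)
              run = 0
          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              a, b = i + di, j + dj
              if 0 <= a < L and 0 <= b < L and not invaded[a, b]:
                  heapq.heappush(frontier, (strength[a, b], a, b))

      print("number of bursts:", len(bursts), "largest burst:", max(bursts, default=0))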

  5. Dimensionality reduction in epidemic spreading models

    NASA Astrophysics Data System (ADS)

    Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.

    2015-09-01

    Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
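
    With scikit-learn, the embedding step is essentially a one-liner. The sketch below runs ISOMAP on synthetic high-dimensional points lying near a closed curve, mimicking the closed-curve outbreak geometry reported above; it is not the authors' epidemic dataset.

      import numpy as np
      from sklearn.manifold import Isomap

      rng = np.random.default_rng(11)
      # synthetic stand-in for epidemic snapshots: noisy points on a closed curve,
      # parameterized by time through the outbreak
      t = rng.uniform(0, 2 * np.pi, 500)
      high_dim = np.stack([np.cos(t), np.sin(t)]
                          + [0.05 * rng.standard_normal(500) for _ in range(48)], axis=1)

      embed = Isomap(n_components=3, n_neighbors=10).fit_transform(high_dim)
      print("embedded shape:", embed.shape)  # the study found closed curves in 3-D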

  6. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
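
    The geometric fact doing the work here is that the area element on a height field z(x, y) is sqrt(1 + z_x^2 + z_y^2) dx dy, so a process that is homogeneous on the surface looks inhomogeneous in the plane. A sketch that builds a "topographically corrected" null by weighting planar intensity with that factor, on synthetic topography (this is one plausible reading of the correction, not the authors' exact procedure):

      import numpy as np

      rng = np.random.default_rng(12)
      n = 100
      x = np.linspace(0, 1, n)
      z = 0.5 * np.sin(4 * np.pi * x)[:, None] * np.cos(4 * np.pi * x)[None, :]  # height field
      zy, zx = np.gradient(z, x, x)            # derivatives along axis 0 ("y") and axis 1 ("x")
      area = np.sqrt(1 + zx**2 + zy**2)        # surface-area element per planar cell

      # homogeneous-on-the-surface null model: planar intensity proportional to `area`
      p = area / area.sum()
      idx = rng.choice(n * n, size=2000, p=p.ravel())
      pts = np.column_stack(np.unravel_index(idx, (n, n)))
      print("null sample shape:", pts.shape,
            "planar density ratio steep/flat:", p.max() / p.min())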

  7. Attentional Bias in Human Category Learning: The Case of Deep Learning.

    PubMed

    Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José

    2018-01-01

    Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity, in contrast to the low-dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low-dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low-dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high-dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.

  8. Metal-superconductor transition in low-dimensional superconducting clusters embedded in two-dimensional electron systems

    NASA Astrophysics Data System (ADS)

    Bucheli, D.; Caprara, S.; Castellani, C.; Grilli, M.

    2013-02-01

    Motivated by recent experimental data on thin film superconductors and oxide interfaces, we propose a random-resistor network apt to describe the occurrence of a metal-superconductor transition in a two-dimensional electron system with disorder on the mesoscopic scale. We consider low-dimensional (e.g. filamentary) structures of a superconducting cluster embedded in the two-dimensional network and we explore the separate effects and the interplay of the superconducting structure and of the statistical distribution of local critical temperatures. The thermal evolution of the resistivity is determined by a numerical calculation of the random-resistor network and, for comparison, a mean-field approach called effective medium theory (EMT). Our calculations reveal the relevance of the distribution of critical temperatures for clusters with low connectivity. In addition, we show that the presence of spatial correlations requires a modification of standard EMT to give qualitative agreement with the numerical results. Applying the present approach to an LaTiO3/SrTiO3 oxide interface, we find that the measured resistivity curves are compatible with a network of spatially dense but loosely connected superconducting islands.

  9. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.

  10. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods in the areas of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of information obtained by statistical analysis.
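
    One of the classical tests in this body of circular statistics is the Rayleigh test for directedness of two-dimensional orientation data; a minimal sketch, with made-up sample headings, follows.

    ```python
    # Sketch: Rayleigh test for a common direction in circular data.
    import numpy as np

    angles = np.radians([10, 25, 330, 15, 40, 355, 5, 20])  # sample headings (deg)
    n = len(angles)
    C, S = np.cos(angles).sum(), np.sin(angles).sum()
    R = np.hypot(C, S) / n                 # mean resultant length, 0..1
    z = n * R ** 2                         # Rayleigh statistic
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))  # small-sample approximation
    print("R = %.3f, z = %.3f, p ~ %.4f" % (R, z, p))
    ```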

  11. Influence of the medium's dimensionality on defect-mediated turbulence.

    PubMed

    St-Yves, Ghislain; Davidsen, Jörn

    2015-03-01

    Spatiotemporal chaos in oscillatory and excitable media is often characterized by the presence of phase singularities called defects. Understanding such defect-mediated turbulence and its dependence on the dimensionality of a given system is an important challenge in nonlinear dynamics. This is especially true in the context of ventricular fibrillation in the heart, where the importance of the thickness of the ventricular wall is contentious. Here, we study defect-mediated turbulence arising in two different regimes in a conceptual model of excitable media and investigate how the statistical character of the turbulence changes if the thickness of the medium is changed from (quasi-)two-dimensional to three-dimensional. We find that the thickness of the medium does not have a significant influence on fully developed turbulence far from onset, while there is a clear transition if the system is close to a spiral instability. We provide clear evidence that the observed transition and change in the mechanism that drives the turbulent behavior is purely a consequence of the dimensionality of the medium. Using filament tracking, we further show that the statistical properties in the three-dimensional medium are different from those in turbulent regimes arising from filament instabilities like the negative line tension instability. Simulations also show that the presence of this unique three-dimensional turbulent dynamics is not model specific.
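
    Defects of this kind are usually located as phase singularities; a minimal sketch, assuming a complex-field snapshot on a square grid (random here, standing in for a simulation state), counts them by the winding of the phase around each plaquette.

    ```python
    # Sketch: counting topological defects in a 2-D complex field by the
    # winding of the phase around each lattice plaquette.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    phi = np.angle(A)

    def wrap(d):                       # map phase differences to (-pi, pi]
        return (d + np.pi) % (2 * np.pi) - np.pi

    # Sum wrapped phase differences around each unit plaquette.
    w = (wrap(phi[:-1, 1:] - phi[:-1, :-1]) + wrap(phi[1:, 1:] - phi[:-1, 1:])
         + wrap(phi[1:, :-1] - phi[1:, 1:]) + wrap(phi[:-1, :-1] - phi[1:, :-1]))
    charge = np.rint(w / (2 * np.pi)).astype(int)   # +1 / -1 at defect cores
    print("defects:", np.count_nonzero(charge), "net charge:", charge.sum())
    ```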

  12. From Airborne EM to Geology, some examples

    NASA Astrophysics Data System (ADS)

    Gunnink, Jan

    2014-05-01

    Introduction: Airborne Electro-Magnetics (AEM) provides a model of the 3-dimensional distribution of resistivity in the subsurface. These resistivity models were used for delineating geological structures (e.g. buried valleys and salt domes) and for geohydrological modeling of aquifers (sandy sediments) and aquitards (clayey sediments). Most AEM interpretation has been carried out manually, with 2- and 3-dimensional resistivity models translated into geological units by a skilled geologist or geophysicist. Manual interpretation is tiresome, takes a long time, and is prone to the subjective choices of the interpreter. Semi-automatic interpretation of AEM resistivity models into geological units is therefore a recent research topic. Two examples are presented that show how resistivity, as obtained from AEM, can be "converted" into useful geological/geohydrological models. Statistical relation between borehole data and resistivity: In the northeastern part of the Netherlands, the 3D distribution of clay deposits - formed in a glacio-lacustrine environment with buried glacial valleys - was modelled. Boreholes with lithology descriptions were linked to AEM resistivity. First, 1D AEM resistivity models from each individual sounding were interpolated to cover the entire study area, resulting in a 3-dimensional model of resistivity. For each interval of clay and sand in the boreholes, the corresponding resistivity was extracted from the 3D resistivity model. Linear regression was used to link the clay and non-clay proportion in each borehole interval to ln(resistivity); this regression is then used to "convert" the 3D resistivity model into a clay proportion for the entire study area (see the sketch below). This so-called "soft information" is combined with the "hard data" (boreholes) to model the clay proportion for the entire study area using geostatistical simulation techniques (Sequential Indicator Simulation with collocated co-kriging). One hundred realizations of the 3-dimensional distribution of clay and sand were calculated, giving an appreciation of the variability of that distribution. Each realization was input into a groundwater model to assess the protection the clay offers against pollution from the surface. Artificial Neural Networks: AEM resistivity models in an area in the northern part of the Netherlands were interpreted by Artificial Neural Networks (ANN) to obtain a 3-dimensional model of a glacial till deposit that is important in geohydrological modeling. The groundwater in the study area was brackish to saline, causing the AEM resistivity model to be dominated by the low resistivity of the groundwater. After conducting Electrical Cone Penetration Tests (ECPTs), it became clear that the glacial till showed a distinct, non-linear pattern of resistivity that discriminated it from the surrounding sediments. The patterns found in the ECPTs were used to train an ANN, which was subsequently applied to the resistivity model derived from the AEM. The result was a 3-dimensional model of the probability of encountering the glacial till, which was checked against boreholes and proved to be quite reasonable. Conclusion: Resistivity derived from AEM can be linked to geological features in a number of ways. Besides manual interpretation, statistical techniques are used, either in the form of regression or by means of neural networks, to extract geologically and geohydrologically meaningful interpretations from the resistivity model.
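
    A minimal sketch of the regression step described above, with made-up borehole values standing in for the real data.

    ```python
    # Sketch: link clay proportion in borehole intervals to ln(resistivity)
    # with a linear fit, then "convert" resistivity to predicted clay.
    import numpy as np

    ln_res = np.log(np.array([12., 18., 25., 40., 60., 90., 130.]))  # ohm-m
    clay = np.array([0.85, 0.75, 0.60, 0.40, 0.25, 0.12, 0.05])      # proportion

    slope, intercept = np.polyfit(ln_res, clay, 1)   # linear fit

    def clay_from_resistivity(res_ohm_m):
        """Convert a resistivity value (or grid) to a predicted clay proportion."""
        return np.clip(slope * np.log(res_ohm_m) + intercept, 0.0, 1.0)

    print("predicted clay at 50 ohm-m: %.2f" % clay_from_resistivity(50.0))
    ```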

  13. Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.

    In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining the internal structure of the 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing the electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower-dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold, and can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and to gain insight into the nature of bonding. (2) Projecting to a lower-dimensional subspace (~4-5 dimensions) using PCA or kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to existing descriptors of the electronic structure of molecules. These results can also be used immediately to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried indicate that one needs 4 or 5 components to account for most of the variance in the data; hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, k-means was generally useful in exploring the dataset. The partition around medoids (PAM) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and the time series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criterion, model-based clustering, and hierarchical modeling simply failed to run on our datasets. The vanilla PCA technique was successful in handling our entire dataset and revealed some interesting patterns in the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of it, so we resorted to creating an explicit feature map followed by conventional PCA. Clustering using k-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
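
    A minimal sketch of the PCA-plus-clustering pipeline on synthetic stand-in data; the R routines pam/pamk used in the report have no direct numpy equivalent, so k-means stands in for the medoid methods.

    ```python
    # Sketch: project high-dimensional trajectory samples with PCA, then
    # cluster in the reduced space.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    traj = rng.normal(size=(2000, 6))   # stand-in for 3N-dimensional walker samples
    traj[:1000] += 2.0                  # plant two clusters for illustration

    pca = PCA(n_components=5).fit(traj)
    low = pca.transform(traj)
    print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(low)
    print("cluster sizes:", np.bincount(labels))
    ```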

  14. Three-dimensional head anthropometric analysis

    NASA Astrophysics Data System (ADS)

    Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James

    2003-05-01

    Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors because of perspective, projection, and the lack of metric, 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods has inherent limitations, and as such no system is in common clinical use. In this paper we focus on the development of indirect 3-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including the linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
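
    An indirect anthropometric measurement of the kind described reduces to distances between landmark coordinates on the mesh; a minimal sketch with illustrative coordinates follows.

    ```python
    # Sketch: a linear anthropometric distance between two landmarks picked
    # on a 3-D head mesh. Coordinates are illustrative.
    import numpy as np

    landmarks = {"nasion": np.array([0.0, 95.2, 48.1]),
                 "gnathion": np.array([0.0, 12.4, 38.7])}
    dist = np.linalg.norm(landmarks["nasion"] - landmarks["gnathion"])
    print("nasion-gnathion distance: %.1f mm" % dist)
    ```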

  15. Validation of cone beam computed tomography-based tooth printing using different three-dimensional printing technologies.

    PubMed

    Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde

    2016-03-01

    Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with the natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm³ (0.7%) and 4.4 mm³ (1.9%) for the direct measurement, and between -1.3 mm³ (-0.6%) and 11.9 mm³ (+5.9%) for the optical scan. A surface part-comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. Current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation.
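
    A minimal sketch of the Bland-Altman computation used in the comparison, with illustrative volume values rather than the study's measurements.

    ```python
    # Sketch: Bland-Altman bias and 95% limits of agreement between direct
    # (Archimedes) and indirect (optical scan) volume measurements.
    import numpy as np

    direct = np.array([210.1, 198.4, 225.0, 205.3, 215.8])   # mm^3
    optical = np.array([212.0, 196.9, 229.5, 204.1, 218.2])  # mm^3

    diff = optical - direct
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
    print("bias = %.2f mm^3, limits of agreement = [%.2f, %.2f]"
          % (bias, bias - loa, bias + loa))
    ```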

  16. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
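
    A minimal sketch of evaluating a two-dimensional Gabor function on a pixel grid, using the common receptive-field parametrization (size sigma, aspect ratio gamma, spatial frequency f, orientation theta); the parameter values are arbitrary.

    ```python
    # Sketch: a 2-D Gabor function, a Gaussian envelope times a sinusoidal
    # carrier in rotated coordinates.
    import numpy as np

    def gabor2d(size=32, sigma=5.0, gamma=0.8, f=0.15, theta=np.pi / 4, psi=0.0):
        ax = np.arange(size) - size // 2
        x, y = np.meshgrid(ax, ax)
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * f * xr + psi)
        return envelope * carrier

    g = gabor2d()
    print(g.shape, "peak = %.3f" % g.max())
    ```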

  17. Statistical discrimination of footwear: a method for the comparison of accidentals on shoe outsoles inspired by facial recognition techniques.

    PubMed

    Petraco, Nicholas D K; Gambino, Carol; Kubic, Thomas A; Olivio, Dayhana; Petraco, Nicholas

    2010-01-01

    In the field of forensic footwear examination, it is a widely held belief that patterns of accidental marks found on footwear and footwear impressions possess a high degree of "uniqueness." This belief, however, has not been thoroughly studied in a numerical way using controlled experiments. As a result, this form of valuable physical evidence has been the subject of admissibility challenges. In this study, we apply statistical techniques used in facial pattern recognition to a minimal set of information gleaned from accidental patterns. That is, in order to maximize the amount of potential similarity between patterns, we only use the coordinate locations of accidental marks (on the top portion of a footwear impression) to characterize the entire pattern. This allows us to numerically gauge how similar two patterns are to one another in a worst-case scenario, i.e., in the absence of a tremendous amount of information normally available to the footwear examiner such as accidental mark size and shape. The patterns were recorded from the top portion of the shoe soles (i.e., not the heel) of five shoe pairs. All shoes were the same make and model and all were worn by the same person for a period of 30 days. We found that in 20-30-dimensional principal component (PC) space (99.5% variance retained), patterns from the same shoe, even at different points in time, tended to cluster closer to each other than patterns from different shoes. Correct shoe identification rates using maximum likelihood linear classification analysis and the hold-one-out procedure ranged from 81% to 100%. Three-dimensional PC plots, although they retain little of the variance, were made and generally corroborated the findings in the much higher-dimensional PC space. This study is intended to be a starting point for future research to build statistical models on the formation and evolution of accidental patterns.
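
    A minimal sketch of the classification step, PCA followed by linear discriminant analysis with hold-one-out validation, on synthetic stand-in features rather than real accidental-mark coordinates.

    ```python
    # Sketch: PCA + LDA with hold-one-out (leave-one-out) validation for
    # shoe identification, on synthetic feature vectors.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    n_shoes, reps, dim = 5, 8, 60                # 5 shoes, 8 patterns each
    X = np.vstack([rng.normal(m, 1.0, size=(reps, dim))
                   for m in rng.normal(0, 2.0, size=n_shoes)])
    y = np.repeat(np.arange(n_shoes), reps)

    clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print("hold-one-out identification rate: %.0f%%" % (100 * acc))
    ```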

  18. Liquid-liquid critical point in a simple analytical model of water.

    PubMed

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. At a simple level, this model describes the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility, and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves, together with a Widom line defined by the maximum and minimum in the thermal expansion coefficient, divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.

  19. Liquid-liquid critical point in a simple analytical model of water

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. At a simple level, this model describes the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility, and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves, together with a Widom line defined by the maximum and minimum in the thermal expansion coefficient, divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.

  20. Statistical and dynamical remastering of classic exoplanet systems

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin Earl

    The most powerful constraints on planet formation will come from characterizing the dynamical state of complex multi-planet systems. Unfortunately, with that complexity comes a number of factors that make analyzing these systems a computationally challenging endeavor: the sheer number of model parameters, a wonky-shaped posterior distribution, and hundreds to thousands of time series measurements. In this dissertation, I will review our efforts to improve the statistical analyses of radial velocity (RV) data and their applications to some renowned, dynamically complex exoplanet systems. In the first project (Chapters 2 and 4), we develop a differential evolution Markov chain Monte Carlo (RUN DMC) algorithm to tackle the aforementioned difficult aspects of data analysis. We test the robustness of the algorithm with regard to the number of modeled planets (model dimensionality) and increasing dynamical strength. We apply RUN DMC to a couple of classic multi-planet systems and one highly debated system from radial velocity surveys. In the second project (Chapter 5), we analyze RV data of 55 Cancri, a wide binary system known to harbor five planets orbiting the primary. We find the inner-most planet "e" must be coplanar to within 40 degrees of the outer planets, otherwise Kozai-like perturbations will cause the planet to enter the stellar photosphere through its periastron passage. We find the orbits of planets "b" and "c" are apsidally aligned and librating with low to median amplitude (50(+/-6/10) degrees), but they are not orbiting in a mean-motion resonance. In the third project (Chapters 3, 4, 6), we analyze RV data of Gliese 876, a four-planet system with three participating in a multi-body resonance, i.e. a Laplace resonance. From a combined observational and statistical analysis computing Bayes factors, we find a four-planet model is favored over one with three planets. Conditioned on this preferred model, we meaningfully constrain the three-dimensional orbital architecture of all the planets orbiting Gliese 876 based on the radial velocity data alone. By demanding orbital stability, we find the resonant planets have low mutual inclinations phi, so they must be roughly coplanar (phi_cb = 1.41(+/-0.62/0.57) degrees and phi_be = 3.87(+/-1.99/1.86) degrees). The three-dimensional Laplace argument librates chaotically with an amplitude of 50.5(+/-7.9/10.0) degrees, indicating significant past disk migration and ensuring long-term stability. In the final project (Chapter 7), we analyze the RV data for nu Octantis, a closely separated binary with an alleged planet orbiting interior and retrograde to the binary. Preliminary results place very tight constraints on the planet-binary mutual inclination, but no model is dynamically stable beyond 10^5 years. These empirically derived models motivate the need for more sophisticated algorithms to analyze exoplanet data and will provide new challenges for planet formation models.
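
    RUN DMC belongs to the differential-evolution MCMC family of ter Braak (2006); a minimal sketch of the core proposal rule on a toy two-dimensional Gaussian posterior follows. This is not the dissertation's algorithm, only the basic idea.

    ```python
    # Sketch: differential-evolution MCMC, where each chain proposes a jump
    # along the difference of two other randomly chosen chains.
    import numpy as np

    rng = np.random.default_rng(5)
    log_post = lambda th: -0.5 * np.sum(th ** 2)   # toy target posterior

    n_chains, n_dim, n_steps = 10, 2, 2000
    gamma = 2.38 / np.sqrt(2 * n_dim)              # standard DE-MC scaling
    chains = rng.normal(size=(n_chains, n_dim))
    samples = []
    for _ in range(n_steps):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2,
                              replace=False)
            prop = (chains[i] + gamma * (chains[a] - chains[b])
                    + rng.normal(0, 1e-4, n_dim))          # small jitter
            if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
        samples.append(chains.copy())
    samples = np.concatenate(samples[n_steps // 2:])        # drop burn-in
    print("posterior mean ~", np.round(samples.mean(axis=0), 2))
    ```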

  1. Personality disorders and the DSM-5: Scientific and extra-scientific factors in the maintenance of the status quo.

    PubMed

    Gøtzsche-Astrup, Oluf; Moskowitz, Andrew

    2016-02-01

    The aim of this study was to review and discuss the evidence for dimensional classification of personality disorders and the historical and sociological bases of psychiatric nosology and research. Categorical and dimensional conceptualisations of personality disorder are reviewed, with a focus on the Diagnostic and Statistical Manual of Mental Disorders-system's categorisation and the Five-Factor Model of personality. This frames the events leading up to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, personality disorder debacle, where the implementation of a hybrid model was blocked in a last-minute intervention by the American Psychiatric Association Board of Trustees. Explanations for these events are discussed, including the existence of invisible colleges of researchers and the fear of risking a 'scientific revolution' in psychiatry. A failure to recognise extra-scientific factors at work in classification of mental illness can have a profound and long-lasting influence on psychiatric nosology. In the end it was not scientific factors that led to the failure of the hybrid model of personality disorders, but opposing forces within the mental health community in general and the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, Task Force in particular. Substantial evidence has accrued over the past decades in support of a dimensional model of personality disorders. The events surrounding the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, Personality and Personality Disorders Work Group show the difficulties in reconciling two different worldviews with a hybrid model. They also indicate the future of a psychiatric nosology that will be increasingly concerned with dimensional classification of mental illness. As such, the road is paved for more substantial changes to personality disorder classification in the International Classification of Diseases, 11th Revision, in 2017.

  2. Medial Longitudinal Arch Angle Presents Significant Differences Between Foot Types: A Biplane Fluoroscopy Study.

    PubMed

    Balsdon, Megan E R; Bushey, Kristen M; Dombroski, Colin E; LeBel, Marie-Eve; Jenkyn, Thomas R

    2016-10-01

    The structure of the medial longitudinal arch (MLA) affects the foot's overall function and its ability to dissipate plantar pressure forces. Previous research on the MLA includes measuring the calcaneal-first metatarsal angle using a static sagittal plane radiograph, a dynamic height-to-length ratio using marker clusters with a multisegment foot model, and a contained angle using single point markers with a multisegment foot model. The objective of this study was to use biplane fluoroscopy to measure a contained MLA angle between foot types: pes planus (low arch), pes cavus (high arch), and normal arch. Fifteen participants completed the study, five from each foot type. Markerless fluoroscopic radiostereometric analysis (fRSA) was used with a three-dimensional model of the foot bones, manually matching those bones to a pair of two-dimensional radiographic images during midstance of gait. Statistically significant differences were found between barefoot arch angles of the normal and pes cavus foot types (p = 0.036), as well as between the pes cavus and pes planus foot types (p = 0.004). Dynamic walking also resulted in a statistically significant difference compared to the static standing trials (p = 0.014). These results support foot-type classification of individuals with pes cavus and pes planus feet following physical assessment by a foot specialist. The differences between static and dynamic kinematic measurements were also supported using this novel method.

  3. On the application of quantum transport theory to electron sources.

    PubMed

    Jensen, Kevin L

    2003-01-01

    Electron sources (e.g., field emitter arrays, wide band-gap (WBG) semiconductor materials and coatings, carbon nanotubes, etc.) seek to exploit ballistic transport within the vacuum after emission from microfabricated structures. Regardless of kind, all sources strive to minimize the barrier to electron emission by engineering material properties (work function/electron affinity) or physical geometry (field enhancement) of the cathode. The unique capabilities of cold cathodes, such as instant ON/OFF performance, high brightness, high current density, large transconductance to capacitance ratio, cold emission, small size and/or low voltage operation characteristics, commend their use in several advanced devices when physical size, weight, power consumption, beam current, and pulse repetition frequency are important, e.g., RF power amplifiers such as traveling wave tubes (TWTs) for radar and communications, electrodynamic tethers for satellite deboost/reboost, and electric propulsion systems such as Hall thrusters for small satellites. The theoretical program described herein is directed towards models to evaluate emission current from electron sources (in particular, emission from WBG and Spindt-type field emitters) in order to assess their utility, capabilities and performance characteristics. Modeling efforts particularly include: band bending, non-linear and resonant (Poole-Frenkel) potentials, the extension of one-dimensional theory to multi-dimensional structures, and emission site statistics due to variations in geometry and the presence of adsorbates. Two particular methodologies, namely, the modified Airy approach and the metal-semiconductor statistical hyperbolic/ellipsoidal model, are described in detail in their present stage of development.
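
    The textbook starting point for such emission models is the elementary Fowler-Nordheim equation; a minimal sketch follows, omitting the barrier corrections (image charge, Poole-Frenkel) discussed above.

    ```python
    # Sketch: elementary Fowler-Nordheim current density for field emission.
    import numpy as np

    A_FN = 1.541434e-6      # A eV V^-2
    B_FN = 6.830890e9       # eV^-3/2 V m^-1

    def j_fowler_nordheim(F, phi=4.5):
        """Current density (A/m^2) for field F (V/m) and work function phi (eV)."""
        return A_FN * F ** 2 / phi * np.exp(-B_FN * phi ** 1.5 / F)

    for F in (3e9, 5e9, 7e9):
        print("F = %.0e V/m  J = %.3e A/m^2" % (F, j_fowler_nordheim(F)))
    ```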

  4. Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold

    NASA Astrophysics Data System (ADS)

    Anosov, O. L.; Butkovskii, O. Ya.; Kadtke, J.; Kravtsov, Yu. A.; Protopopescu, V.

    1997-05-01

    We report preliminary results on the dependence of heart rhythm variability on stress level, described using qualitative, low-dimensional models. The reconstruction of macroscopic heart models yielding the duration of cardiac cycles (RR intervals) was based on actual clinical data. Our results show that the coefficients of the low-dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy, indirect method for determining the anaerobic threshold. Such a method could replace costly and invasive traditional methods such as gas analysis and blood tests.
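
    A minimal sketch of one way such low-dimensional model coefficients can be extracted: a least-squares autoregressive fit to an RR-interval series (synthetic here); the model order and data are illustrative, not the authors' reconstruction.

    ```python
    # Sketch: fit a low-order AR model to an RR-interval series by least
    # squares; the coefficients serve as low-dimensional features.
    import numpy as np

    rng = np.random.default_rng(6)
    rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10) + rng.normal(0, 0.01, 300)

    p = 3                                        # model order
    X = np.column_stack([rr[p - k - 1:-k - 1] for k in range(p)])  # lags 1..p
    y = rr[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("AR(%d) coefficients:" % p, np.round(coef, 3))
    ```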

  5. Statistical Systems with Z(n) Symmetry

    NASA Astrophysics Data System (ADS)

    William, Peter

    In this dissertation several two-dimensional statistical systems exhibiting discrete Z(n) symmetries are studied, utilizing a newly developed algorithm to compute the partition function of these models exactly. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point. This takes the form of critical exponents of the order parameters, which characterize phenomena at the critical point. The correlation length exponent is found to agree very well with values computed from strong-coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path integral formalism, the partition function of a statistical system can be related to the vacuum expectation value of the time-ordered product of the observable quantities of the corresponding field-theoretic model. Hence the focus is on a generalization of ordinary scale invariance in the form of conformal invariance, a principle particularly applicable to two-dimensional statistical models undergoing second-order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed to determine the conformal anomaly of these models. The conformal anomalies of all the models considered here are in good agreement with the predicted values.
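
    A minimal sketch of the partition-function-zeros idea on a toy Z(2) system, a 3x3 periodic Ising model enumerated exactly; the dissertation's algorithm and its Z(n) models are more general, this only illustrates how zeros are obtained.

    ```python
    # Sketch: exact enumeration of a 3x3 periodic Ising model, then the
    # partition-function zeros as roots of a polynomial in u = exp(-2*beta*J).
    import numpy as np
    from itertools import product

    L = 3
    counts = {}
    for s in product([-1, 1], repeat=L * L):
        s = np.array(s).reshape(L, L)
        # Each nearest-neighbor bond counted once via periodic rolls.
        E = -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))
        counts[E] = counts.get(E, 0) + 1

    # Z ~ sum_E g(E) u^((E - Emin)/2): a polynomial in u (energies are even).
    Emin, Emax = min(counts), max(counts)
    coeffs = np.zeros((Emax - Emin) // 2 + 1)
    for E, g in counts.items():
        coeffs[(E - Emin) // 2] = g
    zeros = np.roots(coeffs[::-1])               # complex zeros in the u-plane
    print("number of zeros:", len(zeros))
    print("closest to the real axis:", zeros[np.argmin(np.abs(zeros.imag))])
    ```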

  6. Extracting Galaxy Cluster Gas Inhomogeneity from X-Ray Surface Brightness: A Statistical Approach and Application to Abell 3667

    NASA Astrophysics Data System (ADS)

    Kawahara, Hajime; Reese, Erik D.; Kitayama, Tetsu; Sasaki, Shin; Suto, Yasushi

    2008-11-01

    Our previous analysis indicates that small-scale fluctuations in the intracluster medium (ICM) from cosmological hydrodynamic simulations follow the lognormal probability density function. In order to test the lognormal nature of the ICM directly against X-ray observations of galaxy clusters, we develop a method of extracting statistical information about the three-dimensional properties of the fluctuations from the two-dimensional X-ray surface brightness. We first create a set of synthetic clusters with lognormal fluctuations around their mean profile given by spherical isothermal β-models, later considering polytropic temperature profiles as well. Performing mock observations of these synthetic clusters, we find that the resulting X-ray surface brightness fluctuations also follow the lognormal distribution fairly well. Systematic analysis of the synthetic clusters provides an empirical relation between the three-dimensional density fluctuations and the two-dimensional X-ray surface brightness. We analyze Chandra observations of the galaxy cluster Abell 3667, and find that its X-ray surface brightness fluctuations follow the lognormal distribution. While the lognormal model was originally motivated by cosmological hydrodynamic simulations, this is the first observational confirmation of the lognormal signature in a real cluster. Finally we check the synthetic cluster results against clusters from cosmological hydrodynamic simulations. As a result of the complex structure exhibited by simulated clusters, the empirical relation between the two- and three-dimensional fluctuation properties calibrated with synthetic clusters when applied to simulated clusters shows large scatter. Nevertheless we are able to reproduce the true value of the fluctuation amplitude of simulated clusters within a factor of 2 from their two-dimensional X-ray surface brightness alone. Our current methodology combined with existing observational data is useful in describing and inferring the statistical properties of the three-dimensional inhomogeneity in galaxy clusters.
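
    A minimal sketch of the synthetic-cluster construction: lognormal fluctuations on a beta-model profile, projected along one axis as emissivity. The fluctuations here are uncorrelated pixel noise and all parameters are illustrative, unlike the spatially correlated fields calibrated in the paper.

    ```python
    # Sketch: beta-model density with lognormal fluctuations, projected as
    # emissivity (~ density^2) to mimic X-ray surface brightness.
    import numpy as np

    rng = np.random.default_rng(7)
    n, rc, beta, sigma_ln = 64, 10.0, 0.7, 0.3

    ax = np.arange(n) - n / 2
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = x ** 2 + y ** 2 + z ** 2
    profile = (1 + r2 / rc ** 2) ** (-1.5 * beta)          # beta-model density
    fluct = np.exp(rng.normal(-0.5 * sigma_ln ** 2, sigma_ln, size=r2.shape))
    density = profile * fluct                               # lognormal about profile

    sb = (density ** 2).sum(axis=2)                         # projected emissivity
    print("surface-brightness map:", sb.shape,
          "log-SB std: %.3f" % np.log(sb / sb.mean()).std())
    ```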

  7. Time series analysis for minority game simulations of financial markets

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando F.; Francisco, Gerson; Machado, Birajara S.; Muruganandam, Paulsamy

    2003-04-01

    The recently introduced minority game (MG) model provides promising insights into the understanding of the evolution of prices, indices and rates in the financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbations. When the interval between two successive perturbations is sufficiently large, one can find low-dimensional chaos in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the S&P 500 index: stochastic, nonlinear and (unit root) stationary.
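
    A minimal sketch of the standard minority game, whose attendance series is the kind of output analyzed here; sizes and parameters are illustrative.

    ```python
    # Sketch: minority game with N agents, memory m, S strategies each;
    # the attendance A(t) is the analyzed time series.
    import numpy as np

    rng = np.random.default_rng(8)
    N, m, S, T = 101, 3, 2, 500
    P = 2 ** m
    strategies = rng.choice([-1, 1], size=(N, S, P))   # action per history
    scores = np.zeros((N, S))
    history = 0
    attendance = []
    for _ in range(T):
        best = scores.argmax(axis=1)                   # each agent's best strategy
        actions = strategies[np.arange(N), best, history]
        A = actions.sum()                              # N odd, so A is never zero
        attendance.append(A)
        winning = -np.sign(A)                          # minority side wins
        scores += (strategies[:, :, history] == winning)   # virtual points
        bit = int(winning > 0)
        history = ((history << 1) | bit) % P           # update m-bit history
    print("mean |A| = %.2f, std A = %.2f" %
          (np.mean(np.abs(attendance)), np.std(attendance)))
    ```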

  8. Large Eddy Simulation of Spatially Developing Turbulent Reacting Shear Layers with the One-Dimensional Turbulence Model

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas Frank

    Large eddy simulation (LES) combined with the one-dimensional turbulence (ODT) model is used to simulate spatially developing turbulent reacting shear layers with high heat release and high Reynolds numbers. The LES-ODT results are compared to results from direct numerical simulations (DNS) for model development and validation purposes. The LES-ODT approach is based on LES solutions for momentum and pressure on a coarse grid and solutions for momentum and reactive scalars on a fine, one-dimensional, but three-dimensionally coupled ODT subgrid, which is embedded into the LES computational domain. Although one-dimensional, all three velocity components are transported along the ODT domain. The low-dimensional spatial and temporal resolution of the subgrid scales defines a new modeling paradigm, referred to as autonomous microstructure evolution (AME) models, which resolve the multiscale nature of turbulence down to the Kolmogorov scales. While this new concept aims to mimic the turbulent cascade and to reduce the number of input parameters, AME also enables regime-independent combustion modeling, capable of simulating multiphysics problems simultaneously. The LES as well as the one-dimensional transport equations are solved using an incompressible, low-Mach-number approximation; however, the effects of heat release are accounted for through a variable density computed from the ideal gas equation of state, based on temperature variations. The computations are carried out on a three-dimensional structured mesh, which is stretched in the transverse direction. While the LES momentum equation is integrated with a third-order Runge-Kutta time integration, the time integration at the ODT level is accomplished with an explicit forward-Euler method. Spatial finite-difference schemes of third (LES) and first (ODT) order are utilized, and a fully consistent fractional-step method at the LES level is used. Turbulence closure at the LES level is achieved by utilizing the Smagorinsky model. The chemical reaction is simulated with a global single-step, second-order equilibrium reaction with an Arrhenius reaction rate. The two benchmark cases of constant-density reacting and variable-density non-reacting shear layers used to determine ODT parameters yield perfect agreement with regard to first- and second-order flow statistics as well as shear-layer growth rate. The variable-density non-reacting shear layer also serves as a test case for the LES-ODT model's ability to simulate passive scalar mixing. The variable-density reacting shear-layer cases agree only reasonably well and indicate that more work is necessary to improve the variable-density coupling of ODT and LES. The disagreement is attributed to the fact that the ODT filtered density is kept constant across the Runge-Kutta steps. Furthermore, a more in-depth study of large-scale and subgrid turbulent kinetic energy (TKE) spectra at several downstream locations, as well as of TKE budgets, is needed to obtain a better understanding of both the model and the flow under investigation. The local Reynolds number based on the one-percent thickness at the exit is Re_delta ≈ 5300 for the constant-density reacting and the variable-density non-reacting cases. For the variable-density reacting shear layer, the Reynolds number based on the one-percent thickness is Re_delta ≈ 2370. The variable-density reacting shear layers show suppressed growth rates due to density variations caused by heat release, as has also been reported in the literature.
A Lewis number parameter study is performed to extract non-unity Lewis number effects. An increase in the Lewis number leads to further suppression of the growth rate, but to an increased spread of the second-order flow statistics. A major focus and challenge of this work is to improve and advance the three-dimensional coupling of the one-dimensional ODT domains while keeping the solution correct; this entails major restructuring of the model. The turbulent reacting shear layer poses a physical challenge to the model because it is a statistically stationary, non-decaying, inhomogeneous and anisotropic turbulent flow. This challenge also requires additions to the eddy sampling procedure. Besides these physical advancements, the LES-ODT code is also improved with regard to its ability to use general cuboid geometries, an array structure that allows boundary conditions to be applied via ghost cells, and non-uniform structured meshes. The use of transverse grid stretching requires an implementation of the ODT triplet map on a stretched grid. Further improvements include restructured subroutine handling with global variables that enable serial speed-up and parallelization with OpenMP. Porting the code to an object-oriented, finite-volume-based CFD platform such as OpenFOAM, which offers more advanced array and parallelization features, including graphics processing units (GPUs) and the message passing interface (MPI), to simulate complex geometries, is recommended for future work.
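
    The triplet map mentioned above has a compact discrete form on a uniform grid; a minimal sketch follows (the stretched-grid variant discussed in the text requires additional interpolation).

    ```python
    # Sketch: the ODT triplet map on a uniform 1-D grid. A chosen eddy
    # segment is compressed into three copies with the middle copy reversed,
    # modeling the folding action of a turbulent eddy.
    import numpy as np

    def triplet_map(u, i0, n):
        """Apply the triplet map to u[i0:i0+n] (n divisible by 3)."""
        seg = u[i0:i0 + n].copy()
        third = seg[::3]                               # compressed copy
        u[i0:i0 + n // 3] = third
        u[i0 + n // 3:i0 + 2 * n // 3] = third[::-1]   # middle copy reversed
        u[i0 + 2 * n // 3:i0 + n] = third
        return u

    u = np.linspace(0.0, 1.0, 30)
    print(np.round(triplet_map(u.copy(), 6, 18), 2))
    ```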

  9. Universal statistics of vortex tangles in three-dimensional random waves

    NASA Astrophysics Data System (ADS)

    Taylor, Alexander J.

    2018-02-01

    The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.

  10. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  11. On some structure-turbulence interaction problems

    NASA Technical Reports Server (NTRS)

    Maekawa, S.; Lin, Y. K.

    1976-01-01

    The interactions between a turbulent flow and a structure responding to its excitation were studied. The turbulence was typical of that associated with a boundary layer, having a cross-spectral density indicative of convection and statistical decay. A number of structural models were considered. Among the one-dimensional models were an unsupported infinite beam and a periodically supported infinite beam; the fuselage construction of an aircraft was then considered. For the two-dimensional case a simple membrane was used to illustrate the type of formulation applicable to most two-dimensional structures. Both the one-dimensional and two-dimensional structures studied were backed by a cavity filled with an initially quiescent fluid, to simulate the acoustic environment when the structure forms one side of a cabin of a sea vessel or aircraft.

  12. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  13. Computer modelling of grain microstructure in three dimensions

    NASA Astrophysics Data System (ADS)

    Narayan, K. Lakshmi

    We present a program that generates two-dimensional micrographs of a three-dimensional grain microstructure. The code utilizes a novel scanning, pixel-mapping technique to obtain statistical distributions of surface areas, grain sizes, aspect ratios, perimeters, number of nearest neighbors, and volumes of the randomly nucleated particles. The program can be used for comparing existing theories of grain growth and for interpreting the two-dimensional microstructure of three-dimensional samples. Special features have been included to minimize computation time and resource requirements.
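
    A minimal sketch of the slice-and-pixel-map idea: label a two-dimensional section of a three-dimensional Voronoi structure grown from random seeds, then read off sectional grain areas. This is an illustration of the general technique, not the program described.

    ```python
    # Sketch: 2-D micrograph of a 3-D Voronoi grain structure, with each
    # pixel of the z = 0.5 section labeled by its nearest nucleation seed.
    import numpy as np

    rng = np.random.default_rng(9)
    seeds = rng.uniform(0, 1, size=(50, 3))        # random nucleation sites

    n = 128
    ax = np.linspace(0, 1, n)
    xx, yy = np.meshgrid(ax, ax)
    slice_pts = np.column_stack([xx.ravel(), yy.ravel(), np.full(n * n, 0.5)])

    d2 = ((slice_pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1).reshape(n, n)       # pixel map of grain labels

    areas = np.bincount(labels.ravel(), minlength=len(seeds)) / (n * n)
    print("grains visible in section:", np.count_nonzero(areas),
          "mean sectional area: %.4f" % areas[areas > 0].mean())
    ```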

  14. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems in airport security and other strategic locations. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of low and high energy X-ray images by Gaussian Mixture Models (GMM). The material label of each object pixel is determined from the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in a neighborhood of the image as a post-processing stage. As a pre-processing procedure, the high and low energy X-ray images are enhanced with background-removal and denoising stages. To improve the discrimination capability of the proposed material detection algorithm, the details of the low and high energy X-ray images are added to the constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
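
    A minimal sketch of the core detection step, fitting a per-category Gaussian mixture to (low, high) transmission pairs and labeling pixels by the highest likelihood; the training data here are synthetic.

    ```python
    # Sketch: per-category GMMs over (low, high) transmission pairs, with
    # pixels labeled by maximum log-likelihood.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(10)
    train = {"organic": rng.normal([0.7, 0.8], 0.05, size=(500, 2)),
             "metallic": rng.normal([0.2, 0.5], 0.05, size=(500, 2)),
             "mixed": rng.normal([0.45, 0.65], 0.05, size=(500, 2))}

    models = {name: GaussianMixture(n_components=2, random_state=0).fit(data)
              for name, data in train.items()}

    pixels = rng.uniform(0.1, 0.9, size=(5, 2))      # (low, high) measurements
    names = list(models)
    ll = np.column_stack([models[nm].score_samples(pixels) for nm in names])
    for p, k in zip(pixels, ll.argmax(axis=1)):
        print(np.round(p, 2), "->", names[k])
    ```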

  15. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920

  16. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  17. Data-driven Applications for the Sun-Earth System

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.

    2016-12-01

    Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first principles physical models. However, equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate fusion of data with models. This presentation will provide an overview of various existing as well as newly developed data-driven techniques adopted from atmospheric and oceanic sciences that proved to be useful for space physics applications, such as computationally efficient implementation of Kalman Filter in radiation belts modeling, solar wind gap-filling by Singular Spectrum Analysis, and low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will be also demonstrated.

  18. Exact Local Correlations and Full Counting Statistics for Arbitrary States of the One-Dimensional Interacting Bose Gas

    NASA Astrophysics Data System (ADS)

    Bastianello, Alvise; Piroli, Lorenzo; Calabrese, Pasquale

    2018-05-01

    We derive exact analytic expressions for the n-body local correlations in the one-dimensional Bose gas with contact repulsive interactions (Lieb-Liniger model) in the thermodynamic limit. Our results are valid for arbitrary states of the model, including ground and thermal states, stationary states after a quantum quench, and nonequilibrium steady states arising in transport settings. Calculations for these states are explicitly presented and physical consequences are critically discussed. We also show that the n-body local correlations are directly related to the full counting statistics for the particle-number fluctuations in a short interval, for which we provide an explicit analytic result.

  19. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  20. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  1. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
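
    A minimal sketch of the screening step follows (synthetic data; the spline settings and ranking rule are illustrative only): each candidate regulator is fit one at a time against the estimated derivative of the target trajectory, and regulators are ranked by marginal nonlinear fit quality.

        # Hedged miniature of the two-stage idea: smooth/estimate a derivative,
        # then screen candidate regulators with one-at-a-time spline fits.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 40)
        p = 50                                    # number of candidate regulator genes
        X = rng.standard_normal((p, t.size)).cumsum(axis=1)  # fake expression trajectories
        # here the synthetic `target` plays the role of the smoothed derivative estimate
        target = np.sin(0.5 * X[3]) + 0.1 * rng.standard_normal(t.size)

        scores = []
        for j in range(p):
            order = np.argsort(X[j])
            fit = UnivariateSpline(X[j][order], target[order], k=3, s=t.size)
            resid = target[order] - fit(X[j][order])
            scores.append(1 - resid.var() / target.var())   # marginal R^2
        top = np.argsort(scores)[::-1][:5]
        print("top-ranked candidate regulators:", top)      # gene 3 should rank first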

  2. Multifactor-Dimensionality Reduction Reveals High-Order Interactions among Estrogen-Metabolism Genes in Sporadic Breast Cancer

    PubMed Central

    Ritchie, Marylyn D.; Hahn, Lance W.; Roodi, Nady; Bailey, L. Renee; Dupont, William D.; Parl, Fritz F.; Moore, Jason H.

    2001-01-01

    One of the greatest challenges facing human geneticists is the identification and characterization of susceptibility genes for common complex multifactorial human diseases. This challenge is partly due to the limitations of parametric-statistical methods for detection of gene effects that are dependent solely or partially on interactions with other genes and with environmental exposures. We introduce multifactor-dimensionality reduction (MDR) as a method for reducing the dimensionality of multilocus information, to improve the identification of polymorphism combinations associated with disease risk. The MDR method is nonparametric (i.e., no hypothesis about the value of a statistical parameter is made), is model-free (i.e., it assumes no particular inheritance model), and is directly applicable to case-control and discordant-sib-pair studies. Using simulated case-control data, we demonstrate that MDR has reasonable power to identify interactions among two or more loci in relatively small samples. When it was applied to a sporadic breast cancer case-control data set, in the absence of any statistically significant independent main effects, MDR identified a statistically significant high-order interaction among four polymorphisms from three different estrogen-metabolism genes. To our knowledge, this is the first report of a four-locus interaction associated with a common complex multifactorial disease. PMID:11404819
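
    The core MDR pooling step can be made concrete with a short, hedged re-implementation on simulated genotypes (not the paper's data): each multilocus genotype cell is labeled high or low risk by its case:control ratio, and the resulting one-dimensional attribute is scored by cross-validated classification error.

        # Illustrative re-implementation of the core MDR step on simulated data.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)
        n, n_snps = 400, 6
        G = rng.integers(0, 3, size=(n, n_snps))     # genotypes coded 0/1/2
        y = ((G[:, 0] * G[:, 1] % 2) ^ (rng.random(n) < 0.1)).astype(int)  # interacting pair + noise

        def cv_error(pair, folds=5):
            idx = np.arange(n); rng.shuffle(idx)
            errs = []
            for f in np.array_split(idx, folds):
                train = np.setdiff1d(idx, f)
                cells = {}
                for i in train:                      # case:control counts per genotype cell
                    key = tuple(G[i, list(pair)])
                    c = cells.setdefault(key, [0, 0]); c[y[i]] += 1
                hi = {k for k, (ctrl, case) in cells.items() if case > ctrl}
                pred = np.array([tuple(G[i, list(pair)]) in hi for i in f], dtype=int)
                errs.append(np.mean(pred != y[f]))
            return np.mean(errs)

        best = min(combinations(range(n_snps), 2), key=cv_error)
        print("best locus pair by cross-validated error:", best)   # expect (0, 1)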

  3. Three-dimensional biomechanical properties of human vocal folds: parameter optimization of a numerical model to match in vitro dynamics.

    PubMed

    Yang, Anxiong; Berry, David A; Kaltenbacher, Manfred; Döllinger, Michael

    2012-02-01

    The human voice signal originates from the vibrations of the two vocal folds within the larynx. The interactions of several intrinsic laryngeal muscles adduct and shape the vocal folds to facilitate vibration in response to airflow. Three-dimensional vocal fold dynamics are extracted from in vitro hemilarynx experiments and fitted by a numerical three-dimensional-multi-mass-model (3DM) using an optimization procedure. In this work, the 3DM dynamics are optimized over 24 experimental data sets to estimate biomechanical vocal fold properties during phonation. Accuracy of the optimization is verified by low normalized error (0.13 ± 0.02), high correlation (83% ± 2%), and reproducible subglottal pressure values. The optimized, 3DM parameters yielded biomechanical variations in tissue properties along the vocal fold surface, including variations in both the local mass and stiffness of vocal folds. That is, both mass and stiffness increased along the superior-to-inferior direction. These variations were statistically analyzed under different experimental conditions (e.g., an increase in tension as a function of vocal fold elongation and an increase in stiffness and a decrease in mass as a function of glottal airflow). The study showed that physiologically relevant vocal fold tissue properties, which cannot be directly measured during in vivo human phonation, can be captured using this 3D-modeling technique. © 2012 Acoustical Society of America
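
    A generic miniature of the fitting step (a single damped mass-spring element rather than the full 3DM; all parameter values are assumptions) shows how mass, stiffness, and damping can be recovered by least-squares trajectory matching. A reasonable initial guess matters for such oscillatory fits.

        # Hedged sketch: recover (m, k, c) of a damped oscillator from a trace.
        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import least_squares

        t = np.linspace(0, 0.05, 500)                  # 50 ms at 10 kHz
        def simulate(params):
            m, k, c = params
            def rhs(y, t):                             # m x'' + c x' + k x = 0
                return [y[1], -(c * y[1] + k * y[0]) / m]
            return odeint(rhs, [1e-3, 0.0], t)[:, 0]

        true = simulate([0.1e-3, 40.0, 0.002])         # assumed mass/stiffness/damping
        data = true + 2e-5 * np.random.default_rng(16).standard_normal(t.size)

        fit = least_squares(lambda p: simulate(p) - data, x0=[0.2e-3, 20.0, 0.001],
                            bounds=([1e-5, 1.0, 1e-4], [1e-2, 200.0, 0.1]))
        print("estimated (m, k, c):", fit.x)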

  4. Three-dimensional biomechanical properties of human vocal folds: Parameter optimization of a numerical model to match in vitro dynamics

    PubMed Central

    Yang, Anxiong; Berry, David A.; Kaltenbacher, Manfred; Döllinger, Michael

    2012-01-01

    The human voice signal originates from the vibrations of the two vocal folds within the larynx. The interactions of several intrinsic laryngeal muscles adduct and shape the vocal folds to facilitate vibration in response to airflow. Three-dimensional vocal fold dynamics are extracted from in vitro hemilarynx experiments and fitted by a numerical three-dimensional-multi-mass-model (3DM) using an optimization procedure. In this work, the 3DM dynamics are optimized over 24 experimental data sets to estimate biomechanical vocal fold properties during phonation. Accuracy of the optimization is verified by low normalized error (0.13 ± 0.02), high correlation (83% ± 2%), and reproducible subglottal pressure values. The optimized, 3DM parameters yielded biomechanical variations in tissue properties along the vocal fold surface, including variations in both the local mass and stiffness of vocal folds. That is, both mass and stiffness increased along the superior-to-inferior direction. These variations were statistically analyzed under different experimental conditions (e.g., an increase in tension as a function of vocal fold elongation and an increase in stiffness and a decrease in mass as a function of glottal airflow). The study showed that physiologically relevant vocal fold tissue properties, which cannot be directly measured during in vivo human phonation, can be captured using this 3D-modeling technique. PMID:22352511

  5. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
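
    The discrete-time PP-GLM construction can be sketched in a few lines (simulated spikes and a plain lagged-history basis; the review's analyses use recorded ensembles and richer history bases): spike counts are modeled as Poisson with a log rate linear in the recent spiking history of all neurons.

        # Hedged sketch of a discrete-time point-process GLM (nonlinear Hawkes flavor).
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(2)
        T, n, lags = 5000, 3, 5
        W = np.zeros((n, n, lags)); W[0, 1, 0] = 0.8   # neuron 1 excites neuron 0
        base = -3.0
        spikes = np.zeros((T, n))
        for t in range(lags, T):
            hist = spikes[t - lags:t][::-1]            # lags x n, most recent first
            lam = np.exp(base + np.einsum('ijk,kj->i', W, hist))
            spikes[t] = rng.poisson(lam)

        # design matrix: flattened spiking history of all neurons for each time bin
        X = np.stack([spikes[t - lags:t][::-1].ravel() for t in range(lags, T)])
        y = spikes[lags:, 0]                           # fit neuron 0
        glm = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)
        coupling = glm.coef_.reshape(lags, n)
        print("recovered lag-1 coupling from neuron 1:", coupling[0, 1])  # ~0.8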

  6. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  7. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
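
    The decomposition idea can be illustrated with a first-order anchored-ANOVA surrogate (the test function and anchor point are invented): the high-dimensional model is approximated by its anchor value plus one-dimensional corrections, so only one-dimensional collocation grids are ever evaluated.

        # Minimal sketch of a first-order anchored-ANOVA surrogate.
        import numpy as np

        d = 8
        anchor = np.full(d, 0.5)
        f = lambda x: np.exp(-np.sum((x - 0.3) ** 2))   # stand-in expensive model

        nodes = np.linspace(0, 1, 9)                    # 1D collocation grid
        f0 = f(anchor)
        # tabulate the 1D components f_i(x_i) = f(anchor with x_i varied) - f0
        comp = np.empty((d, nodes.size))
        for i in range(d):
            for a, xi in enumerate(nodes):
                x = anchor.copy(); x[i] = xi
                comp[i, a] = f(x) - f0

        def f_anova1(x):
            """First-order anchored ANOVA surrogate, interpolating each 1D component."""
            return f0 + sum(np.interp(x[i], nodes, comp[i]) for i in range(d))

        x_test = np.random.default_rng(3).random(d)
        print("true:", f(x_test), "ANOVA-1 surrogate:", f_anova1(x_test))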

  8. Statistical Mechanics of Prion Diseases

    NASA Astrophysics Data System (ADS)

    Slepoy, A.; Singh, R. R.; Pázmándi, F.; Kulkarni, R. V.; Cox, D. L.

    2001-07-01

    We present a two-dimensional, lattice based, protein-level statistical mechanical model for prion diseases (e.g., mad cow disease) with concomitant prion protein misfolding and aggregation. Our studies lead us to the hypothesis that the observed broad incubation time distribution in epidemiological data reflects fluctuation-dominated growth seeded by a few nanometer-scale aggregates, while the much narrower incubation time distributions for inoculated lab animals arise from statistical self-averaging. We model "species barriers" to prion infection and assess a related treatment protocol.

  9. Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anosov, O.L.; Butkovskii, O.Y.; Kadtke, J.

    We report preliminary results on describing the dependence of heart rhythm variability on the stress level by using qualitative, low dimensional models. The reconstruction of macroscopic heart models yielding the duration of cardio cycles (RR intervals) was based on actual clinical data. Our results show that the coefficients of the low dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy indirect method for determining the anaerobic threshold. This method could replace costly and invasive traditional methods such as gas analysis and blood tests. © 1997 American Institute of Physics.
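
    The analysis pattern can be sketched as follows (entirely synthetic series; the "metabolic transition" planted at stress level 5 is invented purely to show the idea): fit a low-order autoregressive model to the RR intervals recorded at each stress level and track how the coefficients change.

        # Illustrative sketch: AR coefficients of RR-interval series vs stress level.
        import numpy as np

        rng = np.random.default_rng(4)
        order = 3

        def ar_coeffs(rr, p=order):
            """Least-squares AR(p) fit to a demeaned RR-interval series."""
            rr = rr - rr.mean()
            X = np.column_stack([rr[p - k - 1:-k - 1] for k in range(p)])
            return np.linalg.lstsq(X, rr[p:], rcond=None)[0]

        for stress in range(10):
            a1 = 0.5 + 0.3 * np.exp(-(stress - 5) ** 2)   # fake extremum near threshold
            rr = np.zeros(2000); rr[:order] = 0.8
            for t in range(order, rr.size):
                rr[t] = a1 * rr[t - 1] - 0.2 * rr[t - 2] + 0.56 + 0.02 * rng.standard_normal()
            print(f"stress {stress}: AR coefficients {np.round(ar_coeffs(rr), 3)}")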

  10. Predicting Viral Infection From High-Dimensional Biomarker Trajectories

    PubMed Central

    Chen, Minhua; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S.; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2013-01-01

    There is often interest in predicting an individual’s latent health status based on high-dimensional biomarkers that vary over time. Motivated by time-course gene expression array data that we have collected in two influenza challenge studies performed with healthy human volunteers, we develop a novel time-aligned Bayesian dynamic factor analysis methodology. The time course trajectories in the gene expressions are related to a relatively low-dimensional vector of latent factors, which vary dynamically starting at the latent initiation time of infection. Using a nonparametric cure rate model for the latent initiation times, we allow selection of the genes in the viral response pathway, variability among individuals in infection times, and a subset of individuals who are not infected. As we demonstrate using held-out data, this statistical framework allows accurate predictions of infected individuals in advance of the development of clinical symptoms, without labeled data and even when the number of biomarkers vastly exceeds the number of individuals under study. Biological interpretation of several of the inferred pathways (factors) is provided. PMID:23704802

  11. Comparisons of Three-Dimensional Variational Data Assimilation and Model Output Statistics in Improving Atmospheric Chemistry Forecasts

    NASA Astrophysics Data System (ADS)

    Ma, Chaoqun; Wang, Tijian; Zang, Zengliang; Li, Zhijin

    2018-07-01

    Atmospheric chemistry models usually perform badly in forecasting wintertime air pollution because of their uncertainties. Generally, such uncertainties can be decreased effectively by techniques such as data assimilation (DA) and model output statistics (MOS). However, the relative importance and combined effects of the two techniques have not been clarified. Here, a one-month air quality forecast with the Weather Research and Forecasting-Chemistry (WRF-Chem) model was carried out in a virtually operational setup focusing on Hebei Province, China. Meanwhile, three-dimensional variational (3DVar) DA and MOS based on one-dimensional Kalman filtering were implemented separately and simultaneously to investigate their performance in improving the model forecast. Comparison with observations shows that the chemistry forecast with MOS outperforms that with 3DVar DA, which could be seen in all the species tested over the whole 72 forecast hours. Combined use of both techniques does not guarantee a better forecast than MOS only, with the improvements and degradations being small and appearing rather randomly. Results indicate that the implementation of MOS is more suitable than 3DVar DA in improving the operational forecasting ability of WRF-Chem.
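
    A minimal sketch of the MOS component (a one-dimensional Kalman filter treating the forecast bias as a slowly varying state; all noise variances and the synthetic series are assumptions, not values from the study):

        # Hedged sketch: Kalman-filter MOS bias correction of a raw forecast.
        import numpy as np

        rng = np.random.default_rng(5)
        days = 60
        truth = 50 + 20 * np.sin(np.arange(days) / 9.0)   # synthetic pollutant series
        raw = truth + 15 + 8 * rng.standard_normal(days)  # model with a +15 bias

        bias, P = 0.0, 10.0        # state estimate and its variance
        Q, R = 0.5, 64.0           # process / observation noise variances (assumed)
        corrected = np.empty(days)
        for t in range(days):
            corrected[t] = raw[t] - bias                  # apply current bias estimate
            P += Q                                        # predict (random-walk bias)
            err = raw[t] - truth[t]                       # today's observed forecast error
            K = P / (P + R)                               # Kalman gain
            bias += K * (err - bias)                      # update
            P *= (1 - K)

        print("raw RMSE:      ", np.sqrt(np.mean((raw - truth) ** 2)))
        print("corrected RMSE:", np.sqrt(np.mean((corrected - truth) ** 2)))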

  12. A Multidimensional Scaling Approach to Dimensionality Assessment for Measurement Instruments Modeled by Multidimensional Item Response Theory

    ERIC Educational Resources Information Center

    Toro, Maritsa

    2011-01-01

    The statistical assessment of dimensionality provides evidence of the underlying constructs measured by a survey or test instrument. This study focuses on educational measurement, specifically tests comprised of items described as multidimensional. That is, items that require examinee proficiency in multiple content areas and/or multiple cognitive…

  13. Collective Behaviors in Spatially Extended Systems with Local Interactions and Synchronous Updating

    NASA Astrophysics Data System (ADS)

    ChatÉ, H.; Manneville, P.

    1992-01-01

    Assessing the extent to which dynamical systems with many degrees of freedom can be described within a thermodynamics formalism is a problem that currently attracts much attention. In this context, synchronously updated regular lattices of identical, chaotic elements with local interactions are promising models for which statistical mechanics may be hoped to provide some insights. This article presents a large class of cellular automata rules and coupled map lattices of the above type in space dimensions d = 2 to 6. Such simple models can be approached by a mean-field approximation which usually reduces the dynamics to that of a map governing the evolution of some extensive density. While this approximation is exact in the d = ∞ limit, where macroscopic variables must display the time-dependent behavior of the mean-field map, basic intuition from equilibrium statistical mechanics rules out any such behavior in low-dimensional systems, since it would involve the collective motion of locally disordered elements. The models studied are chosen to be as close as possible to mean-field conditions, i.e., rather high space dimension, large connectivity, and equal-weight coupling between sites. While the mean-field evolution is never observed, a new type of non-trivial collective behavior is found, at odds with the predictions of equilibrium statistical mechanics. Both in the cellular automata models and in the coupled map lattices, macroscopic variables frequently display a non-transient, time-dependent, low-dimensional dynamics emerging out of local disorder. Striking examples are period-3 cycles in two-state cellular automata and a Hopf bifurcation for a d = 5 lattice of coupled logistic maps. An extensive account of the phenomenology is given, including a catalog of behaviors, classification tables for the cellular automata rules, and bifurcation diagrams for the coupled map lattices. The observed underlying dynamics is accompanied by an intrinsic quasi-Gaussian noise (stemming from the local disorder) which disappears in the infinite-size limit. The collective behaviors constitute a robust phenomenon, resisting external noise, small changes in the local dynamics, and modifications of the initial and boundary conditions. Synchronous updating, high space dimension, and the regularity of connections are shown to be crucial ingredients in the subtle build-up of correlations giving rise to the collective motion. The discussion stresses the need for a theoretical understanding that neither equilibrium statistical mechanics nor higher-order mean-field approximations are able to provide.
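
    A small-scale companion sketch (a 2D lattice is used only to keep the example fast; the paper works in d = 2 to 6) simulates a synchronously updated lattice of diffusively coupled logistic maps and tracks the instantaneous mean field, whose residual fluctuations reveal any non-trivial collective behavior.

        # Sketch: synchronously updated coupled map lattice and its mean field.
        import numpy as np

        rng = np.random.default_rng(6)
        L, mu, eps, steps = 64, 1.9, 0.3, 500   # lattice size, map parameter, coupling
        f = lambda x: 1 - mu * x ** 2           # logistic map on [-1, 1]
        x = rng.uniform(-1, 1, size=(L, L))

        mean_field = []
        for t in range(steps):
            fx = f(x)
            neigh = (np.roll(fx, 1, 0) + np.roll(fx, -1, 0) +
                     np.roll(fx, 1, 1) + np.roll(fx, -1, 1))
            x = (1 - eps) * fx + (eps / 4) * neigh      # synchronous diffusive update
            mean_field.append(x.mean())

        # a non-trivial collective state shows up as persistent structure here
        print("std of mean field over last 100 steps:", np.std(mean_field[-100:]))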

  14. Phases and approximations of baryonic popcorn in a low-dimensional analogue of holographic QCD

    NASA Astrophysics Data System (ADS)

    Elliot-Ripley, Matthew

    2015-07-01

    The Sakai-Sugimoto model is the most pre-eminent model of holographic QCD, in which baryons correspond to topological solitons in a five-dimensional bulk spacetime. Recently it has been shown that a single soliton in this model can be well approximated by a flat-space self-dual Yang-Mills instanton with a small size, although studies of multi-solitons and solitons at finite density are currently beyond numerical computations. A lower-dimensional analogue of the model has also been studied in which the Sakai-Sugimoto soliton is replaced by a baby Skyrmion in three spacetime dimensions with a warped metric. The lower dimensionality of this model means that full numerical field calculations are possible, and static multi-solitons and solitons at finite density were both investigated, in particular the baryonic popcorn phase transitions at high densities. Here we present and investigate an alternative lower-dimensional analogue of the Sakai-Sugimoto model in which the Sakai-Sugimoto soliton is replaced by an O(3)-sigma model instanton in a warped three-dimensional spacetime stabilized by a massive vector meson. A more detailed range of baryonic popcorn phase transitions are found, and the low-dimensional model is used as a testing ground to check the validity of common approximations made in the full five-dimensional model, namely approximating fields using their flat-space equations of motion, and performing a leading order expansion in the metric.

  15. Statistical Machine Learning for Structured and High Dimensional Data

    DTIC Science & Technology

    2014-09-17

    Final technical report AFRL-OSR-VA-TR-2014-0234 (December 2009 - August 2014), Carnegie Mellon University; principal investigator John Lafferty. The project developed statistical machine learning methods for structured and high-dimensional data, including research in resource-constrained statistical estimation.

  16. Experimental Validation of Plastic Mandible Models Produced by a "Low-Cost" 3-Dimensional Fused Deposition Modeling Printer.

    PubMed

    Maschio, Federico; Pandya, Mirali; Olszewski, Raphael

    2016-03-22

    The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modeling printer. Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field.
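
    The two reported accuracy metrics are easy to state precisely; here is a small sketch with made-up distance pairs standing in for the measured ones:

        # MAD = mean absolute replica-vs-dry difference; MDE = that difference
        # relative to the true distance. Values below are illustrative only.
        import numpy as np

        dry     = np.array([6.1, 8.4, 10.2, 11.7, 15.3, 22.8, 30.5])  # mm, invented
        replica = np.array([6.4, 8.1, 10.6, 11.3, 15.6, 22.5, 30.9])  # mm, invented

        mad = np.mean(np.abs(replica - dry))
        mde = np.mean(np.abs(replica - dry) / dry) * 100
        print(f"MAD = {mad:.2f} mm, MDE = {mde:.2f}%")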

  17. Experimental Validation of Plastic Mandible Models Produced by a “Low-Cost” 3-Dimensional Fused Deposition Modeling Printer

    PubMed Central

    Maschio, Federico; Pandya, Mirali; Olszewski, Raphael

    2016-01-01

    Background The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modeling printer. Material/Methods Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. Results The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Conclusions Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field. PMID:27003456

  18. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program (GARP) Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
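
    The Monte Carlo logic can be conveyed with a toy stand-in for the GATE-tuned stochastic model (an AR(1) proxy for time-correlated, intermittent rain; all scales are arbitrary): sample the field only at satellite revisit times and accumulate the distribution of relative errors in the monthly mean.

        # Toy Monte Carlo of intermittent satellite sampling of a rain series.
        import numpy as np

        rng = np.random.default_rng(7)
        steps, revisit, months = 720, 12, 200   # hourly steps, ~12 h revisit
        errors = []
        for m in range(months):
            r, rain = 0.0, np.empty(steps)
            for t in range(steps):              # AR(1) proxy for correlated rain
                r = 0.95 * r + rng.standard_normal()
                rain[t] = max(r, 0.0) ** 2      # intermittent, skewed field
            true_mean = rain.mean()
            sampled_mean = rain[::revisit].mean()
            errors.append((sampled_mean - true_mean) / true_mean)

        errors = np.array(errors)
        print(f"relative sampling error: mean {errors.mean():+.3f}, std {errors.std():.3f}")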

  19. Estimating the functional dimensionality of neural representations.

    PubMed

    Ahlheim, Christiane; Love, Bradley C

    2018-06-07

    Recent advances in multivariate fMRI analysis stress the importance of information inherent to voxel patterns. Key to interpreting these patterns is estimating the underlying dimensionality of neural representations. Dimensions may correspond to psychological dimensions, such as length and orientation, or involve other coding schemes. Unfortunately, the noise structure of fMRI data inflates dimensionality estimates and thus makes it difficult to assess the true underlying dimensionality of a pattern. To address this challenge, we developed a novel approach to identify brain regions that carry reliable task-modulated signal and to derive an estimate of the signal's functional dimensionality. We combined singular value decomposition with cross-validation to find the best low-dimensional projection of a pattern of voxel responses at a single-subject level. Goodness of the low-dimensional reconstruction is measured as Pearson correlation with a test set, which allows testing for significance of the low-dimensional reconstruction across participants. Using hierarchical Bayesian modeling, we derive the best estimate and associated uncertainty of the underlying dimensionality across participants. We validated our method on simulated data of varying underlying dimensionality, showing that the recovered dimensionalities closely match the true ones. We then applied our method to three published fMRI data sets, all involving processing of visual stimuli. The results highlight three possible applications of estimating the functional dimensionality of neural data. Firstly, it can aid evaluation of model-based analyses by revealing which areas express reliable, task-modulated signal that could be missed by specific models. Secondly, it can reveal functional differences across brain regions. Thirdly, knowing the functional dimensionality allows assessing task-related differences in the complexity of neural patterns. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
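
    The core estimator admits a compact hedged sketch (simulated voxel patterns with a planted rank; sizes and noise levels are illustrative): SVD one half of the data, reconstruct it at every rank, and pick the rank whose reconstruction best correlates with the held-out half.

        # Hedged sketch: functional dimensionality via SVD + cross-validation.
        import numpy as np

        rng = np.random.default_rng(8)
        conditions, voxels, true_k = 16, 200, 3
        latent = rng.standard_normal((conditions, true_k)) @ rng.standard_normal((true_k, voxels))
        run_a = latent + 0.8 * rng.standard_normal(latent.shape)   # two independent "runs"
        run_b = latent + 0.8 * rng.standard_normal(latent.shape)

        U, s, Vt = np.linalg.svd(run_a, full_matrices=False)
        corrs = []
        for k in range(1, conditions):
            recon = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]             # rank-k reconstruction
            corrs.append(np.corrcoef(recon.ravel(), run_b.ravel())[0, 1])
        print("estimated dimensionality:", int(np.argmax(corrs)) + 1)   # expect ~3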

  20. Laser Metalworking Technology Transfer.

    DTIC Science & Technology

    1986-01-01

    The report documents heat-flow programs for laser metalworking written for the Texas Instruments TI 59 programmable calculator with printer: a one-dimensional heat-flow model for the high processing-speed range, which should not be used at low processing speeds, and a three-dimensional model for the low-speed ranges.

  1. One-dimensional pion, kaon, and proton femtoscopy in Pb-Pb collisions at √sNN = 2.76 TeV

    DOE PAGES

    Adam, J.; Adamová, D.; Aggarwal, M. M.; ...

    2015-11-19

    The size of the particle emission region in high-energy collisions can be deduced using the femtoscopic correlations of particle pairs at low relative momentum. Such correlations arise due to quantum statistics and Coulomb and strong final state interactions. In this paper, results are presented from femtoscopic analyses of π±π±, K±K±, K0SK0S, pp, and p̄p̄ correlations from Pb-Pb collisions at √sNN = 2.76 TeV by the ALICE experiment at the LHC. One-dimensional radii of the system are extracted from correlation functions in terms of the invariant momentum difference of the pair. The comparison of the measured radii with the predictions from a hydrokinetic model is discussed. The pion and kaon source radii display a monotonic decrease with increasing average pair transverse mass mT, which is consistent with hydrodynamic model predictions for central collisions. Lastly, the kaon and proton source sizes can be reasonably described by approximate mT scaling.

  2. One-dimensional pion, kaon, and proton femtoscopy in Pb-Pb collisions at √{sNN}=2.76 TeV

    NASA Astrophysics Data System (ADS)

    Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aimo, I.; Aiola, S.; Ajaz, M.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Armesto, N.; Arnaldi, R.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Bach, M.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baltasar Dos Santos Pedrosa, F.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blanco, F.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botje, M.; Botta, E.; Böttger, S.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Cavicchioli, C.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Chunhui, Z.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; D'Erasmo, G.; di Bari, D.; di Mauro, A.; di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Dobrowolski, T.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Eschweiler, D.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Felea, D.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Gomez Ramirez, A.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gulkanyan, H.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hansen, A.; Harris, J. W.; Hartmann, H.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hilden, T. E.; Hillemanns, H.; Hippolyte, B.; Hosokawa, R.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Ilkiv, I.; Inaba, M.; Ionita, C.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacobs, P. M.; Jadlovska, S.; Jahnke, C.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, K. H.; Khan, M. M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Kox, S.; Koyithatta Meethaleveedu, G.; Kral, J.; Králik, I.; Kravčáková, A.; Krelina, M.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kugathasan, T.; Kuhn, C.; Kuijer, P. G.; Kulakov, I.; Kumar, J.; Kumar, L.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Legrand, I.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. 
I.; Loggins, V. R.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Luz, P. H. F. N. D.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manceau, L.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Martynov, Y.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Masui, H.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; McDonald, D.; Meddi, F.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Morando, M.; Moreira de Godoy, D. A.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Murray, S.; Musa, L.; Musinsky, J.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S. K.; Pan, J.; Pandey, A. K.; Pant, D.; Papcun, P.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Peitzmann, T.; Pereira da Costa, H.; Pereira de Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Real, J. S.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Rettig, F.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rivetti, A.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. 
K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salgado, C. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sanchez Castro, X.; Šándor, L.; Sandoval, A.; Sano, M.; Santagati, G.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Seeder, K. S.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Seo, J.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Soltz, R.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stefanek, G.; Steinpreis, M.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Sultanov, R.; Šumbera, M.; Symons, T. J. M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Takahashi, J.; Tanaka, N.; Tangaro, M. A.; Tapia Takaki, J. D.; Tarantola Peloni, A.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Wang, Y.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Wessels, J. P.; Westerhoff, U.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yang, H.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Alice Collaboration

    2015-11-01

    The size of the particle emission region in high-energy collisions can be deduced using the femtoscopic correlations of particle pairs at low relative momentum. Such correlations arise due to quantum statistics and Coulomb and strong final state interactions. In this paper, results are presented from femtoscopic analyses of π±π±, K±K±, K0SK0S, pp, and p̄p̄ correlations from Pb-Pb collisions at √sNN = 2.76 TeV by the ALICE experiment at the LHC. One-dimensional radii of the system are extracted from correlation functions in terms of the invariant momentum difference of the pair. The comparison of the measured radii with the predictions from a hydrokinetic model is discussed. The pion and kaon source radii display a monotonic decrease with increasing average pair transverse mass mT, which is consistent with hydrodynamic model predictions for central collisions. The kaon and proton source sizes can be reasonably described by approximate mT scaling.
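
    The radius extraction step can be illustrated with a hedged fit to synthetic data (a Gaussian one-dimensional correlation shape; the experimental analysis includes Coulomb and strong final-state corrections omitted here):

        # Sketch: extract a 1D source radius from C(q) = N(1 + lambda*exp(-R^2 q^2)).
        import numpy as np
        from scipy.optimize import curve_fit

        hbarc = 0.1973                                 # GeV·fm, converts R to fm
        def C(q, N, lam, R):                           # q in GeV/c, R in fm
            return N * (1 + lam * np.exp(-(R * q / hbarc) ** 2))

        q = np.linspace(0.005, 0.3, 60)
        data = C(q, 1.0, 0.5, 6.0) + 0.005 * np.random.default_rng(9).standard_normal(q.size)
        popt, pcov = curve_fit(C, q, data, p0=[1.0, 0.5, 5.0])
        print(f"R_inv = {popt[2]:.2f} fm  (lambda = {popt[1]:.2f})")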

  3. Nonequilibrium critical behavior of model statistical systems and methods for the description of its features

    NASA Astrophysics Data System (ADS)

    Prudnikov, V. V.; Prudnikov, P. V.; Mamonova, M. V.

    2017-11-01

    This paper reviews features in critical behavior of far-from-equilibrium macroscopic systems and presents current methods of describing them by referring to some model statistical systems such as the three-dimensional Ising model and the two-dimensional XY model. The paper examines the critical relaxation of homogeneous and structurally disordered systems subjected to abnormally strong fluctuation effects involved in ordering processes in solids at second-order phase transitions. Interest in such systems is due to the aging properties and fluctuation-dissipation theorem violations predicted for and observed in systems slowly evolving from a nonequilibrium initial state. It is shown that these features of nonequilibrium behavior show up in the magnetic properties of magnetic superstructures consisting of alternating nanoscale-thick magnetic and nonmagnetic layers and can be observed not only near the film’s critical ferromagnetic ordering temperature Tc, but also over the wide temperature range T ⩽ Tc.

  4. Learning multivariate distributions by competitive assembly of marginals.

    PubMed

    Sánchez-Vega, Francisco; Younes, Laurent; Geman, Donald

    2013-02-01

    We present a new framework for learning high-dimensional multivariate probability distributions from estimated marginals. The approach is motivated by compositional models and Bayesian networks, and designed to adapt to small sample sizes. We start with a large, overlapping set of elementary statistical building blocks, or "primitives," which are low-dimensional marginal distributions learned from data. Each variable may appear in many primitives. Subsets of primitives are combined in a Lego-like fashion to construct a probabilistic graphical model; only a small fraction of the primitives will participate in any valid construction. Since primitives can be precomputed, parameter estimation and structure search are separated. Model complexity is controlled by strong biases; we adapt the primitives to the amount of training data and impose rules which restrict the merging of them into allowable compositions. The likelihood of the data decomposes into a sum of local gains, one for each primitive in the final structure. We focus on a specific subclass of networks which are binary forests. Structure optimization corresponds to an integer linear program and the maximizing composition can be computed for reasonably large numbers of variables. Performance is evaluated using both synthetic data and real datasets from natural language processing and computational biology.
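
    A loosely related sketch conveys the flavor of assembling pairwise building blocks (a Chow-Liu-style maximum-weight spanning construction, a simplification standing in for the paper's integer linear program; data and planted structure are simulated):

        # Sketch: score pairwise "primitives" by estimated mutual information and
        # assemble them into a maximum-weight spanning structure.
        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(14)
        n, d = 2000, 6
        X = rng.standard_normal((n, d))
        X[:, 1] += X[:, 0]; X[:, 2] += X[:, 1]; X[:, 4] += 0.5 * X[:, 3]  # planted edges

        def mutual_info(a, b, bins=12):
            p, _, _ = np.histogram2d(a, b, bins=bins)
            p = p / p.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

        MI = np.zeros((d, d))
        for i in range(d):
            for j in range(i + 1, d):
                MI[i, j] = mutual_info(X[:, i], X[:, j])

        tree = minimum_spanning_tree(-MI)              # negate: maximum-weight tree
        edges = np.transpose(tree.nonzero())
        print("assembled edges:", [tuple(e) for e in edges])  # planted pairs should appear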

  5. Development of a two-dimensional zonally averaged statistical-dynamical model. III - The parameterization of the eddy fluxes of heat and moisture

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Yao, Mao-Sung

    1990-01-01

    A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.

  6. Correlation Dimension Estimates of Global and Local Temperature Data.

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    1995-11-01

    The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias, although the data are not a purely random stochastic process either. The dimension of the climatic attractor may be significantly larger than 10.
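
    The quantity underlying such estimates is the correlation sum; here is a hedged sketch using the plain Grassberger-Procaccia log-log slope (rather than the Hill variant used in the paper) on a delay-embedded logistic map, where a low dimension is known to exist:

        # Sketch: correlation-sum slope as a correlation dimension estimate.
        import numpy as np

        rng = np.random.default_rng(10)
        x = np.empty(5000); x[0] = 0.4
        for t in range(1, x.size):
            x[t] = 4 * x[t - 1] * (1 - x[t - 1])       # fully chaotic logistic map
        m = 3                                          # embedding dimension, delay 1
        emb = np.column_stack([x[i : x.size - (m - 1) + i] for i in range(m)])
        emb = emb[rng.choice(emb.shape[0], 1000, replace=False)]   # subsample for speed

        dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
        dists = dists[np.triu_indices_from(dists, k=1)]
        radii = np.logspace(-3, -0.5, 12)
        Cr = [(dists < r).mean() for r in radii]       # correlation sum C(r)
        slope = np.polyfit(np.log(radii), np.log(Cr), 1)[0]
        print("correlation dimension estimate:", round(slope, 2))  # expect near 1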

  7. Disorder-induced transparency in a one-dimensional waveguide side coupled with optical cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yongyou, E-mail: yyzhang@bit.edu.cn; Dong, Guangda; Zou, Bingsuo

    2014-05-07

    The influence of disorder on photon transmission is theoretically studied in a one-dimensional waveguide side coupled with a series of optical cavities. To this end, we propose the concept of disorder-induced transparency, which appears on a low-transmission spectral background. Two kinds of disorder are considered, namely, disorder in the optical cavity eigenfrequencies and in the relative phases between the side-coupled cavities. Both can induce optical transmission peaks on the low-transmission background. The statistical mean value of the transmission also increases as the disorder in the cavity eigenfrequencies and relative phases grows.
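
    A transfer-matrix sketch using textbook coupled-mode expressions (not code from the paper; the decay rate, detunings, and phases are assumed values) reproduces the qualitative effect, raising transmission inside the otherwise opaque band once disorder is added:

        # Sketch: 1D waveguide with N side-coupled single-mode cavities.
        import numpy as np

        rng = np.random.default_rng(15)
        gamma, N = 1.0, 8                    # cavity-waveguide decay rate, cavity count

        def transmission(omega, detunings, phases):
            M = np.eye(2, dtype=complex)
            for d0, phi in zip(detunings, phases):
                t = (omega - d0) / (omega - d0 + 1j * gamma)  # side-coupled cavity dip
                r = t - 1.0
                Mc = np.array([[t - r * r / t, r / t], [-r / t, 1 / t]])
                Mp = np.array([[np.exp(1j * phi), 0], [0, np.exp(-1j * phi)]])
                M = Mc @ Mp @ M
            return 1.0 / abs(M[1, 1]) ** 2

        omega = 0.05                         # probe just off the common resonance
        ordered = transmission(omega, np.zeros(N), np.full(N, np.pi / 2))
        disordered = np.mean([transmission(omega, 3 * rng.standard_normal(N),
                                           np.pi / 2 + rng.standard_normal(N))
                              for _ in range(200)])
        print(f"ordered T = {ordered:.2e}, disorder-averaged T = {disordered:.3f}")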

  8. Columnar organization of orientation domains in V1

    NASA Astrophysics Data System (ADS)

    Liedtke, Joscha; Wolf, Fred

    In the primary visual cortex (V1) of primates and carnivores, the functional architecture of basic stimulus selectivities appears similar across cortical layers (Hubel & Wiesel, 1962) justifying the use of two-dimensional cortical models and disregarding organization in the third dimension. Here we show theoretically that already small deviations from an exact columnar organization lead to non-trivial three-dimensional functional structures. We extend two-dimensional random field models (Schnabel et al., 2007) to a three-dimensional cortex by keeping a typical scale in each layer and introducing a correlation length in the third, columnar dimension. We examine in detail the three-dimensional functional architecture for different cortical geometries with different columnar correlation lengths. We find that (i) topological defect lines are generally curved and (ii) for large cortical curvatures closed loops and reconnecting topological defect lines appear. This theory extends the class of random field models by introducing a columnar dimension and provides a systematic statistical assessment of the three-dimensional functional architecture of V1 (see also (Tanaka et al., 2011)).
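
    The two-dimensional starting point of such models can be sketched directly (grid size, wavenumber, and bandwidth are illustrative): an orientation preference map is generated as a bandpass-filtered complex Gaussian random field, with the preferred orientation given by half the field's phase; a three-dimensional extension would stack copies of such maps with a finite correlation length across layers.

        # Sketch: 2D orientation map from a bandpass complex Gaussian random field.
        import numpy as np

        rng = np.random.default_rng(12)
        N, k0, dk = 256, 12.0, 2.0                 # grid, typical wavenumber, bandwidth
        kx, ky = np.meshgrid(np.fft.fftfreq(N, 1 / N), np.fft.fftfreq(N, 1 / N))
        kr = np.hypot(kx, ky)
        spectrum = np.exp(-((kr - k0) / dk) ** 2)  # annulus sets the map's typical scale

        noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        z = np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(noise))
        orientation = 0.5 * np.angle(z)            # preferred orientation in (-pi/2, pi/2]
        print("orientation map shape:", orientation.shape,
              "| pinwheels sit at zeros of z; |z| min:", np.abs(z).min())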

  9. Non-equilibrium statistical mechanics theory for the large scales of geophysical flows

    NASA Astrophysics Data System (ADS)

    Eric, S.; Bouchet, F.

    2010-12-01

    The aim of any theory of turbulence is to understand the statistical properties of the velocity field. As a huge number of degrees of freedom is involved, statistical mechanics is a natural approach. The self-organization of two-dimensional and geophysical turbulent flows is addressed based on statistical mechanics methods. We discuss classical and recent works on this subject, from the statistical mechanics basis of the theory up to applications to Jupiter's troposphere and ocean vortices and jets. The equilibrium microcanonical measure is built from the Liouville theorem. Important statistical mechanics concepts (large deviations, mean field approach) and thermodynamic concepts (ensemble inequivalence, negative heat capacity) are briefly explained and used to predict statistical equilibria for turbulent flows. This is applied to make quantitative models of two-dimensional turbulence, the Great Red Spot and other Jovian vortices, ocean jets like the Gulf Stream, and ocean vortices. A detailed comparison between these statistical equilibria and real flow observations will be discussed. We also present recent results for non-equilibrium situations, for which forces and dissipation are in a statistical balance. As an example, the concept of phase transition allows us to describe drastic changes of the whole system when a few external parameters are changed. References: F. Bouchet and E. Simonnet, Random changes of flow topology in two-dimensional and geophysical turbulence, Physical Review Letters 102 (2009), no. 9, 094504; F. Bouchet and J. Sommeria, Emergence of intense jets and Jupiter's Great Red Spot as maximum-entropy structures, Journal of Fluid Mechanics 464 (2002), 165-207; A. Venaille and F. Bouchet, Ocean rings and jets as statistical equilibrium states, submitted to JPO; F. Bouchet and A. Venaille, Statistical mechanics of two-dimensional and geophysical flows, submitted to Physics Reports. (Figure: non-equilibrium phase transitions for the 2D Navier-Stokes equations with stochastic forces; time series and probability density functions of the modulus of the largest-scale Fourier component show bistability between dipole and unidirectional flows, as predicted by statistical mechanics.)

  10. An intermediate-scale model for thermal hydrology in low-relief permafrost-affected landscapes

    DOE PAGES

    Jan, Ahmad; Coon, Ethan T.; Painter, Scott L.; ...

    2017-07-10

    Integrated surface/subsurface models for simulating the thermal hydrology of permafrost-affected regions in a warming climate have recently become available, but the computational demands of those new process-rich simulation tools have thus far limited their applications to one-dimensional or small two-dimensional simulations. We present a mixed-dimensional model structure for efficiently simulating surface/subsurface thermal hydrology in low-relief permafrost regions at watershed scales. The approach replaces a full three-dimensional system with a two-dimensional overland thermal hydrology system and a family of one-dimensional vertical columns, where each column represents a fully coupled surface/subsurface thermal hydrology system without lateral flow. The system is then operator split, sequentially updating the overland flow system without sources and the one-dimensional columns without lateral flows. We show that the approach is highly scalable, supports subcycling of different processes, and compares well with the corresponding fully three-dimensional representation at significantly less computational cost. Those advances enable recently developed representations of freezing soil physics to be coupled with thermal overland flow and surface energy balance at scales of hundreds of meters. Though developed and demonstrated for permafrost thermal hydrology, the mixed-dimensional model structure is applicable to integrated surface/subsurface thermal hydrology in general.
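
    The operator splitting can be conveyed schematically (pure diffusion stands in for the coupled thermal-hydrology physics; grid sizes and the time step are arbitrary): advance the 2D surface field without column sources, then advance each 1D column without lateral flow, coupling only through the shared top cell.

        # Schematic of the mixed-dimensional operator splitting.
        import numpy as np

        nx, ny, nz, dt = 20, 20, 30, 0.1
        surf = np.random.default_rng(13).random((nx, ny))     # 2D overland field
        cols = np.zeros((nx, ny, nz)); cols[:, :, 0] = surf   # 1D columns, cell 0 at surface

        for step in range(100):
            # (1) 2D overland update with no subsurface exchange (periodic, explicit)
            lap2 = (np.roll(surf, 1, 0) + np.roll(surf, -1, 0) +
                    np.roll(surf, 1, 1) + np.roll(surf, -1, 1) - 4 * surf)
            surf += dt * lap2
            # (2) independent 1D vertical updates in every column, no lateral flow
            cols[:, :, 0] = surf                              # hand surface state down
            lap1 = np.zeros_like(cols)
            lap1[:, :, 0] = cols[:, :, 1] - cols[:, :, 0]
            lap1[:, :, 1:-1] = cols[:, :, 2:] - 2 * cols[:, :, 1:-1] + cols[:, :, :-2]
            lap1[:, :, -1] = cols[:, :, -2] - cols[:, :, -1]  # no-flux bottom
            cols += dt * lap1
            surf = cols[:, :, 0].copy()                       # hand top-cell state back up

        print("surface mean/std after coupled evolution:", surf.mean(), surf.std())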

  11. Development of three-dimensional hollow elastic model for cerebral aneurysm clipping simulation enabling rapid and low cost prototyping.

    PubMed

    Mashiko, Toshihiro; Otani, Keisuke; Kawano, Ryutaro; Konno, Takehiko; Kaneko, Naoki; Ito, Yumiko; Watanabe, Eiju

    2015-03-01

    We developed a method for fabricating a three-dimensional hollow and elastic aneurysm model useful for surgical simulation and surgical training. In this article, we explain the hollow elastic model prototyping method and report on the effects of applying it to presurgical simulation and surgical training. A three-dimensional printer using acrylonitrile-butadiene-styrene as a modeling material was used to produce a vessel model. The prototype was then coated with liquid silicone. After the silicone had hardened, the acrylonitrile-butadiene-styrene was melted with xylene and removed, leaving an outer layer as a hollow elastic model. Simulations using the hollow elastic model were performed in 12 patients. In all patients, the clipping proceeded as scheduled. The surgeon's postoperative assessment was favorable in all cases. This method enables easy fabrication at low cost. Simulation using the hollow elastic model is thought to be useful for understanding of three-dimensional aneurysm structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Scientific data interpolation with low dimensional manifold model

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
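
    A hedged sketch of one building block mentioned above, the weighted graph Laplacian: missing samples are filled in by solving a harmonic (Laplacian) system with the observed samples as boundary data. This is far simpler than the authors' patch-manifold formulation, and the path-graph weights below are illustrative assumptions.

```python
import numpy as np

# Harmonic interpolation on a weighted graph: solve L_uu u_u = -L_uk u_k,
# where "u" indexes unknown nodes and "k" indexes observed nodes.

def graph_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# toy 1-D signal on a path graph with Gaussian edge weights
n = 50
x = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * x)

W = np.zeros((n, n))
for i in range(n - 1):
    w = np.exp(-((x[i] - x[i + 1]) ** 2) / 0.01)
    W[i, i + 1] = W[i + 1, i] = w
L = graph_laplacian(W)

known = np.arange(0, n, 5)                 # observed sample indices
unknown = np.setdiff1d(np.arange(n), known)

u = np.zeros(n)
u[known] = signal[known]
u[unknown] = np.linalg.solve(L[np.ix_(unknown, unknown)],
                             -L[np.ix_(unknown, known)] @ signal[known])
print("max interpolation error:", np.max(np.abs(u - signal)))
```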

  13. Nonlinear damping model for flexible structures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Zang, Weijian

    1990-01-01

    The study of the nonlinear damping problem of flexible structures is addressed. Both passive and active damping are considered, in both finite-dimensional and infinite-dimensional models. In the first part, the spectral density and the correlation function of a single-DOF nonlinear damping model are investigated. A formula for the spectral density is established with O(Γ₂) accuracy, based upon a Fokker-Planck technique and perturbation. The spectral density depends upon certain first-order statistics, which could be obtained if the stationary density is known. A method is proposed to find the approximate stationary density explicitly. In the second part, the spectral density of a multi-DOF nonlinear damping model is investigated. In the third part, an energy-type nonlinear damping model in an infinite-dimensional setting is studied.

  14. The Shock and Vibration Digest. Volume 16, Number 3

    DTIC Science & Technology

    1984-03-01

    [Fragmentary record; recoverable index entries: fluid-induced excitation and wind tunnel testing, by V.R. Miller and L.L. Faulkner, Flight Dynamics Lab., Air Force ...; prediction of noise transmission through a fuselage wall by the statistical energy analysis (SEA) method, with the fuselage structure represented as a series of curved panels; Probabilistic Fracture ...; and a statistical energy analysis (SEA) model involving a heavy floor, presented for structural systems only and demonstrated in three-dimensional form.]

  15. Peculiar spectral statistics of ensembles of trees and star-like graphs

    NASA Astrophysics Data System (ADS)

    Kovaleva, V.; Maximov, Yu; Nechaev, S.; Valba, O.

    2017-07-01

    In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the ‘Lifshitz singularity’ emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of the spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but rather has a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers could serve as simple disorder-free toy models of one-dimensional systems with quenched disorder.
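
    A small sketch for computing empirical spectral densities of adjacency matrices for a full binary tree and a p-branching star, assuming networkx is available; the exponential edge weighting used by the authors is omitted here.

```python
import numpy as np
import networkx as nx

# Spectral densities of adjacency matrices for two sparse graph families:
# a full binary tree and a star of p rays of length m.

def spectrum(G):
    A = nx.to_numpy_array(G)
    return np.linalg.eigvalsh(A)

tree = nx.balanced_tree(r=2, h=8)          # full binary tree, depth 8
p, m = 5, 20
star = nx.Graph()
for ray in range(p):                       # p chains glued at a common root
    nodes = [0] + [(ray, k) for k in range(1, m + 1)]
    nx.add_path(star, nodes)

for name, G in [("binary tree", tree), ("p-branching star", star)]:
    ev = spectrum(G)
    hist, edges = np.histogram(ev, bins=41, range=(-3, 3), density=True)
    peak = edges[np.argmax(hist)]
    print(f"{name}: {G.number_of_nodes()} nodes, "
          f"spectral density peaks near {peak:.2f}")
```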

  16. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    ... multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the ... correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space, then a space mapping transformation is ... required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space ...

  17. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finney, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which maps input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
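
    A schematic sketch of the metamodel idea above: evaluate an expensive single-cycle "simulation" at sampled parameter points and interpolate performance metrics anywhere in the space. The run_cycle function is a hypothetical stand-in for the CFD run, and Latin hypercube sampling replaces the paper's sparse-grid design for brevity.

```python
import numpy as np
from scipy.stats.qmc import LatinHypercube
from scipy.interpolate import RBFInterpolator

# Build a cheap surrogate of an expensive parameter-to-performance map.

def run_cycle(theta):
    # hypothetical stand-in for a CFD single-cycle simulation
    egr, spark = theta                        # toy operating parameters
    return np.sin(3 * egr) + 0.5 * spark**2   # stand-in for heat release

sampler = LatinHypercube(d=2, seed=0)
X = sampler.random(n=64)                      # design points in [0, 1]^2
y = np.array([run_cycle(t) for t in X])       # "expensive" evaluations

metamodel = RBFInterpolator(X, y)             # low-dimensional surrogate
X_new = sampler.random(n=5)
print(metamodel(X_new))                       # cheap interpolated predictions
```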

  18. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  19. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
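
    A minimal sketch of a rank-r separated representation fitted by regularized alternating least squares, assuming the solution is reduced to a matrix (rows: physical variable; columns: random realizations) and using a plain identity Tikhonov term in place of the authors' roughening matrix.

```python
import numpy as np

# Rank-r separated representation Y ~ A @ B fitted by alternating ridge
# (Tikhonov-regularized) least squares.

rng = np.random.default_rng(1)
n_x, n_s, r = 60, 40, 3
Y = rng.standard_normal((n_x, r)) @ rng.standard_normal((r, n_s))
Y += 0.01 * rng.standard_normal((n_x, n_s))   # noisy observations

lam = 1e-3                                    # Tikhonov parameter
A = rng.standard_normal((n_x, r))
B = rng.standard_normal((r, n_s))

def ridge_solve(M, Y, lam):
    # solve min ||M X - Y||^2 + lam ||X||^2 for X
    k = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(k), M.T @ Y)

for _ in range(50):                           # alternating minimization
    B = ridge_solve(A, Y, lam)                # update stochastic factors
    A = ridge_solve(B.T, Y.T, lam).T          # update physical factors

print("relative error:", np.linalg.norm(A @ B - Y) / np.linalg.norm(Y))
```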

  20. Statistical mechanics explanation for the structure of ocean eddies and currents

    NASA Astrophysics Data System (ADS)

    Venaille, A.; Bouchet, F.

    2010-12-01

    The equilibrium statistical mechanics of two-dimensional and geostrophic flows predicts the outcome for the large scales of the flow resulting from the turbulent mixing. This theory has been successfully applied to describe detailed properties of Jupiter's Great Red Spot. We discuss the range of applicability of this theory to ocean dynamics. It is able to reproduce mesoscale structures like ocean rings. It explains, from statistical mechanics, the westward drift of rings at the speed of non-dispersive baroclinic waves, and the recently observed (Chelton et al.) slower northward drift of cyclonic eddies and southward drift of anticyclonic eddies. We also uncover relations between strong eastward mid-basin inertial jets, like the Kuroshio extension and the Gulf Stream, and statistical equilibria. We explain under which conditions such strong mid-basin jets can be understood as statistical equilibria. We claim that these results are complementary to the classical Sverdrup-Munk theory: they explain the inertial part of the basin dynamics, and the structure and location of the jets, using very simple theoretical arguments. References: A. Venaille and F. Bouchet, Ocean rings and jets as statistical equilibrium states, submitted to Journal of Physical Oceanography. F. Bouchet and A. Venaille, Statistical mechanics of two-dimensional and geophysical flows, arXiv ..., submitted to Physics Reports. P. Berloff, A. M. Hogg, and W. Dewar, The Turbulent Oscillator: A Mechanism of Low-Frequency Variability of the Wind-Driven Ocean Gyres, Journal of Physical Oceanography 37 (2007), 2363. D. B. Chelton, M. G. Schlax, R. M. Samelson, and R. A. de Szoeke, Global observations of large oceanic eddies, Geophysical Research Letters 34 (2007), 15606. [Figure: (a) streamfunction predicted by statistical mechanics; (b) and (c) snapshots of streamfunction and potential vorticity (red: positive values; blue: negative values) in the upper layer of a three-layer quasi-geostrophic model of a mid-latitude ocean basin (from Berloff et al.). Even in an out-of-equilibrium situation like this one, equilibrium statistical mechanics predicts the overall qualitative flow structure remarkably well. The observed westward drift of ocean eddies, with slower northward drift of cyclones and southward drift of anticyclones (Chelton et al.), is explained by statistical mechanics.]

  1. A collective phase in resource competition in a highly diverse ecosystem

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail; Monasson, Remi

    Recent technological advances uncovered that most habitats, including the human body, harbor hundreds of coexisting microbial "species". The problem of understanding such complex communities is currently at the forefront of medical and environmental sciences. A particularly intriguing question is whether the high-diversity regime (large number of species N) gives rise to qualitatively novel phenomena that could not be intuited from analysis of low-dimensional models (with few species). However, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to show that the large-N limit of a classic ecological model of resource competition introduced by MacArthur in 1969 can be solved analytically. Our results provide a tractable model where the implications of the large dimensionality of eco-evolutionary problems can be investigated. In particular, we show that at high diversity, the MacArthur model exhibits a phase transition into a curious regime where the environment constructed by the community becomes a collective property, insensitive to external conditions such as the total resource influx supplied to the community. Supported by the Harvard Center of Mathematical Sciences and Applications, and the Simons Foundation. This work was completed at the Aspen Center for Physics, supported by National Science Foundation Grant PHY-1066293.

  2. Foundations of modelling of nonequilibrium low-temperature plasmas

    NASA Astrophysics Data System (ADS)

    Alves, L. L.; Bogaerts, A.; Guerra, V.; Turner, M. M.

    2018-02-01

    This work explains the need for plasma models, introduces arguments for choosing the type of model that better fits the purpose of each study, and presents the basics of the most common nonequilibrium low-temperature plasma models and the information available from each one, along with an extensive list of references for complementary in-depth reading. The paper presents the following models, organised according to the level of multi-dimensional description of the plasma: kinetic models, based on either a statistical particle-in-cell/Monte-Carlo approach or the solution to the Boltzmann equation (in the latter case, special focus is given to the description of the electron kinetics); multi-fluid models, based on the solution to the hydrodynamic equations; global (spatially-average) models, based on the solution to the particle and energy rate-balance equations for the main plasma species, usually including a very complete reaction chemistry; mesoscopic models for plasma-surface interaction, adopting either a deterministic approach or a stochastic dynamical Monte-Carlo approach. For each plasma model, the paper puts forward the physics context, introduces the fundamental equations, presents advantages and limitations, also from a numerical perspective, and illustrates its application with some examples. Whenever pertinent, the interconnection between models is also discussed, in view of multi-scale hybrid approaches.

  3. Phonons in two-dimensional soft colloidal crystals.

    PubMed

    Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G

    2013-08-01

    The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.

  4. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
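
    A hedged sketch of one ingredient above, for a linear forward model only: a likelihood-informed parameter subspace is taken from the dominant eigenvectors of the Gauss-Newton Hessian J^T J (identity prior and observation noise assumed). The full method also builds state and data subspaces and treats nonlinear models; none of that is attempted here.

```python
import numpy as np

# Dominant eigenvectors of J^T J span the parameter directions the data
# actually inform; the rest can be left to the prior.

rng = np.random.default_rng(7)
n_param, n_data = 200, 50
# toy Jacobian with decaying column magnitudes (decreasingly informed modes)
J = rng.standard_normal((n_data, n_param)) * np.exp(-0.05 * np.arange(n_param))

H = J.T @ J                              # Gauss-Newton Hessian, identity prior
evals, evecs = np.linalg.eigh(H)         # eigenvalues in ascending order
r = int(np.sum(evals > 1e-2))            # retain directions the data inform
U = evecs[:, -r:]                        # basis of the informed subspace
print("informed subspace dimension:", r, "of", n_param)
```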

  5. Bayesian Analysis of High Dimensional Classification

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Subhadeep; Liang, Faming

    2009-12-01

    Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and to form perceptron classification rules based on Bayesian inference. In these cases, there is much interest in searching for sparse models in the high-dimensional regression (or classification) setup. We first discuss two common challenges for analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimension, we propose a novel hierarchical stochastic approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimension, and which possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.

  6. ASCS online fault detection and isolation based on an improved MPCA

    NASA Astrophysics Data System (ADS)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

    Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T² statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of different variables. For fault isolation of subspaces based on the T² statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces, increasing the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA are used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and to improve the robustness of the principal component model, and it sets the relationship between the state variables and the fault detection indicators for fault isolation.
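
    A compact sketch of the two monitoring statistics named above: a PCA model is fit on normal-operation data, and each new sample is scored with the Hotelling T² statistic (variation inside the principal subspace) and the SPE statistic (residual outside it). Control limits, the multi-way batch unfolding, and the isolation logic are omitted; all data here are synthetic.

```python
import numpy as np

# PCA-based process monitoring: T^2 scores variation within the principal
# subspace, SPE (Q) scores the residual outside it.

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 6))  # training data
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2                                   # retained principal components
P = Vt[:k].T                            # loadings
var = (s[:k] ** 2) / (len(X) - 1)       # variance of each score

def monitor(x):
    xs = (x - mu) / sigma
    t = P.T @ xs                        # scores
    t2 = np.sum(t**2 / var)             # Hotelling T^2
    resid = xs - P @ t
    spe = resid @ resid                 # squared prediction error (Q)
    return t2, spe

print(monitor(X[0]))                    # normal sample: both statistics small
print(monitor(X[0] + 5.0))              # shifted sample: statistics increase
```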

  7. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  8. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  9. A low dimensional dynamical system for the wall layer

    NASA Technical Reports Server (NTRS)

    Aubry, N.; Keefe, L. R.

    1987-01-01

    Low-dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions which are derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low-dimensional sets of ordinary differential equations in time, for the coefficients of the expansion, were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues was obtained from direct numerical simulation of a plane channel at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work by Herzog. Using the new eigenvalues and eigenfunctions, a new ten-dimensional set of ordinary differential equations was derived, using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. This work is encouraging.
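
    A short sketch of the proper orthogonal (Karhunen-Loeve) decomposition step described above, computed from the SVD of a synthetic snapshot matrix; the Galerkin projection of the Navier-Stokes equations onto the resulting modes is not attempted here.

```python
import numpy as np

# POD / Karhunen-Loeve: empirical eigenfunctions are the left singular
# vectors of a snapshot matrix; a few modes capture most of the energy.

rng = np.random.default_rng(3)
n_points, n_snapshots = 128, 40
t = np.linspace(0, 2 * np.pi, n_snapshots)
x = np.linspace(0, 1, n_points)[:, None]
# synthetic snapshots dominated by two spatial structures plus noise
snapshots = (np.sin(2 * np.pi * x) * np.cos(t)
             + 0.3 * np.sin(4 * np.pi * x) * np.sin(3 * t)
             + 0.01 * rng.standard_normal((n_points, n_snapshots)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first two modes:", energy[:2].sum().round(4))

modes = U[:, :2]                        # empirical eigenfunctions
coeffs = modes.T @ snapshots            # low-dimensional time coefficients
reconstruction = modes @ coeffs
print("reconstruction error:",
      np.linalg.norm(reconstruction - snapshots) / np.linalg.norm(snapshots))
```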

  10. Scientific data interpolation with low dimensional manifold model

    DOE PAGES

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...

    2017-09-28

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  11. Scientific data interpolation with low dimensional manifold model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  12. Adaptive wall research with two- and three-dimensional models in low speed and transonic tunnels

    NASA Technical Reports Server (NTRS)

    Lewis, M. C.; Neal, G.; Goodyer, M. J.

    1988-01-01

    This paper summarises recent research at the University of Southampton into adaptive wall technology and outlines the direction of current efforts. The work is aimed at developing techniques for use in test sections where the top and bottom walls may be adjusted in single curvature. Wall streamlining eliminates, as far as experimentally possible, the top and bottom wall interference in low speed and transonic aerofoil testing. A streamlining technique has been developed for low speeds which allows the testing of swept wing panels in low interference environments. At higher speeds, a comparison of several two-dimensional transonic streamlining algorithms has been made and a technique for streamlining with a choked test section has also been developed. Three-dimensional work has mainly concentrated on tests of sidewall mounted half-wings and the development of the software packages required to assess interference and to adjust the flexible walls. It has been demonstrated that two-dimensional wall adaptation can significantly modify the level of wall interference around relatively large three-dimensional models. The residual interferences are small and are probably amenable to standard post-test correction methods. Tests on a calibrated wing-body model are planned in the near future to further validate the proposed streamlining technique.

  13. Synchrotron radiation μCT and histology evaluation of bone-to-implant contact.

    PubMed

    Neldam, Camilla Albeck; Sporring, Jon; Rack, Alexander; Lauridsen, Torsten; Hauge, Ellen-Margrethe; Jørgensen, Henrik L; Jørgensen, Niklas Rye; Feidenhansl, Robert; Pinholt, Else Marie

    2017-09-01

    The purpose of this study was to evaluate bone-to-implant contact (BIC) in two-dimensional (2D) histology compared to high-resolution three-dimensional (3D) synchrotron radiation micro computed tomography (SR micro-CT). High spatial resolution, excellent signal-to-noise ratio, and contrast establish SR micro-CT as the leading imaging modality for hard X-ray microtomography. Using SR micro-CT at a voxel size of 5 μm in an experimental goat mandible model, no statistically significant difference was found between the different treatment modalities nor between recipient and reconstructed bone. The histological evaluation showed a statistically significant difference between BIC in reconstructed and recipient bone (p < 0.0001). Further, no statistically significant difference was found between the different treatment modalities, which we attribute to large variation and, consequently, low statistical power. Comparing histology and SR micro-CT evaluation, a bias of 5.2% was found in reconstructed bone and 15.3% in recipient bone. We conclude that SR micro-CT cannot be proven more precise than histology for evaluation of BIC; however, with this SR micro-CT method, one histologic bone section is comparable to the 3D evaluation. Further, the two methods complement each other with knowledge on BIC in 2D and 3D.

  14. Experimental Studies of Low-Pressure Turbine Flows and Flow Control

    NASA Technical Reports Server (NTRS)

    Volino, Ralph J.

    2012-01-01

    This report summarizes research performed in support of the NASA Glenn Research Center (GRC) Low-Pressure Turbine (LPT) Flow Physics Program. The work was performed experimentally at the U.S. Naval Academy facilities. The geometry corresponded to the "Pak B" LPT airfoil. The test section simulated LPT flow in a passage. Three experimental studies were performed: (a) boundary layer measurements for ten baseline cases under high and low freestream turbulence conditions at five Reynolds numbers of 25,000, 50,000, 100,000, 200,000, and 300,000, based on passage exit velocity and suction surface wetted length; (b) passive flow control studies with three thicknesses of two-dimensional bars, and two heights of three-dimensional circular cylinders with different spanwise separations, at the same flow conditions as the 10 baseline cases; (c) active flow control with oscillating synthetic (zero net mass flow) vortex generator jets, for one case with low freestream turbulence and a low Reynolds number of 25,000. The passive flow control was successful at controlling the separation problem at low Reynolds numbers, with varying degrees of success from case to case and varying levels of impact at higher Reynolds numbers. The active flow control successfully eliminated the large separation problem for the low Reynolds number case. Very detailed data were acquired using hot-wire anemometry, including single and two velocity components, integral boundary layer quantities, turbulence statistics and spectra, turbulent shear stresses and their spectra, and intermittency, documenting transition, separation, and reattachment. Models were constructed to correlate the results. The report includes a summary of the work performed and reprints of the publications describing the various studies.

  15. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e., automated model scoring and selection via evolutionary algorithms, grid searches, or randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize statistical preprocessing of forcing data to improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate selection of soil moisture forecast statistical models for North America.
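
    An illustrative sketch of automated model scoring and selection; scikit-learn's RandomizedSearchCV stands in for the single-objective case, since the Elm/NSGA-2 pipeline named above is not shown here.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import RandomizedSearchCV

# Randomized hyperparameter search: sample parameter settings from a
# distribution, score each by cross-validation, keep the best model.

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

search = RandomizedSearchCV(
    Ridge(),
    param_distributions={"alpha": loguniform(1e-4, 1e2)},
    n_iter=25,
    cv=5,
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```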

  16. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
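
    A hedged sketch of simulating a scalar normal form of the general shape discussed above, with correlated additive and multiplicative (CAM) noise and cubic damping: dx = (F + a x - c x³) dt + (σ_A + σ_M x) dW. The coefficients are arbitrary illustrative values, not those estimated from climate data.

```python
import numpy as np

# Euler-Maruyama integration of a scalar SDE with CAM noise and cubic damping.

rng = np.random.default_rng(6)
F, a, c = 0.0, 0.5, 1.0            # illustrative drift coefficients
sigma_A, sigma_M = 0.3, 0.4        # additive and multiplicative noise levels
dt, n_steps = 1e-3, 200_000

x = np.empty(n_steps)
x[0] = 0.0
for k in range(1, n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = F + a * x[k - 1] - c * x[k - 1] ** 3
    noise = (sigma_A + sigma_M * x[k - 1]) * dW
    x[k] = x[k - 1] + drift * dt + noise

# CAM noise generically produces skewed, non-Gaussian statistics
print("mean, std, skewness:",
      x.mean().round(3), x.std().round(3),
      (((x - x.mean()) / x.std()) ** 3).mean().round(3))
```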

  17. Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke

    Treesearch

    G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman

    2011-01-01

    We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure, including multiple-core updrafts, which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/shearing atmospheric...

  18. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  19. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core engine of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  20. Scattering and transport statistics at the metal-insulator transition: A numerical study of the power-law banded random-matrix model

    NASA Astrophysics Data System (ADS)

    Méndez-Bermúdez, J. A.; Gopar, Victor A.; Varga, Imre

    2010-09-01

    We numerically study the statistical properties of scattering and transport in the one-dimensional Anderson model at the metal-insulator transition described by the power-law banded random matrix (PBRM) model at criticality. Within a scattering approach to electronic transport, we concentrate on the case of a small number of single-channel attached leads. We observe a smooth crossover from localized to delocalized behavior in the average scattering matrix elements, the conductance probability distribution, the variance of the conductance, and the shot noise power by varying b (the effective bandwidth of the PBRM model) from small (b ≪ 1) to large (b > 1) values. We contrast our results with analytic random matrix theory predictions, which are expected to be recovered in the limit b → ∞. We also compare our results for the PBRM model with those for the three-dimensional (3D) Anderson model at criticality, finding that the PBRM model with b ∈ [0.2, 0.4] reproduces well the scattering and transport properties of the 3D Anderson model.
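
    A small sketch of the PBRM ensemble at criticality: a real symmetric matrix with independent Gaussian entries whose variance decays as 1/(1 + (|i - j|/b)²), followed by a crude level-spacing statistic. Normalization conventions vary between papers, so this is one common choice.

```python
import numpy as np

# Power-law banded random matrix at criticality and its spacing statistics.

def pbrm(n, b, rng):
    i, j = np.indices((n, n))
    var = 1.0 / (1.0 + (np.abs(i - j) / b) ** 2)   # critical power-law decay
    H = rng.standard_normal((n, n)) * np.sqrt(var)
    return (H + H.T) / np.sqrt(2.0)                # symmetrize

rng = np.random.default_rng(4)
for b in (0.3, 3.0):                    # small-b vs large-b regimes
    ev = np.sort(np.linalg.eigvalsh(pbrm(400, b, rng)))
    s = np.diff(ev[150:250])            # spacings near the band center
    s /= s.mean()                       # crudely unfolded to unit mean
    # variance of normalized spacings: ~1 for Poisson, ~0.29 for GOE
    print(f"b={b}: spacing variance {s.var():.3f}")
```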

  1. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.

  2. International epidemiology of child and adolescent psychopathology ii: integration and applications of dimensional findings from 44 societies.

    PubMed

    Rescorla, Leslie; Ivanova, Masha Y; Achenbach, Thomas M; Begovac, Ivan; Chahed, Myriam; Drugli, May Britt; Emerich, Deisy Ribas; Fung, Daniel S S; Haider, Mariam; Hansson, Kjell; Hewitt, Nohelia; Jaimes, Stefanny; Larsson, Bo; Maggiolini, Alfio; Marković, Jasminka; Mitrović, Dragan; Moreira, Paulo; Oliveira, João Tiago; Olsson, Martin; Ooi, Yoon Phaik; Petot, Djaouida; Pisa, Cecilia; Pomalima, Rolando; da Rocha, Marina Monzani; Rudan, Vlasta; Sekulić, Slobodan; Shahini, Mimoza; de Mattos Silvares, Edwiges Ferreira; Szirovicza, Lajos; Valverde, José; Vera, Luis Anderssen; Villa, Maria Clara; Viola, Laura; Woo, Bernardine S C; Zhang, Eugene Yuqing

    2012-12-01

    To build on Achenbach, Rescorla, and Ivanova (2012) by (a) reporting new international findings for parent, teacher, and self-ratings on the Child Behavior Checklist, Youth Self-Report, and Teacher's Report Form; (b) testing the fit of syndrome models to new data from 17 societies, including previously underrepresented regions; (c) testing effects of society, gender, and age in 44 societies by integrating new and previous data; (d) testing cross-society correlations between mean item ratings; (e) describing the construction of multisociety norms; (f) illustrating clinical applications. Confirmatory factor analyses (CFAs) of parent, teacher, and self-ratings, performed separately for each society; tests of societal, gender, and age effects on dimensional syndrome scales, DSM-oriented scales, Internalizing, Externalizing, and Total Problems scales; tests of agreement between low, medium, and high ratings of problem items across societies. CFAs supported the tested syndrome models in all societies according to the primary fit index (Root Mean Square Error of Approximation [RMSEA]), but less consistently according to other indices; effect sizes were small-to-medium for societal differences in scale scores, but very small for gender, age, and interactions with society; items received similarly low, medium, or high ratings in different societies; problem scores from 44 societies fit three sets of multisociety norms. Statistically derived syndrome models fit parent, teacher, and self-ratings when tested individually in all 44 societies according to RMSEAs (but less consistently according to other indices). Small to medium differences in scale scores among societies supported the use of low-, medium-, and high-scoring norms in clinical assessment of individual children.

  3. Relationship between Service Quality, Satisfaction, Motivation and Loyalty: A Multi-Dimensional Perspective

    ERIC Educational Resources Information Center

    Subrahmanyam, Annamdevula

    2017-01-01

    Purpose: This paper aims to identify and test four competing models with the interrelationships between students' perceived service quality, students' satisfaction, loyalty and motivation using structural equation modeling (SEM), and to select the best model using the chi-square difference (Δχ²) statistic test. Design/methodology/approach: The study…

  4. LSAT Dimensionality Analysis for the December 1991, June 1992, and October 1992 Administrations. Statistical Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Douglas, Jeff; Kim, Hae-Rim; Roussos, Louis; Stout, William; Zhang, Jinming

    An extensive nonparametric dimensionality analysis of latent structure was conducted on three forms of the Law School Admission Test (LSAT) (December 1991, June 1992, and October 1992) using the DIMTEST model in confirmatory analyses and using DIMTEST, FAC, DETECT, HCA, PROX, and a genetic algorithm in exploratory analyses. Results indicate that…

  5. Entanglement Entropy of the Six-Dimensional Horowitz-Strominger Black Hole

    NASA Astrophysics Data System (ADS)

    Li, Huai-Fan; Zhang, Sheng-Li; Wu, Yue-Qin; Ren, Zhao

    By using the entanglement entropy method, the statistical entropy of the Bose and Fermi fields in a thin film is calculated and the Bekenstein-Hawking entropy of the six-dimensional Horowitz-Strominger black hole is obtained. Here, the Bose and Fermi fields are entangled with the quantum states in the six-dimensional Horowitz-Strominger black hole, and the fields are outside of the horizon. The divergence of the brick-wall model is avoided, without any cutoff, by the new equation of state density obtained with the generalized uncertainty principle. The calculation implies that the high-density quantum states near the event horizon are strongly correlated with the quantum states in the black hole. The black hole entropy is a quantum effect; it is an intrinsic characteristic of space-time. The ultraviolet cutoff in the brick-wall model is unreasonable. The generalized uncertainty principle should be considered in the high-energy quantum field near the event horizon. Using the quantum statistical method, we directly calculate the partition function of the Bose and Fermi fields under the background of the six-dimensional black hole. The difficulty in solving the wave equations of various particles is overcome.

  6. Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Parish, Eric; Duraisamy, Karthik

    2017-11-01

    The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project "LES Modeling of Non-local effects using Statistical Coarse-graining" with Dr. Jean-Luc Cambier as the technical monitor.

  7. Uncovering low dimensional macroscopic chaotic dynamics of large finite size complex systems

    NASA Astrophysics Data System (ADS)

    Skardal, Per Sebastian; Restrepo, Juan G.; Ott, Edward

    2017-08-01

    In the last decade, it has been shown that a large class of phase oscillator models admit low dimensional descriptions for the macroscopic system dynamics in the limit of an infinite number N of oscillators. The question of whether the macroscopic dynamics of other similar systems also have a low dimensional description in the infinite N limit has, however, remained elusive. In this paper, we show how techniques originally designed to analyze noisy experimental chaotic time series can be used to identify effective low dimensional macroscopic descriptions from simulations with a finite number of elements. We illustrate and verify the effectiveness of our approach by applying it to the dynamics of an ensemble of globally coupled Landau-Stuart oscillators for which we demonstrate low dimensional macroscopic chaotic behavior with an effective 4-dimensional description. By using this description, we show that one can calculate dynamical invariants such as Lyapunov exponents and attractor dimensions. One could also use the reconstruction to generate short-term predictions of the macroscopic dynamics.
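
    A minimal sketch of the first step of such an analysis: a scalar macroscopic time series is delay-embedded (Takens reconstruction) so that invariants such as attractor dimension or Lyapunov exponents can then be estimated from the reconstructed states; a logistic-map series stands in for the oscillator ensemble's order parameter.

```python
import numpy as np

# Takens delay embedding: turn a scalar series x(t) into vectors
# (x(t), x(t + tau), ..., x(t + (dim - 1) * tau)).

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# toy chaotic scalar series from the logistic map
x = np.empty(5000)
x[0] = 0.4
for k in range(1, len(x)):
    x[k] = 3.9 * x[k - 1] * (1.0 - x[k - 1])

states = delay_embed(x, dim=4, tau=1)   # 4-dimensional reconstructed states
print(states.shape)
```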

  8. Do GCM's predict the climate.... Or the low frequency weather?

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; Schertzer, D.; Varon, D.

    2012-04-01

    Over twenty-five years ago, a three-regime scaling model was proposed describing the statistical variability of the atmosphere over time scales ranging from weather scales out to ≈ 100 kyrs. Using modern in situ data reanalyses, monthly surface series (at 5° x 5°), 8 "multiproxy" (yearly) series of the Northern hemisphere from 1500-1980, and GRIP and Vostok paleotemperatures at 5.2 and ≈ 100 year resolutions (over the past 91-420 kyrs), we refine the model and show how it can be understood with the help of new developments in nonlinear dynamics, especially multifractals and cascades. In a scaling range, mean fluctuations in state variables such as the temperature vary in a power-law manner, ΔT ≈ Δt^H, where Δt is the duration. At small (weather) scales the fluctuation exponents are generally H > 0; the fluctuations grow with scale (Δt). At longer scales, Δt > τw (≈ 10 days), H changes sign and the fluctuations decrease with scale; this is the low-variability "low-frequency weather" regime. In this regime, the spectrum is a relatively flat "plateau"; its variability is low and stable, corresponding to our usual idea of "long-term weather statistics". Finally, for longer times, Δt > τc ≈ 10-100 years, once again H > 0, so that the variability increases with scale: the true climate regime. These scaling regimes allow us to objectively define the weather as fluctuations over periods < τw, to define "climate states" as fluctuations at scale τc, and then "climate change" as the fluctuations at longer periods (Δt > τc). We show that the intermediate low-frequency weather regime is the result of the weather regime undergoing a "dimensional transition": at temporal scales longer than the typical lifetime of planetary structures (τw), the spatial degrees of freedom are rapidly quenched so that only the temporal degrees of freedom are important. This low-frequency weather regime has statistical properties well reproduced not only by stochastic cascade models of weather, but also by control runs (i.e., without climate forcing) of GCM-based climate forecasting systems, including those of the Institut Pierre Simon Laplace (Paris) and the Earth Forecasting System (Hamburg). In order for these systems to go beyond simply predicting low-frequency weather, i.e., in order for them to predict the climate, they need appropriate climate forcings and/or new internal mechanisms of variability. Using statistical scaling techniques, we examine the scale dependence of fluctuations from forced and unforced GCM outputs, including the ECHO-G and EFS simulations in the Millennium climate reconstruction project, and compare this with data, multiproxies, and paleo data. Our general conclusion is that the models systematically underestimate the multidecadal, multicentennial scale variability.
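
    A brief sketch of the fluctuation-exponent estimate described above: mean absolute fluctuations |ΔT| are computed as a function of the lag Δt and H is read off as the log-log slope. Plain differences are used instead of the Haar fluctuations common in such analyses, which is adequate for 0 < H < 1; the Brownian test series is synthetic.

```python
import numpy as np

# Estimate the fluctuation exponent H from mean |x(t + lag) - x(t)| ~ lag^H.

def fluctuation_exponent(series, lags):
    f = [np.mean(np.abs(series[lag:] - series[:-lag])) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(f), 1)
    return slope

rng = np.random.default_rng(5)
walk = np.cumsum(rng.standard_normal(20000))   # Brownian motion: H = 0.5
lags = np.unique(np.logspace(0, 3, 20).astype(int))
print(fluctuation_exponent(walk, lags))        # approximately 0.5
```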

  9. National Centers for Environmental Prediction

    Science.gov Websites

    The Mesoscale Modeling Branch conducts a program of research and development in support of the prediction. This research and development includes mesoscale four-dimensional data assimilation of domestic ...

  10. Quantum Monte Carlo study of the transverse-field quantum Ising model on infinite-dimensional structures

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Um, Jaegon; Yi, Su Do; Kim, Beom Jun

    2011-11-01

    In a number of classical statistical-physical models, there exists a characteristic dimensionality called the upper critical dimension above which one observes the mean-field critical behavior. Instead of constructing high-dimensional lattices, however, one can also consider infinite-dimensional structures, and the question is whether this mean-field character extends to quantum-mechanical cases as well. We therefore investigate the transverse-field quantum Ising model on the globally coupled network and on the Watts-Strogatz small-world network by means of quantum Monte Carlo simulations and the finite-size scaling analysis. We confirm that both of the structures exhibit critical behavior consistent with the mean-field description. In particular, we show that the existing cumulant method has difficulty in estimating the correct dynamic critical exponent and suggest that an order parameter based on the quantum-mechanical expectation value can be a practically useful numerical observable to determine critical behavior when there is no well-defined dimensionality.
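
    The sketch below is a classical stand-in for the kind of calculation the abstract describes: Metropolis Monte Carlo for the Ising model on a globally coupled (complete-graph) network, where mean-field critical behavior is exact, accumulating the order parameter and its Binder cumulant. It is not a quantum Monte Carlo code (the transverse-field case needs imaginary-time methods beyond this sketch), and all parameters are illustrative assumptions.

```python
# Classical analogue sketch: Metropolis MC for the complete-graph Ising
# model (couplings J/N), whose critical point is the mean-field Tc = J.
import numpy as np

def binder(N, T, J=1.0, sweeps=4000, therm=1000, seed=0):
    """Binder cumulant U4 of the complete-graph Ising model at temperature T."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], N)
    M = s.sum()
    m2 = m4 = 0.0
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            # flip cost on the complete graph; M - s[i] = sum of other spins
            dE = 2.0 * s[i] * (J / N) * (M - s[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                M -= 2 * s[i]
                s[i] = -s[i]
        if sweep >= therm:
            m = M / N
            m2 += m * m
            m4 += m ** 4
    n = sweeps - therm
    return 1.0 - (m4 / n) / (3.0 * (m2 / n) ** 2)

# Binder-cumulant curves for different N should cross near Tc = J
for N in (64, 128):
    print(N, [round(binder(N, T), 3) for T in (0.8, 1.0, 1.2)])
```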

  11. Effects of three-dimensional velocity structure on the seismicity of the 1984 Morgan Hill, California, aftershock sequence

    USGS Publications Warehouse

    Michael, A.J.

    1988-01-01

    A three-dimensional velocity model for the area surrounding the 24 April 1984 Morgan Hill earthquake has been developed by simultaneously inverting local earthquake and refraction arrival-time data. This velocity model corresponds well to the surface geology of the region, predominantly showing a low-velocity region associated with the sedimentary sequence to the south-west of the Madrone Springs fault. The focal mechanisms were also determined for 946 earthquakes using both the one-dimensional and three-dimensional earth models. Both earth models yield similar focal mechanisms for these earthquakes. -from Author

  12. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
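
    As a toy illustration of separating model error from initial-condition error given error variances at two lead times, the sketch below assumes simple growth laws (exponential amplification for initial-condition error, linear accumulation for model error) and solves for the two contributions. These growth laws and all numbers are illustrative assumptions, not the paper's definitions of tau.

```python
# Toy Talagrand-ratio-style decomposition from two lead times.
import numpy as np

v6, v12 = 1.75, 2.56      # total error variances at 6 h and 12 h (made up)
g = 1.25                  # assumed 6-h amplification of the IC error

# Model: v(t) = a * g**(t/6h) + b * (t/6h), unknowns a (IC), b (model)
A = np.array([[g, 1.0], [g ** 2, 2.0]])
a, b = np.linalg.solve(A, np.array([v6, v12]))
tau12 = 2.0 * b / v12     # model-error fraction of the 12-h error variance
print(f"IC part = {a * g**2:.2f}, model part = {2*b:.2f}, tau = {tau12:.2f}")
```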

  13. Brane-World Gravity.

    PubMed

    Maartens, Roy; Koyama, Kazuya

    2010-01-01

    The observable universe could be a 1+3-surface (the "brane") embedded in a 1+3+ d -dimensional spacetime (the "bulk"), with Standard Model particles and fields trapped on the brane while gravity is free to access the bulk. At least one of the d extra spatial dimensions could be very large relative to the Planck scale, which lowers the fundamental gravity scale, possibly even down to the electroweak (∼ TeV) level. This revolutionary picture arises in the framework of recent developments in M theory. The 1+10-dimensional M theory encompasses the known 1+9-dimensional superstring theories, and is widely considered to be a promising potential route to quantum gravity. At low energies, gravity is localized at the brane and general relativity is recovered, but at high energies gravity "leaks" into the bulk, behaving in a truly higher-dimensional way. This introduces significant changes to gravitational dynamics and perturbations, with interesting and potentially testable implications for high-energy astrophysics, black holes, and cosmology. Brane-world models offer a phenomenological way to test some of the novel predictions and corrections to general relativity that are implied by M theory. This review analyzes the geometry, dynamics and perturbations of simple brane-world models for cosmology and astrophysics, mainly focusing on warped 5-dimensional brane-worlds based on the Randall-Sundrum models. We also cover the simplest brane-world models in which 4-dimensional gravity on the brane is modified at low energies - the 5-dimensional Dvali-Gabadadze-Porrati models. Then we discuss co-dimension two branes in 6-dimensional models.

  14. Surprises in low dimensional spin 1/2 magnets - from crystal chemistry to microscopic magnetic models of complex oxides

    NASA Astrophysics Data System (ADS)

    Rosner, Helge

    2011-03-01

    A microscopic understanding of the structure-properties relation in crystalline materials is a main goal of modern solid state chemistry and physics. Due to their peculiar magnetism, low dimensional spin 1/2 systems are often highly sensitive to structural details. Seemingly unimportant structural details can be crucial for the magnetic ground state of a compound, especially in the case of competing interactions, frustration and near-degeneracy. Here, we show for selected complex Cu2+ systems that a first-principles-based approach can reliably provide the correct magnetic model, especially in cases where the interpretation of experimental data meets serious difficulties or fails. We demonstrate that the magnetism of low dimensional insulators crucially depends on the magnetically active orbitals, which are determined by details of the ligand field of the magnetic cation. Our theoretical results are in very good agreement with thermodynamic and spectroscopic data and provide deep microscopic insight into topical low dimensional magnets.

  15. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested Laplace approximation.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2016-10-15

    The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
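
    A minimal sketch of the regression-based EVPPI estimator that this line of work builds on, with a simple polynomial smoother standing in for the GP/INLA machinery: regress each option's net benefit on the parameter of interest, then take the mean of the row-wise maxima minus the maximum of the column means. The two-option decision model below is an illustrative assumption; in practice one would use the BCEA package mentioned above.

```python
# Sketch of a regression-based EVPPI estimate on toy PSA samples.
import numpy as np

rng = np.random.default_rng(2)
S = 20000
theta = rng.normal(0.0, 1.0, S)            # parameter of interest, phi
noise = rng.normal(0.0, 2.0, (S, 2))       # all other uncertainty

# Net benefit of two options (toy probabilistic sensitivity analysis)
nb = np.column_stack([1.0 + 2.0 * theta, 2.0 - theta]) + noise

# Regress each option's net benefit on theta (polynomial smoother here)
fitted = np.column_stack([
    np.polyval(np.polyfit(theta, nb[:, d], 3), theta) for d in range(2)
])
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
print(f"EVPPI ~ {evppi:.3f}")
```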

  16. Three-Dimensional City Determinants of the Urban Heat Island: A Statistical Approach

    NASA Astrophysics Data System (ADS)

    Chun, Bum Seok

    There is no doubt that the Urban Heat Island (UHI) is a mounting problem in built-up environments, due to the energy retention by the surface materials of dense buildings, leading to increased temperatures, air pollution, and energy consumption. Much of the earlier research on the UHI has used two-dimensional (2-D) information, such as land uses and the distribution of vegetation. In the case of homogeneous land uses, it is possible to predict surface temperatures with reasonable accuracy with 2-D information. However, three-dimensional (3-D) information is necessary to analyze more complex sites, including dense building clusters. Recent research on the UHI has started to consider multi-dimensional models. The purpose of this research is to explore the urban determinants of the UHI, using 2-D/3-D urban information with statistical modeling. The research includes the following stages: (a) estimating urban temperature, using satellite images, (b) developing a 3-D city model by LiDAR data, (c) generating geometric parameters with regard to 2-/3-D geospatial information, and (d) conducting different statistical analyses: OLS and spatial regressions. The research area is part of the City of Columbus, Ohio. To effectively and systematically analyze the UHI, hierarchical grid scales (480m, 240m, 120m, 60m, and 30m) are proposed, together with linear and the log-linear regression models. The non-linear OLS models with Log(AST) as dependent variable have the highest R2 among all the OLS-estimated models. However, both SAR and GSM models are estimated for the 480m, 240m, 120m, and 60m grids to reduce their spatial dependency. Most GSM models have R2s higher than 0.9, except for the 240m grid. Overall, the urban characteristics having high impacts in all grids are embodied in solar radiation, 3-D open space, greenery, and water streams. These results demonstrate that it is possible to mitigate the UHI, providing guidelines for policies aiming to reduce the UHI.

  17. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal's statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.

  18. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
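
    A sketch of the reduce/predict/recover pipeline using scikit-learn's KernelPCA. Note that scikit-learn recovers pre-images with a learned kernel-ridge map rather than the fixed-point iteration used in the abstract, and all data shapes and parameters below are illustrative assumptions.

```python
# Sketch: kernel PCA reduction of a high-dimensional "respiratory" state,
# prediction on the low-dimensional manifold, pre-image recovery.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
# Toy surfaces: 200 samples of a 300-dim state driven by one latent
# breathing phase plus noise
phase = np.linspace(0, 8 * np.pi, 200)
basis = rng.standard_normal((2, 300))
X = np.outer(np.sin(phase), basis[0]) + np.outer(np.cos(phase), basis[1])
X += 0.05 * rng.standard_normal(X.shape)

kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True, alpha=0.1)
Z = kpca.fit_transform(X)                 # low-dimensional feature manifold

# Predict the next feature value (trivial linear extrapolation here),
# then recover the high-dimensional state via the pre-image map
z_pred = 2 * Z[-1] - Z[-2]
x_pred = kpca.inverse_transform(z_pred.reshape(1, -1))
print(x_pred.shape)                       # (1, 300): recovered state
```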

  20. Transport of cosmic-ray protons in intermittent heliospheric turbulence: Model and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alouani-Bibi, Fathallah; Le Roux, Jakobus A., E-mail: fb0006@uah.edu

    The transport of charged energetic particles in the presence of strong intermittent heliospheric turbulence is computationally analyzed based on known properties of the interplanetary magnetic field and solar wind plasma at 1 astronomical unit. The turbulence is assumed to be static, composite, and quasi-three-dimensional with a varying energy distribution between a one-dimensional Alfvénic (slab) and a structured two-dimensional component. The spatial fluctuations of the turbulent magnetic field are modeled either as homogeneous with a Gaussian probability distribution function (PDF), or as intermittent on large and small scales with a q-Gaussian PDF. Simulations showed that energetic particle diffusion coefficients both parallel and perpendicular to the background magnetic field are significantly affected by intermittency in the turbulence. This effect is especially strong for parallel transport where for large-scale intermittency results show an extended phase of subdiffusive parallel transport during which cross-field transport diffusion dominates. The effects of intermittency are found to depend on particle rigidity and the fraction of slab energy in the turbulence, yielding a perpendicular to parallel mean free path ratio close to 1 for large-scale intermittency. Investigation of higher order transport moments (kurtosis) indicates that non-Gaussian statistical properties of the intermittent turbulent magnetic field are present in the parallel transport, especially for low rigidity particles at all times.

  1. Hybrid two-dimensional navigator correction: a new technique to suppress respiratory-induced physiological noise in multi-shot echo-planar functional MRI

    PubMed Central

    Barry, Robert L.; Klassen, L. Martyn; Williams, Joy M.; Menon, Ravi S.

    2008-01-01

    A troublesome source of physiological noise in functional magnetic resonance imaging (fMRI) is due to the spatio-temporal modulation of the magnetic field in the brain caused by normal subject respiration. fMRI data acquired using echo-planar imaging is very sensitive to these respiratory-induced frequency offsets, which cause significant geometric distortions in images. Because these effects increase with main magnetic field, they can nullify the gains in statistical power expected by the use of higher magnetic fields. As a study of existing navigator correction techniques for echo-planar fMRI has shown that further improvements can be made in the suppression of respiratory-induced physiological noise, a new hybrid two-dimensional (2D) navigator is proposed. Using a priori knowledge of the slow spatial variations of these induced frequency offsets, 2D field maps are constructed for each shot using spatial frequencies between ±0.5 cm−1 in k-space. For multi-shot fMRI experiments, we estimate that the improvement of hybrid 2D navigator correction over the best performance of one-dimensional navigator echo correction translates into a 15% increase in the volume of activation, 6% and 10% increases in the maximum and average t-statistics, respectively, for regions with high t-statistics, and 71% and 56% increases in the maximum and average t-statistics, respectively, in regions with low t-statistics due to contamination by residual physiological noise. PMID:18024159
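
    The core construction, per shot, is a phase-difference map restricted to low spatial frequencies. The sketch below builds such a map with an FFT mask at the ±0.5 cm⁻¹ cutoff quoted above; the matrix size, field of view, and test phase pattern are illustrative assumptions.

```python
# Sketch: smooth 2D frequency-offset (field) map from the phase
# difference between a per-shot navigator image and a reference,
# keeping only spatial frequencies |k| <= 0.5 cm^-1.
import numpy as np

nx = ny = 64
fov_cm = 22.0                                   # field of view (assumed)
kmax = 0.5                                      # cutoff in cm^-1

x = np.tile(np.arange(nx), (ny, 1))
true_dphi = 0.3 * np.sin(2 * np.pi * x / nx)    # smooth respiratory offset
ref = np.ones((ny, nx), dtype=complex)          # reference image phase
nav = np.exp(1j * true_dphi)                    # navigator with phase offset

dphi = np.angle(nav * np.conj(ref))             # raw phase-difference map

# Low-pass in k-space: keep spatial frequencies below kmax
kx = np.fft.fftfreq(nx, d=fov_cm / nx)          # cycles/cm
ky = np.fft.fftfreq(ny, d=fov_cm / ny)
mask = (np.abs(kx)[None, :] <= kmax) & (np.abs(ky)[:, None] <= kmax)
dphi_smooth = np.fft.ifft2(np.fft.fft2(dphi) * mask).real
print(np.abs(dphi_smooth - true_dphi).max())    # small residual
```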

  2. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zabaras, Nicolas J.

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low dimensional input models, and surrogate low complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  3. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
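
    A sketch of the simultaneous-reduction idea on the output side: project the high-dimensional spatial-field outputs onto a few principal components and emulate each coefficient with an independent Gaussian process. The toy simulator, sample sizes, and kernel settings are illustrative assumptions rather than the paper's setup.

```python
# Sketch: GP emulation of a spatial-field simulator via output PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
n_train, n_grid = 60, 1000
theta = rng.uniform(0, 1, (n_train, 2))            # inputs (2-D here)
grid = np.linspace(0, 1, n_grid)

def simulator(t):                                  # stand-in spatial field
    return np.sin(2 * np.pi * (grid + t[0])) * np.exp(-3 * t[1] * grid)

Y = np.array([simulator(t) for t in theta])        # (60, 1000) outputs

pca = PCA(n_components=5).fit(Y)                   # output-space reduction
C = pca.transform(Y)                               # PC coefficients

gps = [GaussianProcessRegressor(kernel=RBF([0.2, 0.2]), alpha=1e-6)
       .fit(theta, C[:, j]) for j in range(5)]     # one GP per coefficient

t_new = np.array([[0.3, 0.7]])
c_pred = np.array([gp.predict(t_new)[0] for gp in gps])
field_pred = pca.inverse_transform(c_pred[None, :])[0]
print(np.abs(field_pred - simulator(t_new[0])).max())  # emulator error
```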

  4. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
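
    One of the tools named above, the correlation dimension, can be sketched in a few lines: delay-embed a scalar signal and fit the scaling of the Grassberger-Procaccia correlation sum C(r) ~ r^D2. The Lorenz test signal and embedding parameters below are illustrative assumptions.

```python
# Sketch: Grassberger-Procaccia correlation-dimension estimate.
import numpy as np
from scipy.spatial.distance import pdist

def lorenz_x(steps=20000, dt=0.01):
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(steps)
    for i in range(steps):                        # crude Euler integration
        x, y, z = (x + dt * 10.0 * (y - x),
                   y + dt * (x * (28.0 - z) - y),
                   z + dt * (x * y - 8.0 / 3.0 * z))
        out[i] = x
    return out

s = lorenz_x()[5000:]                             # drop transient
tau, dim = 10, 5
M = len(s) - (dim - 1) * tau
X = np.column_stack([s[i*tau: i*tau + M] for i in range(dim)])[::10]

d = pdist(X)                                      # pairwise distances
radii = np.logspace(np.log10(np.percentile(d, 1)),
                    np.log10(np.percentile(d, 20)), 8)
C = np.array([(d < r).mean() for r in radii])     # correlation sum
D2 = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"correlation dimension D2 ~ {D2:.2f}")     # ~2.05 for Lorenz
```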

  5. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    PubMed

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  6. Effect of weak rotation on large-scale circulation cessations in turbulent convection.

    PubMed

    Assaf, Michael; Angheluta, Luiza; Goldenfeld, Nigel

    2012-08-17

    We investigate the effect of weak rotation on the large-scale circulation (LSC) of turbulent Rayleigh-Bénard convection, using the theory for cessations in a low-dimensional stochastic model of the flow previously studied. We determine the cessation frequency of the LSC as a function of rotation, and calculate the statistics of the amplitude and azimuthal velocity fluctuations of the LSC as a function of the rotation rate for different Rayleigh numbers. Furthermore, we show that the tails of the reorientation PDF remain unchanged for rotating systems, while the distribution of the LSC amplitude and correspondingly the cessation frequency are strongly affected by rotation. Our results are in close agreement with experimental observations.

  7. Role of upper-level wind shear on the structure and maintenance of derecho-producing convective systems

    NASA Astrophysics Data System (ADS)

    Coniglio, Michael Charles

    Common large-scale environments associated with the development of derecho-producing convective systems from a large number of events are identified using statistical clustering of the 500-mb geopotential heights as guidance. The majority of the events (72%) fall into three main patterns that include a well-defined upstream trough (40%), a ridge (20%), and a zonal, low-amplitude flow (12%), which is defined as an additional warm-season pattern that is not identified in past studies of derecho environments. Through an analysis of proximity soundings, discrepancies are found in both low-level and deep-tropospheric shear parameters between observations and the shear profiles considered favorable for strong, long-lived convective systems in idealized simulations. To explore the role of upper-level shear in derecho environments, a set of two-dimensional simulations of density currents within a dry, neutrally stable environment are used to examine the ability of a cold pool to lift environmental air within a vertically sheared flow. The results confirm that the addition of upper-level shear to a wind profile with weak to moderate low-level shear increases the vertical displacement of low-level parcels despite a decrease in the vertical velocity along the cold pool interface, as suggested by previous studies. Parcels that are elevated above the surface (1-2 km) overturn and are responsible for the deep lifting in the deep-shear environments. This deep overturning caused by the upper-level shear helps to maintain the tilt of the convective systems in more complex two-dimensional and three dimensional simulations. The overturning also is shown to greatly increase the size of the convective systems in the three-dimensional simulations by facilitating the initiation and maintenance of convective cells along the cold pool. When combined with estimates of the cold pool motion and the storm-relative hodograph, these results may best be used for the prediction of the demise of strong, linear mesoscale convective systems (MCSs) and may provide a conceptual model for the persistence of strong MCSs above a surface nocturnal inversion in situations that are not forced by a low-level jet.

  8. The influence of vegetation height heterogeneity on forest and woodland bird species richness across the United States.

    PubMed

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J

    2014-01-01

    Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacies of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without additional metrics. The interior forest guild richness showed a consistent low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness.

  9. The Influence of Vegetation Height Heterogeneity on Forest and Woodland Bird Species Richness across the United States

    PubMed Central

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J.

    2014-01-01

    Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacies of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without additional metrics. The interior forest guild richness showed a consistent low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. PMID:25101782

  10. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
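
    A minimal usage sketch for the package described above, assuming it is installed (pip install spm1d). The synthetic curves are illustrative, and the exact attribute names should be checked against the documentation at www.tpataky.net/spm1d/ for the installed version.

```python
# Sketch: two-sample SPM t test across two sets of registered 1D curves.
import numpy as np
import spm1d

rng = np.random.default_rng(5)
Q = 101                                    # nodes per registered curve
t = np.linspace(0, 1, Q)
YA = rng.standard_normal((12, Q)) + np.sin(np.pi * t)            # group A
YB = rng.standard_normal((12, Q)) + np.sin(np.pi * t) + 0.8 * (t > 0.5)

spm = spm1d.stats.ttest2(YA, YB)           # SPM{t} across the 1D domain
spmi = spm.inference(alpha=0.05, two_tailed=True)
print(spmi.h0reject, spmi.zstar)           # suprathreshold clusters? threshold
```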

  11. A three-dimensional refractive index model for simulation of optical wave propagation in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Paramonov, P. V.; Vorontsov, A. M.; Kunitsyn, V. E.

    2015-10-01

    Numerical modeling of optical wave propagation in atmospheric turbulence is traditionally performed using the so-called "split"-operator method, in which the influence of the propagation medium's refractive index inhomogeneities is accounted for only within a system of infinitely narrow layers (phase screens) where the phase is distorted. Commonly, under certain assumptions, such phase screens are considered mutually statistically uncorrelated. However, in several important applications including laser target tracking, remote sensing, and atmospheric imaging, accurate optical field propagation modeling imposes upper limits on the interscreen spacing. The latter situation can be observed, for instance, in the presence of large-scale turbulent inhomogeneities or in deep turbulence conditions, where interscreen distances become comparable with the turbulence outer scale and, hence, the corresponding phase screens cannot be statistically uncorrelated. In this paper, we discuss correlated phase screens. The statistical characteristics of the screens are calculated based on a representation of turbulent fluctuations of the three-dimensional (3D) refractive index random field as a set of sequentially correlated 3D layers displaced in the wave propagation direction. The statistical characteristics of refractive index fluctuations are described in terms of the von Karman power spectral density. In the representation of these 3D layers by corresponding phase screens, the geometrical optics approximation is used.
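
    For a single (uncorrelated) screen, the standard FFT recipe filters complex white noise with the square root of the von Karman phase spectrum. The sketch below follows that recipe; the grid, Fried parameter r0, outer scale L0, and the overall normalization convention are illustrative assumptions, and the interscreen correlation proposed in the paper is not implemented.

```python
# Sketch: one von Karman phase screen by the FFT filtering method.
# Normalization conventions (2*pi factors, scaling) vary between
# references; this follows phi(x) = sum_f c(f) sqrt(PSD(f)) df exp(2pi i f.x).
import numpy as np

N, dx = 256, 0.01            # grid points, spacing [m]
r0, L0 = 0.1, 10.0           # Fried parameter [m], outer scale [m]

fx = np.fft.fftfreq(N, d=dx)                  # spatial frequency [1/m]
f2 = fx[None, :] ** 2 + fx[:, None] ** 2
# von Karman phase PSD ~ 0.023 r0^(-5/3) (f^2 + 1/L0^2)^(-11/6)
psd = 0.023 * r0 ** (-5.0 / 3.0) * (f2 + 1.0 / L0 ** 2) ** (-11.0 / 6.0)
psd[0, 0] = 0.0                               # remove piston

rng = np.random.default_rng(6)
noise = (rng.standard_normal((N, N))
         + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
df = 1.0 / (N * dx)                           # frequency-bin width [1/m]
screen = (N * N * df) * np.fft.ifft2(noise * np.sqrt(psd)).real
print(screen.std())                           # phase fluctuation [rad]
```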

  12. Experimental Studies of Low-Pressure Turbine Flows and Flow Control. Streamwise Pressure Profiles and Velocity Profiles

    NASA Technical Reports Server (NTRS)

    Volino, Ralph

    2012-01-01

    This report summarizes research performed in support of the NASA Glenn Research Center (GRC) Low-Pressure Turbine (LPT) Flow Physics Program. The work was performed experimentally at the U.S. Naval Academy facilities. The geometry corresponded to the "Pak B" LPT airfoil. The test section simulated LPT flow in a passage. Three experimental studies were performed: (a) Boundary layer measurements for ten baseline cases under high and low freestream turbulence conditions at five Reynolds numbers of 25,000, 50,000, 100,000, 200,000, and 300,000, based on passage exit velocity and suction surface wetted length; (b) Passive flow control studies with three thicknesses of two-dimensional bars, and two heights of three-dimensional circular cylinders with different spanwise separations, at the same flow conditions as the 10 baseline cases; (c) Active flow control with oscillating synthetic (zero net mass flow) vortex generator jets, for one case with low freestream turbulence and a low Reynolds number of 25,000. The passive flow control was successful at controlling the separation problem at low Reynolds numbers, with varying degrees of success from case to case and varying levels of impact at higher Reynolds numbers. The active flow control successfully eliminated the large separation problem for the low Reynolds number case. Very detailed data were acquired using hot-wire anemometry, including single and two velocity components, integral boundary layer quantities, turbulence statistics and spectra, turbulent shear stresses and their spectra, and intermittency, documenting transition, separation and reattachment. Models were constructed to correlate the results. The report includes a summary of the work performed and reprints of the publications describing the various studies. The folders in this supplement contain processed data in ASCII format. Streamwise pressure profiles and velocity profiles are included. The velocity profiles were acquired using single sensor and cross sensor hot-wire probes which were traversed from the wall to the freestream at various streamwise locations. In some of the flow control cases (3D Trips and Jets) profiles were acquired at multiple spanwise locations.

  13. A generalized K statistic for estimating phylogenetic signal from shape and other high-dimensional multivariate data.

    PubMed

    Adams, Dean C

    2014-09-01

    Phylogenetic signal is the tendency for closely related species to display similar trait values due to their common ancestry. Several methods have been developed for quantifying phylogenetic signal in univariate traits and for sets of traits treated simultaneously, and the statistical properties of these approaches have been extensively studied. However, methods for assessing phylogenetic signal in high-dimensional multivariate traits like shape are less well developed, and their statistical performance is not well characterized. In this article, I describe a generalization of the K statistic of Blomberg et al. that is useful for quantifying and evaluating phylogenetic signal in highly dimensional multivariate data. The method (K(mult)) is found from the equivalency between statistical methods based on covariance matrices and those based on distance matrices. Using computer simulations based on Brownian motion, I demonstrate that the expected value of K(mult) remains at 1.0 as trait variation among species is increased or decreased, and as the number of trait dimensions is increased. By contrast, estimates of phylogenetic signal found with a squared-change parsimony procedure for multivariate data change with increasing trait variation among species and with increasing numbers of trait dimensions, confounding biological interpretations. I also evaluate the statistical performance of hypothesis testing procedures based on K(mult) and find that the method displays appropriate Type I error and high statistical power for detecting phylogenetic signal in high-dimensional data. Statistical properties of K(mult) were consistent for simulations using bifurcating and random phylogenies, for simulations using different numbers of species, for simulations that varied the number of trait dimensions, and for different underlying models of trait covariance structure. Overall these findings demonstrate that K(mult) provides a useful means of evaluating phylogenetic signal in high-dimensional multivariate traits. Finally, I illustrate the utility of the new approach by evaluating the strength of phylogenetic signal for head shape in a lineage of Plethodon salamanders. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
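
    For a single trait, the K statistic that K(mult) generalizes can be computed directly from the phylogenetic covariance matrix; the sketch below does so with a random stand-in covariance and a Brownian-motion trait, for which K should be near 1. The covariance matrix is an illustrative assumption, not a real tree.

```python
# Sketch: Blomberg's K for one trait (K_mult extends this to
# high-dimensional shape data via the distance-based equivalence).
import numpy as np

rng = np.random.default_rng(7)
N = 32
A = rng.standard_normal((N, N))
Cov = A @ A.T / N + np.eye(N)                   # stand-in tree covariance

L = np.linalg.cholesky(Cov)
x = L @ rng.standard_normal(N)                  # Brownian-motion trait

ones = np.ones(N)
Ci = np.linalg.inv(Cov)
ahat = (ones @ Ci @ x) / (ones @ Ci @ ones)     # phylogenetic GLS mean
r = x - ahat
observed = (r @ r) / (r @ Ci @ r)               # MSE0 / MSE (N-1 cancels)
expected = (np.trace(Cov) - N / (ones @ Ci @ ones)) / (N - 1)
K = observed / expected
print(f"K = {K:.2f}")                           # ~1 under Brownian motion
```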

  14. Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Alemi, Alex; Sethna, James

    2014-03-01

    Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveal a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.

  15. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained from the thermodynamic entropy by Stirling's approximation, statistical-physics energy-minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of a D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
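
    A sketch of the energy-minimization idea in its simplest form: place M constellation points in two dimensions, let them repel through a Coulomb-like pairwise potential, and renormalize to unit average power each step. The potential, step size, and iteration count are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: constellation design by minimizing a pairwise "Coulomb"
# energy U = sum_{i<j} 1/d_ij under an average-power constraint.
import numpy as np

rng = np.random.default_rng(8)
M = 16
P = rng.standard_normal((M, 2))                 # initial constellation

for it in range(2000):
    diff = P[:, None, :] - P[None, :, :]        # (M, M, 2) displacements
    d2 = (diff ** 2).sum(-1) + np.eye(M)        # +eye avoids 0/0 on diagonal
    force = (diff / d2[..., None] ** 1.5).sum(axis=1)   # -grad U: repulsion
    P += 0.01 * force
    P /= np.sqrt((P ** 2).sum(axis=1).mean())   # project to unit avg power

print(np.round(P, 2))                           # EE 16-point constellation
```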

  16. REASSESSING MECHANISM AS A PREDICTOR OF PEDIATRIC INJURY MORTALITY

    PubMed Central

    Beck, Haley; Mittal, Sushil; Madigan, David; Burd, Randall S.

    2015-01-01

    Background The use of mechanism of injury as a predictor of injury outcome presents practical challenges because this variable may be missing or inaccurate in many databases. The purpose of this study was to determine the importance of mechanism of injury as a predictor of mortality among injured children. Methods The records of children (<15 years old) sustaining a blunt injury were obtained from the National Trauma Data Bank. Models predicting injury mortality were developed using mechanism of injury and injury coding using either Abbreviated Injury Scale post-dot values (low-dimensional injury coding) or injury ICD-9 codes and their two-way interactions (high-dimensional injury coding). Model performance with and without inclusion of mechanism of injury was compared for both coding schemes, and the relative importance of mechanism of injury as a variable in each model type was evaluated. Results Among 62,569 records, a mortality rate of 0.9% was observed. Inclusion of mechanism of injury improved model performance when using low-dimensional injury coding but was associated with no improvement when using high-dimensional injury coding. Mechanism of injury contributed to 28% of model variance when using low-dimensional injury coding and <1% when high-dimensional injury coding was used. Conclusions Although mechanism of injury may be an important predictor of injury mortality among children sustaining blunt trauma, its importance as a predictor of mortality depends on approach used for injury coding. Mechanism of injury is not an essential predictor of outcome after injury when coding schemes are used that better characterize injuries sustained after blunt pediatric trauma. PMID:26197948

  17. A comparison of two- and three-dimensional stochastic models of regional solute movement

    USGS Publications Warehouse

    Shapiro, A.M.; Cvetkovic, V.D.

    1990-01-01

    Recent models of solute movement in porous media that are based on a stochastic description of the porous medium properties have been dedicated primarily to a three-dimensional interpretation of solute movement. In many practical problems, however, it is more convenient and consistent with measuring techniques to consider flow and solute transport as an areal, two-dimensional phenomenon. The physics of solute movement, however, is dependent on the three-dimensional heterogeneity in the formation. A comparison of two- and three-dimensional stochastic interpretations of solute movement in a porous medium having a statistically isotropic hydraulic conductivity field is investigated. To provide an equitable comparison between the two- and three-dimensional analyses, the stochastic properties of the transmissivity are defined in terms of the stochastic properties of the hydraulic conductivity. The variance of the transmissivity is shown to be significantly reduced in comparison to that of the hydraulic conductivity, and the transmissivity is spatially correlated over larger distances. These factors influence the two-dimensional interpretations of solute movement by underestimating the longitudinal and transverse growth of the solute plume in comparison to its description as a three-dimensional phenomenon. Although this analysis is based on small perturbation approximations and the special case of a statistically isotropic hydraulic conductivity field, it casts doubt on the use of a stochastic interpretation of the transmissivity in describing regional scale movement. However, by assuming the transmissivity to be the vertical integration of the hydraulic conductivity field at a given position, the stochastic properties of the hydraulic conductivity can be estimated from the stochastic properties of the transmissivity and applied to obtain a more accurate interpretation of solute movement. © 1990 Kluwer Academic Publishers.

  18. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.

  19. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.

  20. The use of kernel local Fisher discriminant analysis for the channelization of the Hotelling model observer

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.

    2015-03-01

    It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
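
    Whatever the channelization (KLFDA here, Gabor or Laguerre-Gauss channels classically), the downstream CHO computation is the same. The sketch below applies a generic channel matrix, forms the Hotelling template from channelized training data, and reports the detectability SNR; the random channels and Gaussian image model are illustrative assumptions.

```python
# Sketch: channelized Hotelling observer on a toy detection task.
import numpy as np

rng = np.random.default_rng(9)
npix, nchan, ntrain = 4096, 10, 500
T = rng.standard_normal((nchan, npix)) / np.sqrt(npix)   # channel matrix
signal = np.zeros(npix); signal[2000:2050] = 0.5         # known signal

g0 = rng.standard_normal((ntrain, npix))                 # signal-absent
g1 = rng.standard_normal((ntrain, npix)) + signal        # signal-present

v0, v1 = g0 @ T.T, g1 @ T.T                   # channelized data (500, 10)
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))       # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))          # CHO template

t0, t1 = v0 @ w, v1 @ w                       # decision statistics
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
print(f"CHO detectability SNR = {snr:.2f}")
```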

  1. Diffusion in higher dimensional SYK model with complex fermions

    NASA Astrophysics Data System (ADS)

    Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong

    2018-01-01

    We construct a new higher dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case; similar Hamiltonians can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low-temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are accordingly calculated at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a linear temperature-dependence term.

  2. Two-Dimensional Versus Three-Dimensional Conceptualization in Astronomy Education

    NASA Astrophysics Data System (ADS)

    Reynolds, Michael David

    Numerous conceptual issues in science are naturally three-dimensional. Classroom presentations are often two-dimensional or at best multidimensional. Several astronomy topics are of this nature, e.g., the mechanics of the phases of the moon. Textbooks present this three-dimensional topic in two dimensions; such is often the case in the classroom. This study was conducted to examine conceptions exhibited by pairs of like-sex 11th grade standard physics students as they modeled the lunar phases. Student pairs, 13 male and 13 female, were randomly selected and assigned. Pairing comes closer to classroom emulation, minimizes the need for direct probes, and pair discussion is more likely to display variety and depth. Four hypotheses were addressed: (1) Participants who model three-dimensionally will more likely achieve a higher explanation score. (2) Students who experienced more earth or physical science exposure will more likely model three-dimensionally. (3) Pairs that exhibit a strong science or mathematics preference will more likely model three-dimensionally. (4) Males will model in three dimensions more than females. Students provided background information, including science course exposure and subject preference. Each pair laid out a 16-card set representing two complete lunar phase changes. The pair was asked to explain why the phases occur. Materials were provided for use, including disks, spheres, paper and pen, and a flashlight. Activities were videotaped for later evaluation. The statistics of choice were a correlation between course preference and model type, and ANOVA for the other hypotheses. It was determined that pairs who modeled three-dimensionally achieved a higher score on their phases-mechanics explanation at the p < .05 level. Pairs with earth science or physical science exposure, those who prefer science or mathematics, and male participants were not more likely to model three-dimensionally. Possible reasons for the lack of significance were the small sample size and, in the case of course preferences, small differences in course-preference means. Based on this study, instructors should be aware of dimensionality and student misconceptions. Whenever possible, three-dimensional concepts should be modeled as such. Authors and publishers should consider modeling suggestions and three-dimensional ancillaries.

  3. Analytical model for three-dimensional Mercedes-Benz water molecules.

    PubMed

    Urbic, T

    2012-06-01

    We developed a statistical model which describes the thermal and volumetric properties of water-like molecules. A molecule is presented as a three-dimensional sphere with four hydrogen-bonding arms. Each water molecule interacts with its neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of a model developed before for a two-dimensional Mercedes-Benz model of water. We explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility as a function of temperature and pressure. We found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds upon increasing the temperature.

  6. Statistical and dynamical modeling of heavy-ion fusion-fission reactions

    NASA Astrophysics Data System (ADS)

    Eslamizadeh, H.; Razazzadeh, H.

    2018-02-01

    A modified statistical model and a four-dimensional dynamical model based on Langevin equations have been used to simulate the fission process of the excited compound nuclei 207At and 216Ra produced in the fusion reactions 19F + 188Os and 19F + 197Au. The evaporation residue cross section, the fission cross section, the pre-scission neutron, proton and alpha multiplicities, and the anisotropy of the fission fragment angular distribution have been calculated for the excited compound nuclei 207At and 216Ra. In the modified statistical model, the effects of the spin K about the symmetry axis and of temperature have been considered in the calculations of the fission widths and the potential energy surfaces. It was shown that the modified statistical model can reproduce the above-mentioned experimental data by using appropriate values of the temperature coefficient of the effective potential, λ = 0.0180 ± 0.0055 and 0.0080 ± 0.0030 MeV⁻², and of the scaling factor of the fission barrier height, rs = 1.0015 ± 0.0025 and 1.0040 ± 0.0020, for the compound nuclei 207At and 216Ra, respectively. Three collective shape coordinates plus the projection of the total spin of the compound nucleus on the symmetry axis, K, were considered in the four-dimensional dynamical model. In the dynamical calculations, dissipation was generated through the chaos-weighted wall and window friction formula. Comparison of the theoretical results with the experimental data showed that both models satisfactorily reproduce the above-mentioned experimental data for the excited compound nuclei 207At and 216Ra.

  7. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration, and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
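
    The DEIM building block that methods of this kind rest on can be sketched compactly. The following is a generic illustration of DEIM index selection and nonlinear-term reconstruction, not the authors' nested, multi-basis solver; all data and names here are synthetic stand-ins:

    ```python
    import numpy as np

    def deim_indices(U):
        """Greedy DEIM interpolation indices for a basis U of shape (n, m)."""
        n, m = U.shape
        idx = [int(np.argmax(np.abs(U[:, 0])))]
        for j in range(1, m):
            c = np.linalg.solve(U[idx, :j], U[idx, j])   # interpolate on current points
            r = U[:, j] - U[:, :j] @ c                   # residual of the next basis vector
            idx.append(int(np.argmax(np.abs(r))))
        return np.array(idx)

    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((500, 40))           # synthetic stand-in for snapshots
    Ub = np.linalg.svd(snapshots, full_matrices=False)[0][:, :10]   # 10 POD modes
    idx = deim_indices(Ub)

    # a nonlinear term f then needs evaluation only at the DEIM points:
    # f ~ Ub (P^T Ub)^{-1} P^T f, with P the selection of rows idx
    f = np.tanh(snapshots[:, 0])                         # toy nonlinearity on the full grid
    f_deim = Ub @ np.linalg.solve(Ub[idx, :], f[idx])    # cheap DEIM reconstruction
    ```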

  8. Fast dimension reduction and integrative clustering of multi-omics data using low-rank approximation: application to cancer molecular classification.

    PubMed

    Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin

    2015-12-01

    One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analysis suggests that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/.
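
    The overall strategy (reduce stacked multi-omics features to a shared low-dimensional subspace, then cluster samples there) can be sketched generically as follows. This is not the LRAcluster likelihood model itself, which uses a low-rank regularized probabilistic fit; a plain truncated SVD plus k-means stands in for it, and all data are synthetic:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # stand-ins for two omics data types measured on the same 200 samples
    expression = rng.standard_normal((200, 5000))
    methylation = rng.standard_normal((200, 2000))

    # standardize each block, stack features, and take a truncated SVD
    X = np.hstack([(m - m.mean(0)) / m.std(0) for m in (expression, methylation)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 10                                # reduced dimension (model selection omitted)
    Z = U[:, :k] * s[:k]                  # samples in the shared low-dimensional subspace

    subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
    ```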

  9. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
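
    The central ESCM step, replacing per-atom kinematic linkage with statistical averages of atomistic displacements over the atomic volume assigned to each interface FE node, might look roughly like this. This is a sketch with synthetic arrays; nearest-node assignment is an assumed simplification of the local atomic volumes:

    ```python
    import numpy as np

    def nodal_averages(atom_pos, atom_disp, node_pos):
        """Statistical average of atomistic displacements per interface FE node."""
        # assign each atom to its closest node (a stand-in for local atomic volumes)
        d = np.linalg.norm(atom_pos[:, None, :] - node_pos[None, :, :], axis=2)
        owner = np.argmin(d, axis=1)
        u_node = np.zeros_like(node_pos)
        for k in range(len(node_pos)):
            members = owner == k
            if members.any():
                u_node[k] = atom_disp[members].mean(axis=0)
        return u_node   # applied as FEM BCs; FEM results then update the MD boundary

    rng = np.random.default_rng(2)
    atoms = rng.uniform(0.0, 10.0, (5000, 3))        # synthetic interface-region atoms
    disps = 0.01 * rng.standard_normal((5000, 3))    # synthetic atomic displacements
    nodes = rng.uniform(0.0, 10.0, (8, 3))           # synthetic FE interface nodes
    print(nodal_averages(atoms, disps, nodes))
    ```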

  10. Chaos in plasma simulation and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, C.; Newman, D.E.; Sprott, J.C.

    1993-09-01

    We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
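
    One of the tools listed above, the correlation dimension, is commonly estimated with the Grassberger-Procaccia correlation sum on a time-delay embedding of a diagnostic signal; a minimal sketch on a synthetic quasi-periodic signal (embedding parameters and radii are illustrative choices):

    ```python
    import numpy as np

    def correlation_sum(x, dim=4, tau=5, radii=None):
        """Grassberger-Procaccia correlation sum C(r) for a delay embedding of x."""
        radii = np.logspace(-2, 0, 12) if radii is None else radii
        n = len(x) - (dim - 1) * tau
        Y = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
        dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
        pairs = dist[np.triu_indices(n, k=1)]
        return radii, np.array([(pairs < r).mean() for r in radii])

    t = np.arange(20000) * 0.01
    x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t)    # quasi-periodic test signal
    r, C = correlation_sum(x[::20])
    # slope of log C(r) vs log r in the scaling range estimates the dimension (~2 here)
    slope = np.polyfit(np.log(r[3:9]), np.log(C[3:9] + 1e-12), 1)[0]
    print(f"estimated correlation dimension ~ {slope:.2f}")
    ```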

  11. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    PubMed Central

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variables interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  13. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.

    PubMed

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of protein structures based on the differences and similarities between different Ramachandran plots. Despite the availability of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which is more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of the adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
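
    For orientation, a baseline estimator for a single population of backbone angles is a bivariate kernel density estimate with von Mises kernels, which respects the circular nature of the data. The paper's collective trigonometric-spline estimator is more efficient; this sketch, with toy angles, only illustrates the underlying density-estimation task:

    ```python
    import numpy as np
    from scipy.special import i0

    def von_mises_kde2d(phi, psi, grid, kappa=50.0):
        """Bivariate angular density estimate with product von Mises kernels."""
        P, S = np.meshgrid(grid, grid, indexing="ij")
        norm = len(phi) * (2.0 * np.pi * i0(kappa)) ** 2
        dens = sum(np.exp(kappa * (np.cos(P - p) + np.cos(S - s)))
                   for p, s in zip(phi, psi))
        return dens / norm

    rng = np.random.default_rng(11)
    phi = rng.vonmises(-2.0, 8.0, 500)        # toy phi/psi angles near the beta region
    psi = rng.vonmises(2.4, 8.0, 500)
    grid = np.linspace(-np.pi, np.pi, 90)
    density = von_mises_kde2d(phi, psi, grid) # one Ramachandran density estimate
    ```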

  14. Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Marston, J. B.; Hastings, M. B.

    2005-03-01

    The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of a non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method [Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995), chapter 9.5]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)], also known as the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)], suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.
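
    For reference, the flow-equation method mentioned here evolves an operator by a continuous similarity transformation; in Wegner's canonical form, with H_d(ℓ) the diagonal part of H(ℓ),

    ```latex
    \frac{dH(\ell)}{d\ell} = \bigl[\eta(\ell),\, H(\ell)\bigr],
    \qquad
    \eta(\ell) = \bigl[H_d(\ell),\, H(\ell)\bigr]
    ```

    so that off-diagonal matrix elements decay as ℓ → ∞; the generalization described in the abstract replaces this unitary flow with a similarity flow suited to non-Hermitian operators.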

  15. A new feedback image encryption scheme based on perturbation with dynamical compound chaotic sequence cipher generator

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Cui, Minggen; Wang, Zhu

    2009-07-01

    The design of a new compound two-dimensional chaotic function is presented by exploiting two one-dimensional chaotic functions that switch randomly; the design is used as a chaotic sequence generator and is proven to be chaotic according to Devaney's definition. The properties of the compound chaotic functions are also proved rigorously. In order to improve the robustness against differential cryptanalysis and produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos, selecting one of the two one-dimensional chaotic functions randomly, and a new pixel permutation-and-substitution method is designed in detail based on random control of array rows and columns by the compound chaos. The results from entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; moreover, it accelerates encryption speed and achieves a higher level of security. By means of the dynamical compound chaos and perturbation technology, the paper addresses the problem of the low computational precision of one-dimensional chaotic functions.
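
    A toy version of the compound idea, two one-dimensional chaotic maps switched by a pseudo-random control sequence and used as a keystream, is sketched below. This is purely didactic: the paper's actual scheme adds feedback, row/column permutation, and perturbation, none of which are reproduced here.

    ```python
    import numpy as np

    def compound_keystream(x0, ctrl_seed, n):
        """Keystream bytes from two 1D chaotic maps switched pseudo-randomly."""
        ctrl = np.random.default_rng(ctrl_seed).integers(0, 2, n)   # switching sequence
        x, out = x0, []
        for bit in ctrl:
            if bit:                                   # logistic map
                x = 3.99 * x * (1.0 - x)
            else:                                     # tent map
                x = x / 0.499 if x < 0.499 else (1.0 - x) / 0.501
            out.append(int(x * 256.0) & 0xFF)
        return np.array(out, dtype=np.uint8)

    pixels = np.arange(16, dtype=np.uint8)            # stand-in for image bytes
    ks = compound_keystream(0.3141, ctrl_seed=42, n=pixels.size)
    cipher = pixels ^ ks                              # substitution by XOR (no feedback here)
    assert np.array_equal(cipher ^ ks, pixels)        # decryption recovers the plaintext
    ```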

  16. Neural network modelling and dynamical system theory: are they relevant to study the governing dynamics of association football players?

    PubMed

    Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger

    2011-12-01

    Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical system analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e. Artificial Neural Networks; ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply ANNs. This review investigates the relative value of various ANN learning approaches used in sports performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. In particular, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach for understanding the dynamical attributes of association football players.
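
    The KFM dimensionality-reduction step singled out above has a compact core. A minimal self-organizing map sketch follows; the toy data stand in for player-position vectors, and the grid size and decay schedules are arbitrary choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((2000, 22))       # toy stand-in for 11 players' x/y positions
    grid = rng.standard_normal((10, 10, 22))  # 10x10 map of prototype vectors
    ii, jj = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")

    for n, x in enumerate(X):
        lr = 0.5 * np.exp(-n / 1000.0)                    # decaying learning rate
        sigma = 3.0 * np.exp(-n / 1000.0) + 0.1           # shrinking neighbourhood
        d = np.linalg.norm(grid - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma**2))
        grid += lr * h[..., None] * (x - grid)            # pull neighbourhood toward x

    # each pattern is then summarized by the 2D coordinates of its best-matching unit
    ```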

  17. Infinitely divisible cascades to model the statistics of natural images.

    PubMed

    Chainais, Pierre

    2007-12-01

    We propose to model the statistics of natural images using the large class of stochastic processes called Infinitely Divisible Cascades (IDC). IDC were first introduced in one dimension to provide multifractal time series modeling the so-called intermittency phenomenon in hydrodynamical turbulence. We have extended the definition of scalar infinitely divisible cascades from 1 to N dimensions and commented on the relevance of such a model to fully developed turbulence in [1]. In this article, we focus on the particular two-dimensional case. IDC appear as good candidates for modeling the statistics of natural images. They share most of their usual properties and appear to be consistent with several independent theoretical and experimental approaches in the literature. We point out the interest of IDC for applications to procedural texture synthesis.

  18. Introduction to multivariate discrimination

    NASA Astrophysics Data System (ADS)

    Kégl, Balázs

    2013-07-01

    Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to the practicing experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either relevant to or even motivated by certain unorthodox applications of multivariate discrimination in experimental physics.

  19. Thomas-Fermi model for a bulk self-gravitating stellar object in two dimensions

    NASA Astrophysics Data System (ADS)

    De, Sanchari; Chakrabarty, Somenath

    2015-09-01

    In this article we have solved a hypothetical problem related to the stability and gross properties of two-dimensional self-gravitating stellar objects using the Thomas-Fermi model. The formalism presented here is an extension of the standard three-dimensional problem discussed in Statistical Physics, Part I, by Landau and Lifshitz. Further, the formalism presented in this article may be considered a class problem for post-graduate-level students of physics or may be assigned as part of their dissertation project.

  20. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. The program uses results from a commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate the reliability of a component in the presence of inherent surface- and/or volume-type flaws. It computes a measure of reliability by use of a finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. The reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
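
    The underlying fast-fracture computation is weakest-link Weibull statistics: each element contributes a risk of rupture from its stress and volume, and element survival probabilities multiply. A hedged sketch of the simplest volume-flaw, two-parameter case (CARES itself also handles surface flaws, multiaxial stress states, and several fracture theories; the numbers below are made up):

    ```python
    import numpy as np

    def component_reliability(stresses, volumes, m, sigma_0):
        """Weakest-link fast-fracture reliability from per-element FE output.

        m: Weibull modulus; sigma_0: scale parameter of the volume-flaw strength
        distribution (consistent stress and volume units are assumed).
        """
        s = np.clip(np.asarray(stresses, dtype=float), 0.0, None)
        risk = np.asarray(volumes, dtype=float) * (s / sigma_0) ** m   # risk of rupture
        return np.exp(-risk.sum())         # R = exp(-sum_e V_e (sigma_e / sigma_0)^m)

    R = component_reliability(stresses=[180.0, 220.0, 150.0],   # element stresses, MPa
                              volumes=[2.0, 1.5, 3.0],          # element volumes, mm^3
                              m=10.0, sigma_0=300.0)
    print(f"failure probability = {1.0 - R:.3e}")
    ```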

  1. High-level intuitive features (HLIFs) for intuitive skin lesion description.

    PubMed

    Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A

    2015-03-01

    A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in such systems. Existing feature sets for analyzing standard camera images are composed of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists, such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.

  2. The influence of processor focus on speckle correlation statistics for a Shuttle imaging radar scene of Hurricane Josephine

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1988-01-01

    The surface wave field produced by Hurricane Josephine was imaged by the L-band SAR aboard the Challenger on October 12, 1984. Exponential trends found in the two-dimensional autocorrelations of speckled image data support an equilibrium theory model of sea surface hydrodynamics. The notions of correlated specular reflection, surface coherence, optimal Doppler parameterization and spatial resolution are discussed within the context of a Poisson-Rayleigh statistical model of the SAR imaging process.

  3. Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy

    NASA Astrophysics Data System (ADS)

    Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.

    2010-03-01

    Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.

  4. Low-Dimensional Model of a Cylinder Wake

    NASA Astrophysics Data System (ADS)

    Luchtenburg, Mark; Cohen, Kelly; Siegel, Stefan; McLaughlin, Tom

    2003-11-01

    In a two-dimensional cylinder wake, self-excited oscillations in the form of periodic shedding of vortices are observed above a critical Reynolds number of about 47. These flow-induced non-linear oscillations lead to undesirable effects associated with unsteady pressures, such as fluid-structure interactions. An effective way of suppressing the self-excited flow oscillations is the incorporation of closed-loop flow control. In this effort, a low-dimensional proper orthogonal decomposition (POD) model is built from data obtained from direct numerical simulations of the Navier-Stokes equations for the two-dimensional circular cylinder wake at a Reynolds number of 100. Three different conditions are examined, namely, the unforced wake experiencing steady-state vortex shedding, the transient behavior of the unforced wake at the startup of the simulation, and the transient response to open-loop harmonic forcing by translation. We discuss POD mode selection and the number of modes that need to be included in the low-dimensional model. It is found that the transient dynamics need to be represented by a coupled system that includes an aperiodic mean-flow mode, an aperiodic shift mode and the periodic von Karman modes. Finally, a least-squares mapping method is introduced to develop the non-linear state equations. The predictive capability of the state equations demonstrates the ability of the above approach to model the transient dynamics of the wake.
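
    The POD basis for such a model is typically computed from a snapshot matrix via the singular value decomposition; a minimal sketch, with synthetic snapshots standing in for the DNS data (the shift mode mentioned above would be appended separately):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    snapshots = rng.standard_normal((4096, 200))   # columns: flattened velocity snapshots

    mean_flow = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

    energy = s**2 / np.sum(s**2)                   # modal energy fractions
    n_modes = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1  # 99% energy criterion
    modes = U[:, :n_modes]                         # POD modes
    a = modes.T @ (snapshots - mean_flow)          # temporal coefficients for the model
    ```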

  5. Extreme value statistics for two-dimensional convective penetration in a pre-main sequence star

    NASA Astrophysics Data System (ADS)

    Pratt, J.; Baraffe, I.; Goffrey, T.; Constantino, T.; Viallet, M.; Popov, M. V.; Walder, R.; Folini, D.

    2017-08-01

    Context. In the interior of stars, a convectively unstable zone typically borders a zone that is stable to convection. Convective motions can penetrate the boundary between these zones, creating a layer characterized by intermittent convective mixing, and gradual erosion of the density and temperature stratification. Aims: We examine a penetration layer formed between a central radiative zone and a large convection zone in the deep interior of a young low-mass star. Using the Multidimensional Stellar Implicit Code (MUSIC) to simulate two-dimensional compressible stellar convection in a spherical geometry over long times, we produce statistics that characterize the extent and impact of convective penetration in this layer. Methods: We apply extreme value theory to the maximal extent of convective penetration at any time. We compare statistical results from simulations which treat non-local convection, throughout a large portion of the stellar radius, with simulations designed to treat local convection in a small region surrounding the penetration layer. For each of these situations, we compare simulations of different resolution, which have different velocity magnitudes. We also compare statistical results between simulations that radiate energy at a constant rate to those that allow energy to radiate from the stellar surface according to the local surface temperature. Results: Based on the frequency and depth of penetrating convective structures, we observe two distinct layers that form between the convection zone and the stable radiative zone. We show that the probability density function of the maximal depth of convective penetration at any time corresponds closely in space with the radial position where internal waves are excited. We find that the maximal penetration depth can be modeled by a Weibull distribution with a small shape parameter. Using these results, and building on established scalings for diffusion enhanced by large-scale convective motions, we propose a new form for the diffusion coefficient that may be used for one-dimensional stellar evolution calculations in the large Péclet number regime. These results should contribute to the 321D link.
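
    The distributional claim above (per-snapshot maximal penetration depths following a Weibull law with a small shape parameter) can be checked on any such series with scipy; a sketch on synthetic maxima, with illustrative parameter values:

    ```python
    import numpy as np
    from scipy import stats

    # stand-in for the maximal convective penetration depth recorded at each output time
    max_depth = stats.weibull_min.rvs(c=0.9, scale=0.05, size=2000, random_state=5)

    c, loc, scale = stats.weibull_min.fit(max_depth, floc=0.0)   # location fixed at zero
    print(f"Weibull shape c = {c:.2f}, scale = {scale:.3f}")     # small c: rare deep events

    # goodness of fit via a Kolmogorov-Smirnov test
    D, p = stats.kstest(max_depth, "weibull_min", args=(c, loc, scale))
    print(f"KS statistic = {D:.3f}, p = {p:.2f}")
    ```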

  6. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to their likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
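
    Concretely, the sloppiness diagnosis is a strongly hierarchical Fisher information spectrum. A small sketch for a toy sum-of-exponentials model (a classic sloppy example), with the FIM built from a finite-difference Jacobian under an assumed unit-variance Gaussian noise model:

    ```python
    import numpy as np

    t = np.linspace(0.0, 5.0, 50)

    def model(theta):
        a1, a2, k1, k2 = theta
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    def fisher_information(theta, eps=1e-6):
        """FIM = J^T J for unit-variance Gaussian noise, J the parameter Jacobian."""
        J = np.empty((t.size, theta.size))
        for i in range(theta.size):
            d = np.zeros(theta.size)
            d[i] = eps
            J[:, i] = (model(theta + d) - model(theta - d)) / (2.0 * eps)
        return J.T @ J

    evals = np.linalg.eigvalsh(fisher_information(np.array([1.0, 1.0, 1.0, 1.1])))
    print(np.sort(evals)[::-1])   # eigenvalues typically span many decades: "sloppiness"
    ```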

  7. Unperturbed Schelling Segregation in Two or Three Dimensions

    NASA Astrophysics Data System (ADS)

    Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew

    2016-09-01

    Schelling's models of segregation, first described in 1969 (Am Econ Rev 59:488-493, 1969), are among the best known models of self-organising behaviour. Their original purpose was to identify mechanisms of urban racial segregation. But his models form part of a family which arises in statistical mechanics, neural networks, social science, and beyond, where populations of agents interact on networks. Despite extensive study, unperturbed Schelling models have largely resisted rigorous analysis, prior results generally focusing on variants in which noise is introduced into the dynamics, the resulting system being amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory (Young, Individual Strategy and Social Structure: An Evolutionary Theory of Institutions, Princeton University Press, 1998). A series of recent papers (Brandt et al., Proceedings of the 44th Annual ACM Symposium on Theory of Computing, STOC 2012; Barmpalias et al., 55th Annual IEEE Symposium on Foundations of Computer Science, 2014; J Stat Phys 158:806-852, 2015) has seen the first rigorous analyses of 1-dimensional unperturbed Schelling models, in an asymptotic framework largely unknown in statistical mechanics. Here we provide the first such analysis of 2- and 3-dimensional unperturbed models, establishing most of the phase diagram, and answering a challenge from Brandt et al. (STOC 2012).
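
    A minimal unperturbed Schelling-type dynamics in two dimensions can be simulated in a few lines. This sketch (random swaps of unhappy agents on a torus, tolerance threshold tau) is a simplified illustrative variant, not the specific model family analysed in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 50
    grid = rng.choice([0, 1], size=(N, N))            # two agent types on a torus

    def unhappy(g, tau=0.5):
        """Agents whose fraction of same-type Moore neighbours is below tau."""
        same = sum((np.roll(np.roll(g, di, 0), dj, 1) == g)
                   for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
        return same / 8.0 < tau

    for _ in range(50_000):                           # unperturbed: no random noise moves
        movers = np.argwhere(unhappy(grid))
        if len(movers) == 0:                          # everyone content: dynamics stop
            break
        i, j = movers[rng.integers(len(movers))]
        k, l = rng.integers(N, size=2)                # swap with a uniformly random site
        grid[i, j], grid[k, l] = grid[k, l], grid[i, j]
    ```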

  8. Sequence specificity, statistical potentials, and three-dimensional structure prediction with self-correcting distance geometry calculations of beta-sheet formation in proteins.

    PubMed Central

    Zhu, H.; Braun, W.

    1999-01-01

    A statistical analysis of a representative data set of 169 known protein structures was used to analyze the specificity of residue interactions between spatially neighboring strands in beta-sheets. Pairwise potentials were derived from the frequency of residue pairs in nearest, second nearest and third nearest contacts across neighboring beta-strands compared to the expected frequency of residue pairs in a random model. A pseudo-energy function based on these statistical pairwise potentials recognized native beta-sheets among possible alternative pairings. The native pairing was found within the three lowest energies in 73% of the cases in the training data set and in 63% of beta-sheets in a test data set of 67 proteins that were not part of the training set. The energy function was also used to detect tripeptides that occur frequently in beta-sheets of native proteins. The majority of native partners of tripeptides were distributed in a low energy range. Self-correcting distance geometry (SECODG) calculations using distance constraint sets derived from possible low-energy pairings of beta-strands uniquely identified the native pairing of the beta-sheet in pancreatic trypsin inhibitor (BPTI). These results will be useful for predicting the structure of proteins from their amino acid sequence as well as for the design of proteins containing beta-sheets. PMID:10048326
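
    The pairwise potentials described here follow the standard inverse-Boltzmann construction, an energy from the log-ratio of observed to expected pair frequencies; a sketch (array shapes and the pseudocount are illustrative):

    ```python
    import numpy as np

    def pair_potential(observed_counts, freq_i, freq_j, pseudo=1.0):
        """E(i,j) = -ln(f_obs(i,j) / f_exp(i,j)), in units of kT.

        observed_counts: 20x20 counts of residue pairs in (e.g.) nearest contact;
        freq_i, freq_j:  marginal residue frequencies defining the random model.
        """
        obs = observed_counts + pseudo               # pseudocounts guard sparse pairs
        f_obs = obs / obs.sum()
        f_exp = np.outer(freq_i, freq_j)             # independence (random) model
        return -np.log(f_obs / f_exp)

    # the pseudo-energy of a candidate strand pairing is the sum of E(i,j) over its
    # nearest contacts (with separate tables for second and third nearest contacts)
    ```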

  9. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.

  10. The consequences of two distinct reaction coordinates in the decomposition of the ethylamine cation conformers

    NASA Astrophysics Data System (ADS)

    Petersen, Allan C.; Sølling, Theis I.

    2018-06-01

    The ethylamine cation CH3CH2NH2+ is shown to lose CH3 accompanied by a bimodal kinetic energy release. CH3CH2NH2+ exists in two conformeric forms, which are computationally shown to be distinct minima. The barrier separating the conformers is modest compared to the energy requirement for dissociation, and the conformers are so easily interconverted that the reactions take place from a mixture of the two conformers in equilibrium. However, once a reaction begins it is conformer-specific. The reaction from one conformational origin takes place by simple cleavage along a one-dimensional reaction coordinate, whereas reaction from the other origin is by a complex reaction mechanism with a two- or possibly three-dimensional reaction coordinate. Reaction by the former mechanism is a statistical process associated with a low kinetic energy release (KER), while the latter is non-statistical, giving rise to a very low KER. The experimental result is a composite signal due to the superposition of two simple Gaussians, each corresponding to their respective KER.
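
    Decomposing a bimodal kinetic-energy-release spectrum into a superposition of two Gaussians, as described in the last sentence, is a small curve-fitting exercise; a sketch with synthetic data (the energy scale and component parameters are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(E, a1, mu1, s1, a2, mu2, s2):
        return (a1 * np.exp(-(E - mu1) ** 2 / (2.0 * s1 ** 2))
                + a2 * np.exp(-(E - mu2) ** 2 / (2.0 * s2 ** 2)))

    E = np.linspace(0.0, 100.0, 400)               # KER axis, illustrative units (meV)
    truth = two_gaussians(E, 1.0, 15.0, 6.0, 0.6, 40.0, 12.0)
    signal = truth + 0.02 * np.random.default_rng(7).standard_normal(E.size)

    p0 = [1.0, 10.0, 5.0, 0.5, 50.0, 10.0]         # rough initial guesses
    popt, _ = curve_fit(two_gaussians, E, signal, p0=p0)
    print("component means:", popt[1], popt[4])    # the two KER components
    ```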

  11. Mapping the Classroom Emotional Environment

    ERIC Educational Resources Information Center

    Harvey, Shane T.; Bimler, David; Evans, Ian M.; Kirkland, John; Pechtel, Pia

    2012-01-01

    Harvey and Evans (2003) have proposed that teachers' emotional skills, as required in the classroom, can be organized into a five-dimensional model. Further research is necessary to validate this model and evaluate the importance of each dimension of teacher emotion competence for educational practice. Using a statistical method for mapping…

  12. Why environmental scientists are becoming Bayesians

    Treesearch

    James S. Clark

    2005-01-01

    Advances in computational statistics provide a general framework for the high dimensional models typically needed for ecological inference and prediction. Hierarchical Bayes (HB) represents a modelling structure with capacity to exploit diverse sources of information, to accommodate influences that are unknown (or unknowable), and to draw inference on large numbers of...

  13. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
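
    The core idea, basing the reduced-resolution representation on a statistic that preserves variation rather than on averaging, can be sketched as a block-wise reduction. This is illustrative only; the paper's projection operators and case studies are more elaborate:

    ```python
    import numpy as np

    def reduce_blocks(data, block, stat=np.var):
        """Downsample a 2D field by applying `stat` over non-overlapping blocks."""
        h, w = data.shape
        trimmed = data[:h - h % block, :w - w % block]
        tiles = trimmed.reshape(h // block, block, w // block, block)
        return stat(tiles, axis=(1, 3))

    rng = np.random.default_rng(8)
    field = rng.standard_normal((512, 512)).cumsum(0).cumsum(1)   # toy correlated field

    low_res_mean = reduce_blocks(field, 8, stat=np.mean)  # averaging smooths variation away
    low_res_var = reduce_blocks(field, 8, stat=np.var)    # variance keeps the variation signal
    ```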

  14. Pauli structures arising from confined particles interacting via a statistical potential

    NASA Astrophysics Data System (ADS)

    Batle, Josep; Ciftja, Orion; Farouk, Ahmed; Alkhambashi, Majid; Abdalla, Soliman

    2017-09-01

    There have been suggestions that the Pauli exclusion principle alone can lead a non-interacting (free) system of identical fermions to form crystalline structures dubbed Pauli crystals. Single-shot imaging experiments for the case of ultra-cold systems of free spin-polarized fermionic atoms in a two-dimensional harmonic trap appear to show geometric arrangements that cannot be characterized as Wigner crystals. This work explores this idea and considers a well-known approach that enables one to treat a quantum system of free fermions as a system of classical particles interacting with a statistical interaction potential. The model under consideration, though classical in nature, incorporates the quantum statistics by endowing the classical particles with an effective interaction potential. The reasonable expectation is that possible Pauli crystal features seen in experiments may manifest in this model that captures the correct quantum statistics as a first order correction. We use the Monte Carlo simulated annealing method to obtain the most stable configurations of finite two-dimensional systems of confined particles that interact with an appropriate statistical repulsion potential. We consider both an isotropic harmonic and a hard-wall confinement potential. Despite minor differences, the most stable configurations observed in our model correspond to the reported Pauli crystals in single-shot imaging experiments of free spin-polarized fermions in a harmonic trap. The crystalline configurations observed appear to be different from the expected classical Wigner crystal structures that would emerge should the confined classical particles had interacted with a pair-wise Coulomb repulsion.
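
    A sketch of the described procedure, assuming the standard Uhlenbeck-Gropper form of the effective statistical repulsion between same-spin ideal fermions (the paper's exact potential, particle numbers, and annealing schedule may differ):

    ```python
    import numpy as np

    def statistical_potential(r, lam=1.0):
        """Effective same-spin fermion repulsion (Uhlenbeck-Gropper form), in kT units."""
        return -np.log(1.0 - np.exp(-2.0 * np.pi * r**2 / lam**2) + 1e-300)

    def energy(pos, lam=1.0):
        trap = 0.5 * np.sum(pos**2)                          # isotropic harmonic trap
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
        return trap + statistical_potential(d[np.triu_indices(len(pos), k=1)], lam).sum()

    rng = np.random.default_rng(12)
    pos = rng.standard_normal((6, 2))                        # six confined "fermions"
    T = 1.0
    for _ in range(200_000):                                 # simulated annealing
        T = max(1e-4, 0.99997 * T)
        trial = pos.copy()
        trial[rng.integers(len(pos))] += 0.05 * rng.standard_normal(2)
        dE = energy(trial) - energy(pos)
        if dE < 0.0 or rng.random() < np.exp(-dE / T):
            pos = trial
    print(pos)                                               # candidate stable configuration
    ```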

  15. Exploring load, velocity, and surface disorder dependence of friction with one-dimensional and two-dimensional models.

    PubMed

    Dagdeviren, Omur E

    2018-08-03

    The effect of surface disorder, load, and velocity on friction between a single asperity contact and a model surface is explored with one-dimensional and two-dimensional Prandtl-Tomlinson (PT) models. We show that there are fundamental physical differences between the predictions of one-dimensional and two-dimensional models. The one-dimensional model estimates a monotonic increase in friction and energy dissipation with load, velocity, and surface disorder. However, a two-dimensional PT model, which is expected to approximate a tip-sample system more realistically, reveals a non-monotonic trend, i.e. friction is inert to surface disorder and roughness in the wearless friction regime. The two-dimensional model discloses that the surface disorder starts to dominate the friction and energy dissipation when the tip and the sample interact predominantly deep into the repulsive regime. Our numerical calculations show that tracking the minimum energy path and slip-stick motion are two competing effects that determine the load, velocity, and surface disorder dependence of friction. In the two-dimensional model, the single asperity can follow the minimum energy path in the wearless regime; however, with increasing load and sliding velocity, the slip-stick movement dominates the dynamic motion and results in an increase in friction by impeding the tracing of the minimum energy path. Contrary to the two-dimensional model, when the one-dimensional PT model is employed, the single asperity cannot reach the energy minimum due to its constrained motion and reveals only a trivial dependence of friction on load, velocity, and surface disorder. Our computational analyses clarify the physical differences between the predictions of the one-dimensional and two-dimensional models and open new avenues for disordered surfaces in low-energy-dissipation applications in the wearless friction regime.
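
    A minimal overdamped one-dimensional PT integration of the kind compared here, in reduced units where the lattice period is 2π and the spring constant is 1 (parameter values are illustrative; eta > 1 gives stick-slip, eta < 1 smooth sliding):

    ```python
    import numpy as np

    # dimensionless overdamped PT: dx/dt = -eta*sin(x) + (support - x)
    eta, v, dt, steps = 3.0, 0.02, 0.01, 200_000    # eta > 1: stick-slip regime
    x, friction = 0.0, []
    for n in range(steps):
        support = v * n * dt                        # spring support dragged at speed v
        x += dt * (-eta * np.sin(x) + (support - x))
        friction.append(support - x)                # instantaneous spring force

    print("mean friction:", np.mean(friction[steps // 2:]))
    ```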

  16. A Multidimensional Partial Credit Model with Associated Item and Test Statistics: An Application to Mixed-Format Tests

    ERIC Educational Resources Information Center

    Yao, Lihua; Schwarz, Richard D.

    2006-01-01

    Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…

  17. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model drawn from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the industry's main concern. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best-fitting surface and provides sensible predictions of the underlying mortality rates in Malaysia.
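
    A one-dimensional P-spline fit, the building block that the two-dimensional age-year version extends via tensor products, can be sketched as follows. The data are synthetic, the smoothing parameter is chosen by hand rather than optimized, and BSpline.design_matrix requires SciPy >= 1.8:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # 1D P-spline: B-spline basis plus a difference penalty on the coefficients
    x = np.linspace(0.0, 1.0, 100)                     # e.g. rescaled age
    y = np.exp(3.0 * x) * (1.0 + 0.1 * np.random.default_rng(10).standard_normal(100))

    k = 3                                              # cubic B-splines
    t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 20), [1.0] * k]   # clamped knots
    B = BSpline.design_matrix(x, t, k).toarray()       # needs SciPy >= 1.8

    D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)      # second-order difference operator
    lam = 10.0                                         # smoothing parameter (hand-picked)
    coef = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ np.log(y))
    fitted = np.exp(B @ coef)                          # smoothed mortality-like curve
    ```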

  18. Numerical study of low-frequency discharge oscillations in a 5 kW Hall thruster

    NASA Astrophysics Data System (ADS)

    Le, YANG; Tianping, ZHANG; Juanjuan, CHEN; Yanhui, JIA

    2018-07-01

    A two-dimensional particle-in-cell plasma model is built in the R–Z plane to investigate low-frequency plasma oscillations in the discharge channel of a 5 kW LHT-140 Hall thruster. In addition to the elastic, excitation, and ionization collisions between neutral atoms and electrons, the Coulomb collisions between electrons and electrons and between electrons and ions are analyzed. The sheath characteristic distortion is also corrected. Simulation results indicate that the model reproduces the low-frequency oscillation with high accuracy. The oscillations of the discharge current and ion density produced by the model are consistent with existing conclusions. The model predicts a frequency that is consistent with that calculated by the zero-dimensional theoretical model.

  19. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem, i.e., the posterior probability density, is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.

  20. The use of a 3D laser scanner using superimpositional software to assess the accuracy of impression techniques.

    PubMed

    Shah, Sinal; Sundaram, Geeta; Bartlett, David; Sherriff, Martyn

    2004-11-01

    Several studies have compared the dimensional accuracy of different elastomeric impression materials. Most have used two-dimensional measuring devices, which fail to account for the dimensional changes that occur along a three-dimensional surface. The aim of this study was to compare the dimensional accuracy of an impression technique using a polyether material (Impregum) and a vinyl polysiloxane material (President) using a laser scanner with three-dimensional superimpositional software. Twenty impressions of a stone master model resembling a dental arch containing three acrylic posterior teeth, 10 with the polyether and 10 with the addition silicone, were cast in orthodontic stone. One plastic tooth was prepared for a metal crown. The master model and the casts were digitised with the non-contacting laser scanner to produce a 3D image. The 3D surface viewer software superimposed the master model onto each stone replica and the differences between the images were analysed. The mean difference between the model and the stone replica made from Impregum was 0.072 mm (SD 0.006) and that for the silicone 0.097 mm (SD 0.005); this difference was statistically significant, p = 0.001. Both impression materials provided an accurate replica of the prepared teeth, supporting the view that these materials are highly accurate.

  1. Statistics of transmission eigenvalues in two-dimensional quantum cavities: Ballistic versus stochastic scattering

    NASA Astrophysics Data System (ADS)

    Rotter, Stefan; Aigner, Florian; Burgdörfer, Joachim

    2007-03-01

    We investigate the statistical distribution of transmission eigenvalues in phase-coherent transport through quantum dots. In two-dimensional ab initio simulations for both clean and disordered two-dimensional cavities, we find markedly different quantum-to-classical crossover scenarios for these two cases. In particular, we observe the emergence of “noiseless scattering states” in clean cavities, irrespective of sharp-edged entrance and exit lead mouths. We find the onset of these “classical” states to be largely independent of the cavity’s classical chaoticity, but very sensitive with respect to bulk disorder. Our results suggest that for weakly disordered cavities, the transmission eigenvalue distribution is determined both by scattering at the disorder potential and the cavity walls. To properly account for this intermediate parameter regime, we introduce a hybrid crossover scheme, which combines previous models that are valid in the ballistic and the stochastic limit, respectively.

  2. Recurrent flow analysis in spatiotemporally chaotic 2-dimensional Kolmogorov flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Dan, E-mail: dan.lucas@ucd.ie; Kerswell, Rich R., E-mail: r.r.kerswell@bris.ac.uk

    2015-04-15

    Motivated by recent success in the dynamical systems approach to transitional flow, we study the efficiency and effectiveness of extracting simple invariant sets (recurrent flows) directly from chaotic/turbulent flows and the potential of these sets for providing predictions of certain statistics of the flow. Two-dimensional Kolmogorov flow (the 2D Navier-Stokes equations with a sinusoidal body force) is studied both over a square [0, 2π]² torus and a rectangular torus extended in the forcing direction. In the former case, an order of magnitude more recurrent flows are found than previously [G. J. Chandler and R. R. Kerswell, "Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow," J. Fluid Mech. 722, 554-595 (2013)] and shown to give improved predictions for the dissipation and energy pdfs of the chaos via periodic orbit theory. Analysis of the recurrent flows shows that the energy is largely trapped in the smallest wavenumbers through a combination of the inverse cascade process and a feature of the advective nonlinearity in 2D. Over the extended torus at low forcing amplitudes, some extracted states mimic the statistics of the spatially localised chaos present surprisingly well, recalling the findings of Kawahara and Kida ["Periodic motion embedded in plane Couette turbulence: Regeneration cycle and burst," J. Fluid Mech. 449, 291 (2001)] in low-Reynolds-number plane Couette flow. At higher forcing amplitudes, however, success is limited, highlighting the increased dimensionality of the chaos and the need for larger data sets. Algorithmic developments to improve the extraction procedure are discussed.

  3. Statistics of Lyapunov exponents of quasi-one-dimensional disordered systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Yang; Xiong, Shi-Jie

    2005-10-01

    Statistical properties of Lyapunov exponents (LE) are numerically calculated in a quasi-one-dimensional (1D) Anderson model, which is in a 2D or 3D lattice with a finite cross section. The single-parameter scaling (SPS) variable τ relating the Lyapunov exponents γ and their variances σ² by τ ≡ σ²L/⟨γ⟩ is calculated for different lateral coupling t and disorder strength W. In a wide range of t, τ is approximately independent of W, but it has different values for LEs in different channels. For small t, the distribution of the smallest LE is non-Gaussian and τ strongly depends on W, remarkably different from the 1D SPS hypothesis.
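
    The SPS variable above can be illustrated with a short numerical sketch for a strictly one-dimensional Anderson chain (a simplification of the quasi-1D geometry studied in this record); the energy, disorder strength, chain length, and sample count below are illustrative choices, not the authors' parameters.

```python
# Minimal sketch of single-parameter-scaling statistics for a 1D Anderson
# chain: estimate gamma per disorder realization via the transfer-matrix
# recursion, then form tau = var(gamma) * L / mean(gamma).
import numpy as np

rng = np.random.default_rng(0)

def lyapunov_exponent(L, W, E=0.0):
    """Largest Lyapunov exponent of a 1D Anderson chain of length L."""
    psi_prev, psi = 1.0, 1.0
    log_norm = 0.0
    for _ in range(L):
        eps = rng.uniform(-W / 2, W / 2)       # on-site disorder
        psi_next = (E - eps) * psi - psi_prev  # transfer-matrix recursion
        psi_prev, psi = psi, psi_next
        norm = abs(psi)                        # renormalize to avoid overflow
        if norm > 0:
            psi_prev /= norm
            psi /= norm
            log_norm += np.log(norm)
    return log_norm / L                        # gamma ~ inverse localization length

L, W, n_samples = 2000, 1.0, 500
gammas = np.array([lyapunov_exponent(L, W) for _ in range(n_samples)])
tau = gammas.var() * L / gammas.mean()         # SPS variable tau = sigma^2 L / <gamma>
print(f"<gamma> = {gammas.mean():.4f}, tau = {tau:.3f}")
```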

  4. Heat balance statistics derived from four-dimensional assimilations with a global circulation model

    NASA Technical Reports Server (NTRS)

    Schubert, S. D.; Herman, G. F.

    1981-01-01

    The reported investigation was conducted to develop a reliable procedure for obtaining the diabatic and vertical terms required for atmospheric heat balance studies. The method developed employs a four-dimensional assimilation mode in connection with the general circulation model of NASA's Goddard Laboratory for Atmospheric Sciences. The initial analysis was conducted with data obtained in connection with the 1976 Data Systems Test. On the basis of the results of the investigation, it appears possible to use the model's observationally constrained diagnostics to provide estimates of the global distribution of virtually all of the quantities which are needed to compute the atmosphere's heat and energy balance.

  5. A low-dimensional analogue of holographic baryons

    NASA Astrophysics Data System (ADS)

    Bolognesi, Stefano; Sutcliffe, Paul

    2014-04-01

    Baryons in holographic QCD correspond to topological solitons in the bulk. The most prominent example is the Sakai-Sugimoto model, where the bulk soliton in the five-dimensional spacetime of AdS-type can be approximated by the flat space self-dual Yang-Mills instanton with a small size. Recently, the validity of this approximation has been verified by comparison with the numerical field theory solution. However, multi-solitons and solitons with finite density are currently beyond numerical field theory computations. Various approximations have been applied to investigate these important issues and have led to proposals for finite density configurations that include dyonic salt and baryonic popcorn. Here we introduce and investigate a low-dimensional analogue of the Sakai-Sugimoto model, in which the bulk soliton can be approximated by a flat space sigma model instanton. The bulk theory is a baby Skyrme model in a three-dimensional spacetime with negative curvature. The advantage of the lower-dimensional theory is that numerical simulations of multi-solitons and finite density solutions can be performed and compared with flat space instanton approximations. In particular, analogues of dyonic salt and baryonic popcorn configurations are found and analysed.

  6. Clinical application of the five-factor model.

    PubMed

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting toward an FFM dimensional trait model of personality disorder. Advantages of this shift in conceptualization are discussed, including treatment planning. © 2012 Wiley Periodicals, Inc.

  7. Decoherence-induced conductivity in the one-dimensional Anderson model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stegmann, Thomas; Wolf, Dietrich E.; Ujsághy, Orsolya

    We study the effect of decoherence on the electron transport in the one-dimensional Anderson model by means of a statistical model [1, 2, 3, 4, 5]. In this model, decoherence bonds are randomly distributed within the system, at which the electron phase is randomized completely. Afterwards, the transport quantity of interest (e.g. resistance or conductance) is ensemble averaged over the decoherence configurations. Averaging the resistance of the sample, the calculation can be performed analytically. In the thermodynamic limit, we find a decoherence-driven transition from the quantum-coherent localized regime to the Ohmic regime at a critical decoherence density, which is determined by the second-order generalized Lyapunov exponent (GLE) [4].
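
    A minimal sketch of the kind of statistical model described here, under stated assumptions: decoherence bonds are drawn at random with a given density, each coherent segment carries the standard disorder-averaged 1D resistance ⟨R(l)⟩ = ½(e^{2l/ξ} − 1) in units of h/e², and segments add Ohmically. All parameters are illustrative, not taken from the paper.

```python
# Segments between randomly placed decoherence bonds add Ohmically; each
# segment carries the disorder-averaged 1D resistance 0.5*(exp(2 l / xi) - 1).
import numpy as np

rng = np.random.default_rng(1)

def averaged_resistance(length, xi, density, n_realizations=2000):
    """Ensemble-averaged resistance for decoherence bonds of given density."""
    total = 0.0
    for _ in range(n_realizations):
        n_bonds = rng.poisson(density * length)
        cuts = np.sort(rng.uniform(0, length, n_bonds))
        edges = np.concatenate(([0.0], cuts, [length]))
        segments = np.diff(edges)
        total += np.sum(0.5 * (np.exp(2 * segments / xi) - 1.0))  # Ohmic sum
    return total / n_realizations

xi, L = 10.0, 200.0
for density in (0.01, 0.1, 1.0):   # crossover: localized -> Ohmic
    R = averaged_resistance(L, xi, density)
    print(f"density = {density:5.2f}:  <R> = {R:12.2f} h/e^2")
```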

  8. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
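
    The reduction idea can be sketched on a toy response: fit a Hermite polynomial chaos expansion by least squares and retain only the terms that survive a significance screen. The toy model and the coefficient threshold below are illustrative stand-ins for the hydrologic model and the ANOVA-based screening used in the study.

```python
# Fit a Hermite PCE to a toy response, then keep only "significant" terms.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product

rng = np.random.default_rng(2)

def toy_model(x):
    # Stand-in "hydrologic" response with one dominant interaction.
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.8 * x[:, 0] * x[:, 2]

d, order, n = 3, 2, 400
X = rng.standard_normal((n, d))               # standard Gaussian germ
y = toy_model(X)

# Multi-indices with total degree <= order.
alphas = [a for a in product(range(order + 1), repeat=d) if sum(a) <= order]

def basis(X, alpha):
    out = np.ones(X.shape[0])
    for j, deg in enumerate(alpha):
        c = np.zeros(deg + 1); c[deg] = 1.0
        out *= hermeval(X[:, j], c)           # probabilists' Hermite He_deg
    return out

Phi = np.column_stack([basis(X, a) for a in alphas])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

keep = np.abs(coef) > 0.05                    # crude significance screen
print("kept terms:", [a for a, k in zip(alphas, keep) if k])
y_hat = Phi[:, keep] @ np.linalg.lstsq(Phi[:, keep], y, rcond=None)[0]
print("reduced-PCE R^2:", 1 - np.var(y - y_hat) / np.var(y))
```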

  9. Investigation of dimensional variation in parts manufactured by fused deposition modeling using Gauge Repeatability and Reproducibility

    NASA Astrophysics Data System (ADS)

    Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar

    2018-02-01

    In the additive manufacturing (AM) market, the question is raised by industry and AM users on how reproducible and repeatable the fused deposition modeling (FDM) process is in providing good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in the printed parts of the FDM process. After running the simulation and analysis of the data, the FDM process capability is evaluated, which would help industry better understand the performance of FDM technology.
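
    A hedged sketch of a crossed gage R&R study (parts × operators with replicates) using the standard ANOVA mean-square estimators; the simulated measurements stand in for the FDM dimensional data analyzed in the paper.

```python
# Crossed gage R&R variance decomposition from ANOVA mean squares.
import numpy as np

rng = np.random.default_rng(3)
p, o, r = 10, 3, 3                                 # parts, operators, replicates
part_eff = rng.normal(0, 0.10, p)[:, None, None]   # part-to-part variation
oper_eff = rng.normal(0, 0.02, o)[None, :, None]   # reproducibility
y = 25.0 + part_eff + oper_eff + rng.normal(0, 0.03, (p, o, r))  # repeatability

grand = y.mean()
ss_p = o * r * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_o = p * r * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
cell = y.mean(axis=2)
ss_po = r * ((cell - y.mean(axis=(1, 2))[:, None]
              - y.mean(axis=(0, 2))[None, :] + grand) ** 2).sum()
ss_e = ((y - cell[:, :, None]) ** 2).sum()

ms_p, ms_o = ss_p / (p - 1), ss_o / (o - 1)
ms_po, ms_e = ss_po / ((p - 1) * (o - 1)), ss_e / (p * o * (r - 1))

var_repeat = ms_e                                  # repeatability
var_oper = max((ms_o - ms_po) / (p * r), 0.0)      # reproducibility
var_po = max((ms_po - ms_e) / r, 0.0)              # part-operator interaction
var_part = max((ms_p - ms_po) / (o * r), 0.0)      # part-to-part
grr = var_repeat + var_oper + var_po
print(f"measurement-system share of variance: {100 * grr / (grr + var_part):.1f}%")
```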

  10. Effects of wall curvature on turbulence statistics

    NASA Technical Reports Server (NTRS)

    Moser, R. D.; Moin, P.

    1985-01-01

    A three-dimensional, time-dependent, direct numerical simulation of low-Reynolds number turbulent flow in a mildly curved channel was performed, and the results examined to determine the mechanism by which curvature affects wall-bounded turbulent shear flows. A spectral numerical method with about one-million modes was employed, and no explicit subgrid scale model was used. The effects of curvature on this flow were determined by comparing the concave and convex sides of the channel. The observed effects are consistent with experimental observations for mild curvature. The most significant difference in the turbulence statistics between the concave and convex sides is in the Reynolds shear stress. This is accompanied by significant differences in the terms of the Reynolds shear stress balance equations. In addition, it was found that stationary Taylor-Goertler vortices were present and that they had a significant effect on the flow by contributing to the mean Reynolds shear stress, and by enhancing the difference between the wall shear stresses.

  11. State estimation and prediction using clustered particle filters.

    PubMed

    Lee, Yoonsang; Majda, Andrew J

    2016-12-20

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.

  12. State estimation and prediction using clustered particle filters

    PubMed Central

    Lee, Yoonsang; Majda, Andrew J.

    2016-01-01

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332
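
    A heavily simplified sketch of the clustering idea on the 40-dimensional Lorenz 96 model: state variables are grouped into clusters and bootstrap resampling is performed independently per cluster, so each observation influences only its neighbourhood. The particle-adjustment step of the paper is omitted, and all parameters are illustrative.

```python
# Lorenz 96 truth run plus a cluster-localized bootstrap particle filter.
import numpy as np

rng = np.random.default_rng(4)
K, F, dt = 40, 8.0, 0.05

def l96_step(x):
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)           # RK4 integration
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

n_part, n_clusters, obs_sigma = 50, 8, 0.5
clusters = np.array_split(np.arange(K), n_clusters)

truth = rng.standard_normal(K)
particles = truth + rng.standard_normal((n_part, K))
for step in range(100):
    truth = l96_step(truth)
    particles = np.array([l96_step(p) for p in particles])
    if step % 5 == 0:                              # infrequent observations
        obs = truth + obs_sigma * rng.standard_normal(K)
        for idx in clusters:                       # localized resampling
            w = np.exp(-0.5 * ((particles[:, idx] - obs[idx]) ** 2
                               / obs_sigma ** 2).sum(axis=1))
            w = w + 1e-300                         # guard against underflow
            w /= w.sum()
            particles[:, idx] = particles[rng.choice(n_part, n_part, p=w)][:, idx]
print("RMSE:", np.sqrt(((particles.mean(axis=0) - truth) ** 2).mean()))
```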

  13. Global isostatic geoid anomalies for plate and boundary layer models of the lithosphere

    NASA Technical Reports Server (NTRS)

    Hager, B. H.

    1981-01-01

    Commonly used one dimensional geoid models predict that the isostatic geoid anomaly over old ocean basins for the boundary layer thermal model of the lithosphere is a factor of two greater than that for the plate model. Calculations presented, using the spherical analogues of the plate and boundary layer thermal models, show that for the actual global distribution of plate ages, one dimensional models are not accurate and a spherical, fully three dimensional treatment is necessary. The maximum difference in geoid heights predicted for the two models is only about two meters. The thermal structure of old lithosphere is unlikely to be resolvable using global geoid anomalies. Stripping the effects of plate aging and a hypothetical uniform, 35 km, isostatically-compensated continental crust from the observed geoid emphasizes that the largest-amplitude geoid anomaly is the geoid low of almost 120 m over West Antarctica, a factor of two greater than the low of 60 m over Ceylon.

  14. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects into practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  15. Rough surface reconstruction for ultrasonic NDE simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Wonjae; Shi, Fan; Lowe, Michael J. S.

    2014-02-18

    The reflection of ultrasound from rough surfaces is an important topic for the NDE of safety-critical components, such as pressure-containing components in power stations. The specular reflection from a rough surface of a defect is normally lower than it would be from a flat surface, so it is typical to apply a safety factor in order that justification cases for inspection planning are conservative. The study of the statistics of the rough surfaces that might be expected in candidate defects according to materials and loading, and the reflections from them, can be useful to develop arguments for realistic safety factors. This paper presents a study of real rough crack surfaces that are representative of the potential defects in pressure-containing power plant. Two-dimensional (area) values of the height of the roughness have been measured and their statistics analysed. Then a means to reconstruct model cases with similar statistics, so as to enable the creation of multiple realistic realizations of the surfaces, has been investigated, using random field theory. Rough surfaces are reconstructed, based on a real surface, and results for these two-dimensional descriptions of the original surface have been compared with those from the conventional model based on a one-dimensional correlation coefficient function. In addition, ultrasonic reflections from them are simulated using a finite element method.
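
    One common random-field construction consistent with the approach described here is spectral filtering: shape white noise in Fourier space to impose a Gaussian correlation function with a chosen correlation length and RMS height. The sketch below uses illustrative parameters; the paper fits such statistics to measured crack faces.

```python
# Generate a Gaussian random rough surface by filtering white noise in
# Fourier space with the spectral density of a Gaussian correlation function.
import numpy as np

rng = np.random.default_rng(5)
n, dx = 256, 0.05                 # grid size and spacing (e.g. mm)
sigma, lam = 0.1, 0.5             # target RMS height and correlation length

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
# Spectral shape for a Gaussian correlation C(r) = sigma^2 exp(-r^2 / lam^2):
psd = np.exp(-(KX**2 + KY**2) * lam**2 / 4)

white = np.fft.fft2(rng.standard_normal((n, n)))
h = np.fft.ifft2(white * np.sqrt(psd)).real
h *= sigma / h.std()              # empirical renormalization sidesteps FFT scaling
print(f"RMS height: {h.std():.4f}, target: {sigma}")
```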

  16. Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery

    DTIC Science & Technology

    2016-09-27

    The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world applications. (Keywords: signal recovery, sparse learning, subspace modeling.)

  17. Scalable Learning for Geostatistics and Speaker Recognition

    DTIC Science & Technology

    2011-01-01

    Non-parametric methods may be preferred either due to a lack of prior knowledge of the model or due to improved robustness requirements. Both these methods have their own advantages and disadvantages. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model; in the absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA-based dimensionality reduction is often the first step.

  18. iCFD: Interpreted Computational Fluid Dynamics - Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design - The secondary clarifier.

    PubMed

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy

    2015-10-15

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, the optimal level of model discretization in both 2-D and 1-D was determined. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solid distribution with high accuracy - taking a reasonable computational effort - when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. Copyright © 2015 Elsevier Ltd. All rights reserved.
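
    The meta-modelling step can be sketched as follows: draw boundary-condition sets by Latin Hypercube Sampling, obtain a calibrated D for each set (here a made-up stand-in for the CFD-plus-calibration loop), and regress D on the conditions. Everything below is illustrative.

```python
# LHS sampling of boundary conditions plus a regression meta-model for D.
import numpy as np

rng = np.random.default_rng(6)

def lhs(n, d):
    """Basic Latin Hypercube Sample on [0, 1]^d."""
    return (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

n = 50
X = lhs(n, 2)                              # e.g. scaled inflow rate, tank depth
q, depth = X[:, 0], X[:, 1]
# Made-up stand-in for the calibrated dispersion coefficient from CFD:
D = 0.3 + 1.2 * q + 0.5 * q * depth + 0.02 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), q, depth, q * depth])   # meta-model basis
beta, *_ = np.linalg.lstsq(A, D, rcond=None)
D_hat = A @ beta
print("meta-model coefficients:", np.round(beta, 3))
print("R^2:", 1 - np.var(D - D_hat) / np.var(D))
```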

  19. Three dimensional ray tracing of the Jovian magnetosphere in the low frequency range

    NASA Technical Reports Server (NTRS)

    Menietti, J. D.

    1984-01-01

    Ray tracing studies of Jovian low frequency emissions were studied. A comprehensive three-dimensional ray tracing computer code for examination of model Jovian decametric (DAM) emission was developed. The improvements to the computer code are outlined and described. The results of the ray tracings of Jovian emissions will be presented in summary form.

  20. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    PubMed

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low convolutional layers of VGG19 are considerably larger than the value of approximately 100 reported for IT neurons in Lehky et al. (2014), whereas those at the high fully connected layers are close to or lower than 100. To the best of our knowledge, this work is the first attempt to analyze the response statistics of DNN neurons with respect to primate IT neurons in image object representation.
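
    A hedged sketch of the kind of response statistics compared in this study, applied to a random stand-in response matrix (units × stimuli). The intrinsic dimensionality below uses the PCA participation ratio as a simple proxy; the study follows Lehky et al.'s estimator, which differs in detail.

```python
# Selectivity/sparseness kurtosis and a PCA-based dimensionality proxy.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(7)
R = rng.lognormal(0.0, 1.0, size=(674, 806))      # stand-in responses

sel = kurtosis(R, axis=1, fisher=True)            # single-unit selectivity
spa = kurtosis(R, axis=0, fisher=True)            # population sparseness

lam = np.linalg.eigvalsh(np.cov(R))               # covariance across units
pr = lam.sum() ** 2 / (lam ** 2).sum()            # participation ratio
print(f"mean selectivity kurtosis: {sel.mean():.2f}")
print(f"mean sparseness kurtosis:  {spa.mean():.2f}")
print(f"intrinsic dimensionality (PR proxy): {pr:.1f}")
```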

  1. Discharge Chamber Primary Electron Modeling Activities in Three-Dimensions

    NASA Technical Reports Server (NTRS)

    Steuber, Thomas J.

    2004-01-01

    Designing discharge chambers for ion thrusters involves many geometric configuration decisions. Various decisions will impact discharge chamber performance with respect to propellant utilization efficiency, ion production costs, and grid lifetime. These hardware design decisions can benefit from the assistance of computational modeling. Computational modeling for discharge chambers has been limited to two-dimensional codes that leveraged symmetry for interpretation into three-dimensional analysis. This paper presents model development activities towards a three-dimensional discharge chamber simulation to aid discharge chamber design decisions. Specifically, of the many geometric configuration decisions toward attainment of a worthy discharge chamber, this paper focuses on addressing magnetic circuit considerations with a three-dimensional discharge chamber simulation as a tool. With this tool, candidate discharge chamber magnetic circuit designs can be analyzed computationally to gain insight into factors that may influence discharge chamber performance such as: primary electron loss width in magnetic cusps, cathode tip position with respect to the low magnetic field volume, definition of a low magnetic field region, and maintenance of a low magnetic field region across the grid span. Corroborating experimental data will be obtained from mockup hardware tests. Initially, simulated candidate magnetic circuit designs will resemble previous successful thruster designs. To provide opportunity to improve beyond previous performance benchmarks, off-design modifications will be simulated and experimentally tested.

  2. Investigation of conditions for the generation and propagation of low-frequency disturbances in the troposphere

    NASA Astrophysics Data System (ADS)

    Mordvinov, V. I.; Devyatova, E. V.; Kochetkova, O. S.; Oznobikhina, O. A.

    2013-01-01

    Low-frequency disturbances responsible for the excitation of torsional oscillations—variations in the zonal mean flow intensity with a characteristic scale of 15-20 days—propagating along the meridian at mid and low latitudes of both hemispheres are investigated [1]. As data observed over the eastern parts of continents and the western parts of oceans are processed with the lag correlation statistics, traveling waves intersecting the eastern parts of continents from northwest to southeast and then returning to the north along the ocean coasts are identified. In this case, trains of anomalies oriented in the zonal direction periodically appear and are destructed in the western parts of continents. The simulation of the propagation of disturbances in the quasi-geostrophic approximation made it possible to explain the specific features of lag correlation statistics over continents by the dispersion of two-dimensional Rossby waves from traveling sources. The turnover of disturbances over Asia and wave trains to the west from the pole were reproduced. Torsional oscillations caused by the dispersion of two-dimensional Rossby waves have a characteristic form of inclined bands in the latitude-time diagram, whose steepness is controlled by the velocity of displacement of the vorticity source along the meridian.

  3. Mapping morphological shape as a high-dimensional functional curve

    PubMed Central

    Fu, Guifang; Huang, Mian; Bo, Wenhao; Hao, Han; Wu, Rongling

    2018-01-01

    Abstract Detecting how genes regulate biological shape has become a multidisciplinary research interest because of its wide application in many disciplines. Despite its fundamental importance, the challenges of accurately extracting information from an image, statistically modeling the high-dimensional shape and meticulously locating shape quantitative trait loci (QTL) affect the progress of this research. In this article, we propose a novel integrated framework that incorporates shape analysis, statistical curve modeling and genetic mapping to detect significant QTLs regulating variation of biological shape traits. After quantifying morphological shape via a radius centroid contour approach, each shape, as a phenotype, was characterized as a high-dimensional curve, varying as angle θ runs clockwise with the first point starting from angle zero. We then modeled the dynamic trajectories of three mean curves and variation patterns as functions of θ. Our framework led to the detection of a few significant QTLs regulating the variation of leaf shape collected from a natural population of poplar, Populus szechuanica var tibetica. This population, distributed at altitudes 2000–4500 m above sea level, is an evolutionarily important plant species. This is the first work in the quantitative genetic shape mapping area that emphasizes a sense of ‘function’ instead of decomposing the shape into a few discrete principal components, as the majority of shape studies do. PMID:28062411
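
    The radius centroid contour descriptor can be sketched directly: represent a closed contour as radius r(θ) around its centroid, sampled on a regular angular grid starting from angle zero. The ellipse below is an illustrative stand-in for a segmented leaf outline.

```python
# Radius centroid contour: a closed 2-D shape as a functional curve r(theta).
import numpy as np

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)          # stand-in "leaf" contour

cx, cy = x.mean(), y.mean()                       # centroid
theta = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
r = np.hypot(x - cx, y - cy)

grid = np.linspace(0, 2 * np.pi, 100, endpoint=False)
order = np.argsort(theta)                         # sort by angle
r_curve = np.interp(grid, theta[order], r[order], period=2 * np.pi)
print("functional shape curve r(theta), first 5 samples:", np.round(r_curve[:5], 3))
```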

  4. Diagnostic index of three-dimensional osteoarthritic changes in temporomandibular joint condylar morphology

    PubMed Central

    Gomes, Liliane R.; Gomes, Marcelo; Jung, Bryan; Paniagua, Beatriz; Ruellas, Antonio C.; Gonçalves, João Roberto; Styner, Martin A.; Wolford, Larry; Cevidanes, Lucia

    2015-01-01

    Abstract. This study aimed to investigate imaging statistical approaches for classifying three-dimensional (3-D) osteoarthritic morphological variations among 169 temporomandibular joint (TMJ) condyles. Cone-beam computed tomography scans were acquired from 69 subjects with long-term TMJ osteoarthritis (OA), 15 subjects at initial diagnosis of OA, and 7 healthy controls. Three-dimensional surface models of the condyles were constructed and SPHARM-PDM established correspondent points on each model. Multivariate analysis of covariance and direction-projection-permutation (DiProPerm) were used for testing statistical significance of the differences between the groups determined by clinical and radiographic diagnoses. Unsupervised classification using hierarchical agglomerative clustering was then conducted. Compared with healthy controls, OA average condyle was significantly smaller in all dimensions except its anterior surface. Significant flattening of the lateral pole was noticed at initial diagnosis. We observed areas of 3.88-mm bone resorption at the superior surface and 3.10-mm bone apposition at the anterior aspect of the long-term OA average model. DiProPerm supported a significant difference between the healthy control and OA group (p-value=0.001). Clinically meaningful unsupervised classification of TMJ condylar morphology determined a preliminary diagnostic index of 3-D osteoarthritic changes, which may be the first step towards a more targeted diagnosis of this condition. PMID:26158119

  5. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.

  6. Entanglement entropy in critical phenomena and analog models of quantum gravity

    NASA Astrophysics Data System (ADS)

    Fursaev, Dmitri V.

    2006-06-01

    A general geometrical structure of the entanglement entropy for spatial partition of a relativistic QFT system is established by using methods of the effective gravity action and the spectral geometry. Special attention is paid to the subleading terms in the entropy in different dimensions and to the behavior in different states. It is conjectured, on the basis of the relation between the entropy and the action, that in a fundamental theory the ground-state entanglement entropy per unit area equals 1/(4G_N), where G_N is the Newton constant in the low-energy gravity sector of the theory. The conjecture opens a new avenue in analogue gravity models. For instance, in higher-dimensional condensed matter systems, which near a critical point are described by relativistic QFTs, the entanglement entropy density defines an effective gravitational coupling. By studying the properties of this constant one can get new insights into quantum gravity phenomena, such as the universality of the low-energy physics, the renormalization group behavior of G_N, and the statistical meaning of the Bekenstein-Hawking entropy.

  7. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  8. Plate Tectonics in the Classification of Personality Disorder: Shifting to a Dimensional Model

    ERIC Educational Resources Information Center

    Widiger, Thomas A.; Trull, Timothy J.

    2007-01-01

    The diagnostic categories of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders were developed in the spirit of a traditional medical model that considers mental disorders to be qualitatively distinct conditions (see, e.g., American Psychiatric Association, 2000). Work is now beginning on the fifth edition…

  9. From atoms to steps: The microscopic origins of crystal evolution

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2014-07-01

    The Burton-Cabrera-Frank (BCF) theory of crystal growth has been successful in describing a wide range of phenomena in surface physics. Typical crystal surfaces are slightly misoriented with respect to a facet plane; thus, the BCF theory views such systems as composed of staircase-like structures of steps separating terraces. Adsorbed atoms (adatoms), which are represented by a continuous density, diffuse on terraces, and steps move by absorbing or emitting these adatoms. Here we shed light on the microscopic origins of the BCF theory by deriving a simple, one-dimensional (1D) version of the theory from an atomistic, kinetic restricted solid-on-solid (KRSOS) model without external material deposition. We define the time-dependent adatom density and step position as appropriate ensemble averages in the KRSOS model, thereby exposing the non-equilibrium statistical mechanics origins of the BCF theory. Our analysis reveals that the BCF theory is valid in a low adatom-density regime, much in the same way that an ideal gas approximation applies to dilute gases. We find conditions under which the surface remains in a low-density regime and discuss the microscopic origin of corrections to the BCF model.

  10. Statistics of the stochastically forced Lorenz attractor by the Fokker-Planck equation and cumulant expansions.

    PubMed

    Allawala, Altan; Marston, J B

    2016-11-01

    We investigate the Fokker-Planck description of the equal-time statistics of the three-dimensional Lorenz attractor with additive white noise. The invariant measure is found by computing the zero (or null) mode of the linear Fokker-Planck operator as a problem of sparse linear algebra. Two variants are studied: a self-adjoint construction of the linear operator and the replacement of diffusion with hyperdiffusion. We also access the low-order statistics of the system by a perturbative expansion in equal-time cumulants. A comparison is made to statistics obtained by the standard approach of accumulation via direct numerical simulation. Theoretical and computational aspects of the Fokker-Planck and cumulant expansion methods are discussed.
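
    A hedged one-dimensional analogue of the computation described here (the paper treats the three-dimensional Lorenz system): discretize the Fokker-Planck operator of an Ornstein-Uhlenbeck process and find its null mode with a sparse eigensolver, comparing against the exact Gaussian invariant measure.

```python
# Null mode of a discretized 1D Fokker-Planck operator,
# dP/dt = d/dx(x P) + D d2P/dx2, via sparse linear algebra.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n, L, D = 400, 8.0, 0.5
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Central differences for the drift d/dx(x P), 3-point Laplacian for diffusion.
diag_main = np.full(n, -2 * D / h**2)
diag_up = D / h**2 + x[1:] / (2 * h)      # couples P_{i+1}
diag_lo = D / h**2 - x[:-1] / (2 * h)     # couples P_{i-1}
A = sp.diags([diag_lo, diag_main, diag_up], offsets=[-1, 0, 1], format="csc")

vals, vecs = eigs(A, k=1, sigma=0.0)      # eigenvalue nearest zero
P = np.abs(vecs[:, 0].real)
P /= P.sum() * h                          # normalize as a density

exact = np.exp(-x**2 / (2 * D))
exact /= exact.sum() * h                  # exact Gaussian invariant measure
print("eigenvalue ~", vals[0].real)
print("max |P - exact|:", np.abs(P - exact).max())
```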

  11. Model of chiral spin liquids with Abelian and non-Abelian topological phases

    NASA Astrophysics Data System (ADS)

    Chen, Jyong-Hao; Mudry, Christopher; Chamon, Claudio; Tsvelik, A. M.

    2017-12-01

    We present a two-dimensional lattice model for quantum spin-1/2 for which the low-energy limit is governed by four flavors of strongly interacting Majorana fermions. We study this low-energy effective theory using two alternative approaches. The first consists of a mean-field approximation. The second consists of a random phase approximation (RPA) for the single-particle Green's functions of the Majorana fermions built from their exact forms in a certain one-dimensional limit. The resulting phase diagram consists of two competing chiral phases, one with Abelian and the other with non-Abelian topological order, separated by a continuous phase transition. Remarkably, the Majorana fermions propagate in the two-dimensional bulk, as in the Kitaev model for a spin liquid on the honeycomb lattice. We identify the vison fields, which are mobile (they are static in the Kitaev model) domain walls propagating along only one of the two space directions.

  12. Genetic demixing and evolution in linear stepping stone models

    NASA Astrophysics Data System (ADS)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed is how the observed patterns of genetic diversity can be used for statistical inference, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial experiments on range expansions of inoculations of Escherichia coli and Saccharomyces cerevisiae.
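
    The neutral dynamics can be sketched with a minimal one-dimensional voter-model update: at each step a random deme copies the allele of a random neighbour, so drift acts only at domain boundaries. Lattice size and step count below are illustrative.

```python
# Neutral 1D stepping stone (voter-model) simulation with two alleles.
import numpy as np

rng = np.random.default_rng(8)
L, steps = 512, 200_000
alleles = rng.integers(0, 2, L)                  # two neutral alleles

for _ in range(steps):
    i = rng.integers(L)
    j = (i + rng.choice((-1, 1))) % L            # periodic neighbour
    alleles[i] = alleles[j]                      # drift at domain boundaries

domains = np.count_nonzero(np.diff(alleles)) + (alleles[0] != alleles[-1])
print("remaining domain walls:", domains)
print("allele-1 frequency:", alleles.mean())
```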

  13. An Exploration of Latent Structure in Observational Huntington’s Disease Studies

    PubMed Central

    Ghosh, Soumya; Sun, Zhaonan; Li, Ying; Cheng, Yu; Mohan, Amrita; Sampaio, Cristina; Hu, Jianying

    2017-01-01

    Huntington’s disease (HD) is a monogenic neurodegenerative disorder characterized by the progressive decay of motor and cognitive abilities accompanied by psychiatric episodes. Tracking and modeling the progression of the multi-faceted clinical symptoms of HD is a challenging problem that has important implications for staging of HD patients and the development of improved enrollment criteria for future HD studies and trials. In this paper, we describe the first steps towards this goal. We begin by curating data from four recent observational HD studies, each containing a diverse collection of clinical assessments. The resulting dataset is unprecedented in size and contains data from 19,269 study participants. By analyzing this large dataset, we are able to discover hidden low dimensional structure in the data that correlates well with surrogate measures of HD progression. The discovered structures are promising candidates for future consumption by downstream statistical HD progression models. PMID:28815114

  14. Expected social utility of life time in the presence of a chronic disease.

    PubMed

    Mulder, P G; Hempenius, A L

    1993-10-01

    Interventive action aimed at reducing the incidence of an irreversible chronic noncommunicable disease in a population has various effects. Hopefully, it increases total longevity in the population and it causes the disease to develop later in time in a smaller portion of the population. In this paper a statistical model is built by which these effects can be estimated. A three dimensional probability density function that underlies this model is changed by the interventive action. It is shown how a three dimensional utility function can be defined to appropriately judge this change.

  15. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems.

    PubMed

    Liu, Xinzijian; Liu, Jian

    2018-03-14

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD), Liu et al. [J. Chem. Phys. 145, 024103 (2016)] and Zhang et al. [J. Chem. Phys. 147, 034109 (2017)], for quantum statistical mechanics when a single potential energy surface is involved. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool, in either the diabatic or adiabatic representation, for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer approximation, Condon approximation, and harmonic bath approximation are broken.

  16. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems

    NASA Astrophysics Data System (ADS)

    Liu, Xinzijian; Liu, Jian

    2018-03-01

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD), Liu et al. [J. Chem. Phys. 145, 024103 (2016)] and Zhang et al. [J. Chem. Phys. 147, 034109 (2017)], for quantum statistical mechanics when a single potential energy surface is involved. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool, in either the diabatic or adiabatic representation, for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer approximation, Condon approximation, and harmonic bath approximation are broken.

  17. Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian; Haller, George

    2018-06-01

    We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.

  18. Efficient ensemble forecasting of marine ecology with clustered 1D models and statistical lateral exchange: application to the Red Sea

    NASA Astrophysics Data System (ADS)

    Dreano, Denis; Tsiaras, Kostas; Triantafyllou, George; Hoteit, Ibrahim

    2017-07-01

    Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data is assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skills, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
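
    A hedged sketch of the domain-decomposition step: cluster pixels into regions with a Gaussian mixture model on (longitude, latitude, mean Chl-a) features. The synthetic field is an illustrative stand-in for Red Sea chlorophyll data, and the feature choice follows common practice rather than necessarily the paper's exact setup.

```python
# GMM clustering of a synthetic chlorophyll field into 1D-model domains.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)
lon, lat = np.meshgrid(np.linspace(32, 44, 60), np.linspace(12, 30, 90))
chl = np.exp(-(lat - 14) / 6) + 0.05 * rng.standard_normal(lat.shape)  # N-S gradient

X = np.column_stack([lon.ravel(), lat.ravel(), chl.ravel()])
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize features

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
labels = gmm.predict(X).reshape(lat.shape)
for k in range(4):
    print(f"region {k}: {np.count_nonzero(labels == k)} pixels, "
          f"mean Chl-a = {chl[labels == k].mean():.2f}")
```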

  19. Magnetofermionic condensate in two dimensions

    PubMed Central

    Kulik, L. V.; Zhuravlev, A. S.; Dickmann, S.; Gorbunov, A. V.; Timofeev, V. B.; Kukushkin, I. V.; Schmult, S.

    2016-01-01

    Coherent condensate states of particles obeying either Bose or Fermi statistics are in the focus of interest in modern physics. Here we report on condensation of collective excitations with Bose statistics, cyclotron magnetoexcitons, in a high-mobility two-dimensional electron system in a magnetic field. At low temperatures, the dense non-equilibrium ensemble of long-lived triplet magnetoexcitons exhibits both a drastic reduction in the viscosity and a steep enhancement in the response to the external electromagnetic field. The observed effects are related to formation of a super-absorbing state interacting coherently with the electromagnetic field. Simultaneously, the electrons below the Fermi level form a super-emitting state. The effects are explicable from the viewpoint of a coherent condensate phase in a non-equilibrium system of two-dimensional fermions with a fully quantized energy spectrum. The condensation occurs in the space of vectors of magnetic translations, a property providing a completely new landscape for future physical investigations. PMID:27848969

  20. Single-particle cryo-EM using alignment by classification (ABC): the structure of Lumbricus terrestris haemoglobin.

    PubMed

    Afanasyev, Pavel; Seer-Linnemayr, Charlotte; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; Alewijnse, Bart; Portugal, Rodrigo V; Pannu, Navraj S; Schatz, Michael; van Heel, Marin

    2017-09-01

    Single-particle cryogenic electron microscopy (cryo-EM) can now yield near-atomic resolution structures of biological complexes. However, the reference-based alignment algorithms commonly used in cryo-EM suffer from reference bias, limiting their applicability (also known as the 'Einstein from random noise' problem). Low-dose cryo-EM therefore requires robust and objective approaches to reveal the structural information contained in the extremely noisy data, especially when dealing with small structures. A reference-free pipeline is presented for obtaining near-atomic resolution three-dimensional reconstructions from heterogeneous ('four-dimensional') cryo-EM data sets. The methodologies integrated in this pipeline include a posteriori camera correction, movie-based full-data-set contrast transfer function determination, movie-alignment algorithms, (Fourier-space) multivariate statistical data compression and unsupervised classification, 'random-startup' three-dimensional reconstructions, four-dimensional structural refinements and Fourier shell correlation criteria for evaluating anisotropic resolution. The procedures exclusively use information emerging from the data set itself, without external 'starting models'. Euler-angle assignments are performed by angular reconstitution rather than by the inherently slower projection-matching approaches. The comprehensive 'ABC-4D' pipeline is based on the two-dimensional reference-free 'alignment by classification' (ABC) approach, where similar images in similar orientations are grouped by unsupervised classification. Some fundamental differences between X-ray crystallography versus single-particle cryo-EM data collection and data processing are discussed. The structure of the giant haemoglobin from Lumbricus terrestris at a global resolution of ∼3.8 Å is presented as an example of the use of the ABC-4D procedure.

  1. Empirically Derived Personality Subtyping for Predicting Clinical Symptoms and Treatment Response in Bulimia Nervosa

    PubMed Central

    Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.

    2016-01-01

    Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology–Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = .04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. PMID:27611235
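
    The subtyping step can be sketched with K-means (k = 3) on standardized personality-scale scores, mirroring the under-controlled / over-controlled / low-psychopathology solution. The synthetic two-scale scores below are illustrative stand-ins for the DAPP-BQ variables.

```python
# K-means subtyping of synthetic personality-scale scores into three clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# Three synthetic groups on two illustrative scales:
# (emotional dysregulation, compulsivity/inhibition)
g1 = rng.normal([1.5, -0.5], 0.4, (30, 2))   # "under-controlled"
g2 = rng.normal([-0.5, 1.5], 0.4, (25, 2))   # "over-controlled"
g3 = rng.normal([-0.8, -0.8], 0.4, (25, 2))  # "low psychopathology"
X = StandardScaler().fit_transform(np.vstack([g1, g2, g3]))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centres (z-scores):\n", np.round(km.cluster_centers_, 2))
```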

  2. Low temperature exciton dynamics and structural changes in perylene bisimide aggregates

    NASA Astrophysics Data System (ADS)

    Wolter, Steffen; Magnus Westphal, Karl; Hempel, Magdalena; Würthner, Frank; Kühn, Oliver; Lochbrunner, Stefan

    2017-09-01

    The temperature-dependent exciton dynamics of J-aggregates formed by a perylene bisimide dye is investigated down to liquid nitrogen temperature (77 K) by femtosecond pump-probe spectroscopy. The analysis of the transient absorption data using a diffusion model for the excitons reveals not only an overall decrease of the exciton mobility, but also a change in the dimensionality of the exciton transport at low temperatures. This change in dimensionality is further investigated by kinetic Monte Carlo simulations, identifying weakly interlinked one-dimensional aggregate chains as the most likely structure at low temperatures. This causes the exciton transport to be highly anisotropic.

  3. Harnessing Sparse and Low-Dimensional Structures for Robust Clustering of Imagery Data

    ERIC Educational Resources Information Center

    Rao, Shankar Ramamohan

    2009-01-01

    We propose a robust framework for clustering data. In practice, data obtained from real measurement devices can be incomplete, corrupted by gross errors, or not correspond to any assumed model. We show that, by properly harnessing the intrinsic low-dimensional structure of the data, these kinds of practical problems can be dealt with in a uniform…

  4. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
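
    Sliced inverse regression itself is compact enough to sketch: slice the response, average the (already standardized) predictors within each slice, and eigendecompose the covariance of the slice means to recover the SDR direction. Dimensions, slice count, and the link function below are illustrative.

```python
# Sliced inverse regression recovering a one-dimensional SDR subspace.
import numpy as np

rng = np.random.default_rng(10)
n, d = 2000, 20
X = rng.standard_normal((n, d))                  # predictors already whitened
beta = np.zeros(d); beta[0] = 1.0                # true SDR direction
y = np.sin(X @ beta) + 0.1 * rng.standard_normal(n)

Xc = X - X.mean(axis=0)
order = np.argsort(y)
slices = np.array_split(order, 10)               # 10 slices of the response
M = sum(len(s) / n * np.outer(m, m)
        for s in slices
        for m in [Xc[s].mean(axis=0)])           # cov of slice means
vals, vecs = np.linalg.eigh(M)
est = vecs[:, -1]                                # leading SIR direction
print("|cos angle with true beta|:", abs(est @ beta))
```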

  5. Symbolic dynamics techniques for complex systems: Application to share price dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Dan; Beck, Christian

    2017-05-01

    The symbolic dynamics technique is well known for low-dimensional dynamical systems and chaotic maps, and lies at the roots of the thermodynamic formalism of dynamical systems. Here we show that this technique can also be successfully applied to time series generated by complex systems of much higher dimensionality. Our main example is the investigation of share price returns in a coarse-grained way. A nontrivial spectrum of Rényi entropies is found. We study how the spectrum depends on the time scale of returns, the sector of stocks considered, as well as the number of symbols used for the symbolic description. Overall our analysis confirms that in the symbol space transition probabilities of observed share price returns depend on the entire history of previous symbols, thus emphasizing the need for a modelling based on non-Markovian stochastic processes. Our method allows for quantitative comparisons of entirely different complex systems, for example the statistics of symbol sequences generated by share price returns using 4 symbols can be compared with that of genomic sequences.
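
    The coarse-graining step can be sketched directly: symbolize a return series by quartiles into 4 symbols, count words of length n, and compute the Rényi entropy spectrum H_q = (1/(1−q)) log Σᵢ pᵢ^q. The Gaussian toy series below is an illustrative stand-in for share price returns.

```python
# Renyi entropy spectrum of quartile-symbolized "returns".
import numpy as np
from collections import Counter

rng = np.random.default_rng(11)
returns = rng.standard_normal(100_000)            # stand-in return series
edges = np.quantile(returns, [0.25, 0.5, 0.75])
symbols = np.searchsorted(edges, returns)         # 4 symbols: 0..3

n = 4                                             # word length
words = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
p = np.array(list(words.values()), dtype=float)
p /= p.sum()                                      # empirical word probabilities

for q in (0.5, 1.0, 2.0, 4.0):
    if q == 1.0:
        H = -(p * np.log(p)).sum()                # Shannon limit q -> 1
    else:
        H = np.log((p ** q).sum()) / (1 - q)
    print(f"H_{q} per word = {H:.3f} nats")
```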

  6. [Analysis of the movement of long axis and the distribution of principal stress in abutment tooth retained by conical telescope].

    PubMed

    Lin, Ying-he; Man, Yi; Qu, Yi-li; Guan, Dong-hua; Lu, Xuan; Wei, Na

    2006-01-01

    To study the movement of the long axis and the distribution of principal stress in the abutment teeth of a removable partial denture retained by a conical telescope. An ideal three-dimensional finite element model was constructed by using SCT image reconstruction, self-programming, and ANSYS software. Static loads were applied. The displacement of the long axis and the distribution of the principal stress in the abutment teeth were analyzed. There was no statistically significant difference in displacement or stress distribution among the different three-dimensional finite element models. Generally, the abutment teeth move along their own long axes. Similar stress distributions were observed in each three-dimensional finite element model. The maximal principal compressive stress was observed at the distal cervix of the second premolar. The abutment teeth can be well protected by use of a conical telescope.

  7. Towards sound epistemological foundations of statistical methods for high-dimensional biology.

    PubMed

    Mehta, Tapan; Tanik, Murat; Allison, David B

    2004-09-01

    A sound epistemological foundation for biological inquiry comes, in part, from application of valid statistical procedures. This tenet is widely appreciated by scientists studying the new realm of high-dimensional biology, or 'omic' research, which involves multiplicity at unprecedented scales. Many papers aimed at the high-dimensional biology community describe the development or application of statistical techniques. The validity of many of these is questionable, and a shared understanding about the epistemological foundations of the statistical methods themselves seems to be lacking. Here we offer a framework in which the epistemological foundation of proposed statistical methods can be evaluated.

  8. Statistical properties of a cloud ensemble - A numerical study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, Joanne; Soong, Su-Tzai

    1987-01-01

    The statistical properties of cloud ensembles under a specified large-scale environment, such as mass flux by cloud drafts and vertical velocity as well as the condensation and evaporation associated with these cloud drafts, are examined using a three-dimensional numerical cloud ensemble model described by Soong and Ogura (1980) and Tao and Soong (1986). The cloud drafts are classified as active and inactive, and separate contributions to cloud statistics in areas of different cloud activity are then evaluated. The model results compare well with results obtained from aircraft measurements of a well-organized ITCZ rainband that occurred on August 12, 1974, during the Global Atmospheric Research Program's Atlantic Tropical Experiment.

  9. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography

    PubMed Central

    2016-01-01

    PURPOSE The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on the dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). MATERIALS AND METHODS Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analyses and Duncan's test (α=.05). RESULTS For Line 1, the total mean differences of Impregum and Hydrogum were statistically different from that of Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in greater differences than the other impressions (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). CONCLUSION The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions. PMID:27826388

  10. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography.

    PubMed

    Alkurt, Murat; Yeşıl Duymus, Zeynep; Dedeoglu, Numan

    2016-10-01

    The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on the dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analyses and Duncan's test (α=.05). For Line 1, the total mean differences of Impregum and Hydrogum were statistically different from that of Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in greater differences than the other impressions (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions.

  11. Statistics of pressure fluctuations in decaying isotropic turbulence.

    PubMed

    Kalelkar, Chirag

    2006-04-01

    We present results from a systematic direct-numerical simulation study of pressure fluctuations in an unforced, incompressible, homogeneous, and isotropic three-dimensional turbulent fluid. At cascade completion, isosurfaces of low pressure are found to be organized as slender filaments, whereas the predominant isostructures appear sheetlike. We exhibit several results, including plots of probability distributions of the spatial pressure difference, the pressure-gradient norm, and the eigenvalues of the pressure-Hessian tensor. Plots of the temporal evolution of the mean pressure-gradient norm, and the mean eigenvalues of the pressure-Hessian tensor are also exhibited. We find the statistically preferred orientations between the eigenvectors of the pressure-Hessian tensor, the pressure gradient, the eigenvectors of the strain-rate tensor, the vorticity, and the velocity. Statistical properties of the nonlocal part of the pressure-Hessian tensor are also exhibited. We present numerical tests (in the viscous case) of some conjectures of Ohkitani [Phys. Fluids A 5, 2570 (1993)] and Ohkitani and Kishiba [Phys. Fluids 7, 411 (1995)] concerning the pressure-Hessian and the strain-rate tensors, for the unforced, incompressible, three-dimensional Euler equations.

  12. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
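
    As orientation for the scalar case, here is a hedged sketch of the SROM idea: fix a small set of atoms and optimize their probabilities so the discrete model matches the target CDF and low-order moments. The atom placement, error weights, and optimizer are our illustrative choices, not the paper's vector-valued construction.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
target = rng.lognormal(mean=0.0, sigma=0.5, size=20000)   # target samples
atoms = np.quantile(target, np.linspace(0.05, 0.95, 10))  # m = 10 fixed atoms

grid = np.quantile(target, np.linspace(0.01, 0.99, 50))
F_target = np.searchsorted(np.sort(target), grid) / target.size  # empirical CDF

def objective(p):
    """Mismatch in CDF plus first two moments for probabilities p."""
    F_srom = np.array([(p * (atoms <= g)).sum() for g in grid])
    cdf_err = np.sum((F_srom - F_target) ** 2)
    m1_err = ((p @ atoms) - target.mean()) ** 2
    m2_err = ((p @ atoms**2) - (target**2).mean()) ** 2
    return cdf_err + m1_err + m2_err

m = atoms.size
res = minimize(objective, np.full(m, 1.0 / m), method="SLSQP",
               bounds=[(0.0, 1.0)] * m,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print("SROM probabilities:", np.round(res.x, 3))
```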

  13. Low speed maneuvering flight of the rose-breasted cockatoo (Eolophus roseicapillus). II. Inertial and aerodynamic reorientation.

    PubMed

    Hedrick, T L; Usherwood, J R; Biewener, A A

    2007-06-01

    The reconfigurable, flapping wings of birds allow for both inertial and aerodynamic modes of reorientation. We found evidence that both these modes play important roles in the low speed turning flight of the rose-breasted cockatoo Eolophus roseicapillus. Using three-dimensional kinematics recorded from six cockatoos making a 90° turn in a flight corridor, we developed predictions of inertial and aerodynamic reorientation from estimates of wing moments of inertia and flapping arcs, and a blade-element aerodynamic model. The blade-element model successfully predicted weight support (predicted was 88 ± 17% of observed, N=6) and centripetal force (predicted was 79 ± 29% of observed, N=6) for the maneuvering cockatoos and provided a reasonable estimate of mechanical power. The estimated torque from the model was a significant predictor of roll acceleration (r² = 0.55, P < 0.00001), but greatly overestimated roll magnitude when applied with no roll damping. Non-dimensional roll damping coefficients of approximately -1.5, 2-6 times greater than those typical of airplane flight dynamics (approximately -0.45), were required to bring our estimates of reorientation due to aerodynamic torque back into conjunction with the measured changes in orientation. Our estimates of inertial reorientation were statistically significant predictors of the measured reorientation within wingbeats (r² from 0.2 to 0.37, P < 0.0005). Components of both our inertial reorientation and aerodynamic torque estimates correlated significantly with asymmetries in the activation profile of four flight muscles: the pectoralis, supracoracoideus, biceps brachii and extensor metacarpi radialis (r² from 0.27 to 0.45, P < 0.005). Thus, avian flight maneuvers rely on production of asymmetries throughout the flight apparatus rather than in a specific set of control or turning muscles.

  14. Lagrangian and Eulerian view of the bursting period

    NASA Astrophysics Data System (ADS)

    Podvin, Bérengère; Gibson, John; Berkooz, Gal; Lumley, John

    1997-02-01

    Low-dimensional models for the turbulent wall layer display an intermittent phenomenon with an ejection phase and a sweep phase that strongly resembles the bursting phenomenon observed in experimental flows. The probability distribution of inter-burst times has the observed shape [E. Stone and P. J. Holmes, Physica D 37, 20 (1989); SIAM J. Appl. Math. 50, 726 (1990); Phys. Lett. A 5, 29 (1991); P. J. Holmes and E. Stone, in Studies in Turbulence, edited by T. B. Gatski, S. Sarkar, and C. G. Speziale (Springer, Heidelberg, 1992)]. However, the time scales both for bursts and interburst durations are unrealistically long, a fact that was not appreciated until recently. We believe that the long time scales are due to the model's inclusion of only a single coherent structure, when in fact a succession of quasi-independent structures are being swept past the sensor in an experiment. A simple statistical model of this situation restores the magnitude of the observed bursting period, although there is a great deal of flexibility in the various parameters involved.

  15. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  16. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
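
    A toy version of the proposed weighting, under our own assumptions (a random toy system matrix and a tridiagonal covariance standing in for the 1D axial correlations): solve the penalized weighted least-squares problem with the non-diagonal inverse covariance as the weight.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_meas = 32, 64
A = rng.uniform(size=(n_meas, n_pix))              # toy system matrix
x_true = np.zeros(n_pix); x_true[10:20] = 1.0      # simple "object"

# Banded covariance: neighboring measurements are correlated.
Sigma = 0.02 * (np.eye(n_meas)
                + 0.4 * (np.eye(n_meas, k=1) + np.eye(n_meas, k=-1)))
y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), Sigma)

W = np.linalg.inv(Sigma)                           # non-diagonal weighting term
D = np.eye(n_pix) - np.eye(n_pix, k=1)             # first-difference roughness
beta = 0.1                                         # penalty strength (our choice)
x_hat = np.linalg.solve(A.T @ W @ A + beta * D.T @ D, A.T @ W @ y)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```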

  17. Evaluation of a Revised Interplanetary Shock Prediction Model: 1D CESE-HD-2 Solar-Wind Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Du, A. M.; Du, D.; Sun, W.

    2014-08-01

    We modified the one-dimensional conservation element and solution element (CESE) hydrodynamic (HD) model into a new version (1D CESE-HD-2) by considering the direction of the shock propagation. The real-time performance of the 1D CESE-HD-2 model during Solar Cycle 23 (February 1997 - December 2006) is investigated and compared with those of the Shock Time of Arrival Model (STOA), the Interplanetary-Shock-Propagation Model (ISPM), and the Hakamada-Akasofu-Fry version 2 (HAFv.2). Of the total of 584 flare events, 173 occurred during the rising phase, 166 events during the maximum phase, and 245 events during the declining phase. The statistical results show that the success rates of the predictions by the 1D CESE-HD-2 model for the rising, maximum, declining, and composite periods are 64%, 62%, 57%, and 61%, respectively, with a hit window of ±24 hours. The results demonstrate that the 1D CESE-HD-2 model shows the highest success rates when the background solar-wind speed is relatively fast. Thus, when the background solar-wind speed at the time of shock initiation is enhanced, the forecasts will be of particular value to users. A high χ² value (27.08) and a low p-value (< 0.0001) for the 1D CESE-HD-2 model give considerable confidence for real-time forecasts using this new model. Furthermore, the effects of various shock characteristics (initial speed, shock duration, longitude, etc.) and of the background solar wind on the forecast are also investigated statistically.

  18. Work distributions of one-dimensional fermions and bosons with dual contact interactions

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Zhang, Jingning; Quan, H. T.

    2018-05-01

    We extend the well-known static duality [M. Girardeau, J. Math. Phys. 1, 516 (1960), 10.1063/1.1703687; T. Cheon and T. Shigehara, Phys. Rev. Lett. 82, 2536 (1999), 10.1103/PhysRevLett.82.2536] between one-dimensional (1D) bosons and 1D fermions to a dynamical version. By utilizing this dynamical duality, we find a duality of nonequilibrium work distributions between interacting 1D bosonic (Lieb-Liniger model) and 1D fermionic (Cheon-Shigehara model) systems with dual contact interactions. As a special case, the work distribution of the Tonks-Girardeau gas is identical to that of the 1D noninteracting fermionic system, even though their momentum distributions are significantly different. In the classical limit, the work distributions of Lieb-Liniger models (Cheon-Shigehara models) with arbitrary coupling strength converge to that of 1D noninteracting distinguishable particles, although their elementary excitations (quasiparticles) obey different statistics, e.g., Bose-Einstein, Fermi-Dirac, or fractional statistics. We also present numerical results for the work distributions of the Lieb-Liniger model with various coupling strengths, which demonstrate the convergence of the work distributions in the classical limit.

  19. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
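
    The regression step and the covariance-based indices can be sketched in a few lines. The snippet below, using our own additive Legendre surrogate and correlated toy inputs rather than the paper's PDD and combustion model, fits coefficients by least squares and forms Cov-SI = Cov(Y, Y_i)/Var(Y); these indices need not lie in [0, 1] once inputs are correlated, which is what motivates the mCov-SI normalization (not reproduced here).

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
n, d, deg = 4000, 3, 3
# Correlated inputs on [-1, 1]: x1 and x2 share a latent factor.
z = rng.uniform(-1, 1, size=(n, d + 1))
X = np.column_stack([0.7*z[:, 0] + 0.3*z[:, 1],
                     0.7*z[:, 0] + 0.3*z[:, 2],
                     z[:, 3]])
y = np.sin(2*X[:, 0]) + 0.5*X[:, 1]**2 + 0.1*X[:, 2] + 0.01*rng.normal(size=n)

# Design matrix: Legendre polynomials per input (no mixed terms) plus constant.
blocks = [legendre.legvander(X[:, i], deg)[:, 1:] for i in range(d)]
Phi = np.column_stack([np.ones(n)] + blocks)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares regression

y_hat = Phi @ coef
for i in range(d):
    cols = slice(1 + i*deg, 1 + (i + 1)*deg)
    y_i = Phi[:, cols] @ coef[cols]                # component attributed to x_i
    S_i = np.cov(y_hat, y_i)[0, 1] / np.var(y_hat)
    print(f"Cov-SI for x{i+1}: {S_i:.3f}")
```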

  20. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, paranoid personality disorder diagnosis: a unitary or a two-dimensional construct?

    PubMed

    Falkum, Erik; Pedersen, Geir; Karterud, Sigmund

    2009-01-01

    This article examines reliability and validity aspects of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) paranoid personality disorder (PPD) diagnosis. Patients with personality disorders (n = 930) from the Norwegian network of psychotherapeutic day hospitals, of which 114 had PPD, were included in the study. Frequency distributions, χ², correlations, reliability statistics, and exploratory and confirmatory factor analyses were performed. The distribution of PPD criteria revealed no distinct boundary between patients with and without PPD. Diagnostic category membership was obtained in 37 of 64 theoretically possible ways. The PPD criteria formed a separate factor in a principal component analysis, whereas a confirmatory factor analysis indicated that the DSM-IV PPD construct consists of 2 separate dimensions as follows: suspiciousness and hostility. The reliability of the unitary PPD scale was only 0.70, probably partly due to the apparent 2-dimensionality of the construct. Persistent unwarranted doubts about the loyalty of friends had the highest diagnostic efficiency, whereas unwarranted accusations of infidelity of partner had particularly poor indicator properties. The reliability and validity of the unitary PPD construct may be questioned. The 2-dimensional PPD model should be further explored.

  1. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that, in order to obtain a reliable numerical prediction, the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.

  2. Conformal anomaly of some 2-d Z(n) models

    NASA Astrophysics Data System (ADS)

    William, Peter

    1991-01-01

    We describe a numerical calculation of the conformal anomaly in the case of some two-dimensional statistical models undergoing a second-order phase transition, utilizing a recently developed method to compute the partition function exactly. This computation is carried out on a massively parallel CM2 machine, using the finite size scaling behaviour of the free energy.
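
    The final extraction step typically uses the standard strip scaling law for the reduced free energy per site, f(L) ≈ f_∞ + πc/(6L²) for a periodic strip of width L. Below is a sketch of that fit on synthetic f(L) values with c = 1/2 (the Ising value); the exact partition-function computation itself is not reproduced.

```python
import numpy as np

L = np.array([6, 8, 10, 12, 16, 20])
f_inf, c_true = 0.9296, 0.5
f = f_inf + np.pi * c_true / (6.0 * L**2)          # stand-in for exact data

slope, intercept = np.polyfit(1.0 / L**2, f, 1)    # f is linear in 1/L^2
print("estimated c:", 6.0 * slope / np.pi)         # -> 0.5
```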

  3. Testing Structural Models of DSM-IV Symptoms of Common Forms of Child and Adolescent Psychopathology

    ERIC Educational Resources Information Center

    Lahey, Benjamin B.; Rathouz, Paul J.; Van Hulle, Carol; Urbano, Richard C.; Krueger, Robert F.; Applegate, Brooks; Garriock, Holly A.; Chapman, Derek A.; Waldman, Irwin D.

    2008-01-01

    Confirmatory factor analyses were conducted of "Diagnostic and Statistical Manual of Mental Disorders", Fourth Edition (DSM-IV) symptoms of common mental disorders derived from structured interviews of a representative sample of 4,049 twin children and adolescents and their adult caretakers. A dimensional model based on the assignment of symptoms…

  4. Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals

    DOE PAGES

    Schmittfull, Marcel; Vlah, Zvonimir

    2016-11-28

    Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
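
    The essence of the method is that a k-space convolution becomes a product in position space. Here is a brute-force radial sketch with a toy spectrum of our own (plain quadrature standing in for the paper's 1D FFT evaluation): transform P(k) to the correlation function ξ(r), square it, and transform back to obtain the loop-like convolution term.

```python
import numpy as np

k = np.linspace(1e-3, 12.0, 1500)
r = np.linspace(1e-2, 60.0, 1500)
dk, dr = k[1] - k[0], r[1] - r[0]
P = k * np.exp(-k**2)                              # toy input spectrum

# xi(r) = 1/(2 pi^2) int dk k^2 P(k) j0(kr), with j0(x) = sin(x)/x:
j0_rk = np.sinc(np.outer(r, k) / np.pi)            # np.sinc(x) = sin(pi x)/(pi x)
xi = (k**2 * P * j0_rk).sum(axis=1) * dk / (2 * np.pi**2)

# The convolution int d^3q/(2pi)^3 P(q) P(|k-q|) is just the transform of xi^2:
j0_kr = np.sinc(np.outer(k, r) / np.pi)
P22_like = 4 * np.pi * (r**2 * xi**2 * j0_kr).sum(axis=1) * dr
print(P22_like[:3])                                # low-k values of the loop term
```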

  5. Helicity moduli of three-dimensional dilute XY models

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Pandit, Rahul; Solla, Sara A.; Ebner, C.

    1984-07-01

    The helicity moduli of various dilute, classical XY models on three-dimensional lattices are studied with a view to understanding some aspects of the superfluidity of 4He in Vycor glass. A spin-wave calculation is used to obtain the low-temperature helicity modulus of a regularly diluted XY model. A similar calculation is performed for the randomly bond-diluted and site-diluted XY models in the limit of low dilution. A Monte Carlo simulation is used to obtain the helicity modulus of the randomly bond-diluted XY model over a wide range of temperature and dilution. It is found that the randomly diluted models agree, whereas the regularly diluted model does not, with certain experimentally observed features of the variation of the superfluid fraction with coverage of 4He in Vycor glass.

  6. Statistics for laminar flamelet modeling

    NASA Technical Reports Server (NTRS)

    Cant, R. S.; Rutland, C. J.; Trouve, A.

    1990-01-01

    Statistical information required to support modeling of turbulent premixed combustion by laminar flamelet methods is extracted from a database of the results of Direct Numerical Simulation of turbulent flames. The simulations were carried out previously by Rutland (1989) using a pseudo-spectral code on a three-dimensional mesh of 128 points in each direction. One-step Arrhenius chemistry was employed together with small heat release. A framework for the interpretation of the data is provided by the Bray-Moss-Libby model for the mean turbulent reaction rate. Probability density functions are obtained over surfaces of constant reaction progress variable for the tangential strain rate and the principal curvature. New insights are gained which will greatly aid the development of modeling approaches.

  7. A statistical model of aggregate fragmentation

    NASA Astrophysics Data System (ADS)

    Spahn, F.; Vieira Neto, E.; Guimarães, A. H. F.; Gorban, A. N.; Brilliantov, N. V.

    2014-01-01

    A statistical model of the fragmentation of aggregates is proposed, based on the stochastic propagation of cracks through the body. The propagation rules are formulated on a lattice and mimic two important features of the process: a crack moves against the stress gradient while dissipating energy during its growth. We perform numerical simulations of the model on a two-dimensional lattice and reveal that the mass distribution for small- and intermediate-size fragments obeys a power law, F(m) ∝ m^{-3/2}, in agreement with experimental observations. We develop an analytical theory which explains the detected power law and demonstrate that the overall fragment mass distribution in our model agrees qualitatively with the one observed in experiments.
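
    Below is a deliberately crude stand-in for the lattice model (straight random cuts instead of stress-driven crack growth), shown only to illustrate how fragment masses are collected and how the F(m) ∝ m^{-3/2} regime would be checked; it is not expected to reproduce the paper's exponent.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(5)
N, n_cracks = 512, 60
intact = np.ones((N, N), dtype=bool)
yy, xx = np.mgrid[0:N, 0:N]

for _ in range(n_cracks):                          # cut along random straight lines
    theta = rng.uniform(0, np.pi)
    x0, y0 = rng.uniform(0, N, size=2)
    dist = np.abs((xx - x0) * np.sin(theta) - (yy - y0) * np.cos(theta))
    intact &= dist > 0.7                           # crack about one pixel wide

fragments, n_frag = label(intact)                  # fragments = connected pieces
masses = np.bincount(fragments.ravel())[1:]        # fragment masses in pixels

bins = np.logspace(0, np.log10(masses.max() + 1), 20)
density, edges = np.histogram(masses, bins=bins, density=True)
print("fragments:", n_frag)                        # fit log(density) vs log(mass)
```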

  8. Computations of Complex Three-Dimensional Turbulent Free Jets

    NASA Technical Reports Server (NTRS)

    Wilson, Robert V.; Demuren, Ayodeji O.

    1997-01-01

    Three-dimensional, incompressible turbulent jets with rectangular and elliptical cross-sections are simulated with a finite-difference numerical method. The full Navier-Stokes equations are solved at low Reynolds numbers, whereas at high Reynolds numbers filtered forms of the equations are solved along with a sub-grid scale model to approximate the effects of the unresolved scales. A 2N-storage, third-order Runge-Kutta scheme is used for temporal discretization and a fourth-order compact scheme is used for spatial discretization. Although such methods are widely used in the simulation of compressible flows, the lack of an evolution equation for pressure or density presents particular difficulty in incompressible flows. The pressure-velocity coupling must be established indirectly; it is achieved, in this study, through a Poisson equation which is solved by a compact scheme of the same order of accuracy. The numerical formulation is validated, and the dispersion and dissipation errors are documented, by the solution of a wide range of benchmark problems. Three-dimensional computations are performed for different inlet conditions which model naturally developing and forced jets. The experimentally observed phenomenon of axis-switching is captured in the numerical simulation, and it is confirmed through flow visualization that this is based on self-induction of the vorticity field. Statistical quantities such as mean velocity, mean pressure, two-point velocity spatial correlations, and Reynolds stresses are presented, together with detailed budgets of the mean momentum and Reynolds stress equations to aid in the turbulence modeling of complex jets. Simulations of circular jets are used to quantify the effect of the non-uniform curvature of the non-circular jets.

  9. Confidence set inference with a prior quadratic bound

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y(0) = (y_1(0), ..., y_D(0)) and knowledge of the statistical distribution of the random errors in y(0). The data space Y containing y(0) is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x (e.g., energy or dissipation rate), Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. CSI is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface. Neither the heat flow nor the energy bound is strong enough to permit estimation of B(r) at single points on the CMB, but the heat flow bound permits estimation of uniform averages of B(r) over discs on the CMB, and both bounds permit weighted disc-averages with continuous weighting kernels. Both bounds also permit estimation of low-degree Gauss coefficients at the CMB. The heat flow bound resolves them up to degree 8 if the crustal field at satellite altitudes must be treated as a systematic error, but can resolve to degree 11 under the most favorable statistical treatment of the crust. These two limits produce circles of confusion on the CMB with diameters of 25° and 19°, respectively.

  10. Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Wang, Beilei; Barner, Kenneth; Lee, Denny

    2005-04-01

    Clinical diagnosis and security detection tasks increasingly require 3D information which is difficult or impossible to obtain from 2D (two-dimensional) radiographs. As a 3D (three-dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially fit for cases where 3D information is required while a complete projection data set is not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its fast speed and simplicity. However, it is hard to deal with situations where only a limited number of projections from constrained directions are available, or the SNR (signal-to-noise ratio) of the projections is low. In order to deal with noise and take into account a priori information about the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate a ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, after which there is no significant improvement in image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer-generated projections of a revised Shepp phantom and true projections from diagnostic radiographs of a patient's hand and mammography images yields reconstructions with impressive quality. Parallel programming is also implemented and tested. The quality of the reconstructed object is conserved, while the computation time is reduced by nearly a factor of the number of threads used.
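
    The paper's transmission likelihood and ordered-subsets algorithm are specific to its acquisition model; as generic orientation, here is the standard multiplicative ML-EM update for a linear Poisson model wrapped in a simple ordered-subsets loop, with a toy system matrix of our own.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_rays, n_subsets = 64, 256, 4
A = rng.uniform(size=(n_rays, n_pix))              # toy projection matrix
x_true = rng.uniform(0.5, 1.5, size=n_pix)
y = rng.poisson(A @ x_true)                        # noisy projection data

x = np.ones(n_pix)                                 # uniform initial estimate
subsets = np.array_split(np.arange(n_rays), n_subsets)
for sweep in range(10):                            # OS-EM style sweeps
    for s in subsets:
        As, ys = A[s], y[s]
        ratio = ys / np.maximum(As @ x, 1e-12)     # data / current projection
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```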

  11. Modeling fine-scale geological heterogeneity--examples of sand lenses in tills.

    PubMed

    Kessler, Timo Christian; Comunian, Alessandro; Oriani, Fabio; Renard, Philippe; Nilsson, Bertel; Klint, Knud Erik; Bjerg, Poul Løgstrup

    2013-01-01

    Sand lenses at various spatial scales are recognized to add heterogeneity to glacial sediments. They have high hydraulic conductivities relative to the surrounding till matrix and may affect the advective transport of water and contaminants in clayey till settings. Sand lenses were investigated on till outcrops producing binary images of geological cross-sections capturing the size, shape and distribution of individual features. Sand lenses occur as elongated, anisotropic geobodies that vary in size and extent. Moreover, sand lenses show strongly non-stationary patterns on section images that hamper subsequent simulation. Transition probability (TP) and multiple-point statistics (MPS) were employed to simulate sand lens heterogeneity. We used one cross-section to parameterize the spatial correlation and a second, parallel section as a reference: this allowed testing the quality of the simulations as a function of the amount of conditioning data under realistic conditions. The performance of the simulations was evaluated on the faithful reproduction of the specific geological structure caused by sand lenses. Multiple-point statistics offer a better reproduction of sand lens geometry. However, two-dimensional training images acquired by outcrop mapping are of limited use for generating three-dimensional realizations with MPS. One can instead use a technique that consists of splitting the 3D domain into a set of slices in various directions that are sequentially simulated and reassembled into a 3D block. The identification of flow paths through a network of elongated sand lenses and their impact on the equivalent permeability in tills are essential for performing solute transport modeling in these low-permeability sediments.

  12. The Santiago-Harvard-Edinburgh-Durham void comparison - I. SHEDding light on chameleon gravity tests

    NASA Astrophysics Data System (ADS)

    Cautun, Marius; Paillas, Enrique; Cai, Yan-Chuan; Bose, Sownak; Armijo, Joaquin; Li, Baojiu; Padilla, Nelson

    2018-05-01

    We present a systematic comparison of several existing and new void-finding algorithms, focusing on their potential power to test a particular class of modified gravity models - chameleon f(R) gravity. These models deviate from standard general relativity (GR) more strongly in low-density regions, and thus voids are a promising venue to test them. We use halo occupation distribution (HOD) prescriptions to populate haloes with galaxies, and tune the HOD parameters such that the galaxy two-point correlation functions are the same in both f(R) and GR models. We identify both three-dimensional (3D) voids and two-dimensional (2D) underdensities in the plane of the sky and find the same void abundance and void galaxy number density profiles across all models, which suggests that they do not contain much information beyond galaxy clustering. However, the underlying void dark matter density profiles are significantly different, with f(R) voids being more underdense than GR ones, which leads to f(R) voids having a larger tangential shear signal than their GR analogues. We investigate the potential of each void finder to test f(R) models with near-future lensing surveys such as EUCLID and LSST. The 2D voids have the largest power to probe f(R) gravity: an LSST analysis of tunnels (a new type of 2D underdensity introduced here) would distinguish f(R) models with parameters |fR0| = 10^{-5} and 10^{-6} from GR at 80σ and 11σ (statistical error), respectively.

  13. Benchmarking the mesoscale variability in global ocean eddy-permitting numerical systems

    NASA Astrophysics Data System (ADS)

    Cipollone, Andrea; Masina, Simona; Storto, Andrea; Iovino, Doroteaciro

    2017-10-01

    The role of data assimilation procedures in representing ocean mesoscale variability is assessed by applying eddy statistics to a state-of-the-art global ocean reanalysis (C-GLORS), a free global ocean simulation (performed with the NEMO system), and an observation-based dataset (ARMOR3D) used as an independent benchmark. Numerical results are computed on a 1/4° horizontal grid (ORCA025) and share the same resolution with the ARMOR3D dataset. This "eddy-permitting" resolution is sufficient to allow ocean eddies to form. Further to assessing the eddy statistics from three different datasets, a global three-dimensional eddy detection system is implemented in order to bypass the need for region-dependent threshold definitions, typical of commonly adopted eddy detection algorithms. It thus provides full three-dimensional eddy statistics, segmenting vertical profiles from local rotational velocities. This criterion is crucial for discerning real eddies from the transient surface noise that inevitably affects any two-dimensional algorithm. Data assimilation enhances and corrects mesoscale variability on a wide range of features that cannot be well reproduced otherwise. The free simulation fairly reproduces eddies emerging from western boundary currents and deep baroclinic instabilities, while underestimating the shallower vortexes that populate the full basin. The ocean reanalysis recovers most of the missing turbulence, evident in satellite products, that is not generated by the model itself, and consistently projects surface variability deep into the water column. The comparison with the statistically reconstructed vertical profiles from ARMOR3D shows that ocean data assimilation is able to embed variability into the model dynamics, constraining eddies with in situ and altimetry observations and generating them consistently with the local environment.

  14. Data-based Non-Markovian Model Inference

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close collaboration with M.D. Chekroun, D. Kondrashov, S. Kravtsov and A.W. Robertson.

  15. Nonequilibrium Statistical Mechanics in One Dimension

    NASA Astrophysics Data System (ADS)

    Privman, Vladimir

    2005-08-01

    Part I. Reaction-Diffusion Systems and Models of Catalysis; 1. Scaling theories of diffusion-controlled and ballistically-controlled bimolecular reactions S. Redner; 2. The coalescence process, A+A->A, and the method of interparticle distribution functions D. ben-Avraham; 3. Critical phenomena at absorbing states R. Dickman; Part II. Kinetic Ising Models; 4. Kinetic ising models with competing dynamics: mappings, correlations, steady states, and phase transitions Z. Racz; 5. Glauber dynamics of the ising model N. Ito; 6. 1D Kinetic ising models at low temperatures - critical dynamics, domain growth, and freezing S. Cornell; Part III. Ordering, Coagulation, Phase Separation; 7. Phase-ordering dynamics in one dimension A. J. Bray; 8. Phase separation, cluster growth, and reaction kinetics in models with synchronous dynamics V. Privman; 9. Stochastic models of aggregation with injection H. Takayasu and M. Takayasu; Part IV. Random Sequential Adsorption and Relaxation Processes; 10. Random and cooperative sequential adsorption: exactly solvable problems on 1D lattices, continuum limits, and 2D extensions J. W. Evans; 11. Lattice models of irreversible adsorption and diffusion P. Nielaba; 12. Deposition-evaporation dynamics: jamming, conservation laws and dynamical diversity M. Barma; Part V. Fluctuations In Particle and Surface Systems; 13. Microscopic models of macroscopic shocks S. A. Janowsky and J. L. Lebowitz; 14. The asymmetric exclusion model: exact results through a matrix approach B. Derrida and M. R. Evans; 15. Nonequilibrium surface dynamics with volume conservation J. Krug; 16. Directed walks models of polymers and wetting J. Yeomans; Part VI. Diffusion and Transport In One Dimension; 17. Some recent exact solutions of the Fokker-Planck equation H. L. Frisch; 18. Random walks, resonance, and ratchets C. R. Doering and T. C. Elston; 19. One-dimensional random walks in random environment K. Ziegler; Part VII. Experimental Results; 20. Diffusion-limited exciton kinetics in one-dimensional systems R. Kroon and R. Sprik; 21. Experimental investigations of molecular and excitonic elementary reaction kinetics in one-dimensional systems R. Kopelman and A. L. Lin; 22. Luminescence quenching as a probe of particle distribution S. H. Bossmann and L. S. Schulman; Index.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with a high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  17. Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas

    For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.
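
    Snapshot POD, the building block that DPOD extends, fits in a few lines: assemble mean-subtracted snapshots as columns and take an SVD; the left singular vectors are the spatial modes and the scaled right singular vectors their time amplitudes. The toy traveling-wave data below are our own; the DPOD ensemble machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 2 * np.pi, 400)
t = np.linspace(0, 20, 200)
# Toy "wake": two oscillating structures plus measurement noise.
snapshots = (np.outer(np.sin(x), np.cos(3 * t))
             + 0.5 * np.outer(np.sin(2 * x), np.sin(3 * t))
             + 0.01 * rng.normal(size=(x.size, t.size)))

mean = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = S**2 / np.sum(S**2)                       # modal energy fractions
amplitudes = S[:2, None] * Vt[:2]                  # a_i(t) for the first 2 modes
print("energy captured by 2 modes:", energy[:2].sum())
```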

  18. Multivariate neural biomarkers of emotional states are categorically distinct

    PubMed Central

    Kragel, Philip A.

    2015-01-01

    Understanding how emotions are represented neurally is a central aim of affective neuroscience. Despite decades of neuroimaging efforts addressing this question, it remains unclear whether emotions are represented as distinct entities, as predicted by categorical theories, or are constructed from a smaller set of underlying factors, as predicted by dimensional accounts. Here, we capitalize on multivariate statistical approaches and computational modeling to directly evaluate these theoretical perspectives. We elicited discrete emotional states using music and films during functional magnetic resonance imaging scanning. Distinct patterns of neural activation predicted the emotion category of stimuli and tracked subjective experience. Bayesian model comparison revealed that combining dimensional and categorical models of emotion best characterized the information content of activation patterns. Surprisingly, categorical and dimensional aspects of emotion experience captured unique and opposing sources of neural information. These results indicate that diverse emotional states are poorly differentiated by simple models of valence and arousal, and that activity within separable neural systems can be mapped to unique emotion categories. PMID:25813790

  19. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  20. Modeling failure in brittle porous ceramics

    NASA Astrophysics Data System (ADS)

    Keles, Ozgur

    Brittle porous materials (BPMs) are used for battery, fuel cell, catalyst, membrane, filter, bone graft, and pharmacy applications due to the multi-functionality of their underlying porosity. However, in spite of these technological benefits, the effects of porosity on BPM fracture strength and Weibull statistics are not fully understood, limiting wider use. In this context, classical fracture mechanics was combined with two-dimensional finite element simulations not only to account for pore-pore stress interactions, but also to numerically quantify the relationship between the local pore volume fraction and fracture statistics. Simulations show that even microstructures with the same porosity level and pore size can differ substantially in fracture strength. The maximum reliability of BPMs was shown to be limited by the underlying pore-pore interactions. The fracture strength of BPMs decreases at a faster rate under biaxial loading than under uniaxial loading. Three different types of deviation from classic Weibull behavior are identified: P-type, corresponding to a positive lower tail deviation; N-type, corresponding to a negative lower tail deviation; and S-type, corresponding to both positive upper and lower tail deviations. Pore-pore interactions result in either P-type or N-type deviation in the limit of low porosity, whereas S-type behavior occurs when clusters of low and high fracture strengths coexist in the fracture data.
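
    The Weibull-statistics side of such studies usually comes down to estimating the modulus m from a sample of fracture strengths. A short sketch on simulated strengths, using the standard linearization ln(-ln(1-F)) = m ln σ - m ln σ₀ with median-rank plotting positions (our choice of estimator):

```python
import numpy as np

rng = np.random.default_rng(8)
m_true, sigma0 = 8.0, 300.0                        # shape (modulus), scale (MPa)
strengths = np.sort(sigma0 * rng.weibull(m_true, size=50))

n = strengths.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # median-rank failure probabilities
yv = np.log(-np.log(1.0 - F))                      # Weibull linearization
m_fit, c = np.polyfit(np.log(strengths), yv, 1)    # slope = modulus m
print("Weibull modulus:", m_fit, " scale:", np.exp(-c / m_fit))
```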

  1. Studies in the use of cloud type statistics in mission simulation

    NASA Technical Reports Server (NTRS)

    Fowler, M. G.; Willand, J. H.; Chang, D. T.; Cogan, J. L.

    1974-01-01

    A study to further improve NASA's global cloud statistics for mission simulation is reported. Regional homogeneity in cloud types was examined; most of the original region boundaries defined for cloud cover amount in previous studies were supported by the statistics on cloud types and the number of cloud layers. Conditionality in cloud statistics was also examined, with special emphasis on temporal and spatial dependencies and cloud type interdependence. Temporal conditionality was found up to 12 hours, and spatial conditionality up to 200 miles; the diurnal cycle in convective cloudiness was clearly evident. As expected, the joint occurrence of different cloud types reflected the dynamic processes which form the clouds. Other phases of the study improved the cloud type statistics for several regions and proposed a mission simulation scheme combining the 4-dimensional atmospheric model, sponsored by MSFC, with the global cloud model.

  2. Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2018-05-01

    Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
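
    Step one of the procedure is concrete enough to sketch: build the weighted information graph from pairwise mutual informations. Below, a plug-in histogram estimator on toy correlated variables (our simplification; any serious application would need a better MI estimator):

```python
import numpy as np

def mutual_information(a, b, n_bins=8):
    """Plug-in MI estimate (nats) from paired samples of two variables."""
    joint, _, _ = np.histogram2d(a, b, bins=n_bins)
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(9)
n_vars, n_samp = 6, 5000
z = rng.normal(size=n_samp)                        # shared latent factor
X = np.column_stack([z + 0.5 * i * rng.normal(size=n_samp)
                     for i in range(n_vars)])

adjacency = np.zeros((n_vars, n_vars))             # weighted information graph
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        adjacency[i, j] = adjacency[j, i] = mutual_information(X[:, i], X[:, j])
print(np.round(adjacency, 2))
```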

  4. Computational statistics using the Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-09-01

    This paper introduces the Bayesian Inference Engine (BIE), a general, parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical details and download instructions are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.

  5. Data-driven reduced order models for effective yield strength and partitioning of strain in multiphase materials

    NASA Astrophysics Data System (ADS)

    Latypov, Marat I.; Kalidindi, Surya R.

    2017-10-01

    There is a critical need for the development and verification of practically useful multiscale modeling strategies for simulating the mechanical response of multiphase metallic materials with heterogeneous microstructures. In this contribution, we present data-driven reduced order models for effective yield strength and strain partitioning in such microstructures. These models are built employing the recently developed framework of Materials Knowledge Systems, which employs 2-point spatial correlations (or 2-point statistics) for the quantification of the microstructures and principal component analysis for their low-dimensional representation. The models are calibrated to a large collection of finite element (FE) results obtained for a diverse range of microstructures with various sizes, shapes, and volume fractions of the phases. The performance of the models is evaluated by comparing the predictions of yield strength and strain partitioning in two-phase materials with the corresponding predictions from a classical self-consistent model as well as results of full-field FE simulations. The reduced-order models developed in this work show an excellent combination of accuracy and computational efficiency, and therefore present an important advance towards computationally efficient microstructure-sensitive multiscale modeling frameworks.
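
    A schematic of the pipeline described here, under simplifying assumptions: two-phase microstructures on a periodic grid, the autocorrelation of one phase only as the 2-point statistic, and ordinary least squares as the calibration step. The stand-in responses below are not FE results.

        import numpy as np

        def two_point_autocorrelation(micro):
            """Periodic 2-point autocorrelation of the phase-1 indicator via FFT."""
            f = np.fft.fftn(micro)
            return np.real(np.fft.ifftn(f * np.conj(f))) / micro.size

        def build_reduced_model(microstructures, responses, n_pc=3):
            """PCA of flattened 2-point statistics + linear fit to responses."""
            X = np.array([two_point_autocorrelation(m).ravel() for m in microstructures])
            mean = X.mean(axis=0)
            U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
            scores = (X - mean) @ Vt[:n_pc].T        # low-dimensional representation
            design = np.column_stack([np.ones(len(scores)), scores])
            coef, *_ = np.linalg.lstsq(design, responses, rcond=None)
            return mean, Vt[:n_pc], coef

        def predict(micro, mean, components, coef):
            score = (two_point_autocorrelation(micro).ravel() - mean) @ components.T
            return coef[0] + score @ coef[1:]

        rng = np.random.default_rng(2)
        micros = [(rng.random((32, 32)) < vf).astype(float)
                  for vf in rng.uniform(0.2, 0.8, 50)]
        y = np.array([m.mean() * 100.0 for m in micros])  # stand-in for FE yield strengths
        mean, comps, coef = build_reduced_model(micros, y)
        print(predict(micros[0], mean, comps, coef), y[0])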

  6. Global modeling of soil evaporation efficiency for a chosen soil type

    NASA Astrophysics Data System (ADS)

    Georgiana Stefan, Vivien; Mangiarotti, Sylvain; Merlin, Olivier; Chanzy, André

    2016-04-01

    One way of reproducing the dynamics of a system is to derive a set of differential, difference, or discrete equations directly from observational time series. One method for obtaining such a system is the global modeling technique [1]. The approach is applied here to the dynamics of soil evaporative efficiency (SEE), defined as the ratio of actual to potential evaporation. SEE is an interesting variable to study because it is directly linked to soil evaporation, which plays an important role in the water cycle, and because it can be easily derived from satellite measurements. One goal of the present work is to obtain a semi-empirical parameter that could account for the variety of SEE dynamical behaviors resulting from different soil properties. Before trying to obtain such a parameter with the global modeling technique, it is first necessary to show that the technique can be applied to the dynamics of SEE without any a priori information. The global modeling technique is thus applied here to a synthetic SEE series reconstructed from the TEC (Transfert Eau Chaleur) model [2]. It is found that an autonomous chaotic model can be retrieved for the dynamics of SEE. The obtained model is four-dimensional and exhibits a complex behavior. The comparison of the original and model phase portraits shows very good consistency, which proves that the original dynamical behavior is well described by the model. To evaluate the model accuracy, the forecasting error growth is estimated. To obtain a robust estimate of this error growth, the forecasting error is computed for prediction horizons of 0 to 9 hours, starting from different initial conditions, and statistics of the error growth are computed. Results show that, for a maximum error level of 40% of the signal variance, the horizon of predictability is close to 3 hours, approximately one third of the diurnal period. These results are interesting for several reasons. To the best of our knowledge, this is the first time that a chaotic model has been obtained for the SEE. It also shows that the SEE dynamics can be approximated by a low-dimensional autonomous model. From a theoretical point of view, it is also interesting to note that only very few low-dimensional models have been obtained directly for environmental dynamics, and that four-dimensional models are even rarer. Since a model could be obtained for the SEE, the global modeling technique can now be adapted and applied to a range of different soil conditions in order to obtain a global model that accounts for the variability of soil properties. [1] Mangiarotti S., Coudret R., Drapeau L., Jarlan L. Polynomial search and global modeling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205, 2012. [2] Chanzy A., Mumen M., Richard G. Accuracy of the top soil moisture simulation using a mechanistic model with limited soil characterization. Water Resources Research, 44, W03432, 2008.
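
    A sketch of the forecast-error-growth diagnostic used here, assuming paired forecasts and reference trajectories from many initial conditions are available as arrays. The horizon of predictability is taken as the first lead time at which the normalized mean square error exceeds a chosen fraction of the signal variance; the data below are synthetic.

        import numpy as np

        def predictability_horizon(forecasts, truths, threshold=0.4):
            """forecasts, truths: shape (n_initial_conditions, n_horizons).
            Returns the first horizon index where the normalized MSE
            exceeds threshold."""
            err = np.mean((forecasts - truths) ** 2, axis=0)  # error growth vs lead time
            err_norm = err / np.var(truths)
            above = np.nonzero(err_norm > threshold)[0]
            return above[0] if above.size else len(err_norm)

        # Synthetic example: errors that grow exponentially with lead time.
        rng = np.random.default_rng(3)
        truth = rng.normal(size=(200, 10))
        fcst = truth + 0.05 * np.exp(0.5 * np.arange(10)) * rng.normal(size=(200, 10))
        print("horizon index:", predictability_horizon(fcst, truth))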

  7. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, namely parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, and than estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
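
    A minimal sketch of the general idea of variance shrinkage behind regularized t-like statistics. This is a generic common-value shrinkage toward the across-variable mean variance, not the MVR clustering procedure itself; the shrinkage weight is arbitrary here.

        import numpy as np

        def regularized_t(x, y, shrink=0.5):
            """Two-sample t-like statistic with variable-wise variances shrunk
            toward their common (pooled-across-variables) value.
            x, y: arrays of shape (n_samples, n_variables)."""
            nx, ny = x.shape[0], y.shape[0]
            vx, vy = x.var(axis=0, ddof=1), y.var(axis=0, ddof=1)
            # Shrink each variable's variance toward the average over variables.
            vx_s = shrink * vx.mean() + (1 - shrink) * vx
            vy_s = shrink * vy.mean() + (1 - shrink) * vy
            se = np.sqrt(vx_s / nx + vy_s / ny)
            return (x.mean(axis=0) - y.mean(axis=0)) / se

        rng = np.random.default_rng(4)
        a = rng.normal(0.0, 1.0, size=(5, 1000))  # small n, many variables
        b = rng.normal(0.2, 1.0, size=(5, 1000))
        print(np.abs(regularized_t(a, b)).mean())  # more stable than raw t for small n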

  8. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that, under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
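
    A sketch of one common diagnostic for low-dimensional deterministic structure in cycle-resolved combustion metrics: a time-delay return map of successive cycle values. Points that concentrate on a thin curve, rather than an unstructured cloud, suggest short-term predictability. The noisy logistic map below is a stand-in for a measured per-cycle series such as heat release.

        import numpy as np

        def return_map(series, lag=1):
            """Pairs (x_k, x_{k+lag}) for a cycle-resolved combustion metric."""
            x = np.asarray(series, dtype=float)
            return np.column_stack([x[:-lag], x[lag:]])

        # Synthetic cycle series with deterministic structure: a noisy logistic
        # map, standing in for cycle-to-cycle coupling near a stability limit.
        rng = np.random.default_rng(5)
        x = np.empty(2000)
        x[0] = 0.3
        for k in range(1999):
            x[k + 1] = 3.9 * x[k] * (1.0 - x[k]) + 0.01 * rng.normal()

        pairs = return_map(x)
        # Crude structure test: correlation of x_{k+1} with the deterministic
        # map applied to x_k; near 1 indicates strong determinism.
        print(np.corrcoef(3.9 * pairs[:, 0] * (1 - pairs[:, 0]), pairs[:, 1])[0, 1])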

  9. Constructing the Hydrogeological Model of the Choushuichi Fan-delta in Central Taiwan with the Electrical Resistivity Measurements

    NASA Astrophysics Data System (ADS)

    Chang, P.; Chang, L.; Chen, W.; Chiang, C.

    2012-12-01

    In this study we used resistivity measurements across the Choushuichi Fan-delta to establish a three-dimensional hydrogeological model. The resistivity measurements include half-Schlumberger surveys conducted during 1990-2000 across the entire fan-delta area, and two-dimensional resistivity data collected recently for the purpose of characterizing the recharge zone boundaries between the upper-fan gravels and the lower-fan clayey sediments. Core records from the monitoring wells in the area were used as training data to help determine the resistivity ranges of the gravel, sand, and muddy sediments in the fan-delta. The resistivity measurements were inverted, converted into 1-D form, and interpolated to render a three-dimensional resistivity volume that represents the general resistivity distribution in the Choushuichi fan-delta. We categorize the hydrogeological materials into gravels, sands, and clayey sediments using the resistivity ranges from the foregoing statistical analysis. Hence we are able to quickly construct a three-dimensional hydrogeological model with these three materials.
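
    A small sketch of the final categorization step, with illustrative resistivity cut-offs (the actual class ranges were derived from the core-record training data, not the values used here):

        import numpy as np

        # Hypothetical class boundaries in ohm-m: below 30 -> clayey sediments,
        # 30-150 -> sands, above 150 -> gravels (illustrative values only).
        BOUNDS = np.array([30.0, 150.0])
        LABELS = np.array(["clay", "sand", "gravel"])

        def categorize(resistivity_volume):
            """Map an interpolated 3-D resistivity volume to material classes."""
            return LABELS[np.digitize(resistivity_volume, BOUNDS)]

        rng = np.random.default_rng(6)
        volume = 10 ** rng.uniform(0.5, 3.0, size=(4, 4, 4))  # synthetic ohm-m values
        print(categorize(volume)[0])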

  10. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, where N is the (large) matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.
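
    A sketch of the numerical experiment implied here: build a banded random matrix of bandwidth b and use the nearest-neighbor spacing-ratio statistic to watch eigenvalue statistics move from Poisson (mean ratio about 0.386) toward the Gaussian orthogonal ensemble value (about 0.536) as d = b^2/N grows. Matrix sizes and ensemble counts are kept small for brevity.

        import numpy as np

        def banded_goe(n, b, rng):
            """Symmetric Gaussian random matrix with entries only within bandwidth b."""
            m = rng.normal(size=(n, n))
            m = (m + m.T) / np.sqrt(2)
            i, j = np.indices((n, n))
            m[np.abs(i - j) > b] = 0.0
            return m

        def mean_spacing_ratio(eigvals):
            """Mean of min(s_k, s_{k+1}) / max(s_k, s_{k+1}) over level spacings."""
            s = np.diff(np.sort(eigvals))
            r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
            return r.mean()

        rng = np.random.default_rng(7)
        n = 400
        for b in (1, 5, 40):
            vals = [mean_spacing_ratio(np.linalg.eigvalsh(banded_goe(n, b, rng)))
                    for _ in range(5)]
            print(f"b={b:3d}  d=b^2/N={b**2 / n:6.3f}  <r>={np.mean(vals):.3f}")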

  12. A numerical solution for thermoacoustic convection of fluids in low gravity

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Bourgeois, S. V., Jr.; Fan, C.; Grodzka, P. G.

    1973-01-01

    A finite difference numerical technique for solving the differential equations which describe thermal convection of compressible fluids in low gravity is reported. Results of one-dimensional calculations are presented, and comparisons are made to previous solutions. The primary result presented is a one-dimensional radial model of the Apollo 14 heat flow and convection demonstration flight experiment. The numerical calculations show that thermally induced convective motion in a confined fluid can have significant effects on heat transfer in a low gravity environment.

  13. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, which renders standard statistical methods computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
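
    A compact sketch of an ensemble Kalman analysis step with covariance localization, the core ingredient that a localized smoother extends backward in time. The Gaussian taper, serial treatment of observations, and all dimensions are illustrative choices, not the paper's algorithm.

        import numpy as np

        def enkf_analysis(ensemble, obs, obs_idx, obs_var, loc_radius, rng):
            """Localized stochastic ensemble Kalman analysis for pointwise
            observations. ensemble: (n_members, n_state)."""
            n_members, n_state = ensemble.shape
            grid = np.arange(n_state)
            for k, j in enumerate(obs_idx):
                hx = ensemble[:, j]                            # observed variable
                taper = np.exp(-0.5 * ((grid - j) / loc_radius) ** 2)  # localization
                anom = ensemble - ensemble.mean(axis=0)
                cov = (anom.T @ (hx - hx.mean())) / (n_members - 1)
                gain = taper * cov / (hx.var(ddof=1) + obs_var)  # localized gain
                perturbed = obs[k] + np.sqrt(obs_var) * rng.normal(size=n_members)
                ensemble = ensemble + np.outer(perturbed - hx, gain)
            return ensemble

        rng = np.random.default_rng(8)
        ens = rng.normal(size=(50, 100))
        analysis = enkf_analysis(ens, obs=np.array([1.0]), obs_idx=[50],
                                 obs_var=0.1, loc_radius=5.0, rng=rng)
        print(analysis[:, 50].mean())  # ensemble mean pulled toward the observation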

  14. A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  15. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of 100 ensemble members to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
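
    A sketch of the hybrid density estimate in the spirit described here: a mixture of Gaussians in one subspace combined with a kernel density estimate in the complementary subspace. The closed-form conditional statistics of the actual algorithm are replaced by placeholder per-member means and variances, and both subspaces are taken one-dimensional for brevity.

        import numpy as np

        def hybrid_density(u1, u2, cond_means, cond_vars, samples_u2, bandwidth):
            """p(u1, u2) ~ (1/K) * sum_k N(u1; m_k, v_k) * K_h(u2 - u2_k).
            u1: point in the parametric (conditionally Gaussian) subspace;
            u2: point in the residual subspace handled by KDE."""
            g1 = (np.exp(-0.5 * (u1 - cond_means) ** 2 / cond_vars)
                  / np.sqrt(2 * np.pi * cond_vars))
            g2 = (np.exp(-0.5 * ((u2 - samples_u2) / bandwidth) ** 2)
                  / (np.sqrt(2 * np.pi) * bandwidth))
            return np.mean(g1 * g2)

        rng = np.random.default_rng(9)
        K = 100                           # small ensemble, as in the article
        u2_k = rng.normal(size=K)         # ensemble members in the KDE subspace
        m_k = 0.8 * u2_k                  # placeholder conditional means given u2_k
        v_k = np.full(K, 0.36)            # placeholder conditional variances
        print(hybrid_density(0.5, 0.5, m_k, v_k, u2_k, bandwidth=0.3))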

  16. Quantifying two-dimensional nonstationary signal with power-law correlations by detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Fan, Qingju; Wu, Yonghong

    2015-08-01

    In this paper, we develop a new method for the multifractal characterization of two-dimensional nonstationary signals, based on detrended fluctuation analysis (DFA). By applying it to two artificially generated signals, a two-component ARFIMA process and a binomial multifractal model, we show that the new method can reliably determine the multifractal scaling behavior of two-dimensional signals. We also illustrate applications of this method in finance and physiology. The results show that the two-dimensional signals under investigation exhibit power-law correlations; the electricity market data, consisting of electricity price and trading volume, are multifractal, while the two-dimensional EEG signal recorded during sleep for a single patient is weakly multifractal. The new method based on detrended fluctuation analysis may add diagnostic power to existing statistical methods.
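
    A generic two-dimensional DFA sketch (window-wise integration and plane detrending), not the authors' specific multifractal variant: partition the field into s-by-s windows, integrate within each window, remove a least-squares plane, and regress log F(s) on log s.

        import numpy as np

        def dfa2d(z, scales):
            """Two-dimensional DFA: r.m.s. plane-detrended fluctuation F(s) of the
            window-wise double cumulative sum, for each window size s."""
            z = np.asarray(z, dtype=float)
            F = []
            for s in scales:
                u, v = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
                A = np.column_stack([np.ones(s * s), u.ravel(), v.ravel()])
                res2 = []
                for i in range(0, z.shape[0] - s + 1, s):
                    for j in range(0, z.shape[1] - s + 1, s):
                        y = np.cumsum(np.cumsum(z[i:i + s, j:j + s], axis=0),
                                      axis=1).ravel()
                        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                        res2.append(np.mean((y - A @ coef) ** 2))
                F.append(np.sqrt(np.mean(res2)))
            return np.array(F)

        rng = np.random.default_rng(10)
        field = rng.normal(size=(256, 256))     # uncorrelated reference field
        scales = np.array([8, 16, 32, 64])
        logF = np.log(dfa2d(field, scales))
        # The slope is the scaling exponent; long-range correlated fields give
        # slopes above the uncorrelated baseline computed here.
        print("scaling exponent:", np.polyfit(np.log(scales), logF, 1)[0])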

  17. The problem of dimensional instability in airfoil models for cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Wigley, D. A.

    1982-01-01

    The problem of dimensional instability in airfoil models for cryogenic wind tunnels is discussed in terms of the various mechanisms that can be responsible. The interrelationship between metallurgical structure and possible dimensional instability in cryogenic usage is discussed for those steel alloys of most interest for wind tunnel model construction at this time. Other basic mechanisms responsible for setting up residual stress systems are discussed, together with ways in which their magnitude may be reduced by various elevated or low temperature thermal cycles. A standard specimen configuration is proposed for use in experimental investigations into the effects of machining, heat treatment, and other variables that influence the dimensional stability of the materials of interest. A brief classification of various materials in terms of their metallurgical structure and susceptibility to dimensional instability is presented.

  18. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional form. Principal component analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths with the greatest significance for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between sterile control samples, the five concentration levels of OTA contamination, and the five infection levels of non-OTA-producing P. verrucosum inoculation. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA contamination and between the five infection levels of non-OTA-producing P. verrucosum inoculation with correct classification rates above 98%. The non-OTA-producing P. verrucosum inoculated kernels and the OTA-contaminated kernels produced distinct spectral patterns under hyperspectral imaging.
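
    A schematic of the classification chain described here, with stand-in data: select key wavelengths from PCA loadings and discriminate with a linear model. scikit-learn's LinearDiscriminantAnalysis stands in for the study's discriminant models, and the synthetic spectra are not hyperspectral measurements.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(11)

        # Stand-in data: 200 kernels x 61 wavelengths, two classes whose mean
        # spectra differ in a few bands (real data would come from the imager).
        n, bands = 200, 61
        labels = rng.integers(0, 2, size=n)
        spectra = rng.normal(size=(n, bands))
        spectra[labels == 1, 20:25] += 1.0      # contamination signature

        # "Key wavelengths": the bands with the largest PC-1 loadings.
        pca = PCA(n_components=3).fit(spectra)
        key = np.argsort(np.abs(pca.components_[0]))[-5:]

        # Linear discriminant on features extracted at the key wavelengths.
        lda = LinearDiscriminantAnalysis().fit(spectra[:, key], labels)
        print("key bands:", np.sort(key),
              "training accuracy:", lda.score(spectra[:, key], labels))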

  19. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods

    DOE PAGES

    James, Andrew J. A.; Konik, Robert M.; Lecheminant, Philippe; ...

    2018-02-26

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we provide a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1+1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we turn our attention to truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. Lastly, we describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  4. Wake Management Strategies for Reduction of Turbomachinery Fan Noise

    NASA Technical Reports Server (NTRS)

    Waitz, Ian A.

    1998-01-01

    The primary objective of our work was to evaluate and test several wake management schemes for the reduction of turbomachinery fan noise. Throughout the course of this work we relied on several tools. These include 1) Two-dimensional steady boundary-layer and wake analyses using MISES (a thin-shear layer Navier-Stokes code), 2) Two-dimensional unsteady wake-stator interaction simulations using UNSFLO, 3) Three-dimensional, steady Navier-Stokes rotor simulations using NEWT, 4) Internal blade passage design using quasi-one-dimensional passage flow models developed at MIT, 5) Acoustic modeling using LINSUB, 6) Acoustic modeling using VO72, 7) Experiments in a low-speed cascade wind-tunnel, and 8) ADP fan rig tests in the MIT Blowdown Compressor.

  5. Peculiar spectral statistics of ensembles of trees and star-like graphs

    DOE PAGES

    Kovaleva, V.; Maximov, Yu; Nechaev, S.; ...

    2017-07-11

    In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that the spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the "Lifshitz singularity" emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of the spectra of adjacency matrices is interpreted as a set of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but rather has a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with an exponential distribution of lengths demonstrates that dendrimers could serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.
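
    A short sketch for reproducing the kind of spectral density studied here: assemble the adjacency matrix of a full binary tree and histogram its eigenvalues. The exponential ensemble weighting of the paper is omitted; depth and binning are arbitrary.

        import numpy as np

        def full_binary_tree_adjacency(depth):
            """Adjacency matrix of a full binary tree with 2**(depth+1) - 1
            vertices, using heap indexing (children of i are 2i+1 and 2i+2)."""
            n = 2 ** (depth + 1) - 1
            A = np.zeros((n, n))
            for i in range((n - 1) // 2):
                for child in (2 * i + 1, 2 * i + 2):
                    A[i, child] = A[child, i] = 1.0
            return A

        eigs = np.linalg.eigvalsh(full_binary_tree_adjacency(depth=9))
        hist, edges = np.histogram(eigs, bins=40, density=True)
        # The histogram shows the spiky, hierarchical structure of tree
        # spectra rather than a smooth bulk density; print the tallest bin.
        print(edges[np.argmax(hist)], hist.max())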

  7. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to reach a set target in a two-dimensional maze. Results of computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.

  8. A note on two-dimensional asymptotic magnetotail equilibria

    NASA Technical Reports Server (NTRS)

    Voigt, Gerd-Hannes; Moore, Brian D.

    1994-01-01

    In order to understand, on the fluid level, the structure, the time evolution, and the stability of current sheets, such as the magnetotail plasma sheet in Earth's magnetosphere, one has to consider magnetic field configurations that are in magnetohydrodynamic (MHD) force equilibrium. Any reasonable MHD current sheet model has to be two-dimensional, at least in an asymptotic sense (B_z/B_x = epsilon << 1). The necessary two-dimensionality is described by a rather arbitrary function f(x). We utilize the free function f(x) to construct two-dimensional magnetotail equilibria that are 'equivalent' to current sheets in empirical three-dimensional models. We obtain a class of asymptotic magnetotail equilibria ordered with respect to the magnetic disturbance index Kp. For low Kp values the two-dimensional MHD equilibria reflect some of the realistic, observation-based aspects of three-dimensional models. For high Kp values the three-dimensional models do not fit the asymptotic MHD equilibria, which is indicative of their inconsistency with the assumed pressure function. This, in turn, implies that high magnetic activity levels of the real magnetosphere might be ruled by thermodynamic conditions different from local thermodynamic equilibrium.

  9. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
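
    A sketch of single-batch maximum-likelihood covariance estimation in the spirit described here: model the innovations of one batch as zero-mean Gaussian with a parameterized covariance and maximize the log-likelihood over a parameter. The Gaussian correlation form, the grid search, and all numbers are illustrative.

        import numpy as np

        def innovation_loglik(params, v, dists):
            """Log-likelihood of one batch of innovations v ~ N(0, S(params)),
            with S = sig_f2 * exp(-d^2 / (2 L^2)) + sig_o2 * I (toy model)."""
            sig_f2, sig_o2, L = params
            S = sig_f2 * np.exp(-0.5 * (dists / L) ** 2) + sig_o2 * np.eye(len(v))
            sign, logdet = np.linalg.slogdet(S)
            return -0.5 * (logdet + v @ np.linalg.solve(S, v))

        rng = np.random.default_rng(12)
        x = rng.uniform(0, 10, size=80)                 # observation locations
        d = np.abs(x[:, None] - x[None, :])
        S_true = 1.0 * np.exp(-0.5 * (d / 1.5) ** 2) + 0.2 * np.eye(80)
        v = np.linalg.cholesky(S_true) @ rng.normal(size=80)  # one innovation batch

        # Coarse grid search for the ML length scale (variances held fixed).
        Ls = np.linspace(0.5, 3.0, 11)
        best = max(Ls, key=lambda L: innovation_loglik((1.0, 0.2, L), v, d))
        print("ML length-scale estimate:", best)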

  10. Spin flip statistics and spin wave interference patterns in Ising ferromagnetic films: A Monte Carlo study.

    PubMed

    Acharyya, Muktish

    2017-07-01

    Spin wave interference is studied in a two-dimensional Ising ferromagnet driven by two coherent spherical magnetic field waves, using Monte Carlo simulation. The spin waves are found to propagate and interfere according to the classic rules for interference patterns generated by two point sources. The interference pattern of the spin waves is observed at one boundary of the lattice; it is detected and studied through spin flip statistics at high and low temperatures. Destructive interference is manifested as a large number of spin flips, and vice versa.
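
    A stripped-down Metropolis sketch in the spirit of the study: a 2D Ising lattice driven by two in-phase point sources whose field oscillates in time and decays with distance, with flip counts recorded along one boundary. Amplitudes, wavenumber, temperature, and lattice size are illustrative, not the paper's values.

        import numpy as np

        rng = np.random.default_rng(13)
        L, T, steps = 64, 1.8, 200
        spins = rng.choice([-1, 1], size=(L, L))
        src = [(32, 16), (32, 48)]          # two coherent point sources
        ii, jj = np.indices((L, L))

        def local_field(t):
            """Field from two in-phase sources; the phase lag k*r makes the
            disturbance propagate outward, so path differences interfere."""
            h = np.zeros((L, L))
            for (si, sj) in src:
                r = np.hypot(ii - si, jj - sj) + 1.0
                h += np.sin(0.4 * t - 0.5 * r) / r
            return h

        flips = np.zeros((L, L))
        for t in range(steps):
            h = local_field(t)
            for _ in range(L * L):          # one Monte Carlo sweep
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * (nb + h[i, j])
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
                    flips[i, j] += 1

        print("flip counts along one boundary:", flips[:, 0].astype(int))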

  11. Model of chiral spin liquids with Abelian and non-Abelian topological phases

    DOE PAGES

    Chen, Jyong-Hao; Mudry, Christopher; Chamon, Claudio; ...

    2017-12-15

    In this article, we present a two-dimensional lattice model for quantum spin-1/2 for which the low-energy limit is governed by four flavors of strongly interacting Majorana fermions. We study this low-energy effective theory using two alternative approaches. The first consists of a mean-field approximation. The second consists of a random phase approximation (RPA) for the single-particle Green's functions of the Majorana fermions built from their exact forms in a certain one-dimensional limit. The resulting phase diagram consists of two competing chiral phases, one with Abelian and the other with non-Abelian topological order, separated by a continuous phase transition. Remarkably, the Majorana fermions propagate in the two-dimensional bulk, as in the Kitaev model for a spin liquid on the honeycomb lattice. We identify the vison fields, which are mobile domain walls (they are static in the Kitaev model) propagating along only one of the two space directions.

  13. Analysing black phosphorus transistors using an analytic Schottky barrier MOSFET model.

    PubMed

    Penumatcha, Ashish V; Salazar, Ramon B; Appenzeller, Joerg

    2015-11-13

    Owing to the difficulties associated with substitutional doping of low-dimensional nanomaterials, most field-effect transistors built from carbon nanotubes, two-dimensional crystals and other low-dimensional channels are Schottky barrier MOSFETs (metal-oxide-semiconductor field-effect transistors). The transmission through a Schottky barrier MOSFET is dominated by the gate-dependent transmission through the Schottky barriers at the metal-to-channel interfaces. This makes the use of conventional transistor models highly inappropriate and has led researchers in the past frequently to extract incorrect intrinsic properties, for example, mobility, for many novel nanomaterials. Here we propose a simple modelling approach to quantitatively describe the transfer characteristics of Schottky barrier MOSFETs from ultra-thin body materials accurately in the device off-state. In particular, after validating the model through the analysis of a set of ultra-thin silicon field-effect transistor data, we have successfully applied our approach to extract Schottky barrier heights for electrons and holes in black phosphorus devices for a large range of body thicknesses.

  15. A detailed view on Model-Based Multifactor Dimensionality Reduction for detecting gene-gene interactions in case-control data in the absence and presence of noise

    PubMed Central

    Cattaert, Tom; Calle, M. Luz; Dudek, Scott M.; Mahachie John, Jestinah M.; Van Lishout, François; Urrea, Victor; Ritchie, Marylyn D.; Van Steen, Kristel

    2010-01-01

    Analyzing the combined effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and computational perspective, even when using a relatively small number of genetic and non-genetic exposures. Several data mining methods have been proposed for interaction analysis, among them the Multifactor Dimensionality Reduction (MDR) method, which has proven its utility in a variety of theoretical and practical settings. Model-Based Multifactor Dimensionality Reduction (MB-MDR), a relatively new MDR-based technique that unifies the best of the non-parametric and parametric worlds, was developed to address some of the remaining concerns that go along with an MDR analysis. These include the restriction to univariate, dichotomous traits, the absence of flexible ways to adjust for lower-order effects and important confounders, and the difficulty of highlighting epistasis effects when too many multi-locus genotype cells are pooled into two new genotype groups. Whereas the true value of MB-MDR can only reveal itself through extensive applications of the method in a variety of real-life scenarios, here we investigate the empirical power of MB-MDR to detect gene-gene interactions in the absence of any noise and in the presence of genotyping error, missing data, phenocopy, and genetic heterogeneity. For the considered simulation settings, we show that the power is generally higher for MB-MDR than for MDR, in particular in the presence of genetic heterogeneity, phenocopy, or low minor allele frequencies.
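
    A bare-bones sketch of the core MDR step the paper builds on: pool multilocus genotype cells into high-risk/low-risk groups by their case:control ratio. MB-MDR's refinements (per-cell association tests, adjustment for lower-order effects and confounders) are omitted, and the toy penetrance model is invented.

        import numpy as np

        def mdr_high_risk_cells(geno_pair, case, threshold=1.0):
            """Label each two-locus genotype cell high-risk if its case:control
            ratio exceeds threshold. geno_pair: (n, 2) array, genotypes 0/1/2."""
            risk = {}
            for g1 in range(3):
                for g2 in range(3):
                    in_cell = (geno_pair[:, 0] == g1) & (geno_pair[:, 1] == g2)
                    cases = np.sum(case[in_cell] == 1)
                    ctrls = np.sum(case[in_cell] == 0)
                    risk[(g1, g2)] = cases > threshold * ctrls
            return risk

        rng = np.random.default_rng(14)
        g = rng.integers(0, 3, size=(500, 2))
        # Epistatic toy model: elevated risk only when both loci carry at
        # least one variant allele.
        p = np.where((g[:, 0] > 0) & (g[:, 1] > 0), 0.7, 0.3)
        y = rng.binomial(1, p)
        print(mdr_high_risk_cells(g, y))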

  16. Measurement and data processing approach for detecting anisotropic spatial statistics of the turbulence-induced index of refraction fluctuations in the upper atmosphere.

    PubMed

    Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John

    2002-05-20

    We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
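
    A sketch of the least-squares step described here, fitting an anisotropic Gaussian correlation model to sparse autocorrelation estimates. scipy.optimize.least_squares stands in for the optimization routine, and the model form and data are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        def gaussian_corr(params, dx, dy):
            """Anisotropic Gaussian correlation: amplitude, two e-folding lengths."""
            a, lx, ly = params
            return a * np.exp(-(dx / lx) ** 2 - (dy / ly) ** 2)

        def fit_correlation(dx, dy, corr_obs):
            resid = lambda p: gaussian_corr(p, dx, dy) - corr_obs
            return least_squares(resid, x0=[1.0, 1.0, 1.0],
                                 bounds=(1e-6, np.inf)).x

        # Synthetic binned autocorrelation samples on a sparse set of separations.
        rng = np.random.default_rng(15)
        dx = rng.uniform(0, 3, 120)
        dy = rng.uniform(0, 3, 120)
        truth = gaussian_corr([1.0, 2.0, 0.7], dx, dy)  # anisotropic: lx != ly
        obs = truth + 0.02 * rng.normal(size=dx.size)
        print("fitted (a, lx, ly):", fit_correlation(dx, dy, obs).round(2))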

  17. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. We also consider a horizontally layered anisotropic earth response computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully on both synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be checked with a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation across iterations, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
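
    A minimal particle swarm optimizer sketch applied to a toy two-parameter misfit standing in for the layered-earth forward-model residual. Inertia and acceleration constants are typical textbook values; the misfit, bounds, and parameters are hypothetical.

        import numpy as np

        def pso(misfit, bounds, n_particles=30, n_iter=100,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Basic particle swarm optimization of misfit over box bounds."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([misfit(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)]
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([misfit(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[np.argmin(pbest_f)]
            return gbest, pbest_f.min()

        # Toy misfit standing in for the data residual of a two-layer model
        # (parameters: resistivity, thickness); minimum at (50 ohm-m, 10 m).
        misfit = lambda p: (p[0] - 50.0) ** 2 / 100.0 + (p[1] - 10.0) ** 2
        print(pso(misfit, bounds=[(1, 200), (1, 50)]))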

  18. Direct simulation of a self-similar plane wake

    NASA Technical Reports Server (NTRS)

    Moser, Robert D.; Rogers, Michael M.

    1994-01-01

    Direct simulations of two time-developing turbulent wakes have been performed. Initial conditions for the simulations were obtained from two realizations of a direct simulation of a turbulent boundary layer at momentum thickness Reynolds number 670. In addition, extra two-dimensional disturbances were added in one of the cases to mimic two-dimensional forcing. The unforced wake is allowed to evolve long enough to attain self-similarity. The mass-flux Reynolds number (equivalent to the momentum thickness Reynolds number in spatially developing wakes) is 2000, which is high enough for a short k^(-5/3) range to be evident in the streamwise one-dimensional velocity spectrum. Several turbulence statistics have been computed by averaging in space and over the self-similar period in time. The growth rate in the unforced flow is low compared to experiments, but when this growth-rate difference is accounted for, the statistics of the unforced case are in reasonable agreement with experiments. However, the forced case is significantly different. The growth rate, turbulence Reynolds number, and turbulence intensities are as much as ten times larger in the forced case. In addition, the forced flow exhibits large-scale structures similar to those observed in transitional wakes, while the unforced flow does not.

  19. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude-modulated autoregressive noise of order one with a time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction-by-exchange-with-the-mean (IEM) mixing model is a special case consisting of a linear regression with constant coefficients.
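
    A minimal sketch of the generation recipe described above: a time-dependent trend plus an amplitude-modulated autoregressive noise of order one with a time-varying parameter. The specific trend, modulation, and coefficient schedules are invented for illustration, and the linear regression terms coupling fluctuations and their increments are omitted:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 1000
      t = np.arange(n)

      trend = np.exp(-t / 300.0)                       # hypothetical nonstationary trend
      phi = 0.9 - 0.4 * t / n                          # time-varying AR(1) parameter
      amp = 0.1 * (1 + np.sin(2 * np.pi * t / 250))    # time-variable amplitude

      noise = np.zeros(n)
      for k in range(1, n):                            # AR(1) with time-varying phi
          noise[k] = phi[k] * noise[k - 1] + rng.standard_normal()
      series = trend + amp * noise                     # synthetic concentration record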

  20. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  1. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  2. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
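
    The copula mechanics (minus the privacy mechanism) fit in a few lines. This sketch uses a Gaussian copula: margins are mapped to normal scores, the score correlation is estimated, and synthetic samples are mapped back through the empirical inverse margins. In DPCopula the correlation estimate (via maximum likelihood or Kendall's τ) would carry calibrated noise for differential privacy, which is omitted here:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      mix = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.3], [0.0, 0.0, 1.0]])
      data = rng.gamma(2.0, 1.0, (5000, 3)) @ mix      # toy correlated, skewed data

      # 1. Map each margin to normal scores through its empirical CDF
      u = (stats.rankdata(data, axis=0) - 0.5) / len(data)
      z = stats.norm.ppf(u)

      # 2. Estimate the copula correlation (DPCopula would perturb this step)
      corr = np.corrcoef(z, rowvar=False)

      # 3. Sample normal scores and invert the margins to get synthetic records
      z_syn = rng.multivariate_normal(np.zeros(3), corr, size=5000)
      u_syn = stats.norm.cdf(z_syn)
      synthetic = np.column_stack(
          [np.quantile(data[:, j], u_syn[:, j]) for j in range(data.shape[1])])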

  3. Prevalence and determinants of burnout Syndrome and Depression among medical students at Sultan Qaboos University: A cross-sectional analytical study from Oman.

    PubMed

    Al-Alawi, Mohammed; Al-Sinawi, Hamed; Al-Qubtan, Ali; Al-Lawati, Jaber; Al-Habsi, Assad; Al-Shuraiqi, Mohammed; Al-Adawi, Samir; Panchatcharam, Sathiya Murthi

    2017-11-08

    This study investigated the prevalence and determinants of Burnout Syndrome and Depressive Symptoms among medical students in Oman. It then explored whether the three-dimensional aspects of Burnout Syndrome (High Emotional Exhaustion, High Cynicism, and Low Professional Efficacy) would predict the presence of Depressive Symptoms in a logistic regression model. A cross-sectional study was conducted among a random sample of medical students of Sultan Qaboos University; 662 students participated, a response rate of 98%. The prevalence of Burnout Syndrome and Depressive Symptoms was 7.4% and 24.5%, respectively. Preclinical students reported high levels of both Burnout Syndrome (Odds Ratio (OR) 2.83, 95% Confidence Interval (CI) 1.45-5.54) and Depressive Symptoms (OR 2.72, 95% CI 1.07-6.89). The three dimensions of Burnout Syndrome were statistically significant predictors of the presence of Depressive Symptoms: OR 3.52 (95% CI 2.21-5.60), OR 3.33 (95% CI 2.10-5.28), and OR 2.07 (95% CI 1.32-3.24), respectively. This study indicates that Burnout Syndrome and Depressive Symptoms are common among medical students, particularly in the preclinical years. Furthermore, the presence of high occupational burnout elevates the risk of depression.

  4. A new approach for assimilation of two-dimensional radar precipitation in a high resolution NWP model

    NASA Astrophysics Data System (ADS)

    Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetman, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; GIll, Rasphal; Vedel, Henrik

    2014-05-01

    The DMI nowcasting system has been running in a pre-operational state for the past year. The system consists of hourly simulations with the High Resolution Limited Area weather model, combined with surface and three-dimensional variational assimilation at each restart and nudging of satellite cloud products and radar precipitation. Nudging of a two-dimensional radar reflectivity CAPPI product is achieved using a new method in which low-level horizontal divergence is nudged towards pseudo-observations. The pseudo-observations are calculated from an assumed relation between divergence and precipitation rate, and the strength of the nudging is proportional to the offset between observed and modelled precipitation, leading to increased moisture convergence below cloud base when the model under-produces precipitation relative to the CAPPI product. If the model over-predicts precipitation, the low-level moisture source is reduced and in-cloud moisture is nudged towards environmental values. In this talk, results will be discussed based on calculation of the fractions skill score in cases with heavy precipitation over Denmark. Furthermore, results from simulations combining reflectivity nudging and extrapolation of reflectivity will be shown. Results indicate that the new method leads to fast adjustment of the dynamical state of the model, facilitating precipitation release when the model precipitation intensity is too low. Removal of precipitation is also shown to be important, and strong improvements were found in the position of the precipitation systems. Bias is reduced for low and extreme precipitation rates.

  5. Probing features in the primordial perturbation spectrum with large-scale structure data

    NASA Astrophysics Data System (ADS)

    L'Huillier, Benjamin; Shafieloo, Arman; Hazra, Dhiraj Kumar; Smoot, George F.; Starobinsky, Alexei A.

    2018-06-01

    The form of the primordial power spectrum (PPS) of cosmological scalar (matter density) perturbations is not yet constrained satisfactorily in spite of the tremendous amount of information from the Cosmic Microwave Background (CMB) data. While a smooth power-law-like form of the PPS is consistent with the CMB data, some PPSs with small non-smooth features at large scales can also fit the CMB temperature and polarization data with similar statistical evidence. Future CMB surveys cannot help distinguish all such models due to the cosmic variance at large angular scales. In this paper, we study how well we can differentiate between such featured forms of the PPS that are not otherwise distinguishable using CMB data. We ran 15 N-body DESI-like simulations of these models to explore this approach. Showing that statistics such as the halo mass function and the two-point correlation function are not able to distinguish these models in a DESI-like survey, we advocate avoiding a reduction of the dimensionality of the problem, demonstrating that the use of a simple three-dimensional count-in-cell density field can be much more effective for the purpose of model distinction.
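
    The count-in-cell statistic advocated above is straightforward to compute from particle positions; a sketch (the box size, cell count, and uniform toy catalog are arbitrary stand-ins for simulation output):

      import numpy as np

      def count_in_cell_pdf(positions, box_size, n_cells):
          # Histogram of particle counts in cubic cells; this density-field
          # statistic retains information that the halo mass function and
          # two-point correlation function average away.
          edges = np.linspace(0, box_size, n_cells + 1)
          counts, _ = np.histogramdd(positions, bins=(edges, edges, edges))
          return np.bincount(counts.astype(int).ravel())

      rng = np.random.default_rng(4)
      particles = rng.uniform(0, 1000.0, (100_000, 3))   # toy catalog (Mpc/h)
      pdf = count_in_cell_pdf(particles, 1000.0, 32)
      print(pdf[:10])   # frequency of cells containing 0, 1, 2, ... particles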

  6. A model for two-dimensional bursty turbulence in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Servidio, Sergio; Primavera, Leonardo; Carbone, Vincenzo

    2008-01-15

    The nonlinear dynamics of two-dimensional electrostatic interchange modes in a magnetized plasma is investigated through a simple model that replaces the instability mechanism due to magnetic field curvature by an external source of vorticity and mass. Simulations in a cylindrical domain, with a spatially localized and randomized source at the center of the domain, reveal the eruption of mushroom-shaped bursts that propagate radially and are absorbed by the boundaries. Burst sizes and the interburst waiting times exhibit power-law statistics, which indicates long-range interburst correlations, similar to what has been found in sandpile models for avalanching systems. It is shown from the simulations that the dynamics can be characterized by a Yaglom relation for the third-order mixed moment involving the particle number density as a passive scalar and the E×B drift velocity, and hence that the burst phenomenology can be described within the framework of turbulence theory. Statistical features are qualitatively in agreement with experiments of intermittent transport at the edge of plasma devices, and suggest that essential features such as transport can be described by this simple model of bursty turbulence.

  7. Fulde-Ferrell-Larkin-Ovchinnikov correlation and free fluids in the one-dimensional attractive Hubbard model

    NASA Astrophysics Data System (ADS)

    Cheng, Song; Yu, Yi-Cong; Batchelor, M. T.; Guan, Xi-Wen

    2018-03-01

    In this Rapid Communication, we show that low-energy macroscopic properties of the one-dimensional (1D) attractive Hubbard model exhibit two fluids of bound pairs and of unpaired fermions. Using the thermodynamic Bethe ansatz equations of the model, we first determine the low-temperature phase diagram and analytically calculate the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) pairing correlation function for the partially polarized phase. We then show that for such an FFLO-like state in the low-density regime the effective chemical potentials of bound pairs and unpaired fermions behave like two free fluids. Consequently, the susceptibility, compressibility, and specific heat obey simple additivity rules, indicating the "free" particle nature of interacting fermions on a 1D lattice. In contrast to the continuum Fermi gases, the correlation critical exponents and thermodynamics of the attractive Hubbard model essentially depend on two lattice interacting parameters. Finally, we study scaling functions, the Wilson ratio and susceptibility, which provide universal macroscopic properties and dimensionless constants of interacting fermions at low energy.

  8. Biosignature Discovery for Substance Use Disorders Using Statistical Learning.

    PubMed

    Baurley, James W; McMahan, Christopher S; Ervin, Carolyn M; Pardamean, Bens; Bergen, Andrew W

    2018-02-01

    There are limited biomarkers for substance use disorders (SUDs). Traditional statistical approaches identify simple biomarkers in large samples, but clinical use cases are still being established. High-throughput clinical, imaging, and 'omic' technologies are generating data from SUD studies and may lead to more sophisticated and clinically useful models. However, analytic strategies suited for high-dimensional data are not regularly used. We review strategies for identifying biomarkers and biosignatures from high-dimensional data types. Focusing on penalized regression and Bayesian approaches, we address how to leverage evidence from existing studies and knowledge bases, using nicotine metabolism as an example. We posit that big data and machine learning approaches will considerably advance SUD biomarker discovery. However, translation to clinical practice will require integrated scientific efforts. Copyright © 2017 Elsevier Ltd. All rights reserved.
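
    As a concrete illustration of leveraging prior evidence in penalized regression, this sketch down-weights the lasso penalty on candidate features flagged by earlier studies, implemented via per-feature rescaling. The data, candidate set, and penalty level are all hypothetical:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      n, p = 200, 1000                        # many more features than samples
      X = rng.standard_normal((n, p))
      beta = np.zeros(p); beta[:5] = 1.0      # a few true biomarkers
      y = X @ beta + rng.standard_normal(n)   # e.g., a metabolism phenotype

      # Prior evidence from existing studies or knowledge bases can be folded
      # in by lightening the penalty on candidate features: scaling feature j
      # by 1/w_j makes its effective penalty proportional to w_j.
      prior_weight = np.ones(p); prior_weight[:50] = 0.5   # hypothetical candidates
      model = Lasso(alpha=0.1)
      model.fit(X / prior_weight, y)
      selected = np.flatnonzero(model.coef_)
      print("selected features:", selected[:10])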

  9. How ions affect the structure of water.

    PubMed

    Hribar, Barbara; Southall, Noel T; Vlachy, Vojko; Dill, Ken A

    2002-10-16

    We model ion solvation in water. We use the MB model of water, a simple two-dimensional statistical mechanical model in which waters are represented as Lennard-Jones disks having Gaussian hydrogen-bonding arms. We introduce a charge dipole into MB waters. We perform (NPT) Monte Carlo simulations to explore how water molecules are organized around ions and around nonpolar solutes in salt solutions. The model gives good qualitative agreement with experiments, including Jones-Dole viscosity B coefficients, Samoilov and Hirata ion hydration activation energies, ion solvation thermodynamics, and Setschenow coefficients for Hofmeister series ions, which describe the salt concentration dependence of the solubilities of hydrophobic solutes. The two main ideas captured here are (1) that charge densities govern the interactions of ions with water, and (2) that a balance of forces determines water structure: electrostatics (water's dipole interacting with ions) and hydrogen bonding (water interacting with neighboring waters). Small ions (kosmotropes) have high charge densities so they cause strong electrostatic ordering of nearby waters, breaking hydrogen bonds. In contrast, large ions (chaotropes) have low charge densities, and surrounding water molecules are largely hydrogen bonded.

  10. Structure-Specific Statistical Mapping of White Matter Tracts

    PubMed Central

    Yushkevich, Paul A.; Zhang, Hui; Simon, Tony; Gee, James C.

    2008-01-01

    We present a new model-based framework for the statistical analysis of diffusion imaging data associated with specific white matter tracts. The framework takes advantage of the fact that several of the major white matter tracts are thin sheet-like structures that can be effectively modeled by medial representations. The approach involves segmenting major tracts and fitting them with deformable geometric medial models. The medial representation makes it possible to average and combine tensor-based features along directions locally perpendicular to the tracts, thus reducing data dimensionality and accounting for errors in normalization. The framework enables the analysis of individual white matter structures, and provides a range of possibilities for computing statistics and visualizing differences between cohorts. The framework is demonstrated in a study of white matter differences in pediatric chromosome 22q11.2 deletion syndrome. PMID:18407524

  11. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation

    PubMed Central

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K.

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348

  12. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    PubMed

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  13. Dosimetric treatment course simulation based on a statistical model of deformable organ motion

    NASA Astrophysics Data System (ADS)

    Söhn, M.; Sobotta, B.; Alber, M.

    2012-06-01

    We present a method of modeling dosimetric consequences of organ deformation and correlated motion of adjacent organ structures in radiotherapy. Based on a few organ geometry samples and the respective deformation fields as determined by deformable registration, principal component analysis (PCA) is used to create a low-dimensional parametric statistical organ deformation model (Söhn et al 2005 Phys. Med. Biol. 50 5893-908). PCA determines the most important geometric variability in terms of eigenmodes, which represent 3D vector fields of correlated organ deformations around the mean geometry. Weighted sums of a few dominating eigenmodes can be used to simulate synthetic geometries, which are statistically meaningful inter- and extrapolations of the input geometries, and predict their probability of occurrence. We present the use of PCA as a versatile treatment simulation tool, which allows comprehensive dosimetric assessment of the detrimental effects that deformable geometric uncertainties can have on a planned dose distribution. For this, a set of random synthetic geometries is generated by a PCA model for each simulated treatment course, and the dose of a given treatment plan is accumulated in the moving tissue elements via dose warping. This enables the calculation of average voxel doses, local dose variability, dose-volume histogram uncertainties, marginal as well as joint probability distributions of organ equivalent uniform doses and thus of TCP and NTCP, and other dosimetric and biologic endpoints. The method is applied to the example of deformable motion of prostate/bladder/rectum in prostate IMRT. Applications include dosimetric assessment of the adequacy of margin recipes, adaptation schemes, etc., as well as prospective ‘virtual’ evaluation of the possible benefits of new radiotherapy schemes.
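
    The core of such a PCA deformation model reduces to a few linear-algebra steps: center the sampled deformation fields, extract eigenmodes, and draw synthetic geometries as the mean plus normally weighted dominating modes. A sketch with random stand-ins for the registration output (the sample count and mode number are arbitrary):

      import numpy as np

      rng = np.random.default_rng(6)
      n_samples, n_dof = 12, 3 * 5000          # 12 geometries, 5000 points x 3
      fields = rng.standard_normal((n_samples, n_dof))  # stand-in deformation fields

      mean = fields.mean(axis=0)
      centered = fields - mean
      # PCA via SVD; the eigenmodes are 3D vector fields of correlated deformation
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      eigvals = s ** 2 / (n_samples - 1)

      # Synthetic geometry: mean + sum_k c_k * sqrt(lambda_k) * mode_k, c_k ~ N(0, 1)
      k = 3
      coeffs = rng.standard_normal(k)
      synthetic = mean + (coeffs * np.sqrt(eigvals[:k])) @ Vt[:k]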

  14. Dosimetric treatment course simulation based on a statistical model of deformable organ motion.

    PubMed

    Söhn, M; Sobotta, B; Alber, M

    2012-06-21

    We present a method of modeling dosimetric consequences of organ deformation and correlated motion of adjacent organ structures in radiotherapy. Based on a few organ geometry samples and the respective deformation fields as determined by deformable registration, principal component analysis (PCA) is used to create a low-dimensional parametric statistical organ deformation model (Söhn et al 2005 Phys. Med. Biol. 50 5893-908). PCA determines the most important geometric variability in terms of eigenmodes, which represent 3D vector fields of correlated organ deformations around the mean geometry. Weighted sums of a few dominating eigenmodes can be used to simulate synthetic geometries, which are statistically meaningful inter- and extrapolations of the input geometries, and predict their probability of occurrence. We present the use of PCA as a versatile treatment simulation tool, which allows comprehensive dosimetric assessment of the detrimental effects that deformable geometric uncertainties can have on a planned dose distribution. For this, a set of random synthetic geometries is generated by a PCA model for each simulated treatment course, and the dose of a given treatment plan is accumulated in the moving tissue elements via dose warping. This enables the calculation of average voxel doses, local dose variability, dose-volume histogram uncertainties, marginal as well as joint probability distributions of organ equivalent uniform doses and thus of TCP and NTCP, and other dosimetric and biologic endpoints. The method is applied to the example of deformable motion of prostate/bladder/rectum in prostate IMRT. Applications include dosimetric assessment of the adequacy of margin recipes, adaptation schemes, etc., as well as prospective 'virtual' evaluation of the possible benefits of new radiotherapy schemes.

  15. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape the large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, defined as the maximal distance at which two neurons are expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
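
    The link from an effective radius to graph-theoretic measures can be sketched with a random geometric graph: neurons connect whenever their somata lie within the effective radius, and standard network statistics follow. The radius and neuron count below are toy values, and networkx stands in for the authors' analytical solutions:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(7)
      n_neurons = 200
      positions = rng.uniform(0, 1.0, (n_neurons, 2))   # soma positions (arbitrary units)

      # Connect neuron pairs within the effective radius: the distance at which
      # a pair is expected to form at least one synapse (toy value)
      effective_radius = 0.12
      G = nx.random_geometric_graph(n_neurons, effective_radius,
                                    pos=dict(enumerate(positions)))

      print("mean degree:       ", 2 * G.number_of_edges() / n_neurons)
      print("clustering coeff.: ", nx.average_clustering(G))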

  16. A holographic model of the Kondo effect

    NASA Astrophysics Data System (ADS)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Wu, Jackson

    2013-12-01

    We propose a model of the Kondo effect based on the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, also known as holography. The Kondo effect is the screening of a magnetic impurity coupled anti-ferromagnetically to a bath of conduction electrons at low temperatures. In a (1+1)-dimensional CFT description, the Kondo effect is a renormalization group flow triggered by a marginally relevant (0+1)-dimensional operator between two fixed points with the same Kac-Moody current algebra. In the large-N limit, with spin SU(N) and charge U(1) symmetries, the Kondo effect appears as a (0+1)-dimensional second-order mean-field transition in which the U(1) charge symmetry is spontaneously broken. Our holographic model, which combines the CFT and large-N descriptions, is a Chern-Simons gauge field in (2+1)-dimensional AdS space, AdS3, dual to the Kac-Moody current, coupled to a holographic superconductor along an AdS2 subspace. Our model exhibits several characteristic features of the Kondo effect, including a dynamically generated scale, a resistivity with power-law behavior in temperature at low temperatures, and a spectral flow producing a phase shift. Our holographic Kondo model may be useful for studying many open problems involving impurities, including for example the Kondo lattice problem.

  17. Forecasting runout of rock and debris avalanches

    USGS Publications Warehouse

    Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.

    2006-01-01

    Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.

  18. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
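
    At the heart of a TPS-based latent variable model is the thin plate spline radial basis U(r) = r^2 log r; a sketch of the Gram-matrix construction such a model could build on (the latent dimensionality and point count are arbitrary):

      import numpy as np

      def tps_kernel(x, y):
          # Thin plate spline radial basis in 2D: U(r) = r^2 log r, with U(0) = 0.
          # Uses r^2 log r = 0.5 * r^2 * log(r^2) to avoid a square root.
          r2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
          with np.errstate(divide="ignore", invalid="ignore"):
              k = 0.5 * r2 * np.log(r2)
          return np.nan_to_num(k)            # define the r = 0 limit as 0

      latent = np.random.default_rng(8).standard_normal((50, 2))  # latent points
      K = tps_kernel(latent, latent)         # Gram matrix for a TPS-based LVM
      print(K.shape)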

  19. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables, which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models, to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present an analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA): a straight and pool-riffle reach (South Fork Boise River), small pool-riffle sinuous streams in a large meadow (Bear Valley Creek), and a steep-confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the choice between 1D and 2D modeling approaches affects both the spatial distribution of habitat quality and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small, but depended on stream type. Nevertheless, differences in spatially distributed habitat quality are considerable in all streams. The steep-confined plane-bed stream had larger differences between habitat quality defined with 1D and 2D flow models than streams with well-defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches
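
    WUA itself is a simple functional of the hydraulic model output: composite habitat suitability summed over cell areas. A sketch with invented suitability curves (real curves for fall-spawning Chinook salmon come from the habitat literature) and random stand-ins for the 1D/2D model output:

      import numpy as np

      def weighted_usable_area(depth, velocity, cell_area, depth_suit, vel_suit):
          # WUA = sum over cells of composite suitability x cell area; the
          # suitability curves are interpolated from (value, suitability) tables.
          s_d = np.interp(depth, *depth_suit)
          s_v = np.interp(velocity, *vel_suit)
          return np.sum(s_d * s_v * cell_area)

      # Hypothetical suitability curves, for illustration only
      depth_suit = ([0.0, 0.2, 0.5, 1.0, 2.0], [0.0, 0.5, 1.0, 0.8, 0.1])
      vel_suit = ([0.0, 0.3, 0.8, 1.5], [0.2, 1.0, 0.6, 0.0])

      rng = np.random.default_rng(9)
      depth = rng.uniform(0, 2, 1000)        # from a 1D or 2D hydraulic model (m)
      velocity = rng.uniform(0, 1.5, 1000)   # (m/s)
      print("WUA:", weighted_usable_area(depth, velocity, 4.0, depth_suit, vel_suit), "m^2")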

  20. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    PubMed

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
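
    For reference, one of the indices studied, the root mean square error of approximation, is a one-line computation from the model chi-square (the sample values below are purely illustrative):

      import numpy as np

      def rmsea(chi2, df, n):
          # Root mean square error of approximation from a model chi-square,
          # degrees of freedom, and sample size
          return np.sqrt(max(0.0, chi2 - df) / (df * (n - 1)))

      # e.g., a 2-factor model fit to dichotomous items, N = 500 respondents
      print(rmsea(chi2=187.3, df=76, n=500))   # compared against cutoffs such as .06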

  1. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  2. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  3. From Planck Data to Planck Era: Observational Tests of Holographic Cosmology

    NASA Astrophysics Data System (ADS)

    Afshordi, Niayesh; Corianò, Claudio; Delle Rose, Luigi; Gould, Elizabeth; Skenderis, Kostas

    2017-01-01

    We test a class of holographic models for the very early Universe against cosmological observations and find that they are competitive to the standard cold dark matter model with a cosmological constant (ΛCDM) of cosmology. These models are based on three-dimensional perturbative superrenormalizable quantum field theory (QFT), and, while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without very low multipoles (i.e., l ≲ 30), where the QFT becomes nonperturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: The data rule out the dual theory being a Yang-Mills theory coupled to fermions only but allow for a Yang-Mills theory coupled to nonminimal scalars with quartic interactions. Lattice simulations of 3D QFTs can provide nonperturbative predictions for large-angle statistics of the cosmic microwave background and potentially explain its apparent anomalies.

  4. From Planck Data to Planck Era: Observational Tests of Holographic Cosmology.

    PubMed

    Afshordi, Niayesh; Corianò, Claudio; Delle Rose, Luigi; Gould, Elizabeth; Skenderis, Kostas

    2017-01-27

    We test a class of holographic models for the very early Universe against cosmological observations and find that they are competitive to the standard cold dark matter model with a cosmological constant (ΛCDM) of cosmology. These models are based on three-dimensional perturbative superrenormalizable quantum field theory (QFT), and, while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without very low multipoles (i.e., l≲30), where the QFT becomes nonperturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: The data rule out the dual theory being a Yang-Mills theory coupled to fermions only but allow for a Yang-Mills theory coupled to nonminimal scalars with quartic interactions. Lattice simulations of 3D QFTs can provide nonperturbative predictions for large-angle statistics of the cosmic microwave background and potentially explain its apparent anomalies.

  5. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high-dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high-dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high-dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
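
    A sketch of the history-matching core: an implausibility measure compares the emulator prediction to the observed value, normalized by the combined observation, model-discrepancy, and emulator variances, and parameter settings with implausibility above a cutoff (commonly around 3) are discarded. All numbers and the stand-in "emulator" below are illustrative:

      import numpy as np

      def implausibility(z, emu_mean, emu_var, obs_var, discrepancy_var):
          # How many standard deviations the emulator prediction sits from the
          # observation, given all uncertainty sources
          return np.abs(z - emu_mean) / np.sqrt(emu_var + obs_var + discrepancy_var)

      # One wave of history matching: keep settings with implausibility below ~3
      rng = np.random.default_rng(10)
      candidates = rng.uniform(0, 1, (10_000, 32))      # 32 rate parameters
      emu_mean = candidates.sum(axis=1)                 # stand-in for emulator output
      emu_var = np.full(len(candidates), 0.5)
      I = implausibility(z=16.0, emu_mean=emu_mean, emu_var=emu_var,
                         obs_var=0.25, discrepancy_var=0.25)
      non_implausible = candidates[I < 3.0]
      print("retained:", len(non_implausible), "of", len(candidates))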

  6. Three-dimensional Numerical Simulations of Rayleigh-Taylor Unstable Flames in Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Zingale, M.; Woosley, S. E.; Rendleman, C. A.; Day, M. S.; Bell, J. B.

    2005-10-01

    Flame instabilities play a dominant role in accelerating the burning front to a large fraction of the speed of sound in a Type Ia supernova. We present a three-dimensional numerical simulation of a Rayleigh-Taylor unstable carbon flame, following its evolution through the transition to turbulence. A low-Mach number hydrodynamics method is used, freeing us from the harsh time step restrictions imposed by sound waves. We fully resolve the thermal structure of the flame and its reaction zone, eliminating the need for a flame model. A single density is considered, 1.5×107 g cm-3, and half-carbon, half-oxygen fuel: conditions under which the flame propagated in the flamelet regime in our related two-dimensional study. We compare to a corresponding two-dimensional simulation and show that while fire polishing keeps the small features suppressed in two dimensions, turbulence wrinkles the flame on far smaller scales in the three-dimensional case, suggesting that the transition to the distributed burning regime occurs at higher densities in three dimensions. Detailed turbulence diagnostics are provided. We show that the turbulence follows a Kolmogorov spectrum and is highly anisotropic on the large scales, with a much larger integral scale in the direction of gravity. Furthermore, we demonstrate that it becomes more isotropic as it cascades down to small scales. On the basis of the turbulent statistics and the flame properties of our simulation, we compute the Gibson scale. We show the progress of the turbulent flame through a classic combustion regime diagram, indicating that the flame just enters the distributed burning regime near the end of our simulation.

  7. Collaborative classification of hyperspectral and visible images with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
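
    The decision-fusion step can be illustrated independently of the deep models: two classifiers trained on different feature sets (stand-ins here for the CNN spectral features and the binarized statistical image features) vote by averaging their class posteriors. Everything below is synthetic:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(15)
      n = 600
      y = rng.integers(0, 3, n)                                # 3 land-cover classes
      hsi_feats = y[:, None] + rng.standard_normal((n, 20))    # "spectral" features
      vis_feats = y[:, None] + rng.standard_normal((n, 10))    # "spatial" features

      tr = np.arange(n) < 400                                  # simple train/test split
      clf_hsi = LogisticRegression(max_iter=1000).fit(hsi_feats[tr], y[tr])
      clf_vis = RandomForestClassifier(random_state=0).fit(vis_feats[tr], y[tr])

      # Decision fusion: average the per-class posteriors from both sources
      proba = 0.5 * clf_hsi.predict_proba(hsi_feats[~tr]) \
            + 0.5 * clf_vis.predict_proba(vis_feats[~tr])
      fused = proba.argmax(axis=1)
      print("fused accuracy:", (fused == y[~tr]).mean())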

  8. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling

    PubMed Central

    Wood, John

    2017-01-01

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
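
    A sketch of the mixture-modeling step: fit Gaussian mixtures with increasing numbers of components to the per-study power estimates and select by BIC. The two-component beta-distributed toy sample stands in for the 730 estimates analyzed in the paper:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(11)
      # Stand-in for per-study power estimates (bounded in [0, 1])
      power = np.concatenate([rng.beta(1.2, 8, 400),    # low-powered subcomponent
                              rng.beta(6, 3, 330)])     # adequately powered subcomponent

      X = power.reshape(-1, 1)
      bics = []
      for k in range(1, 6):
          gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
          bics.append(gmm.bic(X))
      best_k = int(np.argmin(bics)) + 1
      print("BIC-selected number of subcomponents:", best_k)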

  9. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced-order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Attention was concentrated on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order, finite-dimensional control laws by minimizing certain energy functionals. These laws were then applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used here is based on the finite-dimensional Bernstein/Hyland optimal projection theory, which yields a fixed-finite-order controller.
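
    A minimal sketch of the standard LQG machinery the abstract contrasts against: solve two Riccati equations for the regulator and estimator gains of a low-order linearized model. The matrices here are arbitrary stand-ins, not a discretized Burgers' linearization:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      rng = np.random.default_rng(16)
      n_modes = 8                                       # reduced model order
      A = -np.diag(np.arange(1.0, n_modes + 1)) \
          + 0.1 * rng.standard_normal((n_modes, n_modes))
      B = rng.standard_normal((n_modes, 1))             # input operator (stand-in)
      C = rng.standard_normal((1, n_modes))             # output operator (stand-in)
      Q, R = np.eye(n_modes), np.eye(1)                 # state/control weights
      W, V = np.eye(n_modes), np.eye(1)                 # process/measurement noise

      P = solve_continuous_are(A, B, Q, R)              # regulator Riccati equation
      K = np.linalg.solve(R, B.T @ P)                   # feedback gain, u = -K x_hat
      S = solve_continuous_are(A.T, C.T, W, V)          # estimator Riccati equation
      L = S @ C.T @ np.linalg.inv(V)                    # Kalman filter gain
      print("gain shapes:", K.shape, L.shape)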

  10. Brane surgery: energy conditions, traversable wormholes, and voids

    NASA Astrophysics Data System (ADS)

    Barceló, C.; Visser, M.

    2000-09-01

    Branes are ubiquitous elements of any low-energy limit of string theory. We point out that negative tension branes violate all the standard energy conditions of the higher-dimensional spacetime they are embedded in; this opens the door to very peculiar solutions of the higher-dimensional Einstein equations. Building upon the (3+1)-dimensional implementation of fundamental string theory, we illustrate the possibilities by considering a toy model consisting of a (2+1)-dimensional brane propagating through our observable (3+1)-dimensional universe. Developing a notion of "brane surgery", based on the Israel-Lanczos-Sen "thin shell" formalism of general relativity, we analyze the dynamics and find traversable wormholes, closed baby universes, voids (holes in the spacetime manifold), and an evasion (not a violation) of both the singularity theorems and the positive mass theorem. These features appear generic to any brane model that permits negative tension branes: This includes the Randall-Sundrum models and their variants.

  11. Intensification of heat transfer across falling liquid films

    NASA Astrophysics Data System (ADS)

    Ruyer-Quil, Christian; Cellier, Nicolas; Stutz, Benoit; Caney, Nadia; Bandelier, Philippe; Locie Team; Legi Team

    2017-11-01

    The wavy motion of a liquid film is well known to intensify heat and mass transfer. Yet, while film thinning and wave merging are generally invoked, the physical mechanisms that enable this intensification are still unclear. We propose a systematic investigation of the impact of wavy motions on the heat transfer across 2D falling films on hot plates as a function of the inlet frequency and flow parameters. Computations over extended domains, and for durations sufficient to achieve statistically established flows, have been made possible by low-dimensional modeling and the development of a fast temporal solver based on graph optimizations. Heat transfer has been modeled using the weighted residual technique as a set of two evolution equations for the free-surface temperature and the wall heat flux. This new model solves the shortcomings of previous attempts, namely their inability to capture the onset of thermal boundary layers in large-amplitude waves and their limitation to low Prandtl numbers. Our study reveals that heat transfer is enhanced at the crests of the waves and that heat transfer intensification is maximal at the maximum density of wave crests, which does not correspond to the natural wavy regime (no inlet forcing). Support from the Institut Universitaire de France and the Région Auvergne-Rhône-Alpes is warmly acknowledged.

  12. Low-dimensional attractor for neural activity from local field potentials in optogenetic mice

    PubMed Central

    Oprisan, Sorinel A.; Lynn, Patrick E.; Tompa, Tamas; Lavin, Antonieta

    2015-01-01

    We used optogenetic mice to investigate possible nonlinear responses of the medial prefrontal cortex (mPFC) local network to light stimuli delivered by a 473 nm laser through fiber optics. Every 2 s, a brief 10 ms light pulse was applied and the local field potentials (LFPs) were recorded with a 10 kHz sampling rate. The experiment was repeated 100 times, and we only retained and analyzed data from six animals that showed stable and repeatable responses to optical stimulation. The presence of nonlinearity in our data was checked using the null hypothesis that the data were linearly correlated in the temporal domain, but were random otherwise. For each trial, 100 surrogate data sets were generated, and both time reversal asymmetry and false nearest neighbors (FNN) were used as discriminating statistics for the null hypothesis. We found that nonlinearity is present in all LFP data. The first 0.5 s of each 2 s LFP recording were dominated by the transient response of the network. For each trial, we used the last 1.5 s of steady activity to measure the phase resetting induced by the brief 10 ms light stimulus. After correcting the LFPs for the effect of phase resetting, additional preprocessing was carried out using dendrograms to identify “similar” groups among LFP trials. We found that the steady dynamics of the mPFC in response to light stimuli could be reconstructed in a three-dimensional phase space with topologically similar “8”-shaped attractors across different animals. Our results also open the possibility of designing a low-dimensional model for optical stimulation of the mPFC local network. PMID:26483665
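
    A sketch of the surrogate-data test: phase-randomized surrogates realize the null hypothesis of a linearly correlated but otherwise random process, and time reversal asymmetry serves as the discriminating statistic. The Hénon-map toy series stands in for an LFP segment, and the FNN statistic is not shown:

      import numpy as np

      def phase_randomized_surrogate(x, rng):
          # Surrogate with the same power spectrum (linear correlations) but
          # randomized Fourier phases, i.e., the linear-Gaussian null
          X = np.fft.rfft(x)
          phases = rng.uniform(0, 2 * np.pi, len(X))
          phases[0] = 0.0                    # keep the mean
          return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

      def time_reversal_asymmetry(x, lag=1):
          d = x[lag:] - x[:-lag]
          return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

      rng = np.random.default_rng(12)
      x = np.zeros(15000); x[0], x[1] = 0.1, 0.2
      for k in range(2, len(x)):             # Hénon map: a nonlinear toy "LFP"
          x[k] = 1 - 1.4 * x[k - 1] ** 2 + 0.3 * x[k - 2]
      lfp = x + 0.05 * rng.standard_normal(len(x))

      stat = time_reversal_asymmetry(lfp)
      null = [time_reversal_asymmetry(phase_randomized_surrogate(lfp, rng))
              for _ in range(100)]
      p = np.mean(np.abs(null) >= np.abs(stat))
      print("discriminating statistic:", stat, " surrogate p-value:", p)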

  13. Empirically derived personality subtyping for predicting clinical symptoms and treatment response in bulimia nervosa.

    PubMed

    Haynos, Ann F; Pearson, Carolyn M; Utzinger, Linsey M; Wonderlich, Stephen A; Crosby, Ross D; Mitchell, James E; Crow, Scott J; Peterson, Carol B

    2017-05-01

    Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = 0.03) and purging (p = 0.01) frequency at EOT and binge eating frequency at follow-up (p = 0.045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = 0.04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2017; 50:506-514).
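
    A sketch of the subtyping step: standardize the questionnaire scale scores and run K-means with three clusters. The random matrix stands in for the real DAPP-BQ data, and the scale count is a guess:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(13)
      dapp = rng.standard_normal((80, 18))   # stand-in for DAPP-BQ scales, n = 80

      z = StandardScaler().fit_transform(dapp)
      km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(z)
      labels = km.labels_   # candidate under-/over-controlled and low-psychopathology subtypes
      print(np.bincount(labels))
      # The cluster labels would then enter a generalized linear model as a
      # predictor of EDE global score, binge eating, and purging frequency.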

  14. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They also can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and therefore is valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as the radar track data, into the analysis.

  15. Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primm, Trent; Ruggles, Arthur; Freels, James D

    2009-03-01

    A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) Core Thermal-Fluid behavior is developed. These models were developed to facilitate design of a low enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code, COMSOL, with initial assessment of fuel, poison and clad conduction modeling capability, followed by assessment of mating of the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.

  16. Assessing the Impact of Climate Change on Stream Temperatures in the Methow River Basin, Washington

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, S.; Caldwell, R. J.; Lai, Y.; Bountry, J.

    2011-12-01

    The Methow River in Washington offers prime spawning habitat for salmon and other cold-water fishes. During the summer months, low streamflows on the Methow result in cutoff side channels that limit the habitat available to these fishes. Future climate scenarios of increasing air temperature and decreasing precipitation suggest the potential for increasing loss of habitat and fish mortality as stream temperatures rise in response to lower flows and additional heating. To assess the impacts of climate change on stream temperature in the Methow River, the US Bureau of Reclamation is developing an hourly time-step, two-dimensional hydraulic model of the confluence of the Methow and Chewuch Rivers above Winthrop. The model will be coupled with a physical stream temperature model to generate spatial representations of stream conditions conducive to fish habitat. In this study, we develop a statistical framework for generating stream temperature time series from global climate model (GCM) and hydrologic model outputs. Regional observations of stream temperature and hydrometeorological conditions are used to develop statistical models of daily mean stream temperature for the Methow River at Winthrop, WA. Temperature and precipitation projections from 10 global climate models (GCMs) are coupled with the streamflow generated using the University of Washington Variable Infiltration Capacity model. The projections serve as input to the statistical models to generate daily time series of mean daily stream temperature. Since the output from the GCM, VIC, and statistical models offers only daily data, a k-nearest neighbor (k-nn) resampling technique is employed to select appropriate proportion vectors for disaggregating the Winthrop daily flow and temperature to an upstream location on each of the rivers above the confluence. Hourly proportion vectors are then used to disaggregate the daily flow and temperature to hourly values to be used in the hydraulic model. Historical meteorological variables are also selected using the k-nn method. We present the statistical modeling framework using Generalized Linear Models (GLMs), along with diagnostics and measurements of skill. We will also provide a comparison of the stream temperature projections for the future years 2020, 2040, and 2080 and discuss the potential implications for fish habitat in the Methow River. Future integration of the hourly climate scenarios in the hydraulic model will provide the ability to assess the spatial extent of habitat impacts and allow the USBR to evaluate the effectiveness of various river restoration projects in maintaining or improving habitat in a changing climate.
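
    A minimal sketch of the GLM step under stated assumptions: synthetic air temperature and discharge stand in for the Winthrop records, and a Gaussian GLM links daily mean stream temperature to air temperature and log flow.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
days = 365
# Hypothetical daily forcing: seasonal air temperature and lognormal flow.
air_t = 10 + 12 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)
flow = np.exp(rng.normal(3.0, 0.4, days))
stream_t = 2 + 0.6 * air_t - 0.5 * np.log(flow) + rng.normal(0, 1, days)

X = sm.add_constant(np.column_stack([air_t, np.log(flow)]))
glm = sm.GLM(stream_t, X, family=sm.families.Gaussian()).fit()
print(glm.params)  # intercept, air-temperature, and log-flow coefficients

# Project onto a hypothetical climate scenario (+2 degC air temperature).
proj = sm.add_constant(np.column_stack([air_t + 2.0, np.log(flow)]))
print("mean stream warming:", (glm.predict(proj) - glm.predict(X)).mean())
```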

  17. Low-lying Photoexcited States of a One-Dimensional Ionic Extended Hubbard Model

    NASA Astrophysics Data System (ADS)

    Yokoi, Kota; Maeshima, Nobuya; Hino, Ken-ichi

    2017-10-01

    We investigate the properties of low-lying photoexcited states of a one-dimensional (1D) ionic extended Hubbard model at half-filling. Numerical analysis using full and Lanczos diagonalization shows that, in the ionic phase, there exist low-lying photoexcited states below the charge transfer gap. Comparison with numerical data for the 1D antiferromagnetic (AF) Heisenberg model shows that, for a small alternating potential Δ, these low-lying photoexcited states are spin excitations, consistent with a previous analytical study [Katsura et al., Phys. Rev. Lett. 103, 177402 (2009)]. As Δ increases, the spectral intensity of the 1D ionic extended Hubbard model rapidly deviates from that of the 1D AF Heisenberg model; this deviation is shown to be due to the neutral-ionic domain wall, an elementary excitation near the neutral-ionic transition point.
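
    For illustration, here is a minimal Lanczos-diagonalization sketch on the 1D AF Heisenberg chain that the abstract uses as a reference model (not the full ionic extended Hubbard Hamiltonian): the sparse Hamiltonian is assembled from spin-1/2 operators and its lowest eigenvalues are extracted iteratively.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

L = 10  # sites, periodic chain (2^10 = 1024 basis states)
sx = sp.csr_matrix([[0, 0.5], [0.5, 0]])
sy = sp.csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = sp.csr_matrix([[0.5, 0], [0, -0.5]])
I2 = sp.identity(2, format="csr")

def site_op(op, i):
    # Embed a single-site operator at site i in the full 2^L-dim space.
    out = op if i == 0 else I2
    for j in range(1, L):
        out = sp.kron(out, op if j == i else I2, format="csr")
    return out

H = sp.csr_matrix((2**L, 2**L), dtype=complex)
for i in range(L):
    j = (i + 1) % L
    for s in (sx, sy, sz):
        H = H + site_op(s, i) @ site_op(s, j)

# Lanczos-type iteration for the few lowest eigenvalues only.
evals = eigsh(H, k=4, which="SA", return_eigenvectors=False)
print("low-lying energies:", np.sort(evals.real))
```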

  18. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has a distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised nonlinear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy; they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
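
    A minimal sketch of the embedding idea, assuming a synthetic four-channel image stack in place of the actual T1WI/T2WI/DWI/PWI volumes and using Isomap as a representative NLDR algorithm; each voxel becomes a 4-D feature vector and is projected to a 2-D "scattergram".

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)
h = w = 32
lesion = np.zeros((h, w), dtype=bool)
lesion[8:16, 8:20] = True                      # hypothetical infarct region

def channel(base, shift):
    img = rng.normal(base, 0.1, (h, w))
    img[lesion] += shift                       # lesion alters each contrast
    return img

stack = np.stack([channel(0.0, 0.8), channel(0.5, -0.6),
                  channel(1.0, 0.7), channel(0.2, 0.9)], axis=-1)
X = stack.reshape(-1, 4)                       # one 4-D feature per voxel

embedded = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
# In the embedded scattergram, lesion voxels separate from normal tissue.
print("lesion centroid:", embedded[lesion.ravel()].mean(axis=0))
print("normal centroid:", embedded[~lesion.ravel()].mean(axis=0))
```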

  19. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method follows from noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized. Investigating the characteristics of high dimensional data suggests why the second order statistics must be taken into account. Recognizing the importance of the second order statistics, there is a need to represent them effectively; a method to visualize statistics using a color code is proposed. By representing statistics using color coding, one can easily extract and compare the first- and second-order statistics.
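
    A minimal sketch of decision-boundary feature extraction, under simplifying assumptions (a Gaussian quadratic classifier and numerically estimated boundary normals; not the paper's exact construction): points on the boundary are located by bisection between opposite-class pairs, and the eigen-decomposition of the scatter of boundary normals reveals how many features are actually needed.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5
# Two zero-mean Gaussian classes differing only in the variance of the
# first two coordinates, so the optimal boundary is quadratic and only
# two features should matter.
X0 = rng.normal(0.0, 1.0, (300, d))
X1 = rng.normal(0.0, 1.0, (300, d))
X1[:, :2] *= 3.0

def make_discriminant(Xa, Xb):
    ma, mb = Xa.mean(0), Xb.mean(0)
    Pa, Pb = np.linalg.inv(np.cov(Xa.T)), np.linalg.inv(np.cov(Xb.T))
    lda = np.linalg.slogdet(np.cov(Xa.T))[1]
    ldb = np.linalg.slogdet(np.cov(Xb.T))[1]
    def f(x):
        # Negative in the class-a region, positive in the class-b region.
        return ((x - ma) @ Pa @ (x - ma) + lda
                - (x - mb) @ Pb @ (x - mb) - ldb)
    return f

f = make_discriminant(X0, X1)

def normal_at(x, eps=1e-4):
    g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                  for e in np.eye(d)])
    return g / np.linalg.norm(g)

# Bisect between opposite-class pairs to land on the boundary f = 0.
normals = []
for a, b in zip(X0, X1):
    if f(a) * f(b) >= 0:
        continue
    lo, hi = a, b
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) * f(lo) > 0 else (lo, mid)
    normals.append(normal_at((lo + hi) / 2))

# Rank of the normal-scatter matrix = effective number of features.
Sigma = np.mean([np.outer(n, n) for n in normals], axis=0)
print("normal-scatter eigenvalues:", np.round(np.linalg.eigvalsh(Sigma)[::-1], 3))
```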

  20. A Three Dimensional Kinematic and Kinetic Study of the Golf Swing

    PubMed Central

    Nesbit, Steven M.

    2005-01-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics PMID:24627665

  1. A three dimensional kinematic and kinetic study of the golf swing.

    PubMed

    Nesbit, Steven M

    2005-12-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics.

  2. Effect of strong disorder on three-dimensional chiral topological insulators: Phase diagrams, maps of the bulk invariant, and existence of topological extended bulk states

    NASA Astrophysics Data System (ADS)

    Song, Juntao; Fine, Carolyn; Prodan, Emil

    2014-11-01

    The effect of strong disorder on chiral-symmetric three-dimensional lattice models is investigated via analytical and numerical methods. The phase diagrams of the models are computed using the noncommutative winding number, as functions of disorder strength and the model's parameters. The localized/delocalized character of the quantum states is probed with a level-statistics analysis. Our study reconfirms the accurate quantization of the noncommutative winding number in the presence of strong disorder, and its effectiveness as a numerical tool. Extended bulk states are detected above and below the Fermi level, which are observed to undergo the so-called "levitation and pair annihilation" process when the system is driven through a topological transition. This suggests that the bulk invariant is carried by these extended states, in stark contrast with the one-dimensional case, where the extended states are completely absent and the bulk invariant is carried by the localized states.
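
    A minimal sketch of the level-statistics idea, using the adjacent-gap ratio on random-matrix toy spectra rather than the disordered lattice models themselves: the mean ratio <r> distinguishes localized spectra (Poisson, <r> ~ 0.386) from delocalized ones (GUE, <r> ~ 0.60) without any unfolding of the level density.

```python
import numpy as np

rng = np.random.default_rng(14)

def mean_gap_ratio(levels):
    s = np.diff(np.sort(levels))
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

# Delocalized regime: eigenvalues of a GUE random matrix.
n = 400
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
print("GUE-like <r>:", round(mean_gap_ratio(np.linalg.eigvalsh(H)), 3))

# Localized regime: uncorrelated (Poissonian) levels.
print("Poisson <r>: ", round(mean_gap_ratio(rng.uniform(size=n)), 3))
```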

  3. Hyperchaotic Dynamics for Light Polarization in a Laser Diode

    NASA Astrophysics Data System (ADS)

    Bonatto, Cristian

    2018-04-01

    It is shown that a highly randomlike behavior of light polarization states in the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process, characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with a semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh distributions for the modulus, and Weibull and negative exponential distributions for the intensity, of the orthogonal linear components of the electric field. The presented results could be relevant for the generation of compact light-source devices for use in low-dimensional optical hyperchaos-based applications.
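
    A minimal sketch of one of the reported statistical identifications: when the field quadratures are Gaussian, the modulus of a polarization component is Rayleigh distributed and its intensity follows a negative exponential. Synthetic fields stand in for the laser output here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
# Gaussian quadratures for one linear polarization component.
Ex = rng.normal(0, 1.0, 100_000) + 1j * rng.normal(0, 1.0, 100_000)
modulus, intensity = np.abs(Ex), np.abs(Ex) ** 2

# Rayleigh for the modulus, negative exponential for the intensity.
p_ray = stats.kstest(modulus, "rayleigh", args=(0, 1.0)).pvalue
p_exp = stats.kstest(intensity, "expon", args=(0, 2.0)).pvalue
print(f"Rayleigh KS p = {p_ray:.3f}, exponential KS p = {p_exp:.3f}")
```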

  4. Anisotropic dielectric properties of two-dimensional matrix in pseudo-spin ferroelectric system

    NASA Astrophysics Data System (ADS)

    Kim, Se-Hun

    2016-10-01

    The anisotropic dielectric properties of a two-dimensional (2D) ferroelectric system were studied using statistical calculations of the pseudo-spin Ising Hamiltonian model. Under Monte Carlo sampling, measurements of the observable must be delayed until successive spin configurations become statistically independent, and the time to reach the thermal equilibrium state depends on the temperature and size of the system. The autocorrelation time constants of the normalized relaxation function were therefore determined taking the temperature and the 2D lattice size into account. We discuss the dielectric constants of a two-dimensional ferroelectric system obtained with the Metropolis method in view of the Slater-Takagi defect energies.
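
    A minimal Metropolis sketch for a 2D Ising-type pseudo-spin lattice, with measurements spaced by an estimated autocorrelation time as described above; the Slater-Takagi defect energies and the dielectric response itself are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
L, T, J = 16, 2.0, 1.0                    # lattice size, temperature, coupling
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s):
    # One Metropolis sweep: L*L single-spin-flip attempts.
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * J * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

for _ in range(500):                      # equilibration
    sweep(spins)

m = np.empty(2000)                        # polarization after each sweep
for t in range(m.size):
    sweep(spins)
    m[t] = spins.mean()

# Integrated autocorrelation time of the normalized relaxation function;
# measurements spaced by ~tau sweeps are treated as independent.
dm = m - m.mean()
acf = np.correlate(dm, dm, "full")[m.size - 1:] / (dm @ dm)
cut = np.argmax(acf < 0.05) or 1
tau = 1.0 + 2.0 * acf[1:cut].sum()
indep = m[::max(1, int(np.ceil(tau)))]
print(f"tau ~ {tau:.1f} sweeps, <|m|> = {np.abs(indep).mean():.3f}")
```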

  5. Multiscale study for stochastic characterization of shale samples

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad; Piri, Mohammad

    2016-03-01

    Characterization of shale reservoirs, which are typically of low permeability, is very difficult because of the presence of multiscale structures. While three-dimensional (3D) imaging can be an ultimate solution for revealing important complexities of such reservoirs, acquiring such images is costly and time consuming. On the other hand, high-quality 2D images, which are widely available, also reveal useful information about shales' pore connectivity and size. Most of the current modeling methods that are based on 2D images use limited and insufficient extracted information. One remedy to the shortcoming is direct use of qualitative images, a concept that we introduce in this paper. We demonstrate that higher-order statistics (as opposed to the traditional two-point statistics, such as variograms) are necessary for developing an accurate model of shales, and describe an efficient method for using 2D images that is capable of utilizing qualitative and physical information within an image and generating stochastic realizations of shales. We then further refine the model by describing and utilizing several techniques, including an iterative framework, for removing some possible artifacts and better pattern reproduction. Next, we introduce a new histogram-matching algorithm that accounts for concealed nanostructures in shale samples. We also present two new multiresolution and multiscale approaches for dealing with distinct pore structures that are common in shale reservoirs. In the multiresolution method, the original high-quality image is upscaled in a pyramid-like manner in order to achieve more accurate global and long-range structures. The multiscale approach integrates two images, each containing diverse pore networks - the nano- and microscale pores - using a high-resolution image representing small-scale pores and, at the same time, reconstructing large pores using a low-quality image. Eventually, the results are integrated to generate a 3D model. The methods are tested on two shale samples for which full 3D samples are available. The quantitative accuracy of the models is demonstrated by computing their morphological and flow properties and comparing them with those of the actual 3D images. The success of the method hinges upon the use of very different low- and high-resolution images.
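
    A minimal sketch of one ingredient of this workflow, plain histogram matching between a stochastic reconstruction and a reference image (the paper's algorithm additionally accounts for concealed nanostructures); the two synthetic images below are hypothetical stand-ins.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source grey levels so their CDF matches the reference's."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(6)
recon = rng.normal(0.4, 0.1, (128, 128))       # hypothetical realization
ref = rng.beta(2, 5, (128, 128))               # hypothetical 2D image
matched = match_histogram(recon, ref)
print("means:", recon.mean().round(3), ref.mean().round(3),
      matched.mean().round(3))                 # matched tracks the reference
```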

  6. Generative Topographic Mapping of Conformational Space.

    PubMed

    Horvath, Dragos; Baskin, Igor; Marcou, Gilles; Varnek, Alexandre

    2017-10-01

    Herein, Generative Topographic Mapping (GTM) was challenged to produce planar projections of the high-dimensional conformational space of complex molecules (the 1LE1 peptide). GTM is a probability-based mapping strategy, and its capacity to support property prediction models serves to objectively assess map quality (in terms of regression statistics). The properties to predict were total, non-bonded and contact energies, surface area and fingerprint darkness. Map building and selection were controlled by a previously introduced evolutionary strategy allowed to choose the best-suited conformational descriptors, options including classical terms and novel atom-centric autocorrelograms. The latter condense interatomic distance patterns into descriptors of rather low dimensionality, yet precise enough to differentiate between close favorable contacts and atom clashes. A subset of 20 K conformers of the 1LE1 peptide, randomly selected from a pool of 2 M geometries (generated by the S4MPLE tool), was employed for map building and cross-validation of property regression models. The GTM build-up challenge reached robust three-fold cross-validated determination coefficients of Q² = 0.7…0.8 for all modeled properties. Mapping of the full 2 M conformer set produced intuitive and information-rich property landscapes. Functional and folding subspaces appear as well-separated zones, even though RMSD with respect to the PDB structure was never used as a selection criterion for the maps. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Modeled structure of trypanothione reductase of Leishmania infantum.

    PubMed

    Singh, Bishal K; Sarkar, Nandini; Jagannadham, M V; Dubey, Vikash K

    2008-06-30

    Trypanothione reductase is an important target enzyme for structure-based drug design against Leishmania. We used homology modeling to construct a three-dimensional structure of the trypanothione reductase (TR) of Leishmania infantum. The structure shows acceptable Ramachandran statistics and a remarkably different active site from that of glutathione reductase (GR). Thus, a specific inhibitor against TR can be designed without interfering with host (human) GR activity.

  8. Low-order modeling of internal heat transfer in biomass particle pyrolysis

    DOE PAGES

    Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.

    2016-05-11

    We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
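
    A minimal sketch of the low-order idea under stated assumptions: transient conduction in an equivalent sphere solved with an explicit finite-difference scheme. The property values below are generic placeholders, not the paper's wood data.

```python
import numpy as np

k, rho, cp = 0.12, 540.0, 1800.0     # W/m-K, kg/m^3, J/kg-K (placeholders)
alpha = k / (rho * cp)               # thermal diffusivity
R, N = 0.5e-3, 50                    # 1 mm equivalent diameter, grid points
r = np.linspace(0, R, N)
dr = r[1] - r[0]
dt = 0.2 * dr**2 / alpha             # stable explicit time step
T = np.full(N, 300.0)                # initial particle temperature, K
T_wall = 773.0                       # fast-pyrolysis reactor temperature, K

t, t_end = 0.0, 1.0
while t < t_end:
    Tn = T.copy()
    # Spherical Laplacian: d2T/dr2 + (2/r) dT/dr, with symmetry at r = 0.
    lap = np.zeros(N)
    lap[1:-1] = (Tn[2:] - 2*Tn[1:-1] + Tn[:-2]) / dr**2 \
              + (2 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2*dr)
    lap[0] = 6 * (Tn[1] - Tn[0]) / dr**2   # limit of the Laplacian at center
    T = Tn + alpha * dt * lap
    T[-1] = T_wall                   # isothermal surface boundary condition
    t += dt
print(f"center temperature after {t_end:.1f} s: {T[0]:.1f} K")
```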

  9. Low-Order Modeling of Internal Heat Transfer in Biomass Particle Pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiggins, Gavin M.; Ciesielski, Peter N.; Daw, C. Stuart

    2016-06-16

    We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. We conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.

  10. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we first use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
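
    A minimal sketch of score (MOPED-like) compression on a toy Gaussian linear model, where the 100-point data vector is reduced to one summary per parameter, t = (dmu/dtheta)^T C^{-1} (d - mu); for a linear model this compression is lossless, and the toy here is not the JLA analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 100)
theta_true = np.array([1.2, 0.7])             # slope, intercept (toy model)

def mu(theta):                                 # forward model prediction
    return theta[0] * x + theta[1]

C = 0.05**2 * np.eye(len(x))                   # data covariance
Cinv = np.linalg.inv(C)
d = mu(theta_true) + rng.multivariate_normal(np.zeros(len(x)), C)

theta_fid = np.array([1.0, 0.5])               # fiducial expansion point
eps = 1e-5
dmu = np.array([(mu(theta_fid + eps*e) - mu(theta_fid - eps*e)) / (2*eps)
                for e in np.eye(2)])           # shape (2, 100)

t = dmu @ Cinv @ (d - mu(theta_fid))           # two numbers replace 100
F = dmu @ Cinv @ dmu.T                         # Fisher matrix
theta_hat = theta_fid + np.linalg.solve(F, t)  # estimate from summaries only
print("estimate from compressed summaries:", np.round(theta_hat, 3))
```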

  11. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
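
    A minimal sketch of the reduction step, using simple modal truncation of a toy full-order state-space model (the matrices are random stand-ins, not CFD output): the slowest eigenmodes are retained, and the reduced model approximately preserves the slow dynamics and steady-state (DC) gain.

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 200, 5                                 # full and reduced orders
# Stable full-order system with widely separated eigenvalues.
eigs = -np.sort(10.0 ** rng.uniform(-1, 3, n))
V = rng.normal(size=(n, n))
A = V @ np.diag(eigs) @ np.linalg.inv(V)
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Modal truncation: keep the r slowest (most dominant) eigenmodes.
w, T = np.linalg.eig(A)
order = np.argsort(-w.real)                   # slowest decay first
T = T[:, order[:r]]
Tl = np.linalg.pinv(T)
Ar, Br, Cr = Tl @ A @ T, Tl @ B, C @ T

# Compare DC gains (steady-state response) of full and reduced models.
g_full = (-C @ np.linalg.solve(A, B)).item().real
g_red = (-Cr @ np.linalg.solve(Ar, Br)).item().real
print(f"DC gain: full = {g_full:.4f}, reduced (order {r}) = {g_red:.4f}")
```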

  12. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods: estimating a large covariance matrix for understanding correlation structure, estimating an inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes, proteins, and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
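
    A minimal sketch of one reviewed method, sparse inverse-covariance (graphical lasso) estimation for network modeling, on synthetic "expression" data with a known chain network.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(9)
p, n = 20, 200
# Ground-truth chain network: tri-diagonal precision (inverse covariance).
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

model = GraphicalLasso(alpha=0.05).fit(X)
est_edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(p, dtype=bool)
true_edges = (np.abs(prec) > 0) & ~np.eye(p, dtype=bool)
recovered = (est_edges & true_edges).sum() / true_edges.sum()
print(f"fraction of true network edges recovered: {recovered:.2f}")
```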

  13. Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model

    NASA Astrophysics Data System (ADS)

    Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.

    2010-12-01

    A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.
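
    A minimal sketch of the augmented-state ensemble update, with a toy monotone wave model standing in for the 2-DH hydrodynamic model: the uncertain bathymetry (depth h) is treated as a parameter, stacked with the modeled observable, and corrected by a Kalman gain computed from ensemble covariances.

```python
import numpy as np

rng = np.random.default_rng(10)
n_ens = 100
h_prior = 2.0 + 0.3 * rng.normal(size=n_ens)   # prior depth ensemble (m)

def wave_model(h):
    # Hypothetical shoaling relation: wave height grows as depth shrinks.
    return 1.0 + 0.8 / h

H_ens = wave_model(h_prior)                    # modeled wave heights
obs, obs_err = 1.45, 0.05                      # observed height and std (m)

# Augmented-state Kalman update: gain = cov(h, H) / (var(H) + R).
cov_hH = np.cov(h_prior, H_ens)[0, 1]
gain = cov_hH / (np.var(H_ens, ddof=1) + obs_err**2)
innovations = obs + obs_err * rng.normal(size=n_ens) - H_ens
h_post = h_prior + gain * innovations          # perturbed-observation EnKF

print(f"prior depth:     {h_prior.mean():.2f} +/- {h_prior.std():.2f} m")
print(f"posterior depth: {h_post.mean():.2f} +/- {h_post.std():.2f} m")
```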

  14. The potential of 2D Kalman filtering for soil moisture data assimilation

    USDA-ARS?s Scientific Manuscript database

    We examine the potential for parameterizing a two-dimensional (2D) land data assimilation system using spatial error auto-correlation statistics gleaned from a triple collocation analysis and the triplet of: (1) active microwave-, (2) passive microwave- and (3) land surface model-based surface soil ...

  15. The applications of Complexity Theory and Tsallis Non-extensive Statistics at Solar Plasma Dynamics

    NASA Astrophysics Data System (ADS)

    Pavlos, George

    2015-04-01

    As the solar plasma lives far from equilibrium it is an excellent laboratory for testing complexity theory and non-equilibrium statistical mechanics. In this study, we present the highlights of complexity theory and Tsallis non extensive statistical mechanics as concerns their applications at solar plasma dynamics, especially at sunspot, solar flare and solar wind phenomena. Generally, when a physical system is driven far from equilibrium states some novel characteristics can be observed related to the nonlinear character of dynamics. Generally, the nonlinearity in space plasma dynamics can generate intermittent turbulence with the typical characteristics of the anomalous diffusion process and strange topologies of stochastic space plasma fields (velocity and magnetic fields) caused by the strange dynamics and strange kinetics (Zaslavsky, 2002). In addition, according to Zelenyi and Milovanov (2004) the complex character of the space plasma system includes the existence of non-equilibrium (quasi)-stationary states (NESS) having the topology of a percolating fractal set. The stabilization of a system near the NESS is perceived as a transition into a turbulent state determined by self-organization processes. The long-range correlation effects manifest themselves as a strange non-Gaussian behavior of kinetic processes near the NESS plasma state. The complex character of space plasma can also be described by the non-extensive statistical thermodynamics pioneered by Tsallis, which offers a consistent and effective theoretical framework, based on a generalization of Boltzmann - Gibbs (BG) entropy, to describe far from equilibrium nonlinear complex dynamics (Tsallis, 2009). In a series of recent papers, the hypothesis of Tsallis non-extensive statistics in magnetosphere, sunspot dynamics, solar flares, solar wind and space plasma in general, was tested and verified (Karakatsanis et al., 2013; Pavlos et al., 2014; 2015). Our study includes the analysis of solar plasma time series at three cases: sunspot index, solar flare and solar wind data. The non-linear analysis of the sunspot index is embedded in the non-extensive statistical theory of Tsallis (1988; 2004; 2009). The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum were estimated for the SVD components of the sunspot index timeseries. Also the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000, 2001). 
Our analysis showed clearly the following: (a) a phase transition process in the solar dynamics from a high dimensional non-Gaussian SOC state to a low dimensional non-Gaussian chaotic state, (b) strong intermittent solar turbulence and an anomalous (multifractal) diffusion solar process, which is strengthened as the solar dynamics makes a phase transition to low dimensional chaos, in accordance with the studies of Ruzmaikin, Zelenyi and Milovanov (Zelenyi and Milovanov, 1991; Milovanov and Zelenyi, 1993; Ruzmaikin et al., 1996), and (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of: (i) the non-Gaussian probability distribution function P(x), (ii) the multifractal scaling exponent spectrum f(a) and generalized Renyi dimension spectrum Dq, and (iii) the exponent spectrum J(p) of the structure functions estimated for the sunspot index and its underlying non-equilibrium solar dynamics. Also, the q-triplet of Tsallis as well as the correlation dimension and the Lyapunov exponent spectrum were estimated for the singular value decomposition (SVD) components of the solar flares time series. In addition, the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000). Our analysis showed clearly the following: (a) a phase transition process in the solar flare dynamics from a high dimensional non-Gaussian self-organized critical (SOC) state to a low dimensional, also non-Gaussian, chaotic state, (b) strong intermittent solar corona turbulence and an anomalous (multifractal) diffusion solar corona process, which is strengthened as the solar corona dynamics makes a phase transition to low dimensional chaos, (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of the functions: (i) the non-Gaussian probability distribution function P(x), (ii) f(a) and D(q), and (iii) J(p) for the solar flares time series and its underlying non-equilibrium solar dynamics, and (d) the solar flare dynamical profile is revealed to be similar to the dynamical profile of the solar corona zone as far as the phase transition process from self-organized criticality (SOC) to chaos is concerned. However, the solar low corona (solar flare) dynamical characteristics can be clearly discriminated from the dynamical characteristics of the solar convection zone. Finally, we present novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event. The solar wind plasma, as well as the entire solar plasma system, is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields and matter fields (particle and current densities or bulk plasma distributions). This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level.
It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), in explaining the complexity of the solar wind dynamics, since these theories include smooth and differentiable spatial-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian non-extensive statistics with heavy-tailed probability distribution functions, which are related to the q-extension of the CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion theory and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992). References 1. T. Arimitsu, N. Arimitsu, Tsallis statistics and fully developed turbulence, J. Phys. A: Math. Gen. 33 (2000) L235. 2. T. Arimitsu, N. Arimitsu, Analysis of turbulence by statistics based on generalized entropies, Physica A 295 (2001) 177-194. 3. T. Chang, Low-dimensional behavior and symmetry breaking of stochastic systems near criticality: can these effects be observed in space and in the laboratory?, IEEE Trans. Plasma Sci. 20 (6) (1992) 691-694. 4. U. Frisch, Turbulence, Cambridge University Press, Cambridge, UK, 1996, p. 310. 5. L.P. Karakatsanis, G.P. Pavlos, M.N. Xenakis, Tsallis non-extensive statistics, intermittent turbulence, SOC and chaos in the solar plasma. Part two: Solar flares dynamics, Physica A 392 (2013) 3920-3944. 6. A.V. Milovanov, Topological proof for the Alexander-Orbach conjecture, Phys. Rev. E 56 (3) (1997) 2437-2446. 7. A.V. Milovanov, L.M. Zelenyi, Fracton excitations as a driving mechanism for the self-organized dynamical structuring in the solar wind, Astrophys. Space Sci. 264 (1-4) (1999) 317-345. 8. A.V. Milovanov, Stochastic dynamics from the fractional Fokker-Planck-Kolmogorov equation: large-scale behavior of the turbulent transport coefficient, Phys. Rev. E 63 (2001) 047301. 9. G.P. Pavlos, et al., Universality of non-extensive Tsallis statistics and time series analysis: Theory and applications, Physica A 395 (2014) 58-95. 10. G.P. Pavlos, et al., Tsallis non-extensive statistics and solar wind plasma complexity, Physica A 422 (2015) 113-135. 11. A.A. Ruzmaikin, et al., Spectral properties of solar convection and diffusion, ApJ 471 (1996) 1022. 12. V.E. Tarasov, Review of some promising fractional physical models, Internat. J. Modern Phys. B 27 (9) (2013) 1330005. 13. C. Tsallis, Possible generalization of BG statistics, J. Stat. Phys. 52 (1-2) (1988) 479-487. 14. C. Tsallis, Nonextensive statistical mechanics: construction and physical interpretation, in: G.M. Murray, C. Tsallis (Eds.), Nonextensive Entropy-Interdisciplinary Applications, Oxford Univ. Press, 2004, pp. 1-53. 15. C. Tsallis, Introduction to Non-Extensive Statistical Mechanics, Springer, 2009. 16. G.M. Zaslavsky, Chaos, fractional kinetics, and anomalous transport, Physics Reports 371 (2002) 461-580. 17. L.M. Zelenyi, A.V. Milovanov, Fractal properties of sunspots, Sov. Astron. Lett. 17 (6) (1991) 425. 18. L.M. Zelenyi, A.V. Milovanov, Fractal topology and strange kinetics: from percolation theory to problems in cosmic electrodynamics, Phys.-Usp. 47 (8) (2004) 749-788.
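
    As a concrete illustration of the non-extensive ingredient used throughout this record, here is a minimal q-Gaussian sketch with a crude grid-search maximum-likelihood fit; heavy-tailed Student-t samples stand in for solar wind measurements, and the normalization is truncated numerically (adequate for 1 < q <= 2).

```python
import numpy as np

def q_exp(x, q):
    # Tsallis q-exponential; reduces to exp(x) as q -> 1.
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def q_gauss_logpdf(x, q, beta):
    grid = np.linspace(-100, 100, 20001)       # truncated normalization
    Z = np.trapz(q_exp(-beta * grid**2, q), grid)
    return np.log(q_exp(-beta * x**2, q) + 1e-300) - np.log(Z)

rng = np.random.default_rng(11)
data = rng.standard_t(df=3, size=5000)         # Student-t(3) <=> q = 1.5

qs = np.linspace(1.05, 2.0, 39)
betas = np.linspace(0.2, 2.0, 37)
score, q_hat, b_hat = max((q_gauss_logpdf(data, q, b).sum(), q, b)
                          for q in qs for b in betas)
print(f"grid MLE: q ~ {q_hat:.2f}, beta ~ {b_hat:.2f} "
      f"(theory for t3 data: q = 1.5, beta = 2/3)")
```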

  16. Two-dimensional dissipative rogue waves due to time-delayed feedback in cavity nonlinear optics.

    PubMed

    Tlidi, Mustapha; Panajotov, Krassimir

    2017-01-01

    We demonstrate a way to generate two-dimensional rogue waves in two types of broad-area nonlinear optical systems subject to time-delayed feedback: the generic Lugiato-Lefever model and the model of a broad-area surface-emitting laser with saturable absorber. The delayed feedback is found to induce a spontaneous formation of rogue waves. In the absence of delayed feedback, spatial pulses are stationary. The rogue waves are excited and controlled by the delayed feedback. We characterize their formation by computing the probability distribution of the pulse height. The long-tailed statistical contribution, which is often considered a signature of the presence of rogue waves, appears for sufficiently strong feedback. The generality of our analysis suggests that the feedback-induced instability leading to the spontaneous formation of two-dimensional rogue waves is a universal phenomenon.
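
    A minimal sketch of the height-statistics diagnostic, applied to a toy heavy-tailed pulse-height series rather than Lugiato-Lefever output, using the common "twice the significant height" rogue criterion and a Rayleigh tail for comparison.

```python
import numpy as np

rng = np.random.default_rng(12)
heights = rng.gamma(shape=2.0, scale=1.0, size=20000) ** 2  # heavy-tailed toy

h_sorted = np.sort(heights)
h_sig = h_sorted[-len(heights) // 3:].mean()   # significant height (top third)
rogue = heights > 2 * h_sig                    # standard rogue criterion
print(f"significant height: {h_sig:.2f}")
print(f"rogue events: {rogue.sum()} ({100 * rogue.mean():.2f}% of pulses)")

# Tail comparison: a Rayleigh fit would predict far fewer exceedances.
sigma2 = (heights**2).mean() / 2
p_rayleigh = np.exp(-(2 * h_sig)**2 / (2 * sigma2))
print(f"Rayleigh-predicted exceedance at 2x significant: {p_rayleigh:.2e}")
```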

  17. Penalized Ordinal Regression Methods for Predicting Stage of Cancer in High-Dimensional Covariate Spaces.

    PubMed

    Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J

    2015-01-01

    The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.

  18. Image statistics underlying natural texture selectivity of neurons in macaque V4

    PubMed Central

    Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko

    2015-01-01

    Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors of each neuron’s preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception. PMID:25535362

  19. Confocal Imaging of porous media

    NASA Astrophysics Data System (ADS)

    Shah, S.; Crawshaw, D.; Boek, D.

    2012-12-01

    Carbonate rocks, which hold approximately 50% of the world's oil and gas reserves, have a very complicated and heterogeneous structure in comparison with sandstone reservoir rock. We present advances with different techniques to image, reconstruct, and characterize statistically the micro-geometry of carbonate pores. The main goal here is to develop a technique to obtain two-dimensional and three-dimensional images using Confocal Laser Scanning Microscopy (CLSM). CLSM is used in epi-fluorescent imaging mode, allowing for very high optical resolution of features well below 1 μm in size. Images of pore structures were captured using CLSM imaging, where pore spaces in the carbonate samples were impregnated with a fluorescent, dyed epoxy-resin and scanned in the x-y plane by a laser probe. We discuss in detail the sample preparation required for confocal imaging to obtain sub-micron resolution images of heterogeneous carbonate rocks. We also discuss the technical and practical aspects of this imaging technique, including its advantages and limitations. We present several examples of this application, including studying pore geometry in carbonates and characterizing sub-resolution porosity in two-dimensional images. We then describe approaches to extract statistical information about porosity using image processing and spatial correlation functions. We have managed to obtain limited depth information in the z-axis (~50 μm) to develop three-dimensional images of carbonate rocks within the current capabilities and limitations of the CLSM technique. Hence, we have planned a novel technique to obtain greater depth information and thereby three-dimensional images with sub-micron resolution in the lateral and axial planes.
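
    A minimal sketch of the spatial-correlation analysis mentioned above: the two-point correlation function S2(r) of a binary pore image computed via FFT autocorrelation, with a random-disk image standing in for a segmented confocal slice.

```python
import numpy as np

rng = np.random.default_rng(13)
N = 256
yy, xx = np.mgrid[:N, :N]
img = np.zeros((N, N), dtype=bool)
for _ in range(60):                       # hypothetical pores as random disks
    px, py = rng.integers(N, size=2)
    rad = rng.integers(3, 10)
    img |= (xx - px)**2 + (yy - py)**2 < rad**2

phi = img.mean()                          # porosity
F = np.fft.fft2(img.astype(float))
auto = np.fft.ifft2(F * np.conj(F)).real / img.size  # periodic autocorrelation
auto = np.fft.fftshift(auto)

# Radial average gives S2(r); S2(0) = phi and S2 -> phi^2 at large lags.
r = np.hypot(xx - N // 2, yy - N // 2).astype(int)
S2 = (np.bincount(r.ravel(), weights=auto.ravel())
      / np.bincount(r.ravel()))
print(f"porosity = {phi:.3f}, S2(0) = {S2[0]:.3f}, "
      f"S2(50) = {S2[50]:.4f} (phi^2 = {phi**2:.4f})")
```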

  20. On the Sensitivity of Atmospheric Ensembles to Cloud Microphysics in Long-Term Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Lang, Stephen; Hou, Arthur Y.; Zhang, Minghua; Simpson, Joanne

    2008-01-01

    Month-long large-scale forcing data from two field campaigns are used to drive a cloud-resolving model (CRM) and produce ensemble simulations of clouds and precipitation. Observational data are then used to evaluate the model results. To improve the model results, a new parameterization of the Bergeron process is proposed that incorporates the number concentration of ice nuclei (IN). Numerical simulations reveal that atmospheric ensembles are sensitive to IN concentration and ice crystal multiplication. Two- (2D) and three-dimensional (3D) simulations are carried out to address the sensitivity of atmospheric ensembles to model dimensionality. It is found that the ensembles with high IN concentration are more sensitive to dimensionality than those with low IN concentration. Both the analytic solutions of linear dry models and the CRM output show that there are more convective cores with stronger updrafts in 3D simulations than in 2D, which explains the differing sensitivity of the ensembles to dimensionality at different IN concentrations.
