Sample records for computationally tractable model

  1. The tractable cognition thesis.

    PubMed

    van Rooij, Iris

    2008-09-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified.
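    The contrast between polynomial-time and fixed-parameter tractability is easy to see in code. The sketch below is a general illustration of FPT, not an example from the article; the problem choice (Vertex Cover) and all names are ours. Deciding whether a cover of size at most k exists takes roughly O(2^k · |E|) time via a bounded search tree, exponential only in the parameter k rather than in the input size.

    ```python
    # Illustrative FPT sketch (not from the article): Vertex Cover decided in
    # O(2^k * |E|) time by a bounded search tree -- exponential only in k.

    def has_vertex_cover(edges, k):
        """Return True if the graph given by `edges` has a vertex cover of size <= k."""
        edges = [e for e in edges if e[0] != e[1]]   # drop self-loops
        if not edges:
            return True
        if k == 0:
            return False
        u, v = edges[0]
        # Branch: any cover must contain u or v to cover the edge (u, v).
        return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
                has_vertex_cover([e for e in edges if v not in e], k - 1))

    if __name__ == "__main__":
        g = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
        print(has_vertex_cover(g, 2))  # True: {1, 4} covers every edge
    ```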

  2. A computationally tractable version of the collective model

    NASA Astrophysics Data System (ADS)

    Rowe, D. J.

    2004-05-01

    A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.

  3. The Tractable Cognition Thesis

    ERIC Educational Resources Information Center

    van Rooij, Iris

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…

  4. Computational Nonlinear Morphology with Emphasis on Semitic Languages. Studies in Natural Language Processing.

    ERIC Educational Resources Information Center

    Kiraz, George Anton

    This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…

  5. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this problem.

  6. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.

  7. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data.

    PubMed

    Tom, Jennifer A; Sinsheimer, Janet S; Suchard, Marc A

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework.
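    A generic sketch of the reweighting step described here (not the authors' dynamic iterative reweighting MCMC, and with invented densities): posterior draws stored from a stratified analysis are reused, via self-normalised importance weights, to approximate expectations under a different joint target.

    ```python
    # Generic importance-reweighting sketch (assumed, simplified): reuse draws
    # theta_i ~ q(.) from a stratified analysis to approximate expectations
    # under a joint target p(.).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Pretend these are stored posterior realizations from one stratum.
    theta = rng.normal(loc=1.0, scale=1.0, size=5000)   # theta_i ~ q = N(1, 1)
    log_q = stats.norm(1.0, 1.0).logpdf(theta)

    # Hypothetical density implied by the joint hierarchical model.
    log_p = stats.norm(0.5, 0.7).logpdf(theta)          # p = N(0.5, 0.7)

    log_w = log_p - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                        # self-normalised weights

    print("reweighted mean:", np.sum(w * theta))        # approx. 0.5
    print("effective sample size:", 1.0 / np.sum(w ** 2))
    ```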

  8. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data

    PubMed Central

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2015-01-01

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework. PMID:26681992

  9. Learning-based stochastic object models for use in optimizing imaging systems

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer-simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.

  10. A Tractable Numerical Model for Exploring Nonadiabatic Quantum Dynamics

    ERIC Educational Resources Information Center

    Camrud, Evan; Turner, Daniel B.

    2017-01-01

    Numerous computational and spectroscopic studies have demonstrated the decisive role played by nonadiabatic coupling in photochemical reactions. Nonadiabatic coupling drives photochemistry when potential energy surfaces are nearly degenerate at avoided crossings or truly degenerate at unavoided crossings. The dynamics induced by nonadiabatic…

  11. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
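    A minimal sketch of the likelihood described here, assuming a zero-mean Brownian motion started at the origin and i.i.d. normal measurement error; the dense covariance below only shows the model and ignores the sparse-matrix structure the authors exploit.

    ```python
    # Minimal sketch (assumed notation): log-likelihood of a Brownian motion with
    # diffusion variance sigma2, observed at times t with N(0, tau2) error.
    # Cov[y_i, y_j] = sigma2 * min(t_i, t_j) + tau2 * 1{i == j}.
    import numpy as np
    from scipy.stats import multivariate_normal

    def bm_with_error_loglik(y, t, sigma2, tau2):
        tt = np.minimum.outer(t, t)
        cov = sigma2 * tt + tau2 * np.eye(len(t))
        return multivariate_normal(mean=np.zeros(len(t)), cov=cov).logpdf(y)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t = np.cumsum(rng.uniform(0.5, 1.5, size=50))
        dt = np.diff(np.concatenate(([0.0], t)))
        x = np.cumsum(rng.normal(0.0, np.sqrt(dt)))      # latent Brownian path
        y = x + rng.normal(0.0, 0.1, size=len(t))        # noisy observations
        print(bm_with_error_loglik(y, t, sigma2=1.0, tau2=0.01))
    ```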

  12. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  13. Learning-based stochastic object models for characterizing anatomical variations

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.

  14. A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics

    NASA Astrophysics Data System (ADS)

    Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.

    2017-12-01

    The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔG_r) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g., densities, heat capacities, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well-understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as other rheologies.

  15. Quantum lattice model solver HΦ

    NASA Astrophysics Data System (ADS)

    Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki

    2017-08-01

    HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model and the Kondo-lattice model. While it works well on PCs and PC-clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of the system size. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show the benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
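    HΦ itself is a compiled high-performance package, but the Lanczos-type eigenvalue solution it is built around can be illustrated in a few lines. The sketch below is our own toy example, not HΦ code: it assembles a small spin-1/2 Heisenberg chain as a sparse matrix and obtains its ground-state energy with SciPy's Lanczos/Arnoldi routine eigsh.

    ```python
    # Toy illustration of the Lanczos idea behind packages like HPhi:
    # ground state of a spin-1/2 Heisenberg chain via scipy's eigsh.
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    sx = sp.csr_matrix([[0, 0.5], [0.5, 0]])
    sy = sp.csr_matrix([[0, -0.5j], [0.5j, 0]])
    sz = sp.csr_matrix([[0.5, 0], [0, -0.5]])
    I2 = sp.identity(2, format="csr")

    def site_op(op, i, L):
        """Embed a single-site operator at site i of an L-site chain."""
        mats = [op if j == i else I2 for j in range(L)]
        out = mats[0]
        for m in mats[1:]:
            out = sp.kron(out, m, format="csr")
        return out

    L, J = 10, 1.0
    H = sp.csr_matrix((2**L, 2**L), dtype=complex)
    for i in range(L - 1):                       # open boundary conditions
        for op in (sx, sy, sz):
            H = H + J * site_op(op, i, L) @ site_op(op, i + 1, L)

    E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
    print("ground-state energy per site:", E0.real / L)
    ```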

  16. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  17. Intra-organizational Computation and Complexity

    DTIC Science & Technology

    2003-01-01

    models. New methodologies, centered on understanding algorithmic complexity, are being developed that may enable us to better handle network data ... tractability of data analysis, and enable more precise theorization. A variety of measures of algorithmic complexity, e.g., Kolmogorov-Chaitin, and a ... variety of proxies exist (which are often turned to for pragmatic reasons) (Lempel and Ziv, 1976). For the most part, social and organizational

  18. Quantum simulations with noisy quantum computers

    NASA Astrophysics Data System (ADS)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  19. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
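    For the simplest birth-death (production-degradation) model the moment dynamics close exactly, which gives a flavour of how moment-based computations avoid Monte Carlo simulation. The sketch below is a textbook example with invented rate constants, not code from the review.

    ```python
    # Minimal moment-equation sketch (standard birth-death example): production
    # at rate k, degradation at rate g*x.  The moment ODEs close exactly:
    #   d<x>/dt   = k - g*<x>
    #   d<x^2>/dt = k + (2*k + g)*<x> - 2*g*<x^2>
    from scipy.integrate import solve_ivp

    k, g = 10.0, 1.0

    def moment_odes(t, m):
        m1, m2 = m
        return [k - g * m1,
                k + (2 * k + g) * m1 - 2 * g * m2]

    sol = solve_ivp(moment_odes, (0.0, 10.0), [0.0, 0.0])
    m1, m2 = sol.y[:, -1]
    print("steady-state mean:", m1)               # -> k/g = 10
    print("steady-state variance:", m2 - m1**2)   # -> k/g = 10 (Poisson-like)
    ```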

  20. Approximate Bayesian computation for spatial SEIR(S) epidemic models.

    PubMed

    Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A

    2018-02-01

    Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas.
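    A bare-bones ABC rejection sampler on a toy chain-binomial epidemic shows the basic accept/reject idea. This is illustrative only: ABSEIR implements far more sophisticated sequential algorithms, and all parameter values, priors, and the summary statistic below are invented.

    ```python
    # Generic ABC rejection sketch (not the ABSEIR algorithm): accept parameter
    # draws whose simulated epidemic final size is close to the observed one.
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_final_size(beta, gamma, n=500, i0=5):
        """Crude chain-binomial SIR simulation; returns total number ever infected."""
        s, i, r = n - i0, i0, 0
        while i > 0:
            new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
            new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        return n - s

    observed = simulate_final_size(beta=0.6, gamma=0.3)   # pretend this is data
    accepted = []
    for _ in range(20000):
        beta = rng.uniform(0.0, 2.0)                      # prior draws
        gamma = rng.uniform(0.1, 1.0)
        if abs(simulate_final_size(beta, gamma) - observed) <= 10:   # tolerance
            accepted.append((beta, gamma))

    accepted = np.array(accepted)
    print("accepted draws:", len(accepted))
    print("posterior mean (beta, gamma):", accepted.mean(axis=0))
    ```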

  1. Analytical modeling of the structureborne noise path on a small twin-engine aircraft

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Stokes, A. Westagard; Garrelick, J. M.; Martini, K. F.

    1988-01-01

    The structureborne noise path of a six passenger twin-engine aircraft is analyzed. Models of the wing and fuselage structures as well as the interior acoustic space of the cabin are developed and used to evaluate sensitivity to structural and acoustic parameters. Different modeling approaches are used to examine aspects of the structureborne path. These approaches are guided by a number of considerations including the geometry of the structures, the frequency range of interest, and the tractability of the computations. Results of these approaches are compared with experimental data.

  2. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    PubMed

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation-one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system-such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  3. Getting more from accuracy and response time data: methods for fitting the linear ballistic accumulator.

    PubMed

    Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew

    2009-11-01

    Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods (a Microsoft Excel worksheet, scripts for the statistical program R, and code for implementing the LBA in the Bayesian sampling software WinBUGS) vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.

  4. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    NASA Astrophysics Data System (ADS)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation—one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system—such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  5. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    Subject terms: Tensors, multilinearity, algebraic geometry, numerical computations, computational tractability, high...

  6. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228

  7. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
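    The first application, estimating small failure probabilities by sampling from an importance distribution, can be sketched in a single step on a cheap stand-in function. The report describes a sequential refinement of that distribution; the function, threshold, and distributions below are assumptions for illustration only.

    ```python
    # One-step importance-sampling sketch (assumed, simplified): estimate a small
    # failure probability P[f(X) > threshold] for a cheap stand-in "simulator".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    f = lambda x: x**3                   # stand-in for an expensive simulator
    threshold = 40.0                     # "failure" when f(x) > 40

    nominal = stats.norm(0.0, 1.0)       # true input distribution
    proposal = stats.norm(3.5, 1.0)      # importance distribution near the failure region

    x = proposal.rvs(size=20000, random_state=rng)
    w = np.exp(nominal.logpdf(x) - proposal.logpdf(x))
    p_fail = np.mean(w * (f(x) > threshold))

    exact = nominal.sf(threshold ** (1.0 / 3.0))
    print(f"IS estimate: {p_fail:.2e}   exact: {exact:.2e}")
    ```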

  8. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is an existing methodology for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart depends strongly on how the uncertainty set is chosen; as a consequence, this challenge can be met only if the set is chosen in a suitable way. Development of RO has been rapid: since 2004, a new approach called Adjustable Robust Optimization (ARO) has been introduced to handle uncertain problems in which some decision variables must be decided as “wait and see” variables, in contrast to classic RO, which models decision variables as “here and now”. In ARO, the uncertain problem can be considered as a multistage decision problem, and the decision variables involved become wait-and-see decision variables. In this paper we present applications of both RO and ARO. We briefly present all results to strengthen the importance of RO and ARO in many real-life problems.

  9. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
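    A hedged sketch of the general idea, not the authors' ACT-R reformulation: once model performance is available as a smooth deterministic function of its parameters, standard derivative-based optimizers can search the parameter space directly. The stand-in performance function and parameter names below are invented.

    ```python
    # Hedged sketch: derivative-based optimization of a smooth stand-in for a
    # reformulated simulation-based cognitive model (all names hypothetical).
    import numpy as np
    from scipy.optimize import minimize

    def predicted_performance(params):
        """Hypothetical smooth stand-in: expected task score as a function of
        (decay, noise) parameters."""
        decay, noise = params
        return np.exp(-(decay - 0.5) ** 2) / (1.0 + noise ** 2)

    # Maximise performance = minimise its negative, within plausible bounds.
    res = minimize(lambda p: -predicted_performance(p),
                   x0=[1.0, 1.0],
                   bounds=[(0.0, 2.0), (0.0, 2.0)],
                   method="L-BFGS-B")
    print("optimal (decay, noise):", res.x, " performance:", -res.fun)
    ```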

  10. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  11. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
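    A simplified sketch of the interaction screening estimator (tiny system, no l1 penalty, unit temperature; our own implementation, not the authors' code): for each spin, the empirical interaction screening objective is minimised over that spin's couplings and field.

    ```python
    # Interaction-screening sketch: for each spin u, minimise the empirical mean of
    #   exp( -s_u * (sum_j J_uj * s_j + h_u) )
    # over the couplings J_u and field h_u, one node at a time.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n, n_samples = 5, 20000

    # Generate samples from a small Ising model by exhaustive enumeration.
    J_true = np.triu(rng.normal(0.0, 0.5, (n, n)), k=1)
    J_true = J_true + J_true.T
    states = np.array([[1 if (s >> i) & 1 else -1 for i in range(n)]
                       for s in range(2 ** n)])
    energies = -0.5 * np.einsum("si,ij,sj->s", states, J_true, states)
    probs = np.exp(-energies)
    probs /= probs.sum()
    samples = states[rng.choice(len(states), size=n_samples, p=probs)]

    def iso(theta, u):
        """Interaction screening objective for node u; theta = (J_u[others], h_u)."""
        J_u, h_u = theta[:-1], theta[-1]
        others = np.delete(np.arange(n), u)
        local_field = samples[:, others] @ J_u + h_u
        return np.mean(np.exp(-samples[:, u] * local_field))

    J_hat = np.zeros((n, n))
    for u in range(n):
        res = minimize(iso, x0=np.zeros(n), args=(u,), method="BFGS")
        J_hat[u, np.delete(np.arange(n), u)] = res.x[:-1]

    print("max coupling error:", np.abs(J_hat - J_true)[~np.eye(n, dtype=bool)].max())
    ```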

  12. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  13. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  14. Sinking bubbles in stout beers

    NASA Astrophysics Data System (ADS)

    Lee, W. T.; Kaar, S.; O'Brien, S. B. G.

    2018-04-01

    A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.

  15. Local rules simulation of the kinetics of virus capsid self-assembly.

    PubMed

    Schwartz, R; Shor, P W; Prevelige, P E; Berger, B

    1998-12-01

    A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.

  16. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ˜15% for an estimated reduction in computational runtime of ˜95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
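    The surrogate idea can be sketched with scikit-learn on a cheap one-dimensional stand-in for the simulator. The real work trains a Gaussian process on chirplet parameters of simulated B-scans; the stand-in function, inputs, and values below are invented.

    ```python
    # Generic surrogate-model sketch (cheap 1-D stand-in, not the ultrasound
    # simulation): fit a Gaussian process to a handful of "expensive" runs,
    # then predict unseen responses with uncertainty.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulator(depth):
        """Hypothetical scalar response vs. delamination depth."""
        return np.sin(3.0 * depth) * np.exp(-0.5 * depth)

    X_train = np.linspace(0.0, 3.0, 12).reshape(-1, 1)   # 12 "simulator runs"
    y_train = expensive_simulator(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    X_test = np.array([[1.37]])                          # unseen configuration
    mean, std = gp.predict(X_test, return_std=True)
    print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")
    ```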

  17. Tractable flux-driven temperature, density, and rotation profile evolution with the quasilinear gyrokinetic transport model QuaLiKiz

    NASA Astrophysics Data System (ADS)

    Citrin, J.; Bourdelle, C.; Casson, F. J.; Angioni, C.; Bonanomi, N.; Camenen, Y.; Garbet, X.; Garzotti, L.; Görler, T.; Gürcan, O.; Koechl, F.; Imbeaux, F.; Linder, O.; van de Plassche, K.; Strand, P.; Szepesi, G.; Contributors, JET

    2017-12-01

    Quasilinear turbulent transport models are a successful tool for prediction of core tokamak plasma profiles in many regimes. Their success hinges on the reproduction of local nonlinear gyrokinetic fluxes. We focus on significant progress in the quasilinear gyrokinetic transport model QuaLiKiz (Bourdelle et al 2016 Plasma Phys. Control. Fusion 58 014036), which employs an approximated solution of the mode structures to significantly speed up computation time compared to full linear gyrokinetic solvers. Optimisation of the dispersion relation solution algorithm within integrated modelling applications leads to flux calculations 10^6-10^7 times faster than local nonlinear simulations. This allows tractable simulation of flux-driven dynamic profile evolution including all transport channels: ion and electron heat, main particles, impurities, and momentum. Furthermore, QuaLiKiz now includes the impact of rotation and temperature anisotropy induced poloidal asymmetry on heavy impurity transport, important for W-transport applications. Application within the JETTO integrated modelling code results in 1 s of JET plasma simulation within 10 h using 10 CPUs. Simultaneous predictions of core density, temperature, and toroidal rotation profiles for both JET hybrid and baseline experiments are presented, covering both ion and electron turbulence scales. The simulations are successfully compared to measured profiles, with agreement mostly in the 5%-25% range according to standard figures of merit. QuaLiKiz is now open source and available at www.qualikiz.com.

  18. Real-time million-synapse simulation of rat barrel cortex.

    PubMed

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.

  19. Real-time million-synapse simulation of rat barrel cortex

    PubMed Central

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development. PMID:24910593

  20. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.

    PubMed

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-11-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified.
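    A toy instance of the enzyme-cost trade-off for a two-step pathway, with simplified rate laws and invented parameter values (not the paper's ECM rate laws or data): raising the intermediate concentration saturates the consuming enzyme but erodes the thermodynamic driving force of the producing one, and total enzyme demand is minimised over the log-concentration.

    ```python
    # Toy enzyme-cost-minimisation sketch (simplified rate laws, made-up values):
    #   E_j = v / (kcat_j * eta_thermo_j * eta_saturation_j)
    import numpy as np
    from scipy.optimize import minimize_scalar

    v = 1.0                            # required pathway flux (mM/s)
    s, keq1, kcat1 = 5.0, 2.0, 10.0    # substrate level, equilibrium const., turnover
    km2, kcat2 = 0.5, 8.0              # Michaelis constant and turnover of step 2

    def total_enzyme(log_c):
        c = np.exp(log_c)                     # intermediate concentration (mM)
        eta_thermo1 = 1.0 - c / (s * keq1)    # driving force of the producing step
        if eta_thermo1 <= 0:
            return np.inf                     # thermodynamically infeasible
        eta_sat2 = c / (c + km2)              # saturation of the consuming step
        e1 = v / (kcat1 * eta_thermo1)
        e2 = v / (kcat2 * eta_sat2)
        return e1 + e2

    res = minimize_scalar(total_enzyme,
                          bounds=(np.log(1e-3), np.log(0.99 * s * keq1)),
                          method="bounded")
    print("optimal intermediate conc. (mM):", np.exp(res.x))
    print("minimal enzyme demand:", res.fun)
    ```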

  1. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  2. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  3. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
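    The simpler of the two approaches described here, running multiple independent chains, can be sketched with Python's multiprocessing on a toy target. This is not a genomic-selection model; the target, proposal, and all settings below are invented, and within-chain parallelisation needs considerably more care.

    ```python
    # Minimal multiple-chains sketch: independent Metropolis chains in parallel
    # processes, pooling the post-burn-in draws.
    import numpy as np
    from multiprocessing import Pool

    def log_target(theta):
        return -0.5 * (theta - 3.0) ** 2          # N(3, 1) up to a constant

    def run_chain(seed, n_iter=20000, burn_in=5000):
        rng = np.random.default_rng(seed)
        theta, draws = 0.0, []
        for _ in range(n_iter):
            prop = theta + rng.normal(0.0, 1.0)
            if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
                theta = prop
            draws.append(theta)
        return np.array(draws[burn_in:])

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            chains = pool.map(run_chain, [11, 22, 33, 44])   # one seed per chain
        pooled = np.concatenate(chains)
        print("pooled posterior mean:", pooled.mean())        # approx. 3.0
    ```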

  4. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  5. Physiological controls of large‐scale patterning in planarian regeneration: a molecular and computational perspective on growth and form

    PubMed Central

    Durant, Fallon; Lobo, Daniel; Hammelman, Jennifer

    2016-01-01

    Planaria are complex metazoans that repair damage to their bodies and cease remodeling when a correct anatomy has been achieved. This model system offers a unique opportunity to understand how large‐scale anatomical homeostasis emerges from the activities of individual cells. Much progress has been made on the molecular genetics of stem cell activity in planaria. However, recent data also indicate that the global pattern is regulated by physiological circuits composed of ionic and neurotransmitter signaling. Here, we overview the multi‐scale problem of understanding pattern regulation in planaria, with specific focus on bioelectric signaling via ion channels and gap junctions (electrical synapses), and computational efforts to extract explanatory models from functional and molecular data on regeneration. We present a perspective that interprets results in this fascinating field using concepts from dynamical systems theory and computational neuroscience. Serving as a tractable nexus between genetic, physiological, and computational approaches to pattern regulation, planarian pattern homeostasis harbors many deep insights for regenerative medicine, evolutionary biology, and engineering. PMID:27499881

  6. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the ≈50 million m^3 debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.
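
    The refinement idea can be illustrated with a minimal flagging criterion. The sketch below is a toy, not the authors' software: coarse cells are marked for refinement wherever the flow-depth gradient is steep, so that resolution concentrates at the advancing front. The field, grid, and threshold are invented for illustration.

      import numpy as np

      def flag_cells_for_refinement(depth, threshold=0.1):
          """Flag cells where the flow-depth gradient exceeds a threshold.

          A toy stand-in for the richer refinement criteria used in AMR
          debris-flow codes: cells stay coarse where the flow is absent or
          smooth, and refinement concentrates at steep flow fronts.
          """
          grad_x, grad_y = np.gradient(depth)
          indicator = np.hypot(grad_x, grad_y)
          return indicator > threshold

      # Example: a synthetic flow front on a 100 x 100 coarse grid.
      x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
      depth = np.where(x + 0.3 * y < 0.5, 1.0, 0.0)      # sharp front
      flags = flag_cells_for_refinement(depth)
      print(f"{flags.sum()} of {flags.size} cells flagged for refinement")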

  7. Swirling Flow Computation at the Trailing Edge of Radial-Axial Hydraulic Turbines

    NASA Astrophysics Data System (ADS)

    Susan-Resiga, Romeo; Muntean, Sebastian; Popescu, Constantin

    2016-11-01

    Modern hydraulic turbines require runners optimized over a range of operating points with respect to minimum weighted-average draft tube losses and/or flow instabilities. Tractable optimization methodologies must include realistic estimations of the swirling flow exiting the runner and further ingested by the draft tube, prior to runner design. The paper presents a new mathematical model and the associated numerical algorithm for computing the swirling flow at the trailing edge of a Francis turbine runner operated at arbitrary discharge. The general turbomachinery throughflow theory is particularized for an arbitrary hub-to-shroud line in the meridian half-plane, and the resulting boundary value problem is solved with the finite element method. The results obtained with the present model are validated against full 3D runner flow computations within a range of discharge values. The mathematical model incorporates the full information on the relative flow direction, as well as the curvatures of the hub-to-shroud line and of the meridian streamlines. It is shown that the flow direction can be frozen within a range of operating points in the neighborhood of the best-efficiency regime.

  8. Hybrid RANS-LES using high order numerical methods

    NASA Astrophysics Data System (ADS)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  9. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
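
    The authors provide R code; an analogous Python sketch using scikit-learn's decision-tree implementation is given below on synthetic data (the predictors, response, and tree depth are placeholders, not the paper's examples). As in BRP, the fitted tree is judged by held-out predictive accuracy rather than significance tests.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text
      from sklearn.model_selection import train_test_split

      # Synthetic psychology-style data: two predictors, one binary response.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 2))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

      # Report predictive accuracy and the decision tree itself.
      print("held-out accuracy:", tree.score(X_te, y_te))
      print(export_text(tree, feature_names=["predictor_1", "predictor_2"]))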

  10. Structural modeling of Ge6.25As32.5Se61.25 using a combination of reverse Monte Carlo and Ab initio molecular dynamics.

    PubMed

    Opletal, George; Drumm, Daniel W; Wang, Rong P; Russo, Salvy P

    2014-07-03

    Ternary glass structures are notoriously difficult to model accurately, and yet prevalent in several modern endeavors. Here, a novel combination of Reverse Monte Carlo (RMC) modeling and ab initio molecular dynamics (MD) is presented, rendering these complicated structures computationally tractable. A case study (Ge6.25As32.5Se61.25 glass) illustrates the effects of ab initio MD quench rates and equilibration temperatures, and the combined approach's efficacy over standard RMC or random insertion methods. Submelting point MD quenches achieve the most stable, realistic models, agreeing with both experimental and fully ab initio results. The simple approach of RMC followed by ab initio geometry optimization provides similar quality to the RMC-MD combination, for far fewer resources.

  11. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.

  12. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt differential equation or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
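
    A minimal sketch of the hybrid representation (not the published model): cells are discrete agents taking biased random steps, while the chemoattractant is a continuous quantity updated by a finite-difference decay-diffusion rule. All rates, grid sizes, and the one-dimensional geometry are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_cells, n_bins, dt = 100, 50, 0.1
      attractant = np.linspace(0.0, 1.0, n_bins)          # molecule quantity per bin
      positions = rng.integers(0, n_bins, size=n_cells)   # agents: one cell each

      for step in range(200):
          # Continuous part: the attractant decays and diffuses
          # (explicit finite differences, periodic boundaries for simplicity).
          laplacian = np.roll(attractant, 1) + np.roll(attractant, -1) - 2 * attractant
          attractant += dt * (0.5 * laplacian - 0.01 * attractant)

          # Agent part: each cell takes a biased random step up the local gradient.
          grad = np.gradient(attractant)
          p_right = 0.5 + 0.4 * np.tanh(grad[positions])
          moves = np.where(rng.uniform(size=n_cells) < p_right, 1, -1)
          positions = np.clip(positions + moves, 0, n_bins - 1)

      print("mean cell position after 200 steps:", positions.mean())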

  13. A pull-back algorithm to determine the unloaded vascular geometry in anisotropic hyperelastic AAA passive mechanics.

    PubMed

    Riveros, Fabián; Chandra, Santanu; Finol, Ender A; Gasser, T Christian; Rodriguez, Jose F

    2013-04-01

    Biomechanical studies on abdominal aortic aneurysms (AAA) seek to provide better decision criteria to undergo surgical intervention for AAA repair. More accurate results can be obtained by using appropriate material models for the tissues along with accurate geometric models and more realistic boundary conditions for the lesion. However, patient-specific AAA models are generated from gated medical images in which the artery is under pressure. Therefore, identification of the AAA zero pressure geometry would allow for a more realistic estimate of the aneurysmal wall mechanics. This study proposes a novel iterative algorithm to find the zero pressure geometry of patient-specific AAA models. The methodology accounts for the anisotropic hyperelastic behavior of the aortic wall, its thickness, and the presence of the intraluminal thrombus. Results on 12 patient-specific AAA geometric models indicate that the procedure is computationally tractable and efficient, and preserves the global volume of the model. In addition, a comparison of the peak wall stress computed with the zero pressure and CT-based geometries during systole indicates that computations using CT-based geometric models underestimate the peak wall stress by 59 ± 64 and 47 ± 64 kPa for the isotropic and anisotropic material models of the arterial wall, respectively.
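
    A generic sketch of the fixed-point structure behind such pull-back algorithms (not the paper's algorithm): the reference (zero-pressure) coordinates are updated until the pressurized configuration matches the image-derived geometry. The forward solve below is a toy analytic inflation standing in for the anisotropic hyperelastic finite-element analysis, and the nodal coordinates are invented.

      import numpy as np

      def deform(X_ref, pressure):
          """Placeholder for the forward FE solve: returns the loaded geometry.

          A simple analytic inflation stands in for the anisotropic
          hyperelastic AAA model used in practice.
          """
          return X_ref * (1.0 + 0.05 * pressure)

      def find_zero_pressure_geometry(x_image, pressure, tol=1e-8, max_iter=100):
          X = x_image.copy()                      # initial guess: imaged geometry
          for k in range(max_iter):
              x_loaded = deform(X, pressure)
              residual = x_loaded - x_image       # mismatch with the CT geometry
              if np.linalg.norm(residual) < tol:
                  break
              X = X - residual                    # pull the reference nodes back
          return X, k

      x_ct = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy nodal coordinates
      X0, iters = find_zero_pressure_geometry(x_ct, pressure=1.0)
      print(f"converged in {iters} iterations; zero-pressure nodes:\n{X0}")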

  14. Tractable Chemical Models for CVD of Silicon and Carbon

    NASA Technical Reports Server (NTRS)

    Blanquet, E.; Gokoglu, S. A.

    1993-01-01

    Tractable chemical models are validated for the CVD of silicon and carbon. Dilute silane (SiH4) and methane (CH4) in hydrogen are chosen as gaseous precursors. The chemical mechanism for each system (Si and C) is deliberately reduced to three reactions in the model: one in the gas phase and two at the surface. The axial-flow CVD reactor utilized in this study has well-characterized flow and thermal fields and provides variable deposition rates in the axial direction. Comparisons between the experimental and calculated deposition rates are made at different pressures and temperatures.

  15. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization

    PubMed Central

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-01-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell’s capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified. PMID:27812109
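
    A heavily simplified sketch of the enzyme-cost idea (not the published ECM implementation): for a toy two-step pathway, the enzyme demand of each reaction is written as flux divided by kcat times saturation and driving-force factors, expressed as a function of the log concentration of the single free metabolite and minimized within bounds. The kinetic constants, the equilibrium constant, and the omission of enzyme-specific cost weights are all simplifying assumptions.

      import numpy as np
      from scipy.optimize import minimize

      # Toy pathway A -> B -> C carrying a fixed flux v; A is clamped at 1 mM.
      v, cA = 1.0, 1.0
      kcat = np.array([200.0, 50.0])   # turnover numbers of the two enzymes (1/s)
      Km = np.array([0.1, 0.5])        # substrate Km of each reaction (mM)
      Keq1 = 5.0                       # equilibrium constant of A -> B

      def enzyme_cost(ln_cB):
          """Total enzyme demand (arbitrary units) as a function of log[B]."""
          cB = np.exp(ln_cB[0])
          sat = np.array([cA / (cA + Km[0]), cB / (cB + Km[1])])   # saturation terms
          therm = np.array([1.0 - cB / (Keq1 * cA), 1.0])          # driving-force terms
          return np.sum(v / (kcat * sat * therm))

      # [B] is bounded away from equilibrium so the driving force stays positive.
      res = minimize(enzyme_cost, x0=[0.0],
                     bounds=[(np.log(1e-3), np.log(0.9 * Keq1 * cA))])
      print("optimal [B] (mM):", float(np.exp(res.x[0])), " total cost:", res.fun)

    Raising [B] saturates the second enzyme but erodes the driving force of the first, so the optimum lies in the interior; the same trade-off, in many dimensions, is what ECM resolves as a convex problem in log-metabolite space.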

  16. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. However, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.

  17. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    PubMed

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  18. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  19. Exploring the Universe with WISE and Cloud Computing

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.

    2011-01-01

    WISE is a recently-completed astronomical survey mission that has imaged the entire sky in four infrared wavelength bands. The large quantity of science images returned consists of 2,776,922 individual snapshots in various locations in each band which, along with ancillary data, totals around 110TB of raw, uncompressed data. Making the most use of this data requires advanced computing resources. I will discuss some initial attempts in the use of cloud computing to make this large problem tractable.

  20. Unfolding of Proteins: Thermal and Mechanical Unfolding

    NASA Technical Reports Server (NTRS)

    Hur, Joe S.; Darve, Eric

    2004-01-01

    We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external (both mechanical and thermal) force fields. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make the model mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating a series of incomplete gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulations and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.

  1. Dynamic Redox Regulation of IL-4 Signaling.

    PubMed

    Dwivedi, Gaurav; Gran, Margaret A; Bagchi, Pritha; Kemp, Melissa L

    2015-11-01

    Quantifying the magnitude and dynamics of protein oxidation during cell signaling is technically challenging. Computational modeling provides tractable, quantitative methods to test hypotheses of redox mechanisms that may be simultaneously operative during signal transduction. The interleukin-4 (IL-4) pathway, which has previously been reported to induce reactive oxygen species and oxidation of PTP1B, may be controlled by several other putative mechanisms of redox regulation; widespread proteomic thiol oxidation observed via 2D redox differential gel electrophoresis upon IL-4 treatment suggests more than one redox-sensitive protein implicated in this pathway. Through computational modeling and a model selection strategy that relied on characteristic STAT6 phosphorylation dynamics of IL-4 signaling, we identified reversible protein tyrosine phosphatase (PTP) oxidation as the primary redox regulatory mechanism in the pathway. A systems-level model of IL-4 signaling was developed that integrates synchronous pan-PTP oxidation with ROS-independent mechanisms. The model quantitatively predicts the dynamics of IL-4 signaling over a broad range of new redox conditions, offers novel hypotheses about regulation of JAK/STAT signaling, and provides a framework for interrogating putative mechanisms involving receptor-initiated oxidation.

  2. Dynamic Redox Regulation of IL-4 Signaling

    PubMed Central

    Dwivedi, Gaurav; Gran, Margaret A.; Bagchi, Pritha; Kemp, Melissa L.

    2015-01-01

    Quantifying the magnitude and dynamics of protein oxidation during cell signaling is technically challenging. Computational modeling provides tractable, quantitative methods to test hypotheses of redox mechanisms that may be simultaneously operative during signal transduction. The interleukin-4 (IL-4) pathway, which has previously been reported to induce reactive oxygen species and oxidation of PTP1B, may be controlled by several other putative mechanisms of redox regulation; widespread proteomic thiol oxidation observed via 2D redox differential gel electrophoresis upon IL-4 treatment suggests more than one redox-sensitive protein implicated in this pathway. Through computational modeling and a model selection strategy that relied on characteristic STAT6 phosphorylation dynamics of IL-4 signaling, we identified reversible protein tyrosine phosphatase (PTP) oxidation as the primary redox regulatory mechanism in the pathway. A systems-level model of IL-4 signaling was developed that integrates synchronous pan-PTP oxidation with ROS-independent mechanisms. The model quantitatively predicts the dynamics of IL-4 signaling over a broad range of new redox conditions, offers novel hypotheses about regulation of JAK/STAT signaling, and provides a framework for interrogating putative mechanisms involving receptor-initiated oxidation. PMID:26562652

  3. Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget

    DTIC Science & Technology

    2014-01-21

    difficult problem and is the focus of this work. 1.1 Motivation: The number of machine learning applications which involve real-time and latency-sensitive pre... significantly increasing latency, and the computational costs associated with hosting a service are often critical to its viability. For such... balancing training costs, concerns such as scalability and tractability are often more important, as opposed to factors such as latency, which are more...

  4. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
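
    A minimal sketch of the construction described (not the authors' full multi-input multi-output formulation): two channels are built from shared and independent white-noise drives and passed through identical linear filters, which fixes their spectra and their cross-correlation. The filter coefficients and target correlation are arbitrary placeholders.

      import numpy as np
      from scipy.signal import lfilter

      rng = np.random.default_rng(0)
      n = 100_000
      rho = 0.6                       # target correlation between the two channels

      # Common and independent white-noise drives (a 2-in / 2-out linear system).
      w_common = rng.standard_normal(n)
      w1, w2 = rng.standard_normal(n), rng.standard_normal(n)
      x = np.sqrt(rho) * w_common + np.sqrt(1 - rho) * w1
      y = np.sqrt(rho) * w_common + np.sqrt(1 - rho) * w2

      # Identical low-pass filters shape both spectra (stationary Gaussian outputs).
      b, a = [0.1], [1.0, -0.9]
      ex, ey = lfilter(b, a, x), lfilter(b, a, y)

      print("sample correlation:", np.corrcoef(ex, ey)[0, 1])   # close to rho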

  5. A two-fluid model for avalanche and debris flows.

    PubMed

    Pitman, E Bruce; Le, Long

    2005-07-15

    Geophysical mass flows--debris flows, avalanches, landslides--can contain O(10^6-10^10) m^3 or more of material, often a mixture of soil and rocks with a significant quantity of interstitial fluid. These flows can be tens of meters in depth and hundreds of meters in length. The range of scales and the rheology of this mixture present significant modelling and computational challenges. This paper describes a depth-averaged 'thin layer' model of geophysical mass flows containing a mixture of solid material and fluid. The model is derived from a 'two-phase' or 'two-fluid' system of equations commonly used in engineering research. Phenomenological modelling and depth averaging combine to yield a tractable set of equations, a hyperbolic system that describes the motion of the two constituent phases. If the fluid inertia is small, a reduced model system that is easier to solve may be derived.

  6. The tangled bank of amino acids

    PubMed Central

    Pollock, David D.

    2016-01-01

    Abstract The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. PMID:27028523

  7. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
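
    The single-state-variable storage pattern can be illustrated with a linear special case (one Maxwell branch with exponential relaxation); the paper's method generalizes this to a strain-dependent, fully non-linear kernel, but the recursion below shows why only the history variable from the preceding step needs to be stored. The material constants and loading program are placeholders.

      import numpy as np

      def viscoelastic_stress(strain, dt, E_inf=1.0, E1=2.0, tau=0.5):
          """Stress history of a standard linear solid via a recursive update.

          Only one history variable h is carried between time steps, instead
          of the full strain history.
          """
          stress = np.empty_like(strain)
          h = 0.0
          decay = np.exp(-dt / tau)
          for n in range(len(strain)):
              d_eps = strain[n] - (strain[n - 1] if n > 0 else 0.0)
              # Exact update of the hereditary integral for a linear strain ramp.
              h = decay * h + E1 * (tau / dt) * (1.0 - decay) * d_eps
              stress[n] = E_inf * strain[n] + h
          return stress

      t = np.linspace(0, 5, 501)
      dt = t[1] - t[0]
      strain = np.where(t < 1.0, t, 1.0)          # ramp-and-hold (stress relaxation)
      sigma = viscoelastic_stress(strain, dt)
      print("peak stress:", sigma.max(), " relaxed stress:", sigma[-1])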

  8. Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.

    2009-04-01

    Long-term rainfall prediction is very important to countries thriving on an agro-based economy. In general, climate and rainfall are highly non-linear phenomena in nature, giving rise to what is known as the "butterfly effect". The parameters that are required to predict the rainfall are enormous even for a short period. Soft computing is an innovative approach to construct computationally intelligent systems that are supposed to possess humanlike expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan, the northeastern part of Iran situated at latitude-longitude range (31°-38°N, 74°-80°E), are analyzed, and Fuzzy Inference System (FIS) based prediction models are trained with these data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results of the FIS model showed an RMSE of 52 millimeters.
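
    A compact Mamdani inference sketch in the spirit of the paper (the membership functions, rules, and single aggregated climate-index input are invented, not the calibrated system): triangular memberships, min implication, max aggregation, and centroid defuzzification.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function with feet a, c and peak b."""
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def predict_rainfall(climate_index):
          """One-input Mamdani FIS: climate index (0-1) -> annual rainfall (mm)."""
          y = np.linspace(0.0, 600.0, 601)               # output universe (mm)

          # Rule firing strengths (antecedent memberships of the crisp input).
          low    = tri(climate_index, -0.5, 0.0, 0.5)
          medium = tri(climate_index,  0.0, 0.5, 1.0)
          high   = tri(climate_index,  0.5, 1.0, 1.5)

          # Min implication onto output sets, then max aggregation.
          agg = np.maximum.reduce([
              np.minimum(low,    tri(y,   0.0, 100.0, 250.0)),   # low index -> dry
              np.minimum(medium, tri(y, 150.0, 300.0, 450.0)),   # medium -> normal
              np.minimum(high,   tri(y, 350.0, 500.0, 600.0)),   # high index -> wet
          ])

          # Centroid defuzzification.
          return np.sum(y * agg) / np.sum(agg)

      print("predicted annual rainfall (mm):", predict_rainfall(0.7))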

  9. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
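
    In its simplest double-kernel form, the KEM reconstructs the total energy from single-kernel and kernel-pair energies. The bookkeeping is sketched below with placeholder energies; the weighting follows the commonly quoted double-kernel expression E ≈ Σ_(a<b) E_ab − (n−2) Σ_a E_a, and the numbers are invented, not computed quantum-mechanically.

      from itertools import combinations

      def kem_total_energy(single, pair):
          """Kernel energy method, double-kernel form.

          single : dict {a: E_a}        -- energies of isolated kernels
          pair   : dict {(a, b): E_ab}  -- energies of kernel pairs, a < b
          E_total ~ sum_(a<b) E_ab - (n - 2) * sum_a E_a
          """
          n = len(single)
          return sum(pair.values()) - (n - 2) * sum(single.values())

      # Placeholder kernel energies (hartree) for a molecule cut into 3 kernels.
      single = {1: -100.0, 2: -150.0, 3: -120.0}
      pair = {(a, b): single[a] + single[b] - 0.01      # small interaction terms
              for a, b in combinations(single, 2)}
      print("KEM total energy:", kem_total_energy(single, pair))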

  10. Automated adaptive inference of phenomenological dynamical models.

    PubMed

    Daniels, Bryan C; Nemenman, Ilya

    2015-08-21

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  11. Mathematical modeling of spinning elastic bodies for modal analysis.

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Barbera, F. J.; Baddeley, V.

    1973-01-01

    The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.

  12. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  13. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
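
    A sketch of the formulation with a general-purpose NLP solver (the plant, weights, horizon, and constraint bound are toy values, not the paper's design examples): a fixed-structure state-feedback gain is tuned to minimize a summed quadratic state/input cost subject to a quadratic control-effort constraint.

      import numpy as np
      from scipy.optimize import minimize

      # Discrete-time double integrator, finite horizon, fixed initial state.
      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.005], [0.1]])
      Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])
      x0, N = np.array([1.0, 0.0]), 50

      def simulate(K):
          """Closed-loop rollout with u = -K x; returns (state/input cost, effort)."""
          x, Jx, Ju = x0.copy(), 0.0, 0.0
          for _ in range(N):
              u = -K @ x
              Jx += x @ Q @ x + u @ R @ u
              Ju += float(u @ u)
              x = A @ x + B @ u
          return Jx, Ju

      def objective(k):
          return simulate(k.reshape(1, 2))[0]

      def effort_slack(k):
          return 50.0 - simulate(k.reshape(1, 2))[1]   # >= 0 enforced by SLSQP

      res = minimize(objective, x0=np.array([1.0, 1.0]), method="SLSQP",
                     constraints=[{"type": "ineq", "fun": effort_slack}])
      print("optimal gain K =", res.x, " cost =", res.fun)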

  14. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciesielski, Peter; Wiggins, Gavin; Daw, C Stuart

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason, particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole biomass with increased chemical speciation, and are still tractable with present-day computational resources.

  15. Analysis of SI models with multiple interacting populations using subpopulations.

    PubMed

    Thomas, Evelyn K; Gurski, Katharine F; Hoffman, Kathleen A

    2015-02-01

    Computing endemic equilibria and basic reproductive numbers for systems of differential equations describing epidemiological systems with multiple connections between subpopulations is often algebraically intractable. We present an alternative method which deconstructs the larger system into smaller subsystems and captures the interactions between the smaller systems as external forces using an approximate model. We bound the basic reproductive numbers of the full system in terms of the basic reproductive numbers of the smaller systems and use the alternate model to provide approximations for the endemic equilibrium. In addition to creating algebraically tractable reproductive numbers and endemic equilibria, we can demonstrate the influence of the interactions between subpopulations on the basic reproductive number of the full system. The focus of this paper is to provide analytical tools to help guide public health decisions with limited intervention resources.
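
    A small numerical companion to the idea (toy parameters, not the paper's models): the basic reproductive number of a coupled multi-group system is the spectral radius of the next-generation matrix F V^{-1}, which can be compared with the isolated-subpopulation values from which the paper constructs its bounds.

      import numpy as np

      def basic_reproductive_number(F, V):
          """R0 as the spectral radius of the next-generation matrix F V^{-1}."""
          ngm = F @ np.linalg.inv(V)
          return max(abs(np.linalg.eigvals(ngm)))

      # Two interacting subpopulations: beta[i, j] is transmission from group j
      # to group i; gamma[i] is the removal rate in group i (toy values).
      beta = np.array([[0.30, 0.05],
                       [0.05, 0.20]])
      gamma = np.array([0.10, 0.10])
      F, V = beta, np.diag(gamma)

      R0_full = basic_reproductive_number(F, V)
      R0_isolated = beta.diagonal() / gamma   # subsystem values used for bounds
      print("R0 (coupled system):", R0_full, " isolated subpopulations:", R0_isolated)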

  16. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.

  17. Conformational diversity and computational enzyme design

    PubMed Central

    Lassila, Jonathan K.

    2010-01-01

    The application of computational protein design methods to the design of enzyme active sites offers potential routes to new catalysts and new reaction specificities. Computational design methods have typically treated the protein backbone as a rigid structure for the sake of computational tractability. However, this fixed-backbone approximation introduces its own special challenges for enzyme design and it contrasts with an emerging picture of natural enzymes as dynamic ensembles with multiple conformations and motions throughout a reaction cycle. This review considers the impact of conformational variation and dynamics on computational enzyme design and it highlights new approaches to addressing protein conformational diversity in enzyme design including recent advances in multistate design, backbone flexibility, and computational library design. PMID:20829099

  18. Empirical analysis of RNA robustness and evolution using high-throughput sequencing of ribozyme reactions.

    PubMed

    Hayden, Eric J

    2016-08-15

    RNA molecules provide a realistic but tractable model of a genotype to phenotype relationship. This relationship has been extensively investigated computationally using secondary structure prediction algorithms. Enzymatic RNA molecules, or ribozymes, offer access to genotypic and phenotypic information in the laboratory. Advancements in high-throughput sequencing technologies have enabled the analysis of sequences in the lab that now rivals what can be accomplished computationally. This has motivated a resurgence of in vitro selection experiments and opened new doors for the analysis of the distribution of RNA functions in genotype space. A body of computational experiments has investigated the persistence of specific RNA structures despite changes in the primary sequence, and how this mutational robustness can promote adaptations. This article summarizes recent approaches that were designed to investigate the role of mutational robustness during the evolution of RNA molecules in the laboratory, and presents theoretical motivations, experimental methods and approaches to data analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. PRIMELT3 MEGA.XLSM software for primary magma calculation: Peridotite primary magma MgO contents from the liquidus to the solidus

    NASA Astrophysics Data System (ADS)

    Herzberg, C.; Asimow, P. D.

    2015-02-01

    An upgrade of the PRIMELT algorithm for calculating primary magma composition is given together with its implementation in PRIMELT3 MEGA.xlsm software. It supersedes PRIMELT2.xls in correcting minor mistakes in melt fraction and computed Ni content of olivine, it identifies residuum mineralogy, and it provides a thorough analysis of uncertainties in mantle potential temperature and olivine liquidus temperature. The uncertainty analysis was made tractable by the computation of olivine liquidus temperatures as functions of pressure and partial melt MgO content between the liquidus and solidus. We present a computed anhydrous peridotite solidus in T-P space using relations amongst MgO, T and P along the solidus; it compares well with experiments on the solidus. Results of the application of PRIMELT3 to a wide range of basalts shows that the mantle sources of ocean islands and large igneous provinces were hotter than oceanic spreading centers, consistent with earlier studies and expectations of the mantle plume model.

  20. A Comparison of Solver Performance for Complex Gastric Electrophysiology Models

    PubMed Central

    Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.

    2016-01-01

    Computational techniques for solving the systems of equations arising in gastric electrophysiology have not been systematically studied for solution efficiency. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and least iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
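
    The preconditioner comparison can be reproduced in miniature (a 1-D surrogate matrix with widely varying coefficients, not the gastric finite-element systems): conjugate gradients with and without a Jacobi (diagonal) preconditioner, counting iterations through a callback.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg, LinearOperator

      # A 1-D diffusion-type matrix with a wide coefficient spread stands in
      # for the tetrahedral finite-element systems.
      n = 2000
      coeff = np.logspace(-2, 2, n)
      A = diags([-np.ones(n - 1), 2.0 + coeff, -np.ones(n - 1)],
                offsets=[-1, 0, 1], format="csr")
      b = np.ones(n)

      def cg_iterations(M=None):
          count = 0
          def cb(xk):
              nonlocal count
              count += 1
          _, info = cg(A, b, M=M, callback=cb)
          return count

      # Jacobi (diagonal) preconditioner applied as a LinearOperator.
      inv_diag = 1.0 / A.diagonal()
      M_jacobi = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

      print("unpreconditioned CG iterations:", cg_iterations())
      print("Jacobi-preconditioned CG iterations:", cg_iterations(M_jacobi))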

  1. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    NASA Astrophysics Data System (ADS)

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.; Ruff, J. P. C.; Park, J.-S.; Okasinski, J.; Beaudoin, A. J.; Miller, M. P.

    2017-05-01

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present work, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. The experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  2. Quantum simulation of transverse Ising models with Rydberg atoms

    NASA Astrophysics Data System (ADS)

    Schauss, Peter

    2018-04-01

    Quantum Ising models are canonical models for the study of quantum phase transitions (Sachdev 1999 Quantum Phase Transitions (Cambridge: Cambridge University Press)) and are the underlying concept for many analogue quantum computing and quantum annealing ideas (Tanaka et al Quantum Spin Glasses, Annealing and Computation (Cambridge: Cambridge University Press)). Here we focus on the implementation of finite-range interacting Ising spin models, which are barely tractable numerically. Recent experiments with cold atoms have reached the interaction-dominated regime in quantum Ising magnets via optical coupling of trapped neutral atoms to Rydberg states. This approach allows for the tunability of all relevant terms in an Ising spin Hamiltonian with 1/r^6 interactions in transverse and longitudinal fields. This review summarizes the recent progress of these implementations in Rydberg lattices with site-resolved detection. Strong correlations in quantum Ising models have been observed in several experiments, starting from a single excitation in the superatom regime up to the point of crystallization. The rapid progress in this field makes spin systems based on Rydberg atoms a promising platform for quantum simulation because of the unmatched flexibility and strength of interactions combined with high control and good isolation from the environment.
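
    A small numerical sketch of the Hamiltonian class discussed, for a handful of spins. The Rabi-frequency/detuning parametrization and the coupling units are schematic assumptions (real Rydberg experiments are usually written with occupation operators rather than bare Pauli-z terms), and exact diagonalization is only feasible for a few spins.

      import numpy as np
      from functools import reduce

      sx = np.array([[0, 1], [1, 0]], dtype=float)
      sz = np.array([[1, 0], [0, -1]], dtype=float)
      id2 = np.eye(2)

      def op(single, site, n):
          """Embed a single-site operator at `site` in an n-spin Hilbert space."""
          return reduce(np.kron, [single if k == site else id2 for k in range(n)])

      def rydberg_ising_hamiltonian(n, omega=1.0, delta=0.5, c6=1.0):
          """Transverse/longitudinal-field Ising chain with 1/r^6 couplings."""
          H = np.zeros((2 ** n, 2 ** n))
          for i in range(n):
              H += 0.5 * omega * op(sx, i, n) - 0.5 * delta * op(sz, i, n)
              for j in range(i + 1, n):
                  H += c6 / (j - i) ** 6 * op(sz, i, n) @ op(sz, j, n)
          return H

      H = rydberg_ising_hamiltonian(8)
      print("ground-state energy (8 spins):", np.linalg.eigvalsh(H)[0])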

  3. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to computational cost and the transient nature of the model. Due to lack of complete physical understanding, this issue is aggravated at extreme conditions like low temperature (LT) operations. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K-263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before incorporation in the model. This capability is demonstrated by introducing temperature dependence of the Bruggeman coefficient and lithium plating formation at LT. With the incorporation of new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into the low temperature lithium ion cell behavior.

  4. Coarse-grained modeling of RNA 3D structure.

    PubMed

    Dawson, Wayne K; Maciejczyk, Maciej; Jankowska, Elzbieta J; Bujnicki, Janusz M

    2016-07-01

    Functional RNA molecules depend on three-dimensional (3D) structures to carry out their tasks within the cell. Understanding how these molecules interact to carry out their biological roles requires a detailed knowledge of RNA 3D structure and dynamics as well as thermodynamics, which strongly governs the folding of RNA and RNA-RNA interactions as well as a host of other interactions within the cellular environment. Experimental determination of these properties is difficult, and various computational methods have been developed to model the folding of RNA 3D structures and their interactions with other molecules. However, computational methods also have their limitations, especially when the biological effects demand computation of the dynamics beyond a few hundred nanoseconds. For the researcher confronted with such challenges, a more amenable approach is to resort to coarse-grained modeling to reduce the number of data points and computational demand to a more tractable size, while sacrificing as little critical information as possible. This review presents an introduction to the topic of coarse-grained modeling of RNA 3D structures and dynamics, covering both high- and low-resolution strategies. We discuss how physics-based approaches compare with knowledge based methods that rely on databases of information. In the course of this review, we discuss important aspects in the reasoning process behind building different models and the goals and pitfalls that can result. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. High-resolution mapping of bifurcations in nonlinear biochemical circuits

    NASA Astrophysics Data System (ADS)

    Genot, A. J.; Baccouche, A.; Sieskind, R.; Aubert-Kato, N.; Bredeche, N.; Bartolo, J. F.; Taly, V.; Fujii, T.; Rondelez, Y.

    2016-08-01

    Analog molecular circuits can exploit the nonlinear nature of biochemical reaction networks to compute low-precision outputs with fewer resources than digital circuits. This analog computation is similar to that employed by gene-regulation networks. Although digital systems have a tractable link between structure and function, the nonlinear and continuous nature of analog circuits yields an intricate functional landscape, which makes their design counter-intuitive, their characterization laborious and their analysis delicate. Here, using droplet-based microfluidics, we map with high resolution and dimensionality the bifurcation diagrams of two synthetic, out-of-equilibrium and nonlinear programs: a bistable DNA switch and a predator-prey DNA oscillator. The diagrams delineate where function is optimal, dynamics bifurcates and models fail. Inverse problem solving on these large-scale data sets indicates interference from enzymatic coupling. Additionally, data mining exposes the presence of rare, stochastically bursting oscillators near deterministic bifurcations.

  6. Gyrofluid Modeling of Turbulent, Kinetic Physics

    NASA Astrophysics Data System (ADS)

    Despain, Kate Marie

    2011-12-01

    Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.

  7. Chemical vapor deposition fluid flow simulation modelling tool

    NASA Technical Reports Server (NTRS)

    Bullister, Edward T.

    1992-01-01

    Accurate numerical simulation of chemical vapor deposition (CVD) processes requires a general purpose computational fluid dynamics package combined with specialized capabilities for high temperature chemistry. In this report, we describe the implementation of these specialized capabilities in the spectral element code NEKTON. The thermal expansion of the gases involved is shown to be accurately approximated by the low Mach number perturbation expansion of the incompressible Navier-Stokes equations. The radiative heat transfer between multiple interacting radiating surfaces is shown to be tractable using the method of Gebhart. The disparate rates of reaction and diffusion in CVD processes are calculated via a point-implicit time integration scheme. We demonstrate the use of the above capabilities on prototypical CVD applications.
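
    The point-implicit treatment of stiff chemistry mentioned above can be illustrated with a minimal sketch (not the NEKTON implementation; the two-species reversible reaction and its rate constants below are hypothetical placeholders). The source term is linearized about the current state, so each grid point requires only one small linear solve per step while remaining stable at time steps far beyond the explicit limit.

      import numpy as np

      def point_implicit_step(y, dt, f, jac):
          # Linearized backward Euler: solve (I - dt*J) dy = dt*f(y), then y <- y + dy.
          J = jac(y)
          dy = np.linalg.solve(np.eye(len(y)) - dt * J, dt * f(y))
          return y + dy

      # Hypothetical stiff reversible reaction A <-> B with kf >> kb.
      kf, kb = 1.0e4, 1.0e2

      def f(y):
          a, b = y
          r = kf * a - kb * b
          return np.array([-r, r])

      def jac(y):
          return np.array([[-kf,  kb],
                           [ kf, -kb]])

      y = np.array([1.0, 0.0])
      dt = 1.0e-3                      # far larger than the explicit stability limit ~1/kf
      for _ in range(100):
          y = point_implicit_step(y, dt, f, jac)
      print(y, "b/a =", y[1] / y[0], "(equilibrium expects kf/kb = 100)")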

  8. Bounded-Degree Approximations of Stochastic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
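
    The abstract does not reproduce the algorithms themselves; the sketch below only conveys the flavor of a bounded in-degree approximation. Under an assumed jointly Gaussian model, each node greedily selects up to k parents that most reduce its conditional variance (a proxy for conditional entropy and hence for the KL divergence of the approximation). The correlation-based scoring is a stand-in for the directed-information scores used in the paper.

      import numpy as np

      def conditional_variance(cov, target, parents):
          # Var(X_target | X_parents) for a jointly Gaussian model with covariance cov.
          if not parents:
              return cov[target, target]
          P = list(parents)
          S_pp = cov[np.ix_(P, P)]
          S_tp = cov[target, P]
          return cov[target, target] - S_tp @ np.linalg.solve(S_pp, S_tp)

      def greedy_bounded_in_degree(cov, max_in_degree):
          # For each node, greedily add the parent giving the largest variance reduction.
          n = cov.shape[0]
          parents = {}
          for t in range(n):
              chosen = []
              while len(chosen) < max_in_degree:
                  candidates = [j for j in range(n) if j != t and j not in chosen]
                  if not candidates:
                      break
                  best = min(candidates,
                             key=lambda j: conditional_variance(cov, t, chosen + [j]))
                  if conditional_variance(cov, t, chosen + [best]) \
                          < conditional_variance(cov, t, chosen) - 1e-12:
                      chosen.append(best)
                  else:
                      break
              parents[t] = chosen
          return parents

      # Toy data: a noisy chain x0 -> x1 -> x2.
      rng = np.random.default_rng(0)
      x0 = rng.normal(size=10000)
      x1 = 0.9 * x0 + 0.4 * rng.normal(size=10000)
      x2 = 0.9 * x1 + 0.4 * rng.normal(size=10000)
      cov = np.cov(np.vstack([x0, x1, x2]))
      print(greedy_bounded_in_degree(cov, max_in_degree=1))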

  9. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
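
    The auxiliary-variable construction (LE-AUX) can be sketched for a single coarse particle: if the memory kernel is approximated as a sum of exponentials, each exponential mode becomes one extra Markovian variable coupled to the velocity, so no history needs to be stored. The two-mode kernel and parameters below are illustrative assumptions, not the fitted star-polymer kernels.

      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed Prony-series kernel K(t) = sum_i c_i * exp(-t / tau_i).
      c   = np.array([5.0, 1.0])        # mode amplitudes
      tau = np.array([0.2, 2.0])        # mode time scales
      m, kBT, dt, nsteps = 1.0, 1.0, 1.0e-3, 200000

      v = 0.0
      z = np.zeros(2)                   # one auxiliary variable per exponential mode
      vs = np.empty(nsteps)
      for n in range(nsteps):
          # Markovian update of the extended (v, z) state; the noise on each mode is
          # chosen so that the fluctuation-dissipation relation of the GLE is respected.
          xi = rng.normal(size=2)
          dz = (-z / tau - c * v) * dt + np.sqrt(2.0 * c * kBT / tau * dt) * xi
          v += (z.sum() / m) * dt
          z += dz
          vs[n] = v

      print("<v^2> =", vs[nsteps // 2:].var(), "(equipartition expects kBT/m = 1.0)")

    The memory footprint here is set by the number of kernel modes rather than by the length of the stored history, which is the advantage of the AUX variants discussed in the abstract.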

  10. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
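
    The bivariate algorithm is too involved for a short sketch, but the core idea can be seen in the univariate birth-death case, where the Laplace transform of the transition probability p_00(t) has a classical continued-fraction representation. The sketch below evaluates that continued fraction bottom-up from a truncation depth (a modified Lentz evaluation is the more robust choice in production) and checks it against the resolvent of a truncated generator matrix; the immigration-birth-death rates are illustrative assumptions.

      import numpy as np

      # Assumed rates: immigration-birth-death with lam_n = alpha + n*lam, mu_n = n*mu.
      alpha, lam, mu = 0.5, 0.3, 1.0
      birth = lambda n: alpha + n * lam
      death = lambda n: n * mu

      def laplace_p00_cf(s, depth=400):
          # Continued fraction 1 / (s + l0 + m0 - l0*m1 / (s + l1 + m1 - ...)), bottom-up.
          tail = s + birth(depth) + death(depth)
          for k in range(depth - 1, -1, -1):
              tail = s + birth(k) + death(k) - birth(k) * death(k + 1) / tail
          return 1.0 / tail

      def laplace_p00_resolvent(s, nmax=400):
          # Reference value: entry (0,0) of the resolvent (sI - Q)^(-1) of a truncated generator.
          Q = np.zeros((nmax + 1, nmax + 1))
          for n in range(nmax + 1):
              if n < nmax:
                  Q[n, n + 1] = birth(n)
              if n > 0:
                  Q[n, n - 1] = death(n)
              Q[n, n] = -(birth(n) + death(n))
          e0 = np.zeros(nmax + 1); e0[0] = 1.0
          return np.linalg.solve(s * np.eye(nmax + 1) - Q, e0)[0]

      for s in (0.5, 1.0, 2.0):
          print(s, laplace_p00_cf(s), laplace_p00_resolvent(s))

    A numerical Laplace inversion of these transform values then yields p_00(t); the contribution described in the abstract is the analogous representation and efficient inversion for the bivariate birth/birth-death case.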

  11. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
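
    The cost argument can be made concrete on a toy discrete problem: for a steady linear model A(p)u = b with scalar output J = g.u, a single adjoint solve A^T lambda = g gives dJ/dp_i = -lambda^T (dA/dp_i) u for every parameter at once, whereas forward differencing needs one extra solve per parameter. The small random system below is an illustrative assumption, not a CFD solver.

      import numpy as np

      rng = np.random.default_rng(2)
      n, npar = 5, 3

      # Hypothetical parameterized system: A(p) = A0 + sum_i p_i * B_i, output J = g . u.
      A0 = rng.normal(size=(n, n)) + 5.0 * np.eye(n)
      B  = rng.normal(size=(npar, n, n))
      b  = rng.normal(size=n)
      g  = rng.normal(size=n)
      p  = rng.normal(size=npar)

      def solve_state(p):
          return np.linalg.solve(A0 + np.tensordot(p, B, axes=1), b)

      u = solve_state(p)
      A = A0 + np.tensordot(p, B, axes=1)

      # One adjoint solve yields sensitivities with respect to all npar parameters.
      lam = np.linalg.solve(A.T, g)
      grad_adjoint = np.array([-lam @ (B[i] @ u) for i in range(npar)])

      # Finite-difference check: one additional state solve per parameter.
      eps = 1.0e-6
      grad_fd = np.array([(g @ solve_state(p + eps * np.eye(npar)[i]) - g @ u) / eps
                          for i in range(npar)])
      print(grad_adjoint)
      print(grad_fd)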

  12. On dependency properties of the ISIs generated by a two-compartmental neuronal model.

    PubMed

    Benedetto, Elisa; Sacerdote, Laura

    2013-02-01

    One-dimensional leaky integrate-and-fire neuronal models describe the interspike intervals (ISIs) of a neuron as a renewal process, disregarding the neuron's geometry. Many multi-compartment models account for the geometrical features of the neuron but are too complex to be mathematically tractable. Leaky integrate-and-fire two-compartment models seem a good compromise between mathematical tractability and improved realism. They indeed allow one to relax the renewal hypothesis, typical of one-dimensional models, without introducing too strong mathematical difficulties. Here, we pursue the analysis of the two-compartment model studied by Lansky and Rodriguez (Phys D 132:267-286, 1999), with the aim of introducing some specific mathematical results used together with simulation techniques. With the aid of these methods, we investigate dependency properties of ISIs for different values of the model parameters. We show that an increase of the input increases the strength of the dependence between successive ISIs.
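
    A schematic variant of such a model can be simulated directly: noisy input drives a dendritic compartment, the coupled somatic compartment spikes and is reset at threshold, and because the dendritic state is not reset it carries memory across spikes, producing dependent ISIs. The parameter values below are illustrative placeholders, not those of Lansky and Rodriguez.

      import numpy as np

      rng = np.random.default_rng(3)

      # Illustrative parameters (placeholders).
      tau, alpha = 10.0, 0.2          # membrane time constant (ms), compartment coupling
      mu, sigma = 1.5, 1.0            # mean and noise amplitude of the dendritic input
      threshold, dt, tmax = 5.0, 0.05, 2.0e4

      xd = xs = 0.0                   # dendritic and somatic membrane potentials
      t, last_spike, isis = 0.0, 0.0, []
      while t < tmax:
          dW = np.sqrt(dt) * rng.normal()
          xd += (-xd / tau + alpha * (xs - xd) + mu) * dt + sigma * dW
          xs += (-xs / tau + alpha * (xd - xs)) * dt
          t += dt
          if xs >= threshold:
              isis.append(t - last_spike)
              last_spike = t
              xs = 0.0                # only the somatic compartment is reset;
                                      # xd carries memory across spikes -> dependent ISIs

      isis = np.array(isis)
      r = np.corrcoef(isis[:-1], isis[1:])[0, 1]
      print(len(isis), "ISIs, lag-1 serial correlation =", round(float(r), 3))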

  13. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  14. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  15. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  16. The tangled bank of amino acids.

    PubMed

    Goldstein, Richard A; Pollock, David D

    2016-07-01

    The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. © 2016 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  17. Cooperative inference: Features, objects, and collections.

    PubMed

    Searcy, Sophia Ray; Shafto, Patrick

    2016-10-01

    Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: learning fails without precise agreement on bias, yet there is no single rational choice. We introduce cooperative inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative inference generalizes the notion of cooperation used in previous models from the omission of labeled objects to the omission of values of features, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576

  19. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  20. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
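
    A minimal sketch of this kind of decision policy follows: the simulated computation accumulates degradation each step, maintains the statistic W(n) (average degradation plus amortized remapping cost per step since the last remap), and triggers a remap once the running estimate of W(n) stops decreasing. The degradation model and the remap cost are assumed for illustration; in practice a smoothed estimate of W(n) would be used to avoid triggering on noise.

      import numpy as np

      rng = np.random.default_rng(4)
      REMAP_COST = 50.0               # assumed delay cost of one remapping

      def degradation_step(n_since_remap):
          # Assumed stochastically increasing per-step degradation since the last remap.
          return 0.2 * n_since_remap + rng.exponential(1.0)

      n_since, cum_deg, prev_W = 0, 0.0, np.inf
      remap_steps = []
      for step in range(2000):
          d = degradation_step(n_since)
          cum_deg += d
          n_since += 1
          W = (cum_deg + REMAP_COST) / n_since    # degradation + amortized remap cost per step
          if W > prev_W:                          # W(n) has passed its (single) minimum: remap
              remap_steps.append(step)
              n_since, cum_deg, prev_W = 0, 0.0, np.inf
          else:
              prev_W = W

      intervals = np.diff(remap_steps)
      print(len(remap_steps), "remaps, mean interval =", round(float(intervals.mean()), 1))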

  1. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power law-truncated Lévy distributions, show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.

  2. Cheese rind communities provide tractable systems for in situ and in vitro studies of microbial diversity

    PubMed Central

    Wolfe, Benjamin E.; Button, Julie E.; Santarelli, Marcela; Dutton, Rachel J.

    2014-01-01

    Tractable microbial communities are needed to bridge the gap between observations of patterns of microbial diversity and mechanisms that can explain these patterns. We developed cheese rinds as model microbial communities by characterizing in situ patterns of diversity and by developing an in vitro system for community reconstruction. Sequencing of 137 different rind communities across 10 countries revealed 24 widely distributed and culturable genera of bacteria and fungi as dominant community members. Reproducible community types formed independent of geographic location of production. Intensive temporal sampling demonstrated that assembly of these communities is highly reproducible. Patterns of community composition and succession observed in situ can be recapitulated in a simple in vitro system. Widespread positive and negative interactions were identified between bacterial and fungal community members. Cheese rind microbial communities represent an experimentally tractable system for defining mechanisms that influence microbial community assembly and function. PMID:25036636

  3. First-Principles Propagation of Geoelectric Fields from Ionosphere to Ground using LANLGeoRad

    NASA Astrophysics Data System (ADS)

    Jeffery, C. A.; Woodroffe, J. R.; Henderson, M. G.

    2017-12-01

    A notable deficiency in the current space weather (SW) forecasting chain is the propagation of geoelectric fields from ionosphere to ground using Biot-Savart integrals, which ignore the localized complexity of lithospheric electrical conductivity and the relatively high conductivity of ocean water compared to the lithosphere. Three-dimensional models of Earth conductivity with mesoscale spatial resolution are being developed, but a new approach is needed to incorporate this information into the SW forecast chain. We present initial results from a first-principles geoelectric propagation model called LANLGeoRad, which solves Maxwell's equations on an unstructured geodesic grid. Challenges associated with the disparate response times of millisecond electromagnetic propagation and 10-second geomagnetic fluctuations are highlighted, and a novel rescaling of the ionosphere/ground system is presented that renders this geoelectric system computationally tractable.

  4. Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima

    2006-03-01

    Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin[1] and Peskin and McQueen[2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309(1996)

  5. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.

  6. Value-based decision making via sequential sampling with hierarchical competition and attentional modulation

    PubMed Central

    2017-01-01

    In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes “winner-take-all” processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans’ value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light. PMID:29077746

  7. Value-based decision making via sequential sampling with hierarchical competition and attentional modulation.

    PubMed

    Colas, Jaron T

    2017-01-01

    In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes "winner-take-all" processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans' value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light.
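
    As a point of reference for the model comparison above, the sketch below simulates the basic two-alternative drift-diffusion process: evidence drifts at a rate proportional to the value difference between the two options, noise is added, and the first bound reached determines the choice and the decision time. The parameter values are illustrative assumptions, not fitted to the food-choice data.

      import numpy as np

      rng = np.random.default_rng(5)

      def ddm_trial(value_left, value_right, drift_gain=0.3, noise=1.0,
                    bound=1.0, dt=1.0e-3, non_decision=0.3):
          # One two-alternative drift-diffusion trial; returns (choice, response time in s).
          drift = drift_gain * (value_left - value_right)
          x, t = 0.0, 0.0
          while abs(x) < bound:
              x += drift * dt + noise * np.sqrt(dt) * rng.normal()
              t += dt
          return ("left" if x > 0 else "right"), t + non_decision

      choices, rts = zip(*(ddm_trial(value_left=2.0, value_right=1.0) for _ in range(1000)))
      p_left = float(np.mean([c == "left" for c in choices]))
      print("P(choose higher-valued item) =", round(p_left, 3),
            "| mean RT =", round(float(np.mean(rts)), 3), "s")

    The hierarchical competition and attentional modulation described in the abstract add further structure on top of an accumulation-to-bound core of this kind.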

  8. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  9. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  10. Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics.

    PubMed

    Ocone, Andrea; Millar, Andrew J; Sanguinetti, Guido

    2013-04-01

    Computational modelling of the dynamics of gene regulatory networks is a central task of systems biology. For networks of small/medium scale, the dominant paradigm is represented by systems of coupled non-linear ordinary differential equations (ODEs). ODEs afford great mechanistic detail and flexibility, but calibrating these models to data is often an extremely difficult statistical problem. Here, we develop a general statistical inference framework for stochastic transcription-translation networks. We use a coarse-grained approach, which represents the system as a network of stochastic (binary) promoter and (continuous) protein variables. We derive an exact inference algorithm and an efficient variational approximation that allows scalable inference and learning of the model parameters. We demonstrate the power of the approach on two biological case studies, showing that the method allows a high degree of flexibility and is capable of testable novel biological predictions. http://homepages.inf.ed.ac.uk/gsanguin/software.html. Supplementary data are available at Bioinformatics online.

  11. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  12. On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.

    PubMed

    Hannah, Samuel D; Neal, Andrew

    2014-09-01

    The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
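
    A minimal discrete-event sketch of the policy described for the most successful model: tasks are served first-come-first-served, except that a task whose deadline is imminent jumps the queue and is executed immediately, while a task whose deadline is distant is deferred when anything else is waiting. The arrival process, service time, and the two deadline thresholds are illustrative assumptions.

      import random

      random.seed(6)

      URGENT, DEFER = 5.0, 30.0       # assumed deadline thresholds (arbitrary time units)
      SERVICE = 2.0                   # assumed fixed task execution time

      # Generate tasks as (arrival_time, deadline), already sorted by arrival.
      tasks, t = [], 0.0
      for _ in range(200):
          t += random.expovariate(0.4)                       # Poisson arrivals
          tasks.append((t, t + random.uniform(3.0, 60.0)))   # deadline 3-60 units after arrival

      queue, clock, i, late = [], 0.0, 0, 0
      while i < len(tasks) or queue:
          while i < len(tasks) and tasks[i][0] <= clock:     # admit everything that has arrived
              queue.append(tasks[i]); i += 1
          if not queue:
              clock = tasks[i][0]
              continue
          urgent = [x for x in queue if x[1] - clock <= URGENT]
          if urgent:                                         # imminent deadline: execute now
              chosen = min(urgent, key=lambda x: x[1])
          else:                                              # otherwise FCFS among non-deferred
              nondeferred = [x for x in queue if x[1] - clock <= DEFER]
              chosen = (nondeferred or queue)[0]
          queue.remove(chosen)
          clock += SERVICE
          late += clock > chosen[1]

      print(f"completed {len(tasks)} tasks, {late} finished past their deadline")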

  13. Distributed sensor networks: a cellular nonlinear network perspective.

    PubMed

    Haenggi, Martin

    2003-12-01

    Large-scale networks of integrated wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients to harness the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that those networks can be considered as cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.

  14. Towards a Computational Framework for Modeling the Impact of Aortic Coarctations Upon Left Ventricular Load

    PubMed Central

    Karabelas, Elias; Gsell, Matthias A. F.; Augustin, Christoph M.; Marx, Laura; Neic, Aurel; Prassl, Anton J.; Goubergrits, Leonid; Kuehne, Titus; Plank, Gernot

    2018-01-01

    Computational fluid dynamics (CFD) models of blood flow in the left ventricle (LV) and aorta are important tools for analyzing the mechanistic links between myocardial deformation and flow patterns. Typically, the use of image-based kinematic CFD models prevails in applications such as predicting the acute response to interventions which alter LV afterload conditions. However, such models are limited in their ability to analyze any impacts upon LV load or key biomarkers known to be implicated in driving remodeling processes, as LV function is not accounted for in a mechanistic sense. This study addresses these limitations by reporting on progress made toward a novel electro-mechano-fluidic (EMF) model that represents the entire physics of LV electromechanics (EM) based on first principles. A biophysically detailed finite element (FE) model of LV EM was coupled with a FE-based CFD solver for moving domains using an arbitrary Lagrangian-Eulerian (ALE) formulation. Two clinical cases of patients suffering from aortic coarctations (CoA) were built and parameterized based on clinical data under pre-treatment conditions. For one patient case, simulations under post-treatment conditions after geometric repair of CoA by a virtual stenting procedure were compared against pre-treatment results. Numerical stability of the approach was demonstrated by analyzing mesh quality and solver performance under the significantly large deformations of the LV blood pool. Further, computational tractability and compatibility with clinical time scales were investigated by performing strong scaling benchmarks up to 1536 compute cores. The overall cost of the entire workflow for building, fitting and executing EMF simulations was comparable to those reported for image-based kinematic models, suggesting that EMF models show potential of evolving into a viable clinical research tool. PMID:29892227

  15. Design of orbital debris shields for oblique hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1994-01-01

    A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.

  16. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. We demonstrate a reinforcement-learning scheme that is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
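
    For spin-1/2 systems the variational ansatz in this approach is a restricted-Boltzmann-machine wave function, psi(s) proportional to exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i). The sketch below evaluates log-amplitudes and draws configurations with probability |psi|^2 using single-spin-flip Metropolis sampling; the parameters are random (untrained), so only the machinery is shown, not the variational optimization.

      import numpy as np

      rng = np.random.default_rng(7)
      n_spins, n_hidden = 8, 16

      # Random (untrained) RBM parameters -- for illustration only.
      a = 0.01 * rng.normal(size=n_spins)                  # visible biases
      b = 0.01 * rng.normal(size=n_hidden)                 # hidden biases
      W = 0.05 * rng.normal(size=(n_hidden, n_spins))      # couplings

      def log_psi(s):
          # Log-amplitude of the RBM wave function for spins s in {-1, +1}^n.
          theta = b + W @ s
          return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

      def metropolis_sample(n_samples, n_sweeps=10):
          # Sample configurations with probability |psi(s)|^2 via single-spin flips.
          s = rng.choice([-1.0, 1.0], size=n_spins)
          samples = []
          for _ in range(n_samples):
              for _ in range(n_sweeps * n_spins):
                  k = rng.integers(n_spins)
                  s_new = s.copy(); s_new[k] *= -1.0
                  if rng.random() < np.exp(2.0 * (log_psi(s_new) - log_psi(s))):
                      s = s_new
              samples.append(s.copy())
          return np.array(samples)

      samples = metropolis_sample(200)
      print("mean magnetization per spin:", round(float(samples.mean()), 3))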

  17. Answering Schrödinger's question: A free-energy formulation

    NASA Astrophysics Data System (ADS)

    Ramstead, Maxwell James Désormeau; Badcock, Paul Benjamin; Friston, Karl John

    2018-03-01

    The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.

  18. Economics and computer science of a radio spectrum reallocation.

    PubMed

    Leyton-Brown, Kevin; Milgrom, Paul; Segal, Ilya

    2017-07-11

    The recent "incentive auction" of the US Federal Communications Commission was the first auction to reallocate radio frequencies between two different kinds of uses: from broadcast television to wireless Internet access. The design challenge was not just to choose market rules to govern a fixed set of potential trades but also, to determine the broadcasters' property rights, the goods to be exchanged, the quantities to be traded, the computational procedures, and even some of the performance objectives. An essential and unusual challenge was to make the auction simple enough for human participants while still ensuring that the computations would be tractable and capable of delivering nearly efficient outcomes.

  19. Learning planar Ising models

    DOE PAGES

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.

  20. Learning planar Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
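
    The selection criterion used by the authors comes from exact inference on the candidate planar model; the sketch below keeps only the structural part of the idea, greedily adding the strongest remaining pairwise correlation as an edge whenever the graph remains planar (planarity tested with networkx). The correlation-magnitude score and the toy data are stand-in assumptions, not the paper's objective.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(8)

      # Toy binary data: 12 correlated +/-1 variables driven by 3 latent factors.
      n_vars, n_samples = 12, 5000
      latent = rng.normal(size=(n_samples, 3))
      X = np.sign(latent @ rng.normal(size=(3, n_vars))
                  + 0.5 * rng.normal(size=(n_samples, n_vars)))

      corr = np.corrcoef(X, rowvar=False)
      pairs = sorted(((abs(corr[i, j]), i, j)
                      for i in range(n_vars) for j in range(i + 1, n_vars)),
                     reverse=True)

      # Greedily add the strongest remaining edge whenever the graph stays planar.
      G = nx.Graph()
      G.add_nodes_from(range(n_vars))
      for score, i, j in pairs:
          G.add_edge(i, j)
          is_planar, _ = nx.check_planarity(G)
          if not is_planar:
              G.remove_edge(i, j)

      print(G.number_of_edges(), "edges kept; a planar graph has at most 3n - 6 =",
            3 * n_vars - 6, "edges")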

  1. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.

  2. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE PAGES

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.; ...

    2017-10-10

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.

  3. Geometric Modeling of Inclusions as Ellipsoids

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J.

    2008-01-01

    Nonmetallic inclusions in gas turbine disk alloys can have a significant detrimental impact on fatigue life. Because large inclusions that lead to anomalously low lives occur infrequently, probabilistic approaches can be utilized to avoid the excessively conservative assumption of lifing to a large inclusion in a high stress location. A prerequisite to modeling the impact of inclusions on the fatigue life distribution is a characterization of the inclusion occurrence rate and size distribution. To help facilitate this process, a geometric simulation of the inclusions was devised. To make the simulation problem tractable, the irregularly sized and shaped inclusions were modeled as arbitrarily oriented ellipsoids with three independently dimensioned axes. Random orientation of the ellipsoid is accomplished through a series of three orthogonal rotations of axes. In this report, a set of mathematical models is described for the following parameters: the intercepted area of a randomly sectioned ellipsoid, the dimensions and orientation of the intercepted ellipse, the area of a randomly oriented sectioned ellipse, the depth and width of a randomly oriented sectioned ellipse, and the projected area of a randomly oriented ellipsoid. These parameters are necessary to determine an inclusion's potential to develop a propagating fatigue crack. Without these mathematical models, computationally expensive search algorithms would be required to compute these parameters.
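
    One of the listed quantities, the projected area of an arbitrarily oriented ellipsoid, has a compact closed form: with semi-axes a, b, c and the viewing direction expressed as a unit vector u in the ellipsoid's principal frame, the projected area is pi * sqrt(b^2 c^2 u_1^2 + a^2 c^2 u_2^2 + a^2 b^2 u_3^2). The sketch below evaluates this for random orientations; the rotation sampling is an illustrative assumption and not necessarily the report's rotation convention.

      import numpy as np

      rng = np.random.default_rng(9)

      def projected_area(axes, R, direction):
          # Shadow area of an ellipsoid with semi-axes `axes`, rotated by R (body -> lab),
          # projected onto the plane perpendicular to `direction`.
          a, b, c = axes
          u = R.T @ (direction / np.linalg.norm(direction))       # view direction in body frame
          M = np.diag([b * b * c * c, a * a * c * c, a * a * b * b])
          return np.pi * np.sqrt(u @ M @ u)

      def random_rotation():
          # Approximately uniform random rotation from the QR decomposition of a Gaussian
          # matrix (a possible reflection is harmless for projected areas).
          Q, Rm = np.linalg.qr(rng.normal(size=(3, 3)))
          return Q * np.sign(np.diag(Rm))

      axes = (3.0, 2.0, 1.0)
      # Sanity check: no rotation, viewing along z, gives pi*a*b.
      print(projected_area(axes, np.eye(3), np.array([0.0, 0.0, 1.0])), "vs", np.pi * 3.0 * 2.0)
      areas = [projected_area(axes, random_rotation(), np.array([0.0, 0.0, 1.0]))
               for _ in range(20000)]
      print("orientation-averaged projected area =", round(float(np.mean(areas)), 3))

    By Cauchy's formula the orientation-averaged projected area of a convex body equals one quarter of its surface area, which provides an independent check on such a simulation.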

  4. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    DOE PAGES

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.; ...

    2017-03-15

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present paper, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. Finally, the experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  5. Autonomous Guidance of Agile Small-scale Rotorcraft

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Feron, Eric

    2004-01-01

    This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear-time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer linear programming. The optimization is solved in a receding horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.

  6. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present paper, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. Finally, the experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  7. Unified sensor management in unknown dynamic clutter

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2010-04-01

    In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.

  8. Renewal of the Attentive Sensing Project

    DTIC Science & Technology

    2006-02-07

    decisions about target presence or absence, is denoted track before detect. We have investigated joint tracking and detection in the context of the foveal...computationally tractable bounds. Task 2: Sensor Configuration for Tracking and Track Before Detect. Task 2 consisted of investigation of attentive...strategy to multiple targets and to track before detect sensors. To apply principles developed in the context of foveal sensors to more immediately

  9. Generation of anisotropy in turbulent flows subjected to rapid distortion

    NASA Astrophysics Data System (ADS)

    Clark, Timothy T.; Kurien, Susan; Rubinstein, Robert

    2018-01-01

    A computational tool for the anisotropic time-evolution of the spectral velocity correlation tensor is presented. We operate in the linear, rapid distortion limit of the mean-field-coupled equations. Each term of the equations is written in the form of an expansion to arbitrary order in the basis of irreducible representations of the SO(3) symmetry group. The computational algorithm for this calculation solves a system of coupled equations for the scalar weights of each generated anisotropic mode. The analysis demonstrates that rapid distortion rapidly but systematically generates higher-order anisotropic modes. To maintain a tractable computation, the maximum number of rotational modes to be used in a given calculation is specified a priori. The computed Reynolds stress converges to the theoretical result derived by Batchelor and Proudman [Quart. J. Mech. Appl. Math. 7, 83 (1954), 10.1093/qjmam/7.1.83] if a sufficiently large maximum number of rotational modes is utilized; more modes are required to recover the solution at later times. The emergence and evolution of the underlying multidimensional space of functions is presented here using a 64-mode calculation. Alternative implications for modeling strategies are discussed.

  10. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capture cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.
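
    For orientation, the sketch below spells out the scalar branching-process-style fixed point that this framework generalizes, written for the Watts threshold model on a configuration-model network with Poisson degrees; the mean degree, threshold, and seed fraction are illustrative values, not parameters from the paper.

```python
import numpy as np
from scipy.stats import poisson, binom

# Watts threshold model on a configuration-model network with Poisson
# degree distribution: iterate the probability q that a node reached by
# following an edge is active.  (The paper's framework replaces this
# scalar iteration by iterating full probability distributions.)
z, phi, rho0 = 5.0, 0.18, 0.01        # mean degree, threshold, seed fraction
kmax = 40
ks = np.arange(kmax + 1)
pk = poisson.pmf(ks, z)
qk = ks * pk / z                      # degree distribution of a random neighbor

def response(m, k):
    """Node of degree k activates once m of its neighbors are active."""
    return 1.0 if k > 0 and m / k >= phi else 0.0

q = rho0
for _ in range(200):
    q_new = rho0
    for k in ks[1:]:
        pm = binom.pmf(np.arange(k), k - 1, q)
        q_new += (1 - rho0) * qk[k] * sum(pm[m] * response(m, k) for m in range(k))
    q = q_new

# Expected final fraction of active nodes.
rho = rho0
for k in ks[1:]:
    pm = binom.pmf(np.arange(k + 1), k, q)
    rho += (1 - rho0) * pk[k] * sum(pm[m] * response(m, k) for m in range(k + 1))
print("predicted cascade size: %.3f" % rho)
```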

  11. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and a mixed type of standby support has been studied. The repair job of broken-down machines is carried out on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair service once the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS), and the numerical results obtained by the Runge-Kutta approach are validated against the computational results generated by ANFIS.

  12. Rotorcraft application of advanced computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stanaway, Sharon

    1991-01-01

    The objective was to develop the capability to compute the unsteady viscous flow around rotor-body combinations. In the interest of tractability, the problem was divided into subprograms for: (1) computing the flow around a rotor blade in isolation; (2) computing the flow around a fuselage in isolation, and (3) integrating the pieces. Considerable progress has already been made by others toward computing the rotor in isolation (Srinivasen) and this work focused on the remaining tasks. These tasks required formulating a multi-block strategy for combining rotating blades and nonrotating components (i.e., a fuselage). Then an appropriate configuration was chosen for which suitable rotor body interference test data exists. Next, surface and volume grids were generated and state-of-the-art CFD codes were modified and applied to the problem.

  13. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    PubMed Central

    2012-01-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
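
    As a concrete instance of the kind of exact calculation this framework targets, the short sketch below evaluates the Ewens sampling formula, which gives the exact probability of an allele configuration under the infinite-alleles model; the sample configuration and the mutation parameter theta are illustrative and unrelated to the framework's own API.

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Exact probability of an infinite-alleles allele configuration.

    `a` maps j to a_j, the number of allelic types observed exactly j
    times, so the sample size is n = sum(j * a_j).
    """
    n = sum(j * aj for j, aj in a.items())
    rising = prod(theta + i for i in range(n))            # theta (theta+1) ... (theta+n-1)
    num = prod(theta**aj / (j**aj * factorial(aj)) for j, aj in a.items())
    return factorial(n) * num / rising

# Example: a sample of n = 5 genes with one allele seen 3 times and one seen twice.
config = {3: 1, 2: 1}
print(ewens_probability(config, theta=1.0))
```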

  14. An Empirical Polarizable Force Field Based on the Classical Drude Oscillator Model: Development History and Recent Applications

    PubMed Central

    2016-01-01

    Molecular mechanics force fields that explicitly account for induced polarization represent the next generation of physical models for molecular dynamics simulations. Several methods exist for modeling induced polarization, and here we review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core atoms by harmonic springs. We describe the latest developments in Drude force field parametrization and application, primarily in the last 15 years. Emphasis is placed on the Drude-2013 polarizable force field for proteins, DNA, lipids, and carbohydrates. We discuss its parametrization protocol, development history, and recent simulations of biologically interesting systems, highlighting specific studies in which induced polarization plays a critical role in reproducing experimental observables and understanding physical behavior. As the Drude oscillator model is computationally tractable and available in a wide range of simulation packages, it is anticipated that use of these more complex physical models will lead to new and important discoveries of the physical forces driving a range of chemical and biological phenomena. PMID:26815602

  15. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
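
    A minimal numerical sketch of the control-affine structure exploited above: because the approximate model is linear in the control input, the tracking control follows in closed form. The maps f_hat and g_hat are hypothetical placeholders standing in for trained neural-network approximators, and the toy plant is not from the paper.

```python
import numpy as np

# Approximate model linear in the input: y(k+1) ~= f(x_k) + g(x_k) * u(k),
# where x_k collects past outputs/inputs.  f_hat and g_hat below are
# hypothetical stand-ins for trained neural-network approximators.
f_hat = lambda x: 0.5 * np.tanh(x[0]) + 0.1 * x[1]   # placeholder
g_hat = lambda x: 1.0 + 0.2 * np.cos(x[0])           # placeholder, kept away from zero

def tracking_control(x, y_ref, u_limit=5.0):
    """Closed-form control for a model that is linear in u."""
    u = (y_ref - f_hat(x)) / g_hat(x)
    return np.clip(u, -u_limit, u_limit)

# A few steps of simulated closed-loop operation on a toy plant that
# happens to coincide with the approximate model (plus noise), so the
# output settles near the reference almost immediately.
plant = lambda x, u: f_hat(x) + g_hat(x) * u + 0.01 * np.random.randn()
x = np.array([0.0, 0.0])           # [y(k), u(k-1)]
for k in range(5):
    u = tracking_control(x, y_ref=1.0)
    y_next = plant(x, u)
    x = np.array([y_next, u])
    print(k, round(y_next, 3))
```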

  16. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  17. Economics and computer science of a radio spectrum reallocation

    PubMed Central

    Leyton-Brown, Kevin; Segal, Ilya

    2017-01-01

    The recent “incentive auction” of the US Federal Communications Commission was the first auction to reallocate radio frequencies between two different kinds of uses: from broadcast television to wireless Internet access. The design challenge was not just to choose market rules to govern a fixed set of potential trades but also, to determine the broadcasters’ property rights, the goods to be exchanged, the quantities to be traded, the computational procedures, and even some of the performance objectives. An essential and unusual challenge was to make the auction simple enough for human participants while still ensuring that the computations would be tractable and capable of delivering nearly efficient outcomes. PMID:28652335

  18. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log(1/ɛ)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
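
    For contrast, the sketch below runs plain pairwise randomized gossip on a random geometric graph, the baseline that Multi-scale Gossip improves upon by routing fixed-size messages along a hierarchical decomposition of the graph; the node count, connection radius, and iteration budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random geometric graph: n nodes in the unit square, edge if distance < r.
n, r = 200, 0.15
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adj = (dist < r) & ~np.eye(n, dtype=bool)
neighbors = [np.flatnonzero(adj[i]) for i in range(n)]

# Plain pairwise randomized gossip: at each step a random node averages
# its value with a random neighbor.  Each averaging step preserves the
# global sum, so the values contract toward the true mean.
x = rng.random(n)
target = x.mean()
for step in range(20000):
    i = rng.integers(n)
    if neighbors[i].size == 0:
        continue
    j = rng.choice(neighbors[i])
    x[i] = x[j] = 0.5 * (x[i] + x[j])

print("true mean %.4f, max deviation %.2e" % (target, np.abs(x - target).max()))
```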

  19. Generation of dynamo magnetic fields in protoplanetary and other astrophysical accretion disks

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1988-01-01

    A computational method for treating the generation of dynamo magnetic fields in astrophysical disks is presented. The numerical difficulty of handling the boundary condition at infinity in the cylindrical disk geometry is overcome by embedding the disk in a spherical computational space and matching the solutions to analytically tractable spherical functions in the surrounding space. The lowest lying dynamo normal modes for a 'thick' astrophysical disk are calculated. The generated modes found are all oscillatory and spatially localized. Tha potential implications of the results for the properties of dynamo magnetic fields in real astrophysical disks are discussed.

  20. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
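
    The sketch below writes down the covariance structure the method exploits (a damped, oscillatory mixture-of-exponentials kernel) and evaluates the GP log-likelihood the naive O(n^3) way; the open-source implementations described in the paper evaluate the same likelihood in O(n) time. Kernel parameters and the toy time series are illustrative.

```python
import numpy as np

def damped_oscillator_kernel(tau, a, b, c, d):
    """Mixture-of-exponentials term: exp(-c*tau) * (a*cos(d*tau) + b*sin(d*tau))."""
    return np.exp(-c * tau) * (a * np.cos(d * tau) + b * np.sin(d * tau))

def gp_log_likelihood(t, y, yerr, a, b, c, d):
    """Naive O(n^3) GP log-likelihood; the paper's method achieves O(n)."""
    tau = np.abs(t[:, None] - t[None, :])
    K = damped_oscillator_kernel(tau, a, b, c, d) + np.diag(yerr ** 2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * (y @ alpha) - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

# Irregularly sampled toy time series (no evenly spaced grid required).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 150))
yerr = 0.1 * np.ones_like(t)
y = np.sin(2.5 * t) + yerr * rng.standard_normal(t.size)
print(gp_log_likelihood(t, y, yerr, a=1.0, b=0.1, c=0.3, d=2.5))
```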

  1. Extending the Stabilized Supralinear Network model for binocular image processing.

    PubMed

    Selby, Ben; Tripp, Bryan

    2017-06-01

    The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Linear Models for Systematics and Nuisances

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.

    2017-12-01

    The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (i.e., stellar time-series, or high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
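
    A small sketch of the marginalization described in the Note: with a Gaussian prior N(0, Lambda) on the linear weights, the weights integrate out analytically, leaving a Gaussian likelihood whose covariance is C = sigma^2 I + A Lambda A^T. The basis functions standing in for housekeeping predictors and the toy transit-like signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "light curve": a small transit-like dip plus a smooth systematic trend.
t = np.linspace(0, 1, 300)
signal = -0.01 * ((t > 0.45) & (t < 0.55))
systematic = 0.02 * np.sin(9 * t) + 0.01 * t
sigma = 0.003
y = signal + systematic + sigma * rng.standard_normal(t.size)

# Design matrix of simple basis functions standing in for housekeeping
# predictors (polynomials and sinusoids here are illustrative choices).
A = np.column_stack([np.ones_like(t), t, t**2, np.sin(9 * t), np.cos(9 * t)])
Lambda = 1.0 * np.eye(A.shape[1])          # Gaussian prior on the weights

# Marginal likelihood covariance: C = sigma^2 I + A Lambda A^T.
C = sigma**2 * np.eye(t.size) + A @ Lambda @ A.T

# Residual after subtracting a trial transit model (here the true one, for brevity).
r = y - signal
Cinv_r = np.linalg.solve(C, r)
sign, logdet = np.linalg.slogdet(C)
log_marginal = -0.5 * (r @ Cinv_r + logdet + t.size * np.log(2 * np.pi))
print("marginal log-likelihood with weights integrated out: %.1f" % log_marginal)
```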

  3. A mathematical model for simulating noise suppression of lined ejectors

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.

    1994-01-01

    A mathematical model containing the essential features embodied in the noise suppression of lined ejectors is presented. Although some simplification of the physics is necessary to render the model mathematically tractable, the present model is the most versatile and technologically advanced currently available. A system of linearized equations and the boundary conditions governing the sound field are derived starting from the equations of fluid dynamics. A nonreflecting boundary condition is developed. In view of the complex nature of the equations, a parametric study requires the use of numerical techniques and modern computers. A finite element algorithm that solves the differential equations coupled with the boundary condition is then introduced. The numerical method results in a matrix equation with several hundred thousand degrees of freedom that is solved efficiently on a supercomputer. The model is validated by comparing results either with exact solutions or with approximate solutions from other works. In each case, excellent correlations are obtained. The usefulness of the model as an optimization tool and the importance of variable impedance liners as a mechanism for achieving broadband suppression within a lined ejector are demonstrated.

  4. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  5. A meteorologically driven maize stress indicator model

    NASA Technical Reports Server (NTRS)

    Taylor, T. W.; Ravet, F. W. (Principal Investigator)

    1981-01-01

    A maize soil moisture and temperature stress model is described which was developed to serve as a meteorological data filter to alert commodity analysts to potential stress conditions in the major maize-producing areas of the world. The model also identifies optimum climatic conditions and planting/harvest problems associated with poor tractability.

  6. Brachypodium as a model for the grasses: today and the future

    USDA-ARS?s Scientific Manuscript database

    Over the past several years, Brachypodium distachyon (Brachypodium) has emerged as a tractable model system to study biological questions relevant to the grasses. To place its relevance in the larger context of plant biology, we outline here the expanding adoption of Brachypodium as a model grass an...

  7. A Computational Model Predicting Disruption of Blood Vessel Development

    PubMed Central

    Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas

    2013-01-01

    Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic series of events and for the first time demonstrate the applicability of these models for predictive toxicology. PMID:23592958

  8. Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs

    NASA Astrophysics Data System (ADS)

    Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes

    We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.

  9. Knowledge-Based Environmental Context Modeling

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient use of our renewable natural resources. [1] C2M2L (Component, Context, and Manufacturing Model Library) Final Report, https://doi.org/10.13140/RG.2.1.4956.3604

  10. Suppression of Soot Formation and Shapes of Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Xu, F.; Dai, Z.; Faeth, G. M.

    2001-01-01

    Laminar nonpremixed (diffusion) flames are of interest because they provide model flame systems that are far more tractable for analysis and experiments than practical turbulent flames. In addition, many properties of laminar diffusion flames are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Finally, laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame shape predictions. Motivated by these observations, the shapes of round hydrocarbon-fueled laminar jet diffusion flames were considered, emphasizing conditions where effects of buoyancy are small because most practical flames are not buoyant. Earlier studies of shapes of hydrocarbon-fueled nonbuoyant laminar jet diffusion flames considered combustion in still air and have shown that flames at the laminar smoke point are roughly twice as long as corresponding soot-free (blue) flames and have developed simple ways to estimate their shapes. Corresponding studies of hydrocarbon-fueled weakly-buoyant laminar jet diffusion flames in coflowing air have also been reported. These studies were limited to soot-containing flames at laminar smoke point conditions and also developed simple ways to estimate their shapes but the behavior of corresponding soot-free flames has not been addressed. This is unfortunate because ways of selecting flame flow properties to reduce soot concentrations are of great interest; in addition, soot-free flames are fundamentally important because they are much more computationally tractable than corresponding soot-containing flames. Thus, the objectives of the present investigation were to observe the shapes of weakly-buoyant laminar jet diffusion flames at both soot-free and smoke point conditions and to use the results to evaluate simplified flame shape models. The present discussion is brief.

  11. Active Subspaces for Wind Plant Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N; Quick, Julian; Dykes, Katherine L

    Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
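
    A generic sketch of the active-subspace recipe applied in the paper: estimate C = E[grad f grad f^T] from gradient samples, take its dominant eigenvector, and fit a low-order surrogate along that single direction. The toy objective below stands in for the wind plant power model; in practice the gradients would come from adjoints or finite differences.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy objective standing in for plant power as a function of m axial
# induction factors; its gradient is available analytically here.
m = 8
w = rng.standard_normal(m)
f = lambda x: np.sin(x @ w)
grad_f = lambda x: np.cos(x @ w) * w

# Estimate the gradient outer-product matrix C from Monte Carlo samples.
X = rng.uniform(-1, 1, size=(500, m))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# The dominant eigenvector defines the (here one-dimensional) active subspace.
eigval, eigvec = np.linalg.eigh(C)
w1 = eigvec[:, -1]
print("fraction of gradient variance captured:", eigval[-1] / eigval.sum())

# Quadratic surrogate in the single active variable s = w1 . x.
s = X @ w1
coeffs = np.polyfit(s, [f(x) for x in X], deg=2)
x_test = rng.uniform(-1, 1, m)
print("surrogate:", np.polyval(coeffs, x_test @ w1), " true:", f(x_test))
```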

  12. Modeling Terminal Velocity

    ERIC Educational Resources Information Center

    Brand, Neal; Quintanilla, John A.

    2013-01-01

    Using a simultaneously falling softball as a stopwatch, the terminal velocity of a whiffle ball can be obtained to surprisingly high accuracy with only common household equipment. This classroom activity engages students in an apparently daunting task that nevertheless is tractable, using a simple model and mathematical techniques at their…
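
    One way to see why the task is tractable with a quadratic-drag model: the fall distance obeys y(t) = (v_t^2/g) ln cosh(g t / v_t), so a single simultaneous measurement pins down v_t. The sketch below assumes both balls are released together from height h and that the whiffle ball's fall distance d is read off at the instant the softball lands; this measurement protocol is an assumption for illustration, not necessarily the procedure used in the article.

```python
import numpy as np
from scipy.optimize import brentq

g = 9.81   # m/s^2
h = 5.0    # drop height of the softball (m), illustrative
d = 3.2    # distance the whiffle ball has fallen when the softball lands (m), illustrative

T = np.sqrt(2 * h / g)      # softball fall time, treating its drag as negligible

def fall_distance(vt, t):
    """Quadratic-drag fall distance: y(t) = (vt^2 / g) * ln(cosh(g t / vt))."""
    return vt**2 / g * np.log(np.cosh(g * t / vt))

# Solve fall_distance(vt, T) = d for the terminal velocity vt.
vt = brentq(lambda v: fall_distance(v, T) - d, 1.0, 100.0)
print("estimated terminal velocity: %.1f m/s" % vt)
```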

  13. Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.

    2003-08-01

    We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology is illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.
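
    A schematic of the thermodynamic-perturbation step described above: configurations sampled with the inexpensive reference Hamiltonian are re-weighted to estimate the free-energy difference to the target Hamiltonian via the Zwanzig formula, dA = -kT ln<exp(-dU/kT)>. The one-dimensional toy potentials below stand in for the all-classical and mixed quantum/classical Hamiltonians.

```python
import numpy as np

rng = np.random.default_rng(4)
kT = 0.593  # kcal/mol at roughly 298 K

# Toy 1-D potentials standing in for the all-classical reference
# Hamiltonian (U0) and the mixed quantum/classical target (U1).
U0 = lambda x: 2.0 * x**2
U1 = lambda x: 2.0 * x**2 + 0.4 * np.sin(3.0 * x) + 0.2 * x

# Metropolis sampling with the cheap reference Hamiltonian only.
x, samples = 0.0, []
for step in range(200_000):
    x_new = x + 0.3 * rng.standard_normal()
    if rng.random() < np.exp(-(U0(x_new) - U0(x)) / kT):
        x = x_new
    if step % 20 == 0:               # thin to roughly uncorrelated samples
        samples.append(x)
samples = np.array(samples)

# Zwanzig free-energy perturbation: the expensive target Hamiltonian is
# evaluated only on the retained, thinned configurations.
dU = U1(samples) - U0(samples)
dA = -kT * np.log(np.mean(np.exp(-dU / kT)))
print("estimated free-energy difference: %.3f kcal/mol" % dA)
```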

  14. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
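
    A generic sketch of the workflow the abstract outlines: Metropolis sampling of uncertain parameters conditioned on noisy observations, followed by pushing posterior samples through a predictive model to characterize the quantity of interest. The linear forward model and scalar QoI below are toy placeholders, not ice-sheet physics.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model: observations depend linearly on two uncertain
# parameters (stand-ins for basal friction and bed topography scales).
theta_true = np.array([1.2, -0.5])
H = rng.standard_normal((20, 2))
sigma = 0.1
data = H @ theta_true + sigma * rng.standard_normal(20)

def log_post(theta):
    resid = data - H @ theta
    return -0.5 * (resid @ resid) / sigma**2 - 0.5 * (theta @ theta)   # Gaussian prior

# Metropolis-Hastings over the parameters.
theta = np.zeros(2)
lp = log_post(theta)
posterior = []
for step in range(20_000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    posterior.append(theta.copy())
posterior = np.array(posterior[5000:])       # discard burn-in

# A predictive model maps parameters to the quantity of interest
# (here a toy "future volume"); propagate posterior samples through it.
qoi = lambda th: 10.0 + 3.0 * th[0] - 2.0 * th[1] ** 2
qoi_samples = np.array([qoi(th) for th in posterior])
print("QoI mean %.2f, 90%% interval (%.2f, %.2f)" %
      (qoi_samples.mean(), *np.percentile(qoi_samples, [5, 95])))
```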

  15. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast-SNP (Fast Sparse Null-space Pursuit)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155

  16. Molecular dynamics simulations on the inhibition of cyclin-dependent kinases 2 and 5 in the presence of activators.

    PubMed

    Zhang, Bing; Tan, Vincent B C; Lim, Kian Meng; Tay, Tong Earn

    2006-06-01

    Interests in CDK2 and CDK5 have stemmed mainly from their association with cancer and neuronal migration or differentiation related diseases and the need to design selective inhibitors for these kinases. Molecular dynamics (MD) simulations have not only become a viable approach to drug design because of advances in computer technology but are increasingly an integral part of drug discovery processes. It is common in MD simulations of inhibitor/CDK complexes to exclude the activator of the CDKs in the structural models to keep computational time tractable. In this paper, we present simulation results of CDK2 and CDK5 with roscovitine using models with and without their activators (cyclinA and p25). While p25 was found to induce slight changes in CDK5, the calculations support that cyclinA leads to significant conformational changes near the active site of CDK2. This suggests that detailed and structure-based inhibitor design targeted at these CDKs should employ activator-included models of the kinases. Comparisons between P/CDK2/cyclinA/roscovitine and CDK5/p25/roscovitine complexes reveal differences in the conformations of the glutamine around the active sites, which may be exploited to find highly selective inhibitors with respect to CDK2 and CDK5.

  17. Learning for Semantic Parsing with Kernels under Various Forms of Supervision

    DTIC Science & Technology

    2007-08-01

    natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing...sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187-194

  18. Analytical Modelling of the Spread of Disease in Confined and Crowded Spaces

    NASA Astrophysics Data System (ADS)

    Goscé, Lara; Barton, David A. W.; Johansson, Anders

    2014-05-01

    Since 1927 and until recently, most models describing the spread of disease have been of compartmental type, based on the assumption that populations are homogeneous and well-mixed. Recent models have utilised agent-based models and complex networks to explicitly study heterogeneous interaction patterns, but this leads to an increasing computational complexity. Compartmental models are appealing because of their simplicity, but their parameters, especially the transmission rate, are complex and depend on a number of factors, which makes it hard to predict how a change of a single environmental, demographic, or epidemiological factor will affect the population. Therefore, in this contribution we propose a middle ground, utilising crowd-behaviour research to improve compartmental models in crowded situations. We show how both the rate of infection as well as the walking speed depend on the local crowd density around an infected individual. The combined effect is that the rate of infection at a population scale has an analytically tractable non-linear dependency on crowd density. We model the spread of a hypothetical disease in a corridor and compare our new model with a typical compartmental model, which highlights the regime in which current models may not produce credible results.
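
    A minimal sketch of the hybrid idea: a standard SIR compartmental model whose transmission rate is an explicit function of local crowd density rather than a single fitted constant. The saturating form of beta(rho) below is an illustrative placeholder for the density dependence derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Transmission rate as a function of crowd density rho (people per m^2).
# The functional form is an illustrative placeholder, not the paper's
# analytically derived density dependence.
def beta(rho, beta0=0.8):
    return beta0 * rho / (1.0 + rho)

def sir(t, y, rho, gamma=0.2):
    S, I, R = y
    b = beta(rho)
    return [-b * S * I, b * S * I - gamma * I, gamma * I]

# Compare outbreak sizes at several crowd densities.
for rho in (0.5, 2.0, 5.0):
    sol = solve_ivp(sir, (0, 60), [0.99, 0.01, 0.0], args=(rho,))
    S_final = sol.y[0, -1]
    print("density %.1f /m^2 -> final attack rate %.2f" % (rho, 1.0 - S_final))
```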

  19. The potential for fast van der Waals computations for layered materials using a Lifshitz model

    NASA Astrophysics Data System (ADS)

    Zhou, Yao; Pellouchoud, Lenson A.; Reed, Evan J.

    2017-06-01

    Computation of the van der Waals (vdW) interactions plays a crucial role in the study of layered materials. The adiabatic-connection fluctuation-dissipation theorem within random phase approximation (ACFDT-RPA) has been empirically reported to be the most accurate of commonly used methods, but it is limited to small systems due to its computational complexity. Without a computationally tractable vdW correction, fictitious strains are often introduced in the study of multilayer heterostructures, which, we find, can change the vdW binding energy by as much as 15%. In this work, we employed for the first time a defined Lifshitz model to provide the vdW potentials for a spectrum of layered materials orders of magnitude faster than the ACFDT-RPA for representative layered material structures. We find that a suitably defined Lifshitz model gives the correlation component of the binding energy to within 8-20% of the ACFDT-RPA calculations for a variety of layered heterostructures. Using this fast Lifshitz model, we studied the vdW binding properties of 210 three-layered heterostructures. Our results demonstrate that the three-body vdW effects are generally small (10% of the binding energy) in layered materials for most cases, and that non-negligible second-nearest neighbor layer interaction and three-body effects are observed for only those cases in which the middle layer is atomically thin (e.g. BN or graphene). We find that there is potential for particular combinations of stacked layers to exhibit repulsive three-body van der Waals effects, although these effects are likely to be much smaller than two-body effects.

  20. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
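
    The sketch below sets up a toy instance of this class of spatial-domain inversion: a forward matrix maps non-negative dipole moments to measured vertical-field values, and non-negative least squares recovers the moments. SciPy's generic nnls solver stands in for the TNT algorithm, which is not part of SciPy, and the one-dimensional scan geometry and point-dipole kernel are simplifications for illustration.

```python
import numpy as np
from scipy.optimize import nnls

mu0_over_4pi = 1e-7  # T m / A

# Grid of candidate vertical dipole sources at depth z0 below the sensor
# plane, and a finer 1-D line of vertical-field (Bz) measurement points.
nx = 20
xs = np.linspace(0, 1e-3, nx)            # 1 mm sample, 20 source cells
z0 = 1e-4                                # 100 um sensor-to-sample distance
meas_x = np.linspace(0, 1e-3, 40)        # 40 measurement points

def bz_kernel(x_obs, x_src, z):
    """Vertical field of a unit vertical point dipole (1-D scan slice)."""
    dx = x_obs - x_src
    r2 = dx**2 + z**2
    return mu0_over_4pi * (2 * z**2 - dx**2) / r2**2.5

A = np.array([[bz_kernel(xo, xsrc, z0) for xsrc in xs] for xo in meas_x])

# Synthetic sample: two magnetized patches, plus measurement noise.
m_true = np.zeros(nx)
m_true[4:7] = 2e-14      # A m^2
m_true[13] = 5e-14
b = A @ m_true + 1e-12 * np.random.default_rng(6).standard_normal(meas_x.size)

# Non-negative least squares inversion for the moment magnitudes.
m_est, residual_norm = nnls(A, b)
print("largest recovered moments at cells:", np.argsort(m_est)[-4:],
      "(true sources at cells 4-6 and 13)")
```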

  1. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.

  2. The genetic basis of alcoholism: multiple phenotypes, many genes, complex networks.

    PubMed

    Morozova, Tatiana V; Goldman, David; Mackay, Trudy F C; Anholt, Robert R H

    2012-02-20

    Alcoholism is a significant public health problem. A picture of the genetic architecture underlying alcohol-related phenotypes is emerging from genome-wide association studies and work on genetically tractable model organisms.

  3. Rheological Models in the Time-Domain Modeling of Seismic Motion

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.

    2004-12-01

    The time-domain stress-strain relation in a viscoelastic medium has the form of a convolution integral, which is numerically intractable. This was the reason for the oversimplified models of attenuation in time-domain seismic wave propagation and earthquake motion modeling. In their pioneering work, Day and Minster (1984) showed how to convert the integral into a numerically tractable differential form in the case of a general viscoelastic modulus. In response to the work by Day and Minster, Emmerich and Korn (1987) suggested using the rheology of their generalized Maxwell body (GMB), while Carcione et al. (1988) suggested using the generalized Zener body (GZB). The viscoelastic moduli of both rheological models have the form of a rational function, and thus the differential form of the stress-strain relation is rather easy to obtain. After the papers by Emmerich and Korn and Carcione et al., numerical modelers opted for either the GMB or the GZB rheology and developed 'non-communicating' algorithms. In many of the following papers, the authors using the GMB never commented on the GZB rheology and the corresponding algorithms, and the authors using the GZB never related their methods to the GMB rheology and algorithms. We analyze and compare both rheologies and the corresponding incorporations of realistic attenuation into time-domain computations. We then focus on the most recent staggered-grid finite-difference modeling, mainly on accounting for material heterogeneity in viscoelastic media, and on the computational efficiency of the finite-difference algorithms.
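
    For reference, one widely used differential (memory-variable) form of the GMB stress-strain relation is, under a common convention (stated here from general practice, not quoted from either original paper):

```latex
\sigma(t) \;=\; M_U\,\varepsilon(t) \;-\; \sum_{l=1}^{n}\zeta_l(t),
\qquad
\dot{\zeta}_l(t) \;+\; \omega_l\,\zeta_l(t) \;=\; \omega_l\,M_U\,Y_l\,\varepsilon(t),
```

    where M_U is the unrelaxed modulus, the \omega_l are relaxation frequencies, and the anelastic coefficients Y_l are fitted to the target Q(\omega); the convolution integral never appears, which is what makes the time-domain computation tractable.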

  4. An application of nonlinear programming to the design of regulators of a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.

  5. Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities

    PubMed Central

    Bardhan, Jaydeep P.

    2014-01-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics. PMID:25505358

  6. Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities.

    PubMed

    Bardhan, Jaydeep P

    2013-12-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.

  7. Gradient models in molecular biophysics: progress, challenges, opportunities

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.

    2013-12-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g., molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding nonlocal dielectric response. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain, and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost 40 years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The review concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.

  8. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  9. Investigation of Coal-biomass Catalytic Gasification using Experiments, Reaction Kinetics and Computational Fluid Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaglia, Francine; Agblevor, Foster; Klein, Michael

    A collaborative effort involving experiments, kinetic modeling, and computational fluid dynamics (CFD) was used to understand co-gasification of coal-biomass mixtures. The overall goal of the work was to determine the key reactive properties for coal-biomass mixed fuels. Sub-bituminous coal was mixed with biomass feedstocks to determine the fluidization and gasification characteristics of hybrid poplar wood, switchgrass and corn stover. It was found that corn stover and poplar wood were the best feedstocks to use with coal. The novel approach of this project was the use of a red mud catalyst to improve gasification and lower gasification temperatures. An important result was the reduction of agglomeration of the biomass using the catalyst. An outcome of this work was the characterization of the chemical kinetics and reaction mechanisms of the co-gasification fuels, and the development of a set of models that can be integrated into other modeling environments. The multiphase flow code, MFIX, was used to simulate and predict the hydrodynamics and co-gasification, and results were validated with the experiments. The reaction kinetics modeling was used to develop a smaller set of reactions for tractable CFD calculations that represented the experiments. Finally, an efficient tool was developed, MCHARS, and coupled with MFIX to efficiently simulate the complex reaction kinetics.

  10. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  11. Automated Monitoring and Analysis of Social Behavior in Drosophila

    PubMed Central

    Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro

    2009-01-01

    We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697

  12. [Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has been focused in the main on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas which are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW, words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

  13. Hydrodynamic theory of diffusion in two-temperature multicomponent plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, J.D.; Chang, C.H.

    Detailed numerical simulations of multicomponent plasmas require tractable expressions for species diffusion fluxes, which must be consistent with the given plasma current density J_q to preserve local charge neutrality. The common situation in which J_q = 0 is referred to as ambipolar diffusion. The use of formal kinetic theory in this context leads to results of formidable complexity. We derive simple tractable approximations for the diffusion fluxes in two-temperature multicomponent plasmas by means of a generalization of the hydrodynamical approach used by Maxwell, Stefan, Furry, and Williams. The resulting diffusion fluxes obey generalized Stefan-Maxwell equations that contain driving forces corresponding to ordinary, forced, pressure, and thermal diffusion. The ordinary diffusion fluxes are driven by gradients in pressure fractions rather than mole fractions. Simplifications due to the small electron mass are systematically exploited and lead to a general expression for the ambipolar electric field in the limit of infinite electrical conductivity. We present a self-consistent effective binary diffusion approximation for the diffusion fluxes. This approximation is well suited to numerical implementation and is currently in use in our LAVA computer code for simulating multicomponent thermal plasmas. Applications to date include a successful simulation of demixing effects in an argon-helium plasma jet, for which selected computational results are presented. Generalizations of the diffusion theory to finite electrical conductivity and nonzero magnetic field are currently in progress.
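
    For orientation, the classical Stefan-Maxwell relations that the above generalizes can be written as follows; this is the textbook isothermal, ordinary-diffusion form, not the two-temperature version with forced, pressure, and thermal driving forces derived in the record above.

    ```latex
    % Classical Stefan-Maxwell relations for an N-component mixture
    % (isothermal, ordinary diffusion only). The cited work generalizes the
    % driving forces and works with pressure fractions instead of mole fractions.
    \nabla x_i \;=\; \sum_{j \ne i} \frac{x_i x_j}{\mathcal{D}_{ij}}
      \left( \mathbf{v}_j - \mathbf{v}_i \right), \qquad i = 1, \dots, N
    ```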

  14. A meteorologically driven grain sorghum stress indicator model

    NASA Technical Reports Server (NTRS)

    Taylor, T. W.; Ravet, F. W. (Principal Investigator)

    1981-01-01

    A grain sorghum soil moisture and temperature stress model is described. It was developed to serve as a meteorological data filter to alert commodity analysts to potential stress conditions and crop phenology in selected grain sorghum production areas. The model also identifies optimum conditions on a daily basis and planting/harvest problems associated with poor tractability.

  15. Examining Trust, Forgiveness and Regret as Computational Concepts

    NASA Astrophysics Data System (ADS)

    Marsh, Stephen; Briggs, Pamela

    The study of trust has advanced tremendously in recent years, to the extent that the goal of a more unified formalisation of the concept is becoming feasible. To that end, we have begun to examine the closely related concepts of regret and forgiveness and their relationship to trust and its siblings. The resultant formalisation allows computational tractability in, for instance, artificial agents. Moreover, regret and forgiveness, when allied to trust, are very powerful tools in the Ambient Intelligence (AmI) security area, especially where Human Computer Interaction and concrete human understanding are key. This paper introduces the concepts of regret and forgiveness, exploring them from a social psychological as well as a computational viewpoint, and presents an extension to Marsh's original trust formalisation that takes them into account. It discusses and explores work in the AmI environment, and further potential applications.

  16. Module discovery by exhaustive search for densely connected, co-expressed regions in biomolecular interaction networks.

    PubMed

    Colak, Recep; Moser, Flavia; Chu, Jeffrey Shih-Chieh; Schönhuth, Alexander; Chen, Nansheng; Ester, Martin

    2010-10-25

    Computational prediction of functionally related groups of genes (functional modules) from large-scale data is an important issue in computational biology. Gene expression experiments and interaction networks are well studied large-scale data sources, available for many not yet exhaustively annotated organisms. It has been well established that, when analyzing these two data sources jointly, modules are often reflected by highly interconnected (dense) regions in the interaction networks whose participating genes are co-expressed. However, the tractability of the problem had remained unclear and methods by which to exhaustively search for such constellations had not been presented. We provide an algorithmic framework, referred to as Densely Connected Biclustering (DECOB), by which the aforementioned search problem becomes tractable. To benchmark the predictive power inherent to the approach, we computed all co-expressed, dense regions in physical protein and genetic interaction networks from human and yeast. An automatized filtering procedure reduces our output, resulting in smaller collections of modules comparable to state-of-the-art approaches. Our results performed favorably in a fair benchmarking competition which adheres to standard criteria. We demonstrate the usefulness of an exhaustive module search by using the unreduced output to more quickly perform GO term related function prediction tasks. We point out the advantages of our exhaustive output by predicting functional relationships using two examples. We demonstrate that the computation of all densely connected and co-expressed regions in interaction networks is an approach to module discovery of considerable value. Beyond confirming the well settled hypothesis that such co-expressed, densely connected interaction network regions reflect functional modules, we open up novel computational ways to comprehensively analyze the modular organization of an organism based on prevalent and largely available large-scale datasets. Software and data sets are available at http://www.sfu.ca/~ester/software/DECOB.zip.
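
    As a rough illustration of the search criterion involved (interaction density plus co-expression), not of DECOB's exhaustive enumeration itself, the sketch below scores a candidate gene set; the cut-offs are hypothetical.

    ```python
    import itertools
    import numpy as np

    def module_score(genes, edges, expr):
        """Score a candidate gene set by interaction density and mean co-expression.

        genes : list of gene identifiers (at least two)
        edges : set of frozensets, one per undirected interaction
        expr  : dict mapping gene -> 1D numpy array of expression values
        """
        pairs = list(itertools.combinations(genes, 2))
        density = sum(frozenset(p) in edges for p in pairs) / len(pairs)
        coexpr = np.mean([np.corrcoef(expr[a], expr[b])[0, 1] for a, b in pairs])
        return density, coexpr

    def looks_like_module(genes, edges, expr, min_density=0.7, min_coexpr=0.5):
        # Hypothetical thresholds; the actual criteria and the exhaustive
        # search strategy are defined in the DECOB paper.
        d, c = module_score(genes, edges, expr)
        return d >= min_density and c >= min_coexpr
    ```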

  17. The genetic basis of alcoholism: multiple phenotypes, many genes, complex networks

    PubMed Central

    2012-01-01

    Alcoholism is a significant public health problem. A picture of the genetic architecture underlying alcohol-related phenotypes is emerging from genome-wide association studies and work on genetically tractable model organisms. PMID:22348705

  18. Simulations of the erythrocyte cytoskeleton at large deformation. II. Micropipette aspiration.

    PubMed Central

    Discher, D E; Boal, D H; Boey, S K

    1998-01-01

    Coarse-grained molecular models of the erythrocyte membrane's spectrin cytoskeleton are presented in Monte Carlo simulations of whole cells in micropipette aspiration. The nonlinear chain elasticity and sterics revealed in more microscopic cytoskeleton models (developed in a companion paper; Boey et al., 1998. Biophys. J. 75:1573-1583) are faithfully represented here by two- and three-body effective potentials. The number of degrees of freedom of the system is thereby reduced to a range that is computationally tractable. Three effective models for the triangulated cytoskeleton are developed: two models in which the cytoskeleton is stress-free and does or does not have internal attractive interactions, and a third model in which the cytoskeleton is prestressed in situ. These are employed in direct, finite-temperature simulations of erythrocyte deformation in a micropipette. All three models show reasonable agreement with aspiration measurements made on flaccid human erythrocytes, but the prestressed model alone yields optimal agreement with fluorescence imaging experiments. Ensemble-averaging of nonaxisymmetrical, deformed structures exhibiting anisotropic strain is thus shown to provide an answer to the basic question of how a triangulated mesh such as that of the red cell cytoskeleton deforms in experiment. PMID:9726959

  19. Simulations of the erythrocyte cytoskeleton at large deformation. II. Micropipette aspiration.

    PubMed

    Discher, D E; Boal, D H; Boey, S K

    1998-09-01

    Coarse-grained molecular models of the erythrocyte membrane's spectrin cytoskeleton are presented in Monte Carlo simulations of whole cells in micropipette aspiration. The nonlinear chain elasticity and sterics revealed in more microscopic cytoskeleton models (developed in a companion paper; Boey et al., 1998. Biophys. J. 75:1573-1583) are faithfully represented here by two- and three-body effective potentials. The number of degrees of freedom of the system is thereby reduced to a range that is computationally tractable. Three effective models for the triangulated cytoskeleton are developed: two models in which the cytoskeleton is stress-free and does or does not have internal attractive interactions, and a third model in which the cytoskeleton is prestressed in situ. These are employed in direct, finite-temperature simulations of erythrocyte deformation in a micropipette. All three models show reasonable agreement with aspiration measurements made on flaccid human erythrocytes, but the prestressed model alone yields optimal agreement with fluorescence imaging experiments. Ensemble-averaging of nonaxisymmetrical, deformed structures exhibiting anisotropic strain is thus shown to provide an answer to the basic question of how a triangulated mesh such as that of the red cell cytoskeleton deforms in experiment.
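
    To make the simulation style concrete, here is a generic Metropolis vertex-move step for a bead-and-tether network with a simple two-body potential; the harmonic tether is a placeholder, not the fitted effective potentials or the prestressed in-situ model described in these records.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pair_energy(r, k=1.0, r0=1.0):
        # Placeholder harmonic tether; the cited work fits two- and three-body
        # effective potentials to microscopic spectrin-chain simulations.
        return 0.5 * k * (r - r0) ** 2

    def metropolis_step(xyz, bonds, beta=1.0, step=0.1):
        """Attempt one random displacement of a randomly chosen network vertex."""
        i = rng.integers(len(xyz))
        trial = xyz.copy()
        trial[i] += rng.uniform(-step, step, size=3)
        neighbors = [b if a == i else a for a, b in bonds if i in (a, b)]
        e_old = sum(pair_energy(np.linalg.norm(xyz[i] - xyz[j])) for j in neighbors)
        e_new = sum(pair_energy(np.linalg.norm(trial[i] - trial[j])) for j in neighbors)
        if rng.random() < np.exp(-beta * (e_new - e_old)):
            return trial          # accept the move
        return xyz                # reject and keep the old configuration
    ```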

  20. Heuristics as Bayesian inference under extreme priors.

    PubMed

    Parpart, Paula; Jones, Matt; Love, Bradley C

    2018-05-01

    Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
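
    The prior-strength continuum can be illustrated with ordinary ridge regression: at zero penalty the estimate is ordinary least squares, while at very large penalty the coefficients become proportional to the raw cue-criterion covariances and ignore inter-cue correlations, which is the flavor of unit-weight/tallying schemes. The numpy sketch below illustrates that general intuition only; it is not the specific prior construction used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 5
    X = rng.normal(size=(n, p))
    X[:, 1] += 0.8 * X[:, 0]                    # deliberately correlated cues
    true_b = np.array([1.0, 0.5, 0.3, 0.2, 0.1])
    y = X @ true_b + rng.normal(scale=0.5, size=n)

    def ridge(X, y, lam):
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    for lam in (0.0, 10.0, 1e6):
        b = ridge(X, y, lam)
        print(f"lambda={lam:>9}:", np.round(b / np.abs(b).max(), 3))
    # lambda = 0     -> ordinary least squares
    # lambda -> inf  -> b is proportional to X.T @ y (per-cue covariances only),
    #                   so inter-cue correlations no longer matter
    ```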

  1. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three dimensional observers. Intuition relies on experience gained in a three dimensional environment. Gaining experience with virtual four dimensional objects and virtual three manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability for ourselves, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. A technology is described in this dissertation to convert a representation of higher dimensional models into a format that may be displayed in realtime on graphics cards available on many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four dimensional objects such that the user can see the model from any user selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple POV viewports, and optionally exported to a third party CAD viewer for further manipulation. Plucking and Manipulating the Aspect provides a tangible experience for the end-user in the same manner as any 3D Computer Aided Design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.
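
    A minimal version of the slicing operation described, for the special case of an axis-aligned cutting hyperplane w = c applied to the edges of a tesseract (the dissertation's tool handles arbitrary 3-flat orientations and general models):

    ```python
    import itertools
    import numpy as np

    def tesseract():
        """Vertices and edges of the unit 4-cube."""
        verts = np.array(list(itertools.product([0.0, 1.0], repeat=4)))
        edges = [(i, j) for i in range(16) for j in range(i + 1, 16)
                 if np.sum(np.abs(verts[i] - verts[j])) == 1.0]
        return verts, edges

    def slice_w(verts, edges, c=0.5):
        """Intersect each edge with the hyperplane w = c and return 3D points."""
        points = []
        for i, j in edges:
            wi, wj = verts[i, 3], verts[j, 3]
            if (wi - c) * (wj - c) < 0:              # the edge crosses the hyperplane
                t = (c - wi) / (wj - wi)
                p = verts[i] + t * (verts[j] - verts[i])
                points.append(p[:3])                 # drop the w coordinate
        return np.array(points)

    verts, edges = tesseract()
    print(slice_w(verts, edges).shape)               # (8, 3): the slice is a 3D cube
    ```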

  2. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures would experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposed a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and linearly elastic assumption in the taut state, the optimization problem is defined as to maximize the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with the adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.

  3. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates

    PubMed Central

    Batra, Romesh C.; Porfiri, Maurizio; Spinello, Davide

    2008-01-01

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort. PMID:27879752

  4. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates.

    PubMed

    Batra, Romesh C; Porfiri, Maurizio; Spinello, Davide

    2008-02-15

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort.
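
    For orientation, the much cruder one-degree-of-freedom parallel-plate model (no van der Waals force, no thermal stress, no plate mechanics) already exhibits pull-in at one third of the gap; the numbers below are illustrative only and do not correspond to the microplates studied above.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    eps0 = 8.854e-12            # vacuum permittivity, F/m
    k, A, g = 1.0, 1e-8, 2e-6   # illustrative stiffness (N/m), electrode area (m^2), gap (m)

    def net_force(x, V):
        # Linear spring restoring force minus parallel-plate electrostatic attraction.
        return k * x - 0.5 * eps0 * A * V**2 / (g - x)**2

    V_pull_in = np.sqrt(8.0 * k * g**3 / (27.0 * eps0 * A))
    print("pull-in voltage ~ %.2f V, pull-in displacement = g/3 = %.2e m" % (V_pull_in, g / 3))

    # Below pull-in, a stable equilibrium exists between 0 and g/3.
    V = 0.9 * V_pull_in
    x_eq = brentq(net_force, 0.0, g / 3, args=(V,))
    print("stable deflection at 0.9*V_PI: %.2e m" % x_eq)
    ```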

  5. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and arrive at a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
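
    The basic building block inside most trace (nuclear) norm solvers, including ADMM-based ones, is soft-thresholding of singular values; below is a generic numpy sketch of that operator, not the TNCP algorithm itself.

    ```python
    import numpy as np

    def singular_value_threshold(M, tau):
        """Proximal operator of tau * ||.||_*: shrink the singular values of M."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    # Shrinking a noisy low-rank matrix suppresses the small, noise-driven
    # singular values while keeping the dominant structure.
    rng = np.random.default_rng(0)
    low_rank = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
    noisy = low_rank + 0.1 * rng.normal(size=(50, 40))
    print(np.linalg.matrix_rank(singular_value_threshold(noisy, tau=2.0)))
    ```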

  6. Universal distribution of component frequencies in biological and technological systems

    PubMed Central

    Pang, Tin Yau; Maslov, Sergei

    2013-01-01

    Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195

  7. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
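
    The low-rank inversion trick referenced (the Sherman-Morrison-Woodbury identity) can be checked directly with numpy; this generic example is unrelated to the telescope pipeline's specific covariance structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 500, 10                       # big diagonal term plus a rank-k update
    A_diag = 1.0 + rng.random(n)         # A is diagonal, so A^{-1} is cheap
    U = rng.normal(size=(n, k))
    C = np.eye(k)

    # Woodbury: (A + U C U^T)^{-1} = A^{-1} - A^{-1} U (C^{-1} + U^T A^{-1} U)^{-1} U^T A^{-1}
    Ainv = np.diag(1.0 / A_diag)
    small = np.linalg.inv(np.linalg.inv(C) + U.T @ Ainv @ U)   # only a k-by-k inverse
    woodbury = Ainv - Ainv @ U @ small @ U.T @ Ainv

    direct = np.linalg.inv(np.diag(A_diag) + U @ C @ U.T)
    print(np.allclose(woodbury, direct))                       # True
    ```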

  8. PPDB - A tool for investigation of plants physiology based on gene ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2014-09-02

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological model, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To address this challenge, we herein feature the applications of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of a large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible on-line ( http://www.iitr.ernet.in/ajayshiv/ ) through a user-friendly environment generated by means of Drupal-6.24. By knowing that genes are expressed in temporally and spatially characteristic patterns and that their functionally distinct products often reside in specific cellular compartments and may be part of one or more multi-component complexes, this sort of work is intended to be relevant for investigating the functional relationships of gene products at a system level and, thus, helps us approach the full physiology.

  9. PPDB: A Tool for Investigation of Plants Physiology Based on Gene Ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2015-09-01

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological model, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To address this challenge, we herein feature the applications of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of a large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible online ( http://www.iitr.ac.in/ajayshiv/ ) through a user-friendly environment generated by means of Drupal-6.24. By knowing that genes are expressed in temporally and spatially characteristic patterns and that their functionally distinct products often reside in specific cellular compartments and may be part of one or more multicomponent complexes, this sort of work is intended to be relevant for investigating the functional relationships of gene products at a system level and, thus, helps us approach the full physiology.

  10. Analytically tractable model for community ecology with many species

    NASA Astrophysics Data System (ADS)

    Dickens, Benjamin; Fisher, Charles K.; Mehta, Pankaj

    2016-08-01

    A fundamental problem in community ecology is understanding how ecological processes such as selection, drift, and immigration give rise to observed patterns in species composition and diversity. Here, we analyze a recently introduced, analytically tractable, presence-absence (PA) model for community assembly, and we use it to ask how ecological traits such as the strength of competition, the amount of diversity, and demographic and environmental stochasticity affect species composition in a community. In the PA model, species are treated as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Building upon previous work, we show that, despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent studies of large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. These results suggest that the concepts of ecological "phases" and phase diagrams can provide a powerful framework for thinking about community ecology, and that the PA model captures the essential ecological dynamics of community assembly.
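
    A caricature of a presence-absence community simulation (binary species, immigration from a regional pool, competition-dependent extinction) is sketched below; the rates and the random interaction matrix are illustrative stand-ins, not the PA model's actual parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    S = 50                                      # species in the regional pool
    immigration = 0.02                          # per-species immigration probability per step
    alpha = 0.1 * rng.random((S, S))            # illustrative competition coefficients
    np.fill_diagonal(alpha, 0.0)

    present = np.zeros(S, dtype=bool)
    for _ in range(5000):
        present |= rng.random(S) < immigration              # arrivals from the pool
        pressure = alpha @ present                          # competitive pressure on each species
        p_extinct = 1.0 - np.exp(-(0.01 + pressure))        # baseline + competition-driven extinction
        present &= rng.random(S) >= p_extinct               # survivors

    print("species richness:", int(present.sum()))
    ```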

  11. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
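
    Schematically, the projection step can be written as follows: store (approximate, detailed) run pairs in a dictionary, build an orthonormal basis from the K nearest error vectors in parameter space, and split a residual into its estimated model-error component and the remainder. Sizes, distances, and the basis construction below are illustrative, not the authors' exact procedure.

    ```python
    import numpy as np

    def local_error_basis(theta, dict_params, dict_errors, K=8):
        """Orthonormal basis spanning model errors of the K nearest dictionary entries.

        dict_params : (M, p) array of parameter vectors already simulated
        dict_errors : (M, d) array of detailed-minus-approximate data vectors
        """
        d2 = np.sum((dict_params - theta) ** 2, axis=1)
        nearest = np.argsort(d2)[:K]
        Q, _ = np.linalg.qr(dict_errors[nearest].T)    # columns span the local error space
        return Q

    def split_residual(residual, Q):
        """Estimated model-error part of the residual, and what is left over."""
        model_error = Q @ (Q.T @ residual)
        return model_error, residual - model_error
    ```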

  12. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  13. Effects of Disturbance on Populations of Marine Mammals

    DTIC Science & Technology

    2014-09-30

    relation between foraging success of mothers and pup production (reproductive rate). Second, we used mark-recapture models to quantify the...responses to disturbance are not necessarily surrogate measures of population-level responses is widely understood. However, without tractable

  14. Sharp Truncation of an Electric Field: An Idealized Model That Warrants Caution

    ERIC Educational Resources Information Center

    Tu, Hong; Zhu, Jiongming

    2016-01-01

    In physics, idealized models are often used to simplify complex situations. The motivation of the idealization is to make the real complex system tractable by adopting certain simplifications. In this treatment some unnecessary, negligible aspects are stripped away (so-called Aristotelian idealization), or some deliberate distortions are involved…

  15. Calibration of aero-structural reduced order models using full-field experimental measurements

    NASA Astrophysics Data System (ADS)

    Perez, R.; Bartram, G.; Beberniss, T.; Wiebe, R.; Spottswood, S. M.

    2017-03-01

    The structural response of hypersonic aircraft panels is a multi-disciplinary problem, where the nonlinear structural dynamics, aerodynamics, and heat transfer models are coupled. A clear understanding of the impact of high-speed flow effects on the structural response, and the potential influence of the structure on the local environment, is needed in order to prevent the design of overly-conservative structures, a common problem in past hypersonic programs. The current work investigates these challenges from a structures perspective. To this end, the first part of this investigation looks at the modeling of the response of a rectangular panel to an external heating source (thermo-structural coupling) where the temperature effect on the structure is obtained from forward looking infrared (FLIR) measurements and the displacement via 3D-digital image correlation (DIC). The second part of the study uses data from a previous series of wind-tunnel experiments, performed to investigate the response of a compliant panel to the effects of high-speed flow, to train a pressure surrogate model. In this case, the panel aero-loading is obtained from fast-response pressure sensitive paint (PSP) measurements, both directly and from the pressure surrogate model. The result of this investigation is the use of full-field experimental measurements to update the structural model and train a computational efficient model of the loading environment. The use of reduced order models, informed by these full-field physical measurements, is a significant step toward the development of accurate simulation models of complex structures that are computationally tractable.

  16. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053

  17. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to use when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  19. Biobeam—Multiplexed wave-optical simulations of light-sheet microscopy

    PubMed Central

    Weigert, Martin; Bundschuh, Sebastian T.

    2018-01-01

    Sample-induced image-degradation remains an intricate wave-optical problem in light-sheet microscopy. Here we present biobeam, an open-source software package that enables simulation of operational light-sheet microscopes by combining data from 10^5–10^6 multiplexed and GPU-accelerated point-spread-function calculations. The wave-optical nature of these simulations leads to the faithful reproduction of spatially varying aberrations, diffraction artifacts, geometric image distortions, adaptive optics, and emergent wave-optical phenomena, and renders image-formation in light-sheet microscopy computationally tractable. PMID:29652879

  20. Modeling and control of flexible space platforms with articulated payloads

    NASA Technical Reports Server (NTRS)

    Graves, Philip C.; Joshi, Suresh M.

    1989-01-01

    The first steps in developing a methodology for spacecraft control-structure interaction (CSI) optimization are identification and classification of anticipated missions, and the development of tractable mathematical models in each mission class. A mathematical model of a generic large flexible space platform (LFSP) with multiple independently pointed rigid payloads is considered. The objective is not to develop a general purpose numerical simulation, but rather to develop an analytically tractable mathematical model of such composite systems. The equations of motion for a single payload case are derived, and are linearized about zero steady-state. The resulting model is then extended to include multiple rigid payloads, yielding the desired analytical form. The mathematical models developed clearly show the internal inertial/elastic couplings, and are therefore suitable for analytical and numerical studies. A simple decentralized control law is proposed for fine pointing the payloads and LFSP attitude control, and simulation results are presented for an example problem. The decentralized controller is shown to be adequate for the example problem chosen, but does not, in general, guarantee stability. A centralized dissipative controller is then proposed, requiring a symmetric form of the composite system equations. Such a controller guarantees robust closed loop stability despite unmodeled elastic dynamics and parameter uncertainties.

  1. Hierarchical Model for the Analysis of Scattering Data of Complex Materials

    DOE PAGES

    Oyedele, Akinola; Mcnutt, Nicholas W.; Rios, Orlando; ...

    2016-05-16

    Interpreting the results of scattering data for complex materials with a hierarchical structure in which at least one phase is amorphous presents a significant challenge. Often the interpretation relies on the use of large-scale molecular dynamics (MD) simulations, in which a structure is hypothesized and from which a radial distribution function (RDF) can be extracted and directly compared against an experimental RDF. This computationally intensive approach presents a bottleneck in the efficient characterization of the atomic structure of new materials. Here, we propose and demonstrate an approach for a hierarchical decomposition of the RDF in which MD simulations are replaced by a combination of tractable models and theory at the atomic scale and the mesoscale, which when combined yield the RDF. We apply the procedure to a carbon composite, in which graphitic nanocrystallites are distributed in an amorphous domain. We compare the model with the RDF from both MD simulation and neutron scattering data. Ultimately, this procedure is applicable for understanding the fundamental processing-structure-property relationships in complex magnetic materials.
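
    For reference, the radial distribution function that this decomposition targets can be estimated from a set of atomic coordinates by histogramming pair distances; a crude, non-periodic numpy sketch:

    ```python
    import numpy as np

    def radial_distribution(xyz, r_max, n_bins, volume):
        """Crude g(r) for N identical atoms in a non-periodic cluster of given volume."""
        N = len(xyz)
        d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
        d = d[np.triu_indices(N, k=1)]                   # unique pair distances
        counts, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
        r = 0.5 * (edges[1:] + edges[:-1])
        shell_volumes = 4.0 * np.pi * r**2 * np.diff(edges)
        ideal_pairs = (N / 2.0) * (N / volume) * shell_volumes   # expected counts for an ideal gas
        return r, counts / ideal_pairs
    ```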

  2. An "intelligent" approach based on side-by-side cascade-correlation neural networks for estimating thermophysical properties from photothermal responses

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-01-01

    In the present paper, an artificial-intelligence-based approach dealing with the estimation of thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when subjecting materials to a light flux. So, the main objective of the present work was to estimate simultaneously both the thermal diffusivity and conductivity of materials, from front-face or rear-face photothermal responses to pseudo random binary signals. To this end, we used side-by-side feedforward neural networks trained with the cascade-correlation algorithm. In addition, computation time was a key point to consider. That is why the developed algorithms are computationally tractable.

  3. A stochastic multi-scale method for turbulent premixed combustion

    NASA Astrophysics Data System (ADS)

    Cha, Chong M.

    2002-11-01

    The stochastic chemistry algorithm of Bunker et al. and Gillespie is used to perform the chemical reactions in a transported probability density function (PDF) modeling approach of turbulent combustion. Recently, Kraft & Wagner have demonstrated a 100-fold gain in computational speed (for a 100 species mechanism) using the stochastic approach over the conventional, direct integration method of solving for the chemistry. Here, the stochastic chemistry algorithm is applied to develop a new transported PDF model of turbulent premixed combustion. The methodology relies on representing the relevant spatially dependent physical processes as queuing events. The canonical problem of a one-dimensional premixed flame is used for validation. For the laminar case, molecular diffusion is described by a random walk. For the turbulent case, one of two different material transport submodels can provide the necessary closure: Taylor dispersion or Kerstein's one-dimensional turbulence approach. The former exploits ``eddy diffusivity'' and hence would be much more computationally tractable for practical applications. Various validation studies are performed. Results from the Monte Carlo simulations compare well to asymptotic solutions of laminar premixed flames, both with and without high activation temperatures. The correct scaling of the turbulent burning velocity is predicted in both Damköhler's small- and large-scale turbulence limits. The effect of applying the eddy diffusivity concept in the various regimes is discussed.
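
    The stochastic chemistry algorithm referred to (Gillespie's direct method), stripped down to a single reaction channel A + B -> C, looks like the sketch below; the flame model couples such chemistry to transport, which is omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie_bimolecular(nA, nB, nC, k, t_end):
        """Gillespie direct method for the single reaction A + B -> C."""
        t = 0.0
        while True:
            a = k * nA * nB                   # propensity of the only channel
            if a == 0.0:
                break                         # one reactant is exhausted
            t += rng.exponential(1.0 / a)     # exponentially distributed waiting time
            if t > t_end:
                break
            nA, nB, nC = nA - 1, nB - 1, nC + 1
            # With several channels, one would additionally pick which reaction
            # fires, with probability proportional to each channel's propensity.
        return nA, nB, nC

    print(gillespie_bimolecular(nA=1000, nB=800, nC=0, k=1e-4, t_end=10.0))
    ```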

  4. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
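
    In that spirit, a single-tier Lorenz '96 system can be integrated twice, once in double precision and once in half precision, to see how quickly the trajectories drift apart; the three-tier extension and scale-selective precision described above are not reproduced in this sketch.

    ```python
    import numpy as np

    def l96_tendency(x, F=8.0):
        # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def integrate(x0, dt, n_steps, dtype):
        x = x0.astype(dtype)
        dt = dtype(dt)
        for _ in range(n_steps):
            x = x + dt * l96_tendency(x)      # forward Euler keeps the sketch short
        return x

    x0 = 8.0 + 0.01 * np.random.default_rng(0).normal(size=40)
    x64 = integrate(x0, 0.005, 400, np.float64)
    x16 = integrate(x0, 0.005, 400, np.float16)
    print("RMS difference after 2 model time units:",
          float(np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))))
    ```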

  5. Modelling solid solutions with cluster expansion, special quasirandom structures, and thermodynamic approaches

    NASA Astrophysics Data System (ADS)

    Saltas, V.; Horlait, D.; Sgourou, E. N.; Vallianatos, F.; Chroneos, A.

    2017-12-01

    Modelling solid solutions is fundamental to understanding the properties of numerous materials that are important for a range of applications in various fields, including nanoelectronics and energy materials such as fuel cells, nuclear materials, and batteries, since systematic understanding throughout the composition range of solid solutions and over a range of conditions can be challenging from an experimental viewpoint. The main motivation of this review is to contribute to the discussion in the community of the applicability of methods that make the investigation of solid solutions computationally tractable. This is important as computational modelling is required to calculate numerous defect properties and to act synergistically with experiment to understand these materials. This review examines two examples in detail: silicon germanium alloys and MAX phase solid solutions. Silicon germanium alloys are technologically important in nanoelectronic devices and are also relevant considering the recent advances in ternary and quaternary group IV and III-V semiconductor alloys. MAX phase solid solutions display a palette of ceramic and metallic properties, and it is anticipated that via their tuning they can have applications ranging from the nuclear to the aerospace industries as well as serving as precursors for particular MXenes. In the final part, a brief summary assesses the limitations and possibilities of the methodologies discussed and considers future directions and examples of solid solution systems that should prove fruitful to consider.

  6. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
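
    To make the statistical-sampling point concrete, here is a minimal editor-added Python sketch of propagating aleatoric input variability through a computational model by Monte Carlo sampling. The toy model and the input distributions are assumptions for illustration, not taken from the presentation.

        import numpy as np

        rng = np.random.default_rng(42)

        def model(stiffness, load):          # placeholder computational model
            return load / stiffness

        stiffness = rng.normal(200e9, 10e9, 100_000)   # variable material property
        load = rng.normal(1e4, 5e2, 100_000)           # variable applied load
        response = model(stiffness, load)              # propagated variability

        print("mean response:", response.mean())
        print("95% interval:", np.percentile(response, [2.5, 97.5]))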

  7. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
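
    The following editor-added Python sketch illustrates only the discretization idea on a hypothetical 2-bus system: two inequality constraints (the bus-2 voltage magnitude and the bus-2 active generation) are replaced by grids of fixed values, the remaining power flow equation is solved at each grid point, and points violating the leftover limits are discarded. All system data are invented, and scipy's fsolve returns a single solution per point, whereas the paper uses NPHC to find all of them.

        import numpy as np
        from scipy.optimize import fsolve

        g, b = 4.0, -10.0            # series admittance of the single line (p.u.)
        V1 = 1.0                     # slack-bus voltage magnitude, angle 0
        Pd2, Qd2 = 1.0, 0.3          # load at bus 2 (p.u.)
        QG2_MAX, P1_MAX = 1.0, 2.0   # remaining (non-discretized) limits

        def p2_injection(th2, V2):   # active power injected at bus 2
            return g * V2**2 - V1 * V2 * (g * np.cos(th2) + b * np.sin(th2))

        def q2_injection(th2, V2):   # reactive power injected at bus 2
            return -b * V2**2 + V1 * V2 * (b * np.cos(th2) - g * np.sin(th2))

        feasible = []
        for V2 in np.linspace(0.95, 1.05, 21):        # discretized voltage bound
            for Pg2 in np.linspace(0.0, 0.8, 17):     # discretized generation bound
                th2, = fsolve(lambda t: p2_injection(t, V2) - (Pg2 - Pd2), -0.1)
                Qg2 = q2_injection(th2, V2) + Qd2
                P1 = g * V1**2 - V1 * V2 * (g * np.cos(th2) - b * np.sin(th2))
                if abs(Qg2) <= QG2_MAX and 0.0 <= P1 <= P1_MAX:
                    feasible.append((V2, Pg2, th2))

        print(len(feasible), "feasible discretization points")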

  8. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
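
    A minimal editor-added Python sketch of the underlying idea: stack PSFs computed at several depths, perform a PCA (here via SVD), and keep a few orthonormal components plus depth-dependent coefficients. The synthetic Gaussian "PSFs" below are placeholders, not a microscope model, and the component count is an assumption.

        import numpy as np

        nz, ny, nx = 40, 65, 65
        yy, xx = np.mgrid[0:ny, 0:nx]
        r2 = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2

        # width grows with depth to mimic depth-induced aberration
        psfs = np.stack([np.exp(-r2 / (2.0 * (2.0 + 0.1 * z) ** 2)) for z in range(nz)])
        psfs /= psfs.sum(axis=(1, 2), keepdims=True)

        flat = psfs.reshape(nz, -1)
        mean = flat.mean(axis=0)
        U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)

        n_comp = 4
        components = Vt[:n_comp]                 # orthonormal basis images
        coeffs = (flat - mean) @ components.T    # depth-dependent coefficients
        approx = mean + coeffs @ components
        print("relative error:", np.linalg.norm(approx - flat) / np.linalg.norm(flat))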

  9. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  10. Computational prediction of formulation strategies for beyond-rule-of-5 compounds.

    PubMed

    Bergström, Christel A S; Charman, William N; Porter, Christopher J H

    2016-06-01

    The physicochemical properties of some contemporary drug candidates are moving towards higher molecular weight, and coincidentally also higher lipophilicity in the quest for biological selectivity and specificity. These physicochemical properties move the compounds towards beyond rule-of-5 (B-r-o-5) chemical space and often result in lower water solubility. For such B-r-o-5 compounds non-traditional delivery strategies (i.e. those other than conventional tablet and capsule formulations) typically are required to achieve adequate exposure after oral administration. In this review, we present the current status of computational tools for prediction of intestinal drug absorption, models for prediction of the most suitable formulation strategies for B-r-o-5 compounds and models to obtain an enhanced understanding of the interplay between drug, formulation and physiological environment. In silico models are able to identify the likely molecular basis for low solubility in physiologically relevant fluids such as gastric and intestinal fluids. With this baseline information, a formulation scientist can, at an early stage, evaluate different orally administered, enabling formulation strategies. Recent computational models have emerged that predict glass-forming ability and crystallisation tendency and therefore the potential utility of amorphous solid dispersion formulations. Further, computational models of loading capacity in lipids, and therefore the potential for formulation as a lipid-based formulation, are now available. Whilst such tools are useful for rapid identification of suitable formulation strategies, they do not reveal drug localisation and molecular interaction patterns between drug and excipients. For the latter, Molecular Dynamics simulations provide an insight into the interplay between drug, formulation and intestinal fluid. These different computational approaches are reviewed. Additionally, we analyse the molecular requirements of different targets, since these can provide an early signal that enabling formulation strategies will be required. Based on the analysis we conclude that computational biopharmaceutical profiling can be used to identify where non-conventional gateways, such as prediction of 'formulate-ability' during lead optimisation and early development stages, are important and may ultimately increase the number of orally tractable contemporary targets. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  12. The speed-accuracy tradeoff: history, physiology, methodology, and behavior

    PubMed Central

    Heitz, Richard P.

    2014-01-01

    There are few behavioral effects as ubiquitous as the speed-accuracy tradeoff (SAT). From insects to rodents to primates, the tendency for decision speed to covary with decision accuracy seems an inescapable property of choice behavior. Recently, the SAT has received renewed interest, as neuroscience approaches begin to uncover its neural underpinnings and computational models are compelled to incorporate it as a necessary benchmark. The present work provides a comprehensive overview of SAT. First, I trace its history as a tractable behavioral phenomenon and the role it has played in shaping mathematical descriptions of the decision process. Second, I present a “users guide” of SAT methodology, including a critical review of common experimental manipulations and analysis techniques and a treatment of the typical behavioral patterns that emerge when SAT is manipulated directly. Finally, I review applications of this methodology in several domains. PMID:24966810

  13. Data Access, Interoperability and Sustainability: Key Challenges for the Evolution of Science Capabilities

    NASA Astrophysics Data System (ADS)

    Walton, A. L.

    2015-12-01

    In 2016, the National Science Foundation (NSF) will support a portfolio of activities and investments focused upon challenges in data access, interoperability, and sustainability. These topics are fundamental to science questions of increasing complexity that require multidisciplinary approaches and expertise. Progress has become tractable because of (and sometimes complicated by) unprecedented growth in data (both simulations and observations) and rapid advances in technology (such as instrumentation in all aspects of the discovery process, together with ubiquitous cyberinfrastructure to connect, compute, visualize, store, and discover). The goal is an evolution of capabilities for the research community based on these investments, scientific priorities, technology advances, and policies. Examples from multiple NSF directorates, including investments by the Advanced Cyberinfrastructure Division, are aimed at these challenges and can provide the geosciences research community with models and opportunities for participation. Implications for the future are highlighted, along with the importance of continued community engagement on key issues.

  14. Motion of a Distinguishable Impurity in the Bose Gas: Arrested Expansion Without a Lattice and Impurity Snaking

    NASA Astrophysics Data System (ADS)

    Robinson, Neil J.; Caux, Jean-Sébastien; Konik, Robert M.

    2016-04-01

    We consider the real-time dynamics of an initially localized distinguishable impurity injected into the ground state of the Lieb-Liniger model. Focusing on the case where integrability is preserved, we numerically compute the time evolution of the impurity density operator in regimes far from analytically tractable limits. We find that the injected impurity undergoes a stuttering motion as it moves and expands. For an initially stationary impurity, the interaction-driven formation of a quasibound state with a hole in the background gas leads to arrested expansion—a period of quasistationary behavior. When the impurity is injected with a finite center-of-mass momentum, the impurity moves through the background gas in a snaking manner, arising from a quantum Newton's cradlelike scenario where momentum is exchanged back and forth between the impurity and the background gas.

  15. Motion of a distinguishable Impurity in the Bose gas: Arrested expansion without a lattice and impurity snaking

    DOE PAGES

    Robinson, Neil J.; Caux, Jean-Sebastien; Konik, Robert M.

    2016-04-07

    We consider the real-time dynamics of an initially localized distinguishable impurity injected into the ground state of the Lieb-Liniger model. Focusing on the case where integrability is preserved, we numerically compute the time evolution of the impurity density operator in regimes far from analytically tractable limits. We find that the injected impurity undergoes a stuttering motion as it moves and expands. For an initially stationary impurity, the interaction-driven formation of a quasibound state with a hole in the background gas leads to arrested expansion—a period of quasistationary behavior. In conclusion, when the impurity is injected with a finite center-of-mass momentum, the impurity moves through the background gas in a snaking manner, arising from a quantum Newton’s cradlelike scenario where momentum is exchanged back and forth between the impurity and the background gas.

  16. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    NASA Astrophysics Data System (ADS)

    Ru, Pengkai

    Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. Being one of the most important optimization based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC, would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. We tried to strike a balance in this dissertation by employing a state representation technique, namely, the state dependent coefficient (SDC) representation. This new technique would render an improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded only by a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
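
    As a rough, editor-added illustration of the state-dependent coefficient (SDC) idea named above, the Python sketch below controls a toy pendulum by factoring sin(th) as (sin(th)/th)*th so the dynamics read x_dot = A(x)x + Bu, then re-solving a discrete Riccati problem with A frozen at the current state. This is an SDRE-flavored simplification with assumed parameters, not the dissertation's receding-horizon MPC formulation.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        g_over_l, damping, dt = 9.81, 0.2, 0.01
        B = np.array([[0.0], [1.0]])
        Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

        def A_sdc(x):
            th = x[0]
            sinc = np.sinc(th / np.pi)          # sin(th)/th, well-defined at th = 0
            return np.array([[0.0, 1.0],
                             [g_over_l * sinc, -damping]])

        x = np.array([2.5, 0.0])                # start far from the equilibrium
        for _ in range(600):
            Ad = np.eye(2) + dt * A_sdc(x)      # crude Euler discretization
            Bd = dt * B
            P = solve_discrete_are(Ad, Bd, Q, R)
            K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
            u = (-K @ x).item()
            # propagate the true nonlinear dynamics
            x = x + dt * np.array([x[1], g_over_l * np.sin(x[0]) - damping * x[1] + u])

        print("final state:", x)                # driven toward the origin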

  17. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  18. Biological system interactions.

    PubMed Central

    Adomian, G; Adomian, G E; Bellman, R E

    1984-01-01

    Mathematical modeling of cellular population growth, interconnected subsystems of the body, blood flow, and numerous other complex biological systems problems involves nonlinearities and generally randomness as well. Such problems have been dealt with by mathematical methods often changing the actual model to make it tractable. The method presented in this paper (and referenced works) allows much more physically realistic solutions. PMID:6585837

  19. An analytically tractable model for community ecology with many species

    NASA Astrophysics Data System (ADS)

    Dickens, Benjamin; Fisher, Charles; Mehta, Pankaj; Pankaj Mehta Biophysics Theory Group Team

    A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration yield observed patterns in species composition and diversity. Here, we present an analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, diversity in competition, and stochasticity affect species composition in a community. In our PA model, we treat species as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ('critical') point corresponding to Hubbell's neutral theory of biodiversity. Our results suggest that the concepts of 'phases' and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly. PM was supported by a Simons Investigator in the Mathematical Modeling of Living Systems and a Sloan Research Fellowship.
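
    A schematic editor-added Python simulation in the spirit of a presence-absence model: species are binary variables, absent species immigrate from a regional pool, and present species go extinct at a rate that grows with the competition they experience plus a stochastic baseline. The rates and interaction matrix are illustrative assumptions, not the authors' parameterization.

        import numpy as np

        rng = np.random.default_rng(1)
        S = 200                        # species in the regional pool
        lam = 0.02                     # per-step immigration probability
        base_ext = 0.01                # baseline (stochastic) extinction probability
        comp = rng.uniform(0.0, 0.02, size=(S, S))   # competition strengths
        np.fill_diagonal(comp, 0.0)

        present = np.zeros(S, dtype=bool)
        for _ in range(5000):
            pressure = comp @ present                   # competition felt by each species
            p_ext = 1.0 - np.exp(-(base_ext + pressure))
            extinct = present & (rng.random(S) < p_ext)
            immigrate = ~present & (rng.random(S) < lam)
            present = (present & ~extinct) | immigrate

        print("species richness at the end of the run:", present.sum())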

  20. Fluid-Structure Interaction and Structural Analyses using a Comprehensive Mitral Valve Model with 3D Chordal Structure

    PubMed Central

    Toma, Milan; Einstein, Daniel R.; Bloodworth, Charles H.; Cochran, Richard P.; Yoganathan, Ajit P.; Kunzelman, Karyn S.

    2016-01-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics, and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be “invisible” to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction mode is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. PMID:27342229

  1. Fluid-structure interaction and structural analyses using a comprehensive mitral valve model with 3D chordal structure.

    PubMed

    Toma, Milan; Einstein, Daniel R; Bloodworth, Charles H; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-04-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be 'invisible' to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction mode is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. Copyright © 2016 John Wiley & Sons, Ltd.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  3. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.

  4. Incorporating Uncertainty into Spacecraft Mission and Trajectory Design

    NASA Astrophysics Data System (ADS)

    Feldhacker, Juliana D.

    The complex nature of many astrodynamic systems often leads to high computational costs or degraded accuracy in the analysis and design of spacecraft missions, and the incorporation of uncertainty into the trajectory optimization process often becomes intractable. This research applies mathematical modeling techniques to reduce computational cost and improve tractability for design, optimization, uncertainty quantification (UQ) and sensitivity analysis (SA) in astrodynamic systems and develops a method for trajectory optimization under uncertainty (OUU). This thesis demonstrates the use of surrogate regression models and polynomial chaos expansions for the purpose of design and UQ in the complex three-body system. Results are presented for the application of the models to the design of mid-field rendezvous maneuvers for spacecraft in three-body orbits. The models are shown to provide high accuracy with no a priori knowledge on the sample size required for convergence. Additionally, a method is developed for the direct incorporation of system uncertainties into the design process for the purpose of OUU and robust design; these methods are also applied to the rendezvous problem. It is shown that the models can be used for constrained optimization with orders of magnitude fewer samples than is required for a Monte Carlo approach to the same problem. Finally, this research considers an application for which regression models are not well-suited, namely UQ for the kinetic deflection of potentially hazardous asteroids under the assumptions of real asteroid shape models and uncertainties in the impact trajectory and the surface material properties of the asteroid, which produce a non-smooth system response. An alternate set of models is presented that enables analytic computation of the uncertainties in the imparted momentum from impact. Use of these models for a survey of asteroids allows conclusions to be drawn on the effects of an asteroid's shape on the ability to successfully divert the asteroid via kinetic impactor.
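
    A minimal editor-added Python sketch of the surrogate idea: fit an inexpensive polynomial model to a handful of expensive simulations, then perform uncertainty quantification by Monte Carlo sampling on the surrogate instead of the full model. The "expensive_model" function and the input distribution are stand-ins, not an astrodynamics simulation.

        import numpy as np

        rng = np.random.default_rng(7)

        def expensive_model(x):                  # placeholder for a costly simulation
            return np.sin(3.0 * x) + 0.5 * x**2

        x_train = np.linspace(-1.0, 1.0, 15)     # small design of experiments
        y_train = expensive_model(x_train)
        coeffs = np.polynomial.polynomial.polyfit(x_train, y_train, deg=6)

        x_mc = rng.normal(0.2, 0.3, 200_000)     # uncertain input
        y_surrogate = np.polynomial.polynomial.polyval(x_mc, coeffs)
        print("surrogate mean:", y_surrogate.mean(), "std:", y_surrogate.std())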

  5. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
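
    A schematic editor-added rate-model simulation of a ring network with spike-frequency adaptation: each unit is inhibited by an adaptation variable that tracks its own recent activity, and a tuned external input drives a bump of activity. The connectivity profile and parameters are illustrative assumptions, not those of the analytical model in the paper.

        import numpy as np

        N = 100
        theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
        dtheta = theta[:, None] - theta[None, :]
        W = (0.9 * np.cos(dtheta) - 0.2) * (2 * np.pi / N)   # ring (cosine) coupling

        tau, tau_a, k_sfa, dt = 10.0, 200.0, 0.5, 0.5
        stim = 1.5 * np.exp(-0.5 * (theta - 0.5) ** 2 / 0.3 ** 2)  # tuned external input

        r = np.zeros(N)          # firing rates
        a = np.zeros(N)          # adaptation (SFA) variables
        for _ in range(4000):
            drive = W @ r + stim - a
            r += dt / tau * (-r + np.maximum(drive, 0.0))    # rectified-linear units
            a += dt / tau_a * (-a + k_sfa * r)               # activity-dependent inhibition

        print("peak location of the activity bump (rad):", theta[np.argmax(r)])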

  6. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as an auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
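
    A generic editor-added Python sketch of the rejection idea described above: draw candidate allele frequencies from a tractable "neutral-like" proposal (a symmetric Dirichlet) and accept in proportion to a selection weight. When selection is strong the acceptance rate collapses, which is the inefficiency the authors avoid by sampling the non-neutral model directly. The densities and selection coefficients are illustrative, not the DNJ model.

        import numpy as np

        rng = np.random.default_rng(3)
        K = 4                                    # number of alleles
        sigma = np.array([0.0, 2.0, 4.0, 6.0])   # assumed selection coefficients

        def selection_weight(p):                 # un-normalized non-neutral weight
            return np.exp(np.dot(sigma, p))

        bound = np.exp(sigma.max())              # upper bound on the weight
        accepted, tries = [], 0
        while len(accepted) < 1000:
            p = rng.dirichlet(np.full(K, 1.0))   # neutral-like proposal
            tries += 1
            if rng.random() < selection_weight(p) / bound:
                accepted.append(p)

        print("acceptance rate:", len(accepted) / tries)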

  7. Harnessing the hygroscopic and biofluorescent behaviors of genetically tractable microbial cells to design biohybrid wearables.

    PubMed

    Wang, Wen; Yao, Lining; Cheng, Chin-Yi; Zhang, Teng; Atsumi, Hiroshi; Wang, Luda; Wang, Guanyun; Anilionyte, Oksana; Steiner, Helene; Ou, Jifei; Zhou, Kang; Wawrousek, Chris; Petrecca, Katherine; Belcher, Angela M; Karnik, Rohit; Zhao, Xuanhe; Wang, Daniel I C; Ishii, Hiroshi

    2017-05-01

    Cells' biomechanical responses to external stimuli have been intensively studied but rarely implemented into devices that interact with the human body. We demonstrate that the hygroscopic and biofluorescent behaviors of living cells can be engineered to design biohybrid wearables, which give multifunctional responsiveness to human sweat. By depositing genetically tractable microbes on a humidity-inert material to form a heterogeneous multilayered structure, we obtained biohybrid films that can reversibly change shape and biofluorescence intensity within a few seconds in response to environmental humidity gradients. Experimental characterization and mechanical modeling of the film were performed to guide the design of a wearable running suit and a fluorescent shoe prototype with bio-flaps that dynamically modulates ventilation in synergy with the body's need for cooling.

  8. Rendering the Intractable More Tractable: Tools from Caenorhabditis elegans Ripe for Import into Parasitic Nematodes

    PubMed Central

    Ward, Jordan D.

    2015-01-01

    Recent and rapid advances in genetic and molecular tools have brought spectacular tractability to Caenorhabditis elegans, a model that was initially prized because of its simple design and ease of imaging. C. elegans has long been a powerful model in biomedical research, and tools such as RNAi and the CRISPR/Cas9 system allow facile knockdown of genes and genome editing, respectively. These developments have created an additional opportunity to tackle one of the most debilitating burdens on global health and food security: parasitic nematodes. I review how development of nonparasitic nematodes as genetic models informs efforts to import tools into parasitic nematodes. Current tools in three commonly studied parasites (Strongyloides spp., Brugia malayi, and Ascaris suum) are described, as are tools from C. elegans that are ripe for adaptation and the benefits and barriers to doing so. These tools will enable dissection of a huge array of questions that have been all but completely impenetrable to date, allowing investigation into host–parasite and parasite–vector interactions, and the genetic basis of parasitism. PMID:26644478

  9. Yeast: An Experimental Organism for Modern Biology.

    ERIC Educational Resources Information Center

    Botstein, David; Fink, Gerald R.

    1988-01-01

    Discusses the applicability and advantages of using yeasts as popular and ideal model systems for studying and understanding eukaryotic biology at the cellular and molecular levels. Cites experimental tractability and the cooperative tradition of the research community of yeast biologists as reasons for this success. (RT)

  10. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
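
    A minimal editor-added Python sketch of the MINE criterion as described above: given a current distribution over model parameters, the next measurement time is the one at which the model output variance across parameter samples is largest, and the EDE is the corresponding mean trajectory. The one-parameter decay model and the parameter distribution are placeholders, not the paper's systems.

        import numpy as np

        rng = np.random.default_rng(0)

        def model_output(k, t):                      # toy cellular-process model
            return np.exp(-np.outer(k, t))

        k_samples = rng.lognormal(mean=-1.0, sigma=0.5, size=2000)   # current belief
        candidate_times = np.linspace(0.1, 10.0, 100)

        outputs = model_output(k_samples, candidate_times)   # shape (2000, 100)
        variance = outputs.var(axis=0)
        next_time = candidate_times[np.argmax(variance)]     # MINE choice

        ede = outputs.mean(axis=0)                   # Expected Dynamics Estimator
        print("maximally informative next measurement time:", next_time)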

  11. Multimodal Image Analysis in Alzheimer’s Disease via Statistical Modelling of Non-local Intensity Correlations

    NASA Astrophysics Data System (ADS)

    Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien

    2016-04-01

    The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.
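
    A small editor-added sketch of the core statistical tool named above, partial least squares regression, relating two imaging modalities; random matrices stand in for the MRI (atrophy) and FDG-PET (metabolism) feature arrays, and plain scikit-learn PLSR is used rather than the paper's tractable non-local formulation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_subjects, n_mri, n_pet = 300, 500, 400

        latent = rng.standard_normal((n_subjects, 3))   # synthetic shared "disease" factors
        mri = latent @ rng.standard_normal((3, n_mri)) + 0.5 * rng.standard_normal((n_subjects, n_mri))
        pet = latent @ rng.standard_normal((3, n_pet)) + 0.5 * rng.standard_normal((n_subjects, n_pet))

        pls = PLSRegression(n_components=3)
        pls.fit(mri, pet)
        print("R^2 of PET predicted from MRI:", pls.score(mri, pet))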

  12. Hierarchical Probabilistic Inference of Cosmic Shear

    NASA Astrophysics Data System (ADS)

    Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin

    2015-07-01

    Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.

  13. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    PubMed

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic-priority power-aware scheduling can be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static-priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic-priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity of O(n), reduces energy consumption by 10-80% compared with existing algorithms.
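
    A schematic editor-added Python sketch of the general ideas discussed above for preemptive EDF with DVS: the frequency starts at the utilization bound, and when a job finishes earlier than its worst case the unused budget is passed to the next job as slack. It ignores the actual preemption timeline, the task set is hypothetical, and this is not the paper's algorithm.

        import random

        tasks = [   # (period = deadline, worst-case execution time at full speed)
            (10.0, 2.0),
            (20.0, 4.0),
            (40.0, 8.0),
        ]
        utilization = sum(wcet / period for period, wcet in tasks)
        static_freq = max(utilization, 0.05)    # EDF is feasible iff utilization <= 1
        print("static DVS frequency:", static_freq)

        random.seed(0)
        hyperperiod = 40.0
        # one job per task instance, processed in EDF (deadline) order
        jobs = sorted(
            (k * period + period, wcet)
            for period, wcet in tasks
            for k in range(int(hyperperiod / period))
        )

        slack = 0.0
        for deadline, wcet in jobs:
            budget = wcet / static_freq + slack           # time reserved for this job
            actual = wcet * random.uniform(0.4, 1.0)      # actual work <= WCET
            freq = min(1.0, max(actual / budget, 0.05))   # slow down to fill the budget
            used = actual / freq
            slack = budget - used                         # leftover time passed on
            print(f"deadline={deadline:5.1f}  freq={freq:.2f}  cpu time used={used:.2f}")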

  14. Applying Dynamic Priority Scheduling Scheme to Static Systems of Pinwheel Task Model in Power-Aware Scheduling

    PubMed Central

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic-priority power-aware scheduling can be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static-priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic-priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity of O(n), reduces energy consumption by 10–80% compared with existing algorithms. PMID:25121126

  15. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
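
    A small editor-added numerical sketch of the local-basis idea described above: given a dictionary of previously computed model-error realizations, take the K entries nearest to the current residual, orthonormalize them, and subtract the projection of the residual onto that local basis. The data sizes and synthetic errors are illustrative, and the MCMC machinery around this step is omitted.

        import numpy as np

        rng = np.random.default_rng(4)
        n_data, n_dict, K = 120, 400, 8

        dictionary = rng.standard_normal((n_dict, n_data))             # stored model errors
        residual = dictionary[17] + 0.1 * rng.standard_normal(n_data)  # current residual

        # K nearest neighbours of the residual within the dictionary
        dist = np.linalg.norm(dictionary - residual, axis=1)
        neighbours = dictionary[np.argsort(dist)[:K]]

        # orthonormal basis for the local model-error subspace (QR on the transpose)
        Q, _ = np.linalg.qr(neighbours.T)            # Q has shape (n_data, K)
        corrected = residual - Q @ (Q.T @ residual)  # remove the model-error component

        print("residual norm before/after removal:",
              np.linalg.norm(residual), np.linalg.norm(corrected))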

  16. A Probabilistic Model of Illegal Drug Trafficking Operations in the Eastern Pacific and Caribbean Sea

    DTIC Science & Technology

    2013-09-01

    partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial temporal heat map specifying the... complement and vet such complicated simulation by developing more analytically tractable models. We develop probability models to generate a heat map

  17. Integrative approaches for modeling regulation and function of the respiratory system.

    PubMed

    Ben-Tal, Alona; Tawhai, Merryn H

    2013-01-01

    Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. Copyright © 2013 Wiley Periodicals, Inc.

  18. Seven challenges for metapopulation models of epidemics, including households models.

    PubMed

    Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo

    2015-03-01

    This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches,...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly-coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They include also developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Shifting the Computational Paradigm

    DTIC Science & Technology

    2004-10-01

    Classifier. SIGART Bulletin 2 (3): 88-92 (1991). 23. F. Donini, M. Lenzerini, D. Nardi, and W. Nutt, 'The Complexity of Concept Languages', KR-91, pp 151-162, 1991. 24. F. Donini, M. Lenzerini, D. Nardi, and W. Nutt, 'Tractable concept languages', IJCAI-91, pp 458-465, 1991. 25. O. Lassila, "Web... UnambiguousProperty, then if P(x, y) and P(z, y) then x=z (i.e., the property is injective); e.g., if nameOfMonth(m, "Feb") and nameOfMonth(n, "Feb") then m and n are the same

  20. Landscape Encodings Enhance Optimization

    PubMed Central

    Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.

    2012-01-01

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860

  1. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE PAGES

    Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...

    2017-12-28

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  2. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moon, Jae; Manuel, Lance; Churchfield, Matthew

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  3. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    PubMed

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  4. Assessing the system value of optimal load shifting

    DOE PAGES

    Merrick, James; Ye, Yinyu; Entriken, Bob

    2017-04-30

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.
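
    The load-shifting side of such a formulation reduces, for a single price-taking consumer, to a small linear program. The sketch below shows only that reduced problem with assumed prices and limits; it is not the paper's market-equilibrium model, and the numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch (not the paper's full market equilibrium): a single consumer shifts a
# fixed daily energy requirement across hours to minimize cost against an hourly
# price profile, subject to per-hour supply/connection limits.
prices = np.array([22, 20, 19, 25, 40, 55, 60, 45])   # $/MWh over 8 periods (assumed)
total_energy = 24.0                                    # MWh that must be served
per_hour_cap = 6.0                                     # MWh limit in any one period

# Decision variable: energy consumed in each period; minimize prices @ load.
res = linprog(
    c=prices,
    A_eq=np.ones((1, len(prices))), b_eq=[total_energy],
    bounds=[(0.0, per_hour_cap)] * len(prices),
    method="highs",
)
flexible_cost = res.fun
flat_cost = float(prices @ np.full(len(prices), total_energy / len(prices)))
print(f"flat-profile cost ${flat_cost:.0f}, shifted cost ${flexible_cost:.0f}, "
      f"arbitrage value ${flat_cost - flexible_cost:.0f}")
```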

  5. Assessing the system value of optimal load shifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James; Ye, Yinyu; Entriken, Bob

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  6. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales-the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  7. Different Parameters Support Generalization and Discrimination Learning in "Drosophila" at the Flight Simulator

    ERIC Educational Resources Information Center

    Brembs, Bjorn; de Ibarra, Natalie Hempel

    2006-01-01

    We have used a genetically tractable model system, the fruit fly "Drosophila melanogaster" to study the interdependence between sensory processing and associative processing on learning performance. We investigated the influence of variations in the physical and predictive properties of color stimuli in several different operant-conditioning…

  8. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
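
    The greedy input-selection idea can be illustrated compactly: score each candidate stimulus by how strongly the data-consistent representative parameter sets disagree about the predicted output, then pick the highest-scoring one. The toy model, parameter sets, and weights below are hypothetical stand-ins, not the published MBDOE algorithm.

```python
import numpy as np

# Hedged sketch of the greedy idea only: pick the next input whose predicted
# outputs differ most across data-consistent representative parameter sets,
# i.e. the measurement expected to discriminate the surviving dynamics best.
def simulate(theta, u, t=np.linspace(0, 10, 50)):
    k, a = theta
    return a * (1 - np.exp(-k * u * t))          # toy first-order response

rng = np.random.default_rng(1)
rep_params = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(30, 2))   # representative sets
weights = np.full(len(rep_params), 1 / len(rep_params))          # probability weights
candidates = [0.2, 0.5, 1.0, 2.0, 5.0]                           # candidate input magnitudes

def discrimination_score(u):
    preds = np.array([simulate(th, u) for th in rep_params])     # (n_params, n_times)
    mean = np.average(preds, axis=0, weights=weights)
    var = np.average((preds - mean) ** 2, axis=0, weights=weights)
    return var.sum()                                             # total predicted spread

best = max(candidates, key=discrimination_score)
print("next stimulus magnitude to apply:", best)
```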

  9. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  10. Underworld results as a triple (shopping list, posterior, priors)

    NASA Astrophysics Data System (ADS)

    Quenette, S. M.; Moresi, L. N.; Abramson, D.

    2013-12-01

    When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation, the window of tractability becomes wider. The ultimate goal is to find a sweet-spot where a formal assimilation method is used, and where a model aligns with observations. It's natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties and possibly the lithologies themselves are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet-spot? When posed as a Bayesian inverse problem the result is a triple - the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping stone example we show a regional scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc). Underworld uses StGermain, an early (partial) implementation of the theories described here.

  11. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeter.
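
    For reference, the underlying GP interpolation model can be written in a few lines if one accepts the naive O(N³) solve; the paper's contribution is the fast exact grid-structured algebra, which is not reproduced here. Kernel choice and noise levels below are assumptions.

```python
import numpy as np

# Naive O(N^3) Gaussian-process interpolation of missing pixel values, with a
# per-pixel noise estimate folded into the kernel. The fast exact grid-structured
# solver from the paper is not shown; this brute-force version only shows the model.
def rbf(Xa, Xb, length=1.5, amp=1.0):
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(2)
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)
truth = np.sin(grid[:, 0] / 2.0) + np.cos(grid[:, 1] / 3.0)

observed = rng.random(len(grid)) < 0.5            # half the pixels measured (e.g. one filter orientation)
noise_var = np.full(observed.sum(), 0.05**2)      # sensor-noise estimates for observed pixels
y = truth[observed] + rng.normal(0, 0.05, observed.sum())

Xo, Xm = grid[observed], grid[~observed]
K = rbf(Xo, Xo) + np.diag(noise_var)
Ks = rbf(Xm, Xo)
mean_missing = Ks @ np.linalg.solve(K, y)         # GP posterior mean at missing pixels

print("RMSE of interpolated pixels:",
      np.sqrt(np.mean((mean_missing - truth[~observed]) ** 2)))
```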

  12. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.

  13. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing

    PubMed Central

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-01-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high-level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination, instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060

  14. Evaluation of invertebrate infection models for pathogenic corynebacteria.

    PubMed

    Ott, Lisa; McKenzie, Ashleigh; Baltazar, Maria Teresa; Britting, Sabine; Bischof, Andrea; Burkovski, Andreas; Hoskisson, Paul A

    2012-08-01

    For several pathogenic bacteria, model systems for host-pathogen interactions were developed, which provide the possibility of quick and cost-effective high throughput screening of mutant bacteria for genes involved in pathogenesis. A number of different model systems, including amoeba, nematodes, insects, and fish, have been introduced, and it was observed that different bacteria respond in different ways to putative surrogate hosts, and distinct model systems might be more or less suitable for a certain pathogen. The aim of this study was to develop a suitable invertebrate model for the human and animal pathogens Corynebacterium diphtheriae, Corynebacterium pseudotuberculosis, and Corynebacterium ulcerans. The results obtained in this study indicate that Acanthamoeba polyphaga is not optimal as a surrogate host, while both Caenorhabditis elegans and Galleria larvae seem to offer tractable models for rapid assessment of virulence between strains. Caenorhabditis elegans gives more differentiated results and might be the best model system for pathogenic corynebacteria, given the tractability of bacteria and the range of mutant nematodes available to investigate the host response in combination with bacterial virulence. Nevertheless, Galleria will also be useful with respect to innate immune responses to pathogens because insects offer a more complex cell-based innate immune system compared with the simple innate immune system of C. elegans. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  15. Toward regional-scale adjoint tomography in the deep earth

    NASA Astrophysics Data System (ADS)

    Masson, Y.; Romanowicz, B. A.

    2013-12-01

    Thanks to the development of efficient numerical computation methods, such as the Spectral Element Method (SEM) and to the increasing power of computer clusters, it is now possible to obtain regional-scale images of the Earth's interior using adjoint-tomography (e.g. Tape, C., et al., 2009). As of now, these tomographic models are limited to the upper layers of the earth, i.e., they provide us with high-resolution images of the crust and the upper part of the mantle. Given the gigantic amount of calculation it represents, obtaining similar models at the global scale (i.e. images of the entire Earth) seems out of reach at the moment. Furthermore, it's likely that the first generation of such global adjoint tomographic models will have a resolution significantly lower than the current regional models. In order to image regions of interest in the deep Earth, such as plumes, slabs or large low shear velocity provinces (LLSVPs), while keeping the computation tractable, we are developing new tools that will allow us to perform regional-scale adjoint-tomography at arbitrary depths. In a recent study (Masson et al., 2013), we showed that a numerical equivalent of the time reversal mirrors used in experimental acoustics permits to confine the wave propagation computations (i.e. using SEM simulations) inside the region to be imaged. With this ability to limit wave propagation modeling inside a region of interest, obtaining the adjoint sensitivity kernels needed for tomographic imaging is only two steps further. First, the local wavefield modeling needs to be coupled with field extrapolation techniques in order to obtain synthetic seismograms at the surface of the earth. These seismograms will account for the 3D structure inside the region of interest in a quasi-exact manner. We will present preliminary results where the field-extrapolation is performed using Green's function computed in a 1D Earth model thanks to the Direct Solution Method (DSM). Once synthetic seismograms can be obtained, it is possible to evaluate the misfit between observed and computed seismograms. The second step will then be to extrapolate the misfit function back into the SEM region in order to compute local adjoint sensitivity kernels. When available, these kernels will allow us to perform regional-scale adjoint tomography at arbitrary locations inside the earth. Masson Y., Cupillard P., Capdeville Y., & Romanowicz B., 2013. On the numerical implementation of time-reversal mirrors for tomographic imaging, Journal of Geophysical Research (under review). Tape, C., et al. (2009). "Adjoint tomography of the southern California crust." Science 325(5943): 988-992.

  16. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.
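
    A heavily simplified sketch of the coupling structure is given below: a one-dimensional channel advection-dispersion solver exchanging mass with a storage compartment per channel cell. This is the classical transient-storage simplification, standing in for the paper's Lagrangian travel-time subgrid and multicomponent chemistry; all parameter values are assumed.

```python
import numpy as np

# Simplified sketch of the coupling structure only: 1D advection-dispersion in the
# channel with first-order exchange to a storage zone per channel cell. The paper
# replaces the single storage compartment with a Lagrangian travel-time subgrid
# that can carry full multicomponent biogeochemistry.
nx, dx, dt = 200, 5.0, 1.0                 # grid cells, cell size [m], time step [s]
u, D = 0.3, 0.5                            # velocity [m/s], dispersion [m^2/s]
alpha, frac = 2e-4, 0.2                    # exchange rate [1/s], storage/channel volume ratio

c = np.zeros(nx)                           # channel concentration
s = np.zeros(nx)                           # storage-zone (hyporheic) concentration
c[0] = 1.0                                 # continuous upstream injection

for _ in range(3000):
    adv = -u * (c - np.roll(c, 1)) / dx                    # upwind advection
    disp = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    exch = alpha * (s - c)                                 # exchange with storage zone
    c_new = c + dt * (adv + disp + exch)
    s = s + dt * alpha / frac * (c - s)
    c = c_new
    c[0], c[-1] = 1.0, c[-2]                               # fixed inlet, zero-gradient outlet

print("channel concentration 500 m downstream:", c[100])
```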

  17. Fully Coupled Nonlinear Fluid Flow and Poroelasticity in Arbitrarily Fractured Porous Media: A Hybrid-Dimensional Computational Model

    NASA Astrophysics Data System (ADS)

    Jin, L.; Zoback, M. D.

    2017-10-01

    We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices therefore allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.

  18. Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example

    NASA Technical Reports Server (NTRS)

    Smart, D. F.; Shea, M. A.

    2001-01-01

    The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of the cosmic radiation. The early cosmic ray measurements found that cosmic ray intensity was ordered by the magnetic latitude and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. This requires that many of the trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out the cutoff rigidity was not a simple quantity but had many unanticipated complexities that required many hundreds if not thousands of individual trajectory calculations to solve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report is documentation of a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
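
    The 'brute force' trajectory-tracing idea can be sketched with a dipole-only, non-relativistic toy: integrate the Lorentz force with RK4 and watch where the particle goes. This is not the report's FORTRAN program, its field model, or a rigidity-scan procedure; constants and the launch state are illustrative.

```python
import numpy as np

# Minimal sketch of the "brute force" approach: numerically integrate one charged
# particle through a dipole approximation of the geomagnetic field with RK4.
RE = 6.371e6                     # Earth radius [m]
M = 8.0e15                       # (mu0/4pi) * Earth's dipole moment [T m^3], approximate
q_over_m = 9.58e7                # proton charge-to-mass ratio [C/kg]

def B_dipole(r):
    rmag = np.linalg.norm(r)
    m = np.array([0.0, 0.0, -M])                 # dipole roughly anti-aligned with z
    return 3.0 * r * (m @ r) / rmag**5 - m / rmag**3

def deriv(state):
    r, v = state[:3], state[3:]
    a = q_over_m * np.cross(v, B_dipole(r))      # Lorentz acceleration, E = 0
    return np.concatenate([v, a])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Launch a proton from 2 Earth radii with a modest velocity and trace it briefly.
state = np.array([2 * RE, 0.0, 0.0, 0.0, 1.0e7, 0.0])
for _ in range(5000):
    state = rk4_step(state, dt=1e-4)
print("final position [RE]:", state[:3] / RE)
```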

  19. Broad AOX expression in a genetically tractable mouse model does not disturb normal physiology

    PubMed Central

    Szibor, Marten; Dhandapani, Praveen K.; Dufour, Eric; Holmström, Kira M.; Zhuang, Yuan; Salwig, Isabelle; Wittig, Ilka; Heidler, Juliana; Gizatullina, Zemfira; Fuchs, Helmut; Gailus-Durner, Valérie; de Angelis, Martin Hrabě; Nandania, Jatin; Velagapudi, Vidya; Wietelmann, Astrid; Rustin, Pierre; Gellerich, Frank N.; Braun, Thomas

    2017-01-01

    ABSTRACT Plants and many lower organisms, but not mammals, express alternative oxidases (AOXs) that branch the mitochondrial respiratory chain, transferring electrons directly from ubiquinol to oxygen without proton pumping. Thus, they maintain electron flow under conditions when the classical respiratory chain is impaired, limiting excess production of oxygen radicals and supporting redox and metabolic homeostasis. AOX from Ciona intestinalis has been used to study and mitigate mitochondrial impairments in mammalian cell lines, Drosophila disease models and, most recently, in the mouse, where multiple lentivector-AOX transgenes conferred substantial expression in specific tissues. Here, we describe a genetically tractable mouse model in which Ciona AOX has been targeted to the Rosa26 locus for ubiquitous expression. The AOXRosa26 mouse exhibited only subtle phenotypic effects on respiratory complex formation, oxygen consumption or the global metabolome, and showed an essentially normal physiology. AOX conferred robust resistance to inhibitors of the respiratory chain in organello; moreover, animals exposed to a systemically applied LD50 dose of cyanide did not succumb. The AOXRosa26 mouse is a useful tool to investigate respiratory control mechanisms and to decipher mitochondrial disease aetiology in vivo. PMID:28067626

  20. Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.

    PubMed

    Yamazaki, Keisuke; Kaji, Daisuke

    2013-08-01

    Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution in the Bayes method complicated; as a result, prediction, including construction of the posterior, is not tractable, although the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method for application; it has a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not yet been clarified. The present paper precisely analyzes the Bayes free energy function of the Bernoulli mixtures. Comparing the free energy functions in these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of parameter learning. Our results show that the Bayes free energy has the same learning types, while the transition points differ. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Tractable Analysis for Large Social Networks

    ERIC Educational Resources Information Center

    Zhang, Bin

    2012-01-01

    Social scientists are usually more interested in consumers' dichotomous choices, such as whether or not to purchase a product or adopt a technology. However, to date, there is hardly any model that can help us solve the problem of comparing multi-network effects with a dichotomous dependent variable. Furthermore, the study of multi-network…

  2. Functional Imaging and Optogenetics in Drosophila

    PubMed Central

    Simpson, Julie H.; Looger, Loren L.

    2018-01-01

    Understanding how activity patterns in specific neural circuits coordinate an animal’s behavior remains a key area of neuroscience research. Genetic tools and a brain of tractable complexity make Drosophila a premier model organism for these studies. Here, we review the wealth of reagents available to map and manipulate neuronal activity with light. PMID:29618589

  3. Integrating Model-Based Verification into Software Design Education

    ERIC Educational Resources Information Center

    Yilmaz, Levent; Wang, Shuo

    2005-01-01

    Proper design analysis is indispensable to assure quality and reduce emergent costs due to faulty software. Teaching proper design verification skills early during pedagogical development is crucial, as such analysis is the only tractable way of resolving software problems early when they are easy to fix. The premise of the presented strategy is…

  4. Reply to comment by Jonathan J. Rhodes on "Modeling of the interactions between forest vegetation, disturbances, and sediment yields"

    Treesearch

    Charles H. Luce; David G. Tarboton; Erkan Istanbulluoglu; Robert T. Pack

    2005-01-01

    Rhodes [2005] brings up some excellent points in his comments on the work of Istanbulluoglu et al. [2004]. We appreciate the opportunity to respond because it is likely that other readers will also wonder how they can apply the relatively simple analysis to important policy questions. Models necessarily reduce the complexity of the problem to make it tractable and...

  5. Quantification of histone modification ChIP-seq enrichment for data mining and machine learning applications

    PubMed Central

    2011-01-01

    Background: The advent of ChIP-seq technology has made the investigation of epigenetic regulatory networks a computationally tractable problem. Several groups have applied statistical computing methods to ChIP-seq datasets to gain insight into the epigenetic regulation of transcription. However, methods for estimating enrichment levels in ChIP-seq data for these computational studies are understudied and variable. Since the conclusions drawn from these data mining and machine learning applications strongly depend on the enrichment level inputs, a comparison of estimation methods with respect to the performance of statistical models should be made. Results: Various methods were used to estimate the gene-wise ChIP-seq enrichment levels for 20 histone methylations and the histone variant H2A.Z. The Multivariate Adaptive Regression Splines (MARS) algorithm was applied for each estimation method using the estimation of enrichment levels as predictors and gene expression levels as responses. The methods used to estimate enrichment levels included tag counting and model-based methods that were applied to whole genes and specific gene regions. These methods were also applied to various sizes of estimation windows. The MARS model performance was assessed with the Generalized Cross-Validation Score (GCV). We determined that model-based methods of enrichment estimation that spatially weight enrichment based on average patterns provided an improvement over tag counting methods. Also, methods that included information across the entire gene body provided improvement over methods that focus on a specific sub-region of the gene (e.g., the 5' or 3' region). Conclusion: The performance of data mining and machine learning methods when applied to histone modification ChIP-seq data can be improved by using data across the entire gene body, and incorporating the spatial distribution of enrichment. Refinement of enrichment estimation ultimately improved accuracy of model predictions. PMID:21834981
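
    To make the notion of spatially weighted enrichment concrete, the sketch below weights binned tag counts across the gene body by an average enrichment profile and compares the result with plain tag counting on synthetic data; the weighting scheme is an illustration, not the paper's model-based estimator.

```python
import numpy as np

# Illustrative only: estimate gene-wise enrichment as a spatially weighted sum of
# binned tag counts across the gene body, with weights taken from the average
# enrichment profile across genes. This mirrors the idea of spatial weighting,
# not the paper's exact model-based estimator.
rng = np.random.default_rng(3)
n_genes, n_bins = 100, 20

# Binned tag counts per gene (rows) across gene-body bins (columns); synthetic here.
tags = rng.poisson(lam=5.0, size=(n_genes, n_bins))

# Average spatial pattern across genes, normalized to sum to one -> weight profile.
profile = tags.mean(axis=0)
weights = profile / profile.sum()

flat_counts = tags.sum(axis=1)              # plain tag counting
weighted = tags @ weights * n_bins          # spatially weighted enrichment estimate

print("correlation between estimators:", np.corrcoef(flat_counts, weighted)[0, 1])
```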

  6. Use of mouse models to study the mechanisms and consequences of RBC clearance

    PubMed Central

    Hod, E. A.; Arinsburg, S. A.; Francis, R. O.; Hendrickson, J. E.; Zimring, J. C.; Spitalnik, S. L.

    2013-01-01

    Mice provide tractable animal models for studying the pathophysiology of various human disorders. This review discusses the use of mouse models for understanding red-blood-cell (RBC) clearance. These models provide important insights into the pathophysiology of various clinically relevant entities, such as autoimmune haemolytic anaemia, haemolytic transfusion reactions, other complications of RBC transfusions and immunomodulation by Rh immune globulin therapy. Mouse models of both antibody- and non-antibody-mediated RBC clearance are reviewed. Approaches for exploring unanswered questions in transfusion medicine using these models are also discussed. PMID:20345515

  7. Tractable Goal Selection with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2009-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
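
    A generic greedy heuristic conveys the flavour of fast, incremental goal selection under oversubscribed resources: rank fully specified goal requests by value density against the bottleneck resource and accept those that still fit. The resource types, numbers, and ranking rule below are assumptions, not the authors' algorithm.

```python
from dataclasses import dataclass

# Generic greedy sketch of oversubscribed goal selection (not the authors' exact
# algorithm): accept goals in order of value per unit of the bottleneck resource,
# skipping any whose fully specified resource needs no longer fit.
@dataclass
class Goal:
    name: str
    value: float
    energy: float      # resource demands are fully specified, no temporal flexibility
    data_mb: float

def select(goals, energy_budget, data_budget):
    chosen = []
    # Rank by value density against the tighter (bottleneck) resource fraction.
    def density(g):
        return g.value / max(g.energy / energy_budget, g.data_mb / data_budget)
    for g in sorted(goals, key=density, reverse=True):
        if g.energy <= energy_budget and g.data_mb <= data_budget:
            chosen.append(g.name)
            energy_budget -= g.energy
            data_budget -= g.data_mb
    return chosen

requests = [Goal("image_A", 10, 5, 300), Goal("image_B", 7, 2, 120),
            Goal("thermal_scan", 4, 1, 40), Goal("radar_swath", 9, 6, 500)]
print(select(requests, energy_budget=8, data_budget=600))

# Because the heuristic is fast, it can be re-run "just in time" whenever a
# last-minute request arrives, e.g. select(requests + [new_goal], ...).
```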

  8. Screened hybrid density functionals for solid-state chemistry and physics.

    PubMed

    Janesko, Benjamin G; Henderson, Thomas M; Scuseria, Gustavo E

    2009-01-21

    Density functional theory incorporating hybrid exchange-correlation functionals has been extraordinarily successful in providing accurate, computationally tractable treatments of molecular properties. However, conventional hybrid functionals can be problematic for solids. Their nonlocal, Hartree-Fock-like exchange term decays slowly and incorporates unphysical features in metals and narrow-bandgap semiconductors. This article provides an overview of our group's work on designing hybrid functionals for solids. We focus on the Heyd-Scuseria-Ernzerhof screened hybrid functional [J. Chem. Phys. 2003, 118, 8207], its applications to the chemistry and physics of solids and surfaces, and our efforts to build upon its successes.

  9. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations: The Case of the Green Fluorescent Protein.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob

    2017-12-12

    The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties-here exemplified for calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.

  10. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  11. Dynamics of history-dependent epidemics in temporal networks

    NASA Astrophysics Data System (ADS)

    Sunny, Albert; Kotnis, Bhushan; Kuri, Joy

    2015-08-01

    The structural properties of temporal networks often influence the dynamical processes that occur on these networks, e.g., bursty interaction patterns have been shown to slow down epidemics. In this paper, we investigate the effect of link lifetimes on the spread of history-dependent epidemics. We formulate an analytically tractable activity-driven temporal network model that explicitly incorporates link lifetimes. For Markovian link lifetimes, we use mean-field analysis for computing the epidemic threshold, while the effect of non-Markovian link lifetimes is studied using simulations. Furthermore, we also study the effect of negative correlation between the number of links spawned by an individual and the lifetimes of those links. Such negative correlations may arise due to the finite cognitive capacity of the individuals. Our investigations reveal that heavy-tailed link lifetimes slow down the epidemic, while negative correlations can reduce epidemic prevalence. We believe that our results help shed light on the role of link lifetimes in modulating diffusion processes on temporal networks.

  12. Plant Comparative and Functional Genomics

    DOE PAGES

    Yang, Xiaohan; Leebens-Mack, Jim; Chen, Feng; ...

    2015-01-01

    Plants form the foundation for our global ecosystem and are essential for environmental and human health. With an increasing number of available plant genomes and tractable experimental systems, comparative and functional plant genomics research is greatly expanding our knowledge of the molecular basis of economically and nutritionally important traits in crop plants. Inferences drawn from comparative genomics are motivating experimental investigations of gene function and gene interactions. This special issue aims to highlight recent advances made in comparative and functional genomics research in plants. Nine original research articles in this special issue cover five important topics: (1) transcription factor gene families relevant to abiotic stress tolerance; (2) plant secondary metabolism; (3) transcriptome-based markers for quantitative trait loci; (4) epigenetic modifications in plant-microbe interactions; and (5) computational prediction of protein-protein interactions. The plant species studied in these articles include model species as well as nonmodel plant species of economic importance (e.g., food crops and medicinal plants).

  13. Plant Comparative and Functional Genomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaohan; Leebens-Mack, Jim; Chen, Feng

    Plants form the foundation for our global ecosystem and are essential for environmental and human health. With an increasing number of available plant genomes and tractable experimental systems, comparative and functional plant genomics research is greatly expanding our knowledge of the molecular basis of economically and nutritionally important traits in crop plants. Inferences drawn from comparative genomics are motivating experimental investigations of gene function and gene interactions. This special issue aims to highlight recent advances made in comparative and functional genomics research in plants. Nine original research articles in this special issue cover five important topics: (1) transcription factor gene families relevant to abiotic stress tolerance; (2) plant secondary metabolism; (3) transcriptome-based markers for quantitative trait loci; (4) epigenetic modifications in plant-microbe interactions; and (5) computational prediction of protein-protein interactions. The plant species studied in these articles include model species as well as nonmodel plant species of economic importance (e.g., food crops and medicinal plants).

  14. Jump state estimation with multiple sensors with packet dropping and delaying channels

    NASA Astrophysics Data System (ADS)

    Dolz, Daniel; Peñarrocha, Ignacio; Sanchis, Roberto

    2016-03-01

    This work addresses the design of a state observer for systems whose outputs are measured through a communication network. The measurements from each sensor node are assumed to arrive randomly, scarcely and with a time-varying delay. The proposed model of the plant and the network measurement scenarios cover the cases of multiple sensors, out-of-sequence measurements, buffered measurements on a single packet and multirate sensor measurements. A jump observer is proposed that selects a different gain depending on the number of periods elapsed between successfully received measurements and on the available data. A finite set of gains is pre-calculated offline with a tractable optimisation problem, where the complexity of the observer implementation is a design parameter. The computational cost of the observer implementation is much lower than in the Kalman filter, whilst the performance is similar. Several examples illustrate the observer design for different measurement scenarios and observer complexity and show the achievable performance.
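
    The gain-lookup mechanism can be sketched as follows: a small table of observer gains, indexed by the number of periods elapsed since the last received measurement, is computed offline and consulted online. Here the gains come from Kalman-style covariance recursions as a simple stand-in for the paper's offline optimisation; the system matrices are hypothetical.

```python
import numpy as np

# Sketch of the jump-observer idea: precompute a small table of gains, one per
# number of periods elapsed since the last received measurement, and look the
# gain up online when a (possibly delayed) measurement finally arrives.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.05]])

def gain_after(j_missed, P0=np.eye(2)):
    P = P0
    for _ in range(j_missed + 1):            # propagate uncertainty over the gap
        P = A @ P @ A.T + Q
    S = C @ P @ C.T + R
    return P @ C.T @ np.linalg.inv(S)

max_gap = 5
gain_table = {j: gain_after(j) for j in range(max_gap + 1)}   # computed offline

# Online update when a measurement y arrives after j empty periods.
def jump_update(x_hat, y, j):
    for _ in range(j + 1):
        x_hat = A @ x_hat                    # open-loop prediction over the gap
    L = gain_table[min(j, max_gap)]
    return x_hat + (L @ (y - C @ x_hat)).ravel()

x_hat = np.zeros(2)
x_hat = jump_update(x_hat, y=np.array([1.2]), j=2)
print("state estimate:", x_hat)
```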

  15. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
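
    A minimal version of the fit is sketched below for a single drop, using the standard first-order series solution of d²z/dt² = g0 + γ·z with position measured downward from the release point. The paper's combined multi-drop solution with co-estimated system-response terms is omitted, and with one short drop the gradient is only weakly constrained; all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the idea only: fit position-vs-time samples from a free-fall drop with
# a model that includes the vertical gradient gamma to first order,
#   z(t) ~ z0 + v0*t + g0*t^2/2 + gamma*(v0*t^3/6 + g0*t^4/24),
# where g0 is gravity at the release position and z increases downward.
def drop_model(t, z0, v0, g0, gamma):
    return (z0 + v0 * t + 0.5 * g0 * t**2
            + gamma * (v0 * t**3 / 6.0 + g0 * t**4 / 24.0))

rng = np.random.default_rng(4)
t = np.linspace(0.02, 0.2, 200)                            # timing samples during the drop [s]
true = dict(z0=0.0, v0=0.1, g0=9.80632, gamma=3.086e-6)    # gamma ~ free-air gradient [1/s^2]
z = drop_model(t, **true) + rng.normal(0, 1e-10, t.size)   # sub-nm position noise (optimistic)

popt, pcov = curve_fit(drop_model, t, z, p0=[0.0, 0.0, 9.8, 0.0])
print("estimated g0 [m/s^2]:", popt[2])
print("estimated gradient [1/s^2]:", popt[3], "+/-", np.sqrt(pcov[3, 3]))
```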

  16. HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Michael D.; Dawson, William A.; Hogg, David W.

    2015-07-01

    Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.

  17. Building Systems from Scratch: an Exploratory Study of Students Learning About Climate Change

    NASA Astrophysics Data System (ADS)

    Puttick, Gillian; Tucker-Raymond, Eli

    2018-01-01

    Science and computational practices such as modeling and abstraction are critical to understanding the complex systems that are integral to climate science. Given the demonstrated affordances of game design in supporting such practices, we implemented a free 4-day intensive workshop for middle school girls that focused on using the visual programming environment, Scratch, to design games to teach others about climate change. The experience was carefully constructed so that girls of widely differing levels of experience were able to engage in a cycle of game design. This qualitative study aimed to explore the representational choices the girls made as they took up aspects of climate change systems and modeled them in their games. Evidence points to the ways in which designing games about climate science fostered emergent systems thinking and engagement in modeling practices as learners chose what to represent in their games, grappled with the realism of their respective representations, and modeled interactions among systems components. Given the girls' levels of programming skill, parts of systems were more tractable to create than others. The educational purpose of the games was important to the girls' overall design experience, since it influenced their choice of topic, and challenged their emergent understanding of climate change as a systems problem.

  18. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

    One of the earliest approaches in gain-scheduling control is the gridding based approach, in which a set of local linear time-invariant models are obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2- and H∞-norms are introduced and used as a metric to measure model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. Therefore, a reduced-order model is constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purpose, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.

  19. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.

  20. Eigenvalue Tests and Distributions for Small Sample Order Determination for Complex Wishart Matrices

    DTIC Science & Technology

    1994-08-13

    theoretic order determination criteria for ARMA(p, q) models can be expressed in the form of equation 4.2. The word ARIMA should not be a distractor...subjectivity is not necessarily bad. It enables us to build tractable models and efficiently achieve reasonable results. The charge of "subjectivity" lodged...signal processing studies because it simplifies the mathematics involved and it is not a bad model for a wide range of situations. Wooding [293] is

  1. Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.

    PubMed

    Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae

    2013-05-15

    Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem-particularly for phase coherence in oscillating systems. This article presents a novel strategy for construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model the biological heterogeneity, neglected by ordinary mean-field models. This tool is well posed to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information (contact: bjornhald@gmail.com or pgs@kiku.dk). Supplementary data are available at Bioinformatics online.
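
    A minimal analogue of the generator strategy is sketched below: instead of emitting source text, a builder function returns a vectorized right-hand side for N heterogeneous cells coupled through a shared extracellular species. The Brusselator kinetics and coupling terms are hypothetical stand-ins for the glycolytic model, chosen only to keep the example self-contained.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal analogue of the generator strategy (the paper emits actual simulation
# code; here a closure plays that role): build the right-hand side for N
# heterogeneous oscillator "cells" coupled through one shared extracellular species.
def build_population_rhs(params_per_cell, exchange=0.5, decay=0.2):
    a = params_per_cell["a"]          # per-cell heterogeneous parameters
    b = params_per_cell["b"]
    n = len(a)

    def rhs(t, state):
        x, y, x_ext = state[:n], state[n:2 * n], state[-1]
        dx = a - (b + 1.0) * x + x**2 * y - exchange * (x - x_ext)
        dy = b * x - x**2 * y
        dx_ext = exchange * np.mean(x - x_ext) - decay * x_ext
        return np.concatenate([dx, dy, [dx_ext]])

    return rhs

rng = np.random.default_rng(5)
n_cells = 100
params = {"a": rng.normal(1.0, 0.05, n_cells), "b": rng.normal(3.0, 0.2, n_cells)}
rhs = build_population_rhs(params)

y0 = np.concatenate([rng.uniform(0.5, 1.5, n_cells), rng.uniform(2, 4, n_cells), [0.0]])
sol = solve_ivp(rhs, (0.0, 50.0), y0, method="LSODA", rtol=1e-6)
print("final mean intracellular x:", sol.y[:n_cells, -1].mean())
```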

  2. A tractable and accurate electronic structure method for static correlations: The perfect hextuples model

    NASA Astrophysics Data System (ADS)

    Parkhill, John A.; Head-Gordon, Martin

    2010-07-01

    We present the next stage in a hierarchy of local approximations to the complete active space self-consistent field (CASSCF) model in an active space of one active orbital per active electron based on the valence orbital-optimized coupled-cluster (VOO-CC) formalism. Following the perfect pairing (PP) model, which is exact for a single electron pair and extensive, and the perfect quadruples (PQ) model, which is exact for two pairs, we introduce the perfect hextuples (PH) model, which is exact for three pairs. PH is an approximation to the VOO-CC method truncated at hextuples containing all correlations between three electron pairs. While VOO-CCDTQ56 requires computational effort scaling with the 14th power of molecular size, PH requires only sixth power effort. Our implementation also introduces some techniques which reduce the scaling to fifth order and has been applied to active spaces roughly twice the size of the CASSCF limit without any symmetry. Because PH explicitly correlates up to six electrons at a time, it can faithfully model the static correlations of molecules with up to triple bonds in a size-consistent fashion and for organic reactions usually reproduces CASSCF with chemical accuracy. The convergence of the PP, PQ, and PH hierarchy is demonstrated on a variety of examples including symmetry breaking in benzene, the Cope rearrangement, the Bergman reaction, and the dissociation of fluorine.

  3. Macroscopic modeling and simulations of supercoiled DNA with bound proteins

    NASA Astrophysics Data System (ADS)

    Huang, Jing; Schlick, Tamar

    2002-11-01

    General methods are presented for modeling and simulating DNA molecules with bound proteins on the macromolecular level. These new approaches are motivated by the need for accurate and affordable methods to simulate slow processes (on the millisecond time scale) in DNA/protein systems, such as the large-scale motions involved in the Hin-mediated inversion process. Our approaches, based on the wormlike chain model of long DNA molecules, introduce inhomogeneous potentials for DNA/protein complexes based on available atomic-level structures. Electrostatically, we treat those DNA/protein complexes as sets of effective charges, optimized by our discrete surface charge optimization package, in which the charges are distributed on an excluded-volume surface that represents the macromolecular complex. We also introduce directional bending potentials as well as a non-identical bead hydrodynamics algorithm to further mimic the inhomogeneous effects caused by protein binding. These models thus account for basic elements of protein binding effects on DNA local structure but remain computationally tractable. To validate these models and methods, we reproduce various properties measured by both Monte Carlo methods and experiments. We then apply the developed models to study the Hin-mediated inversion system in long DNA. By simulating supercoiled, circular DNA with or without bound proteins, we observe significant effects of protein binding on global conformations and long-time dynamics of the DNA on the kilo-basepair length scale.

  4. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach

    PubMed Central

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research. PMID:27706185

  5. Elucidation of Prion Protein Conformational Changes Associated with Infectivity by Fluorescence Spectroscopy

    DTIC Science & Technology

    2006-06-01

    This work was supported by grants to MAM from the US Army Research (DAMD17-03-1-0342) and the NIH COBRE program (P20 RR020185-01). Authors...and more tractable model for PrPSc. This is further supported by an intriguing discovery made by Caughey and coworkers in their search to

  6. Using spatiotemporal statistical models to estimate animal abundance and infer ecological dynamics from survey counts

    USGS Publications Warehouse

    Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.

    2015-01-01

    Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.

  7. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
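    The flavour of the parameterization can be seen in a small illustrative sketch (not taken from the paper, which instead reduces to Integer Linear Programming Feasibility): for Subset Sum, searching over the multiplicity chosen for each distinct value confines the exponential work to the number of distinct integers rather than the length of the input.

      # Sketch: Subset Sum parameterized by the number of distinct integers.
      # The search enumerates how many copies of each distinct value to take,
      # so the combinatorial blow-up depends only on k = number of distinct values.
      from collections import Counter
      from itertools import product

      def subset_sum_by_distinct_values(multiset, target):
          counts = Counter(multiset)                     # value -> multiplicity
          values = list(counts)
          ranges = [range(counts[v] + 1) for v in values]
          for picks in product(*ranges):                 # choose a multiplicity per value
              if sum(p * v for p, v in zip(picks, values)) == target:
                  return dict(zip(values, picks))        # a witness solution
          return None

      print(subset_sum_by_distinct_values([3, 3, 3, 5, 5, 7], 13))   # e.g. {3: 1, 5: 2, 7: 0}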

  8. Novel Gbeta Mimic Kelch Proteins (Gpb1 and Gpb2) Connect G-Protein Signaling to Ras via Yeast Neurofibromin Homologs Ira1 and Ira2. A Model for Human NF1

    DTIC Science & Technology

    2007-03-01

    Saccharomyces cerevisiae and model fungus Cryptococcus neoformans as models to understand how the GAP activity of the yeast neurofibromin homologs, Ira1...another genetically tractable fungal model system, Cryptococcus neoformans, and identified two kelch repeat homologs that are involved in mating (Kem1 and...Kem2). To find kelch-repeat proteins involved in G protein signaling, Cryptococcus homologues of Gpb1/2, which interacts with and negatively

  9. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
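    A rough spectral sketch of the turning-bands idea (illustrative only; the covariance model, normalisation, and line count are placeholders, not the paper's formulation) assembles a two-dimensional Gaussian-like field from independent one-dimensional cosine processes along random directions.

      # Sketch of the turning-bands idea: reduce a 2D simulation to a sum of
      # one-dimensional random processes along random directions. The 1D process
      # used here is a simple random-cosine spectral simulator; the covariance
      # model and normalisation are illustrative only.
      import numpy as np

      def turning_bands_field(xy, n_lines=64, n_harmonics=50, corr_length=10.0, seed=0):
          rng = np.random.default_rng(seed)
          field = np.zeros(len(xy))
          for _ in range(n_lines):
              theta = rng.uniform(0.0, np.pi)
              direction = np.array([np.cos(theta), np.sin(theta)])
              t = xy @ direction                          # project points onto the line
              # 1D stationary process via superposed cosines with random frequency/phase.
              freqs = rng.exponential(1.0 / corr_length, n_harmonics)
              phases = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)
              line_process = np.sqrt(2.0 / n_harmonics) * np.cos(
                  2.0 * np.pi * freqs[None, :] * t[:, None] + phases[None, :]).sum(axis=1)
              field += line_process
          return field / np.sqrt(n_lines)                 # keep roughly unit variance

      # Simulate on a 100 x 100 grid of points.
      grid = np.stack(np.meshgrid(np.arange(100.0), np.arange(100.0)), axis=-1).reshape(-1, 2)
      z = turning_bands_field(grid)
      print(z.mean(), z.var())                            # approximately 0 and 1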

  10. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of an expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performances with realistic synthetic case studies.

  11. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species with Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.

  12. Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2014-01-01

    While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criterion (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320

  13. Using genetic data to estimate diffusion rates in heterogeneous landscapes.

    PubMed

    Roques, L; Walker, E; Franck, P; Soubeyrand, S; Klein, E K

    2016-08-01

    Having a precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially-explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.

  14. A Unified Framework for Association Analysis with Multiple Related Phenotypes

    PubMed Central

    Stephens, Matthew

    2013-01-01

    We consider the problem of assessing associations between multiple related outcome variables, and a single explanatory variable of interest. This problem arises in many settings, including genetic association studies, where the explanatory variable is genotype at a genetic variant. We outline a framework for conducting this type of analysis, based on Bayesian model comparison and model averaging for multivariate regressions. This framework unifies several common approaches to this problem, and includes both standard univariate and standard multivariate association tests as special cases. The framework also unifies the problems of testing for associations and explaining associations – that is, identifying which outcome variables are associated with genotype. This provides an alternative to the usual, but conceptually unsatisfying, approach of resorting to univariate tests when explaining and interpreting significant multivariate findings. The method is computationally tractable genome-wide for modest numbers of phenotypes (e.g. 5–10), and can be applied to summary data, without access to raw genotype and phenotype data. We illustrate the methods on both simulated examples and on a genome-wide association study of blood lipid traits, where we identify 18 potential novel genetic associations that were not identified by univariate analyses of the same data. PMID:23861737

  15. Multicolor bleach-rate imaging enlightens in vivo sterol transport

    PubMed Central

    Sage, Daniel

    2010-01-01

    Elucidation of in vivo cholesterol transport and its aberrations in cardiovascular diseases requires suitable model organisms and the development of appropriate monitoring technology. We recently presented a new approach to visualize transport of the intrinsically fluorescent sterol, dehydroergosterol (DHE), in the genetically tractable model organism Caenorhabditis elegans (C. elegans). DHE is structurally very similar to cholesterol and ergosterol, two sterols used by the sterol-auxotroph nematode. We developed a new computational method measuring fluorophore bleaching kinetics at every pixel position, which can be used as a fingerprint to distinguish rapidly bleaching DHE from slowly bleaching autofluorescence in the animals. Here, we introduce multicolor bleach-rate sterol imaging. By this method, we demonstrate that some DHE is targeted to a population of basolateral recycling endosomes (RE) labelled with GFP-tagged RME-1 (GFP-RME-1) in the intestine of both wild-type nematodes and mutant animals lacking intestinal gut granules (glo1-mutants). DHE-enriched intestinal organelles of glo1-mutants were decorated with GFP-RME-8, a marker for early endosomes. No co-localization was found with a lysosomal marker, GFP-LMP1. Our new methods hold great promise for further studies on endosomal sterol transport in C. elegans. PMID:20798830
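    A minimal sketch of the pixel-wise bleach-rate idea (not the published implementation; the synthetic stack and the rate threshold are placeholders) fits an exponential decay to the intensity trace at every pixel of an image stack and thresholds the fitted rate to separate rapidly bleaching DHE from slowly bleaching autofluorescence.

      # Sketch: per-pixel bleach-rate fingerprinting. For each pixel of a
      # (frames, rows, cols) image stack, fit log(intensity) ~ a - k * frame and use
      # the fitted rate k to separate fast-bleaching fluorophore from slow
      # autofluorescence. The rate threshold is an illustrative placeholder.
      import numpy as np

      def bleach_rate_map(stack, eps=1e-6):
          n_frames = stack.shape[0]
          frames = np.arange(n_frames, dtype=float)
          log_i = np.log(stack.reshape(n_frames, -1) + eps)       # (frames, pixels)
          # Vectorised least-squares slope of log-intensity versus frame index.
          frames_c = frames - frames.mean()
          slope = (frames_c[:, None] * (log_i - log_i.mean(axis=0))).sum(axis=0) / (frames_c**2).sum()
          return (-slope).reshape(stack.shape[1:])                # bleach rate per pixel

      # Example with a synthetic stack: fast-bleaching pixels in the left half.
      rows, cols, n_frames = 64, 64, 20
      true_rate = np.where(np.arange(cols) < cols // 2, 0.30, 0.02)
      stack = np.exp(-true_rate[None, None, :] * np.arange(n_frames)[:, None, None])
      stack = np.broadcast_to(stack, (n_frames, rows, cols)) + 0.0
      rates = bleach_rate_map(stack)
      fast_mask = rates > 0.1                                     # putative DHE-dominated pixels
      print(fast_mask[:, :5].all(), fast_mask[:, -5:].any())      # True False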

  16. A Methodology for Evaluating the Hygroscopic Behavior of Wood in Adaptive Building Skins using Motion Grammar

    NASA Astrophysics Data System (ADS)

    El-Dabaa, Rana; Abdelmohsen, Sherif

    2018-05-01

    The challenge in designing kinetic architecture lies in the limited application of computational design and human-computer interaction to the design of intelligent and interactive interfaces. The use of ‘programmable materials’, specifically fabricated composite materials that afford motion upon stimulation, is promising for low-cost, low-tech kinetic facade systems in buildings. Despite efforts to develop working prototypes, there has been no clear methodological framework for understanding and controlling the behavior of programmable materials or for using them for such purposes. This paper introduces a methodology for evaluating the motion acquired from programmed material – resulting from the hygroscopic behavior of wood – through ‘motion grammar’. Motion grammar allows desired motion control to be described in a computationally tractable way. The paper analyzes and evaluates motion parameters related to the hygroscopic properties and behavior of wood, and introduces a framework for tracking and controlling wood as a programmable material for kinetic architecture.

  17. Assessment of time-dependent density functional theory with the restricted excitation space approximation for excited state calculations of large systems

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-06-01

    The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.

  18. Mathematical modelling of animate and intentional motion.

    PubMed Central

    Rittscher, Jens; Blake, Andrew; Hoogs, Anthony; Stein, Gees

    2003-01-01

    Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article. PMID:12689374

  19. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications that are reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  20. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573
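    The style of calculation can be sketched as a chain of multiplicative factors from analyte concentration to detected photoelectrons; all efficiencies and rates below are illustrative placeholders rather than values from the paper.

      # Sketch of an absolute fluorescence calibration-curve calculation: a chain of
      # multiplicative factors from analyte concentration to detected photoelectrons.
      # All numerical values below are illustrative placeholders.
      AVOGADRO = 6.022e23

      def detected_photoelectrons(conc_molar, volume_liters,
                                  excitation_rate=1e3,          # photons absorbed per molecule per second
                                  quantum_yield=0.9,
                                  collection_efficiency=0.05,   # solid angle plus optics losses
                                  filter_transmission=0.8,
                                  detector_qe=0.6,
                                  integration_time_s=1.0):
          molecules = conc_molar * volume_liters * AVOGADRO
          emitted = molecules * excitation_rate * quantum_yield * integration_time_s
          return emitted * collection_efficiency * filter_transmission * detector_qe

      # Calibration curve: signal versus concentration for a 100 nL sample volume.
      for conc in (1e-9, 1e-8, 1e-7, 1e-6):
          print(f"{conc:.0e} M -> {detected_photoelectrons(conc, 100e-9):.3e} photoelectrons")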

  1. Explaining evolution via constrained persistent perfect phylogeny

    PubMed Central

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite site assumption is violated because of, e.g., back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is the fact that a character is allowed to return back to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows us to efficiently explain data that do not conform to the classical perfect phylogeny model. PMID:25572381

  2. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion

    PubMed Central

    Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937

  3. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion.

    PubMed

    Hamsici, Onur C; Gotardo, Paulo F U; Martinez, Aleix M

    2012-01-01

    Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function.

  4. The liquid⟷amorphous transition and the high pressure phase diagram of carbon

    NASA Astrophysics Data System (ADS)

    Robinson, David R.; Wilson, Mark

    2013-04-01

    The phase diagram of carbon is mapped to high pressure using a computationally-tractable potential model. The use of a relatively simple (Tersoff-II) potential model allows a large range of phase space to be explored. The coexistence (melting) curve for the diamond crystal/liquid dyad is mapped directly by modelling the solid/liquid interfaces. The melting curve is found to be re-entrant and belongs to a conformal class of diamond/liquid coexistence curves. On supercooling the liquid a phase transition to a tetrahedral amorphous form (ta-C) is observed. The liquid ⟷ amorphous coexistence curve is mapped onto the pT plane and is found to also be re-entrant. The entropy changes for both melting and the amorphous ⟶ liquid transitions are obtained from the respective coexistence curves and the associated changes in molar volume. The structural change on amorphization is analysed at different points on the coexistence curve including for transitions that are both isochoric and isocoordinate (no change in nearest-neighbour coordination number). The conformal nature of the melting curve is highlighted with respect to the known behaviour of Si. The relationship of the observed liquid/amorphous coexistence curve to the Si low- and high-density amorphous (LDA/HDA) transition is discussed.

  5. A Hybrid Vortex Sheet / Point Vortex Model for Unsteady Separated Flows

    NASA Astrophysics Data System (ADS)

    Darakananda, Darwin; Eldredge, Jeff D.; Colonius, Tim; Williams, David R.

    2015-11-01

    The control of separated flow over an airfoil is essential for obtaining lift enhancement, drag reduction, and the overall ability to perform high agility maneuvers. In order to develop reliable flight control systems capable of realizing agile maneuvers, we need a low-order aerodynamics model that can accurately predict the force response of an airfoil to arbitrary disturbances and/or actuation. In the present work, we integrate vortex sheets and variable strength point vortices into a method that is able to capture the formation of coherent vortex structures while remaining computationally tractable for control purposes. The role of the vortex sheet is limited to tracking the dynamics of the shear layer immediately behind the airfoil. When parts of the sheet develop into large scale structures, those sections are replaced by variable strength point vortices. We prevent the vortex sheets from growing indefinitely by truncating the tips of the sheets and transfering their circulation into nearby point vortices whenever the length of sheet exceeds a threshold. We demonstrate the model on a variety of canonical problems, including pitch-up and impulse translation of an airfoil at various angles of attack. Support by the U.S. Air Force Office of Scientific Research (FA9550-14-1-0328) with program manager Dr. Douglas Smith is gratefully acknowledged.

  6. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    PubMed

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefitting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  7. Cutoff size need not strongly influence molecular dynamics results for solvated polypeptides.

    PubMed

    Beck, David A C; Armen, Roger S; Daggett, Valerie

    2005-01-18

    The correct treatment of van der Waals and electrostatic nonbonded interactions in molecular force fields is essential for performing realistic molecular dynamics (MD) simulations of solvated polypeptides. The most computationally tractable treatment of nonbonded interactions in MD utilizes a spherical distance cutoff (typically 8-12 Å) to reduce the number of pairwise interactions. In this work, we assess three spherical atom-based cutoff approaches for use with all-atom explicit solvent MD: abrupt truncation, a CHARMM-style electrostatic shift truncation, and our own force-shifted truncation. The chosen system for this study is an end-capped 17-residue alanine-based alpha-helical peptide, selected because of its use in previous computational and experimental studies. We compare the time-averaged helical content calculated from these MD trajectories with experiment. We also examine the effect of varying the cutoff treatment and distance on energy conservation. We find that the abrupt truncation approach is pathological in its inability to conserve energy. The CHARMM-style shift truncation performs quite well but suffers from energetic instability. On the other hand, the force-shifted spherical cutoff method conserves energy, correctly predicts the experimental helical content, and shows convergence in simulation statistics as the cutoff is increased. This work demonstrates that by using proper and rigorous techniques, it is possible to correctly model polypeptide dynamics in solution with a spherical cutoff. The inherent computational advantage of spherical cutoffs over Ewald summation (and related) techniques is essential in accessing longer MD time scales.
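    The force-shifted idea can be illustrated with a short sketch using a generic shifted-force form applied to a Coulomb pair term (not the authors' production code): both the potential and its derivative are brought smoothly to zero at the cutoff, which is what preserves energy conservation.

      # Sketch of a force-shifted spherical cutoff applied to a Coulomb pair term:
      # V_sf(r) = V(r) - V(rc) - (r - rc) * dV/dr|_rc for r < rc, and 0 beyond the
      # cutoff, so both the energy and the force vanish smoothly at rc.
      # The Coulomb constant is in kcal*Angstrom/(mol*e^2); charges are illustrative.
      import numpy as np

      def coulomb(r, qi, qj, k=332.0636):
          return k * qi * qj / r

      def coulomb_force_mag(r, qi, qj, k=332.0636):       # equals -dV/dr
          return k * qi * qj / r**2

      def shifted_force_energy(r, qi, qj, rc=10.0):
          r = np.asarray(r, dtype=float)
          v_rc = coulomb(rc, qi, qj)
          dvdr_rc = -coulomb_force_mag(rc, qi, qj)
          energy = coulomb(r, qi, qj) - v_rc - (r - rc) * dvdr_rc
          return np.where(r < rc, energy, 0.0)

      r = np.linspace(2.0, 12.0, 6)
      print(shifted_force_energy(r, +0.4, -0.4))          # goes smoothly to zero at rc = 10 Å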

  8. Strategies for Global Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Paul; Morris, Robert; Khatib, Lina; Ramakrishnan, Sailesh

    2004-01-01

    A temporal reasoning problem can often be naturally characterized as a collection of constraints with associated local preferences for times that make up the admissible values for those constraints. Globally preferred solutions to such problems emerge as a result of well-defined operations that compose and order temporal assignments. The overall objective of this work is a characterization of different notions of global preference, and to identify tractable sub-classes of temporal reasoning problems incorporating these notions. This paper extends previous results by refining the class of useful notions of global temporal preference that are associated with problems that admit of tractable solution techniques. This paper also answers the hitherto open question of whether problems that seek solutions that are globally preferred from a Utilitarian criterion for global preference can be found tractably.

  9. From drug to protein: using yeast genetics for high-throughput target discovery.

    PubMed

    Armour, Christopher D; Lum, Pek Yee

    2005-02-01

    The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.

  10. Anti-Lemons: School Reputation and Educational Quality. NBER Working Paper No. 15112

    ERIC Educational Resources Information Center

    MacLeod, W. Bentley; Urquiola, Miguel

    2009-01-01

    Friedman (1962) argued that a free market in which schools compete based upon their reputation would lead to an efficient supply of educational services. This paper explores this issue by building a tractable model in which rational individuals go to school and accumulate skill valued in a perfectly competitive labor market. To this it adds one…

  11. Distributed Decision Making in a Dynamic Network Environment

    DTIC Science & Technology

    1990-01-01

    protocols, particularly when traffic arrival statistics are varying or unknown, and loads are high. Both nonpreemptive and preemptive repeat disciplines are...The simulation model allows general value functions, continuous time operation, and preemptive or nonpreemptive service. For reasons of tractability... nonpreemptive LIFO, (4) nonpreemptive LIFO with discarding, (5) nonpreemptive HOL, (6) nonpreemptive HOL with discarding, (7) preemptive repeat HOL, (8

  12. Bioinformatics in translational drug discovery.

    PubMed

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  13. RCK: accurate and efficient inference of sequence- and structure-based protein-RNA binding models from RNAcompete data.

    PubMed

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-06-15

    Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, Deepbind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and Deepbind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Contact: bab@mit.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
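    The k-mer flavour of the model can be sketched as follows (sequence-only, with made-up weights; RCK itself additionally weights each k-mer by the probability of its structural context): a probe's predicted affinity is the sum of learned weights over its overlapping k-mers.

      # Sketch of a k-mer binding model in the spirit of RCK (sequence part only):
      # the predicted affinity of a probe is a sum of learned weights over its
      # overlapping k-mers. The weights below are illustrative placeholders.
      from itertools import product

      K = 3
      ALPHABET = "ACGU"

      def score_probe(sequence, kmer_weights):
          return sum(kmer_weights.get(sequence[i:i + K], 0.0)
                     for i in range(len(sequence) - K + 1))

      # Placeholder weights: pretend the protein prefers U-rich 3-mers.
      kmer_weights = {"".join(kmer): kmer.count("U") * 0.5
                      for kmer in product(ALPHABET, repeat=K)}

      for probe in ("ACGUACGU", "UUUUUUUU", "GGGCGCGC"):
          print(probe, score_probe(probe, kmer_weights))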

  14. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
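    A stripped-down sketch of the Monte Carlo estimator (single epoch, Gaussian position uncertainties, no orbit propagation, so it omits the GPU parallelisation and close-approach interpolation described above) samples both objects' positions and counts the fraction of samples whose separation falls below the combined collision radius.

      # Sketch of Monte Carlo collision probability at a single epoch: sample both
      # objects' positions from Gaussian uncertainties and count how often the
      # separation is below the combined hard-body radius. The covariances and
      # radius below are illustrative; the real tool propagates samples in time.
      import numpy as np

      def mc_collision_probability(mean1, cov1, mean2, cov2, combined_radius,
                                   n_samples=1_000_000, seed=0):
          rng = np.random.default_rng(seed)
          pos1 = rng.multivariate_normal(mean1, cov1, n_samples)
          pos2 = rng.multivariate_normal(mean2, cov2, n_samples)
          separation = np.linalg.norm(pos1 - pos2, axis=1)
          hits = np.count_nonzero(separation < combined_radius)
          p = hits / n_samples
          sigma = np.sqrt(p * (1.0 - p) / n_samples)      # binomial standard error
          return p, sigma

      mean1 = np.zeros(3)                                 # km
      mean2 = np.array([0.5, 0.0, 0.0])
      cov1 = np.diag([0.2, 0.1, 0.1])**2
      cov2 = np.diag([0.3, 0.1, 0.1])**2
      print(mc_collision_probability(mean1, cov1, mean2, cov2, combined_radius=0.02))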

  15. Phase diagram for the Winfree model of coupled nonlinear oscillators.

    PubMed

    Ariaratnam, J T; Strogatz, S H

    2001-05-07

    In 1967 Winfree proposed a mean-field model for the spontaneous synchronization of chorusing crickets, flashing fireflies, circadian pacemaker cells, or other large populations of biological oscillators. Here we give the first bifurcation analysis of the model, for a tractable special case. The system displays rich collective dynamics as a function of the coupling strength and the spread of natural frequencies. Besides incoherence, frequency locking, and oscillator death, there exist hybrid solutions that combine two or more of these states. We present the phase diagram and derive several of the stability boundaries analytically.

  16. Phase Diagram for the Winfree Model of Coupled Nonlinear Oscillators

    NASA Astrophysics Data System (ADS)

    Ariaratnam, Joel T.; Strogatz, Steven H.

    2001-05-01

    In 1967 Winfree proposed a mean-field model for the spontaneous synchronization of chorusing crickets, flashing fireflies, circadian pacemaker cells, or other large populations of biological oscillators. Here we give the first bifurcation analysis of the model, for a tractable special case. The system displays rich collective dynamics as a function of the coupling strength and the spread of natural frequencies. Besides incoherence, frequency locking, and oscillator death, there exist hybrid solutions that combine two or more of these states. We present the phase diagram and derive several of the stability boundaries analytically.
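    The model is compact enough to integrate directly; a minimal sketch, assuming the standard pulse and sensitivity functions P(θ) = 1 + cos θ and R(θ) = -sin θ (which may differ in detail from the paper's exact choices), is given below.

      # Sketch of the Winfree mean-field model:
      # theta_i' = omega_i + (kappa/N) * R(theta_i) * sum_j P(theta_j),
      # with pulse P(theta) = 1 + cos(theta) and response R(theta) = -sin(theta).
      # Sweeping kappa and the frequency spread gamma explores the qualitative
      # regions of the phase diagram (incoherence, locking, oscillator death).
      import numpy as np
      from scipy.integrate import solve_ivp

      def winfree_rhs(t, theta, omega, kappa):
          mean_pulse = np.mean(1.0 + np.cos(theta))
          return omega + kappa * (-np.sin(theta)) * mean_pulse

      def simulate(n=500, kappa=0.7, gamma=0.1, t_end=200.0, seed=0):
          rng = np.random.default_rng(seed)
          omega = rng.uniform(1.0 - gamma, 1.0 + gamma, n)     # spread of natural frequencies
          theta0 = rng.uniform(0.0, 2.0 * np.pi, n)
          sol = solve_ivp(winfree_rhs, (0.0, t_end), theta0, args=(omega, kappa), max_step=0.05)
          # Order parameter |<exp(i*theta)>| at the end of the run:
          # near 1 suggests locking or death, near 0 suggests incoherence.
          return np.abs(np.mean(np.exp(1j * sol.y[:, -1])))

      for kappa in (0.1, 0.4, 0.8):
          print(kappa, simulate(kappa=kappa))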

  17. A novel model to predict gas-phase hydroxyl radical oxidation kinetics of polychlorinated compounds.

    PubMed

    Luo, Shuang; Wei, Zongsu; Spinney, Richard; Yang, Zhihui; Chai, Liyuan; Xiao, Ruiyang

    2017-04-01

    In this study, a novel model based on aromatic meta-substituent grouping was presented to predict the second-order rate constants (k) for OH oxidation of PCBs in the gas phase. Since the oxidation kinetics are dependent on the chlorination degree and position, we hypothesized that it may be more accurate for k value prediction if we group PCB congeners based on substitution positions (i.e., ortho (o), meta (m), and para (p)). To test this hypothesis, we examined the correlation of polarizability (α), a quantum chemical based descriptor for k values, with an empirical Hammett constant (σ+) at each substitution position. Our result shows that α is highly linearly correlated to ∑σ+(o,m,p) based on aromatic meta-substituents, leading to the grouping-based predictive model. With the new model, the calculated k values exhibited an excellent agreement with experimental measurements, and greater predictive power than the quantum chemical based quantitative structure activity relationship (QSAR) model. Further, the relationship of α and ∑σ+(o,m,p) for PCDD congeners, together with the highest occupied molecular orbital (HOMO) distribution, was used to validate the aromatic meta-substituent grouping method. This newly developed model features a combination of the good predictability of a quantum chemical based QSAR model and the simplicity of the Hammett relationship, showing great potential for fast and computationally tractable prediction of k values for gas-phase OH oxidation of polychlorinated compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
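    The grouping-based prediction step can be sketched generically (the descriptor and rate values below are made-up placeholders used only to show the fitting step, not data from the study): within a substitution-pattern group, log k is fitted as a linear function of the descriptor and then evaluated for an unmeasured congener.

      # Sketch of the grouping-based predictive model: within each substitution-
      # pattern group, log k is taken to vary linearly with a descriptor (the
      # polarizability alpha, itself linear in the summed Hammett constants).
      # The numbers below are made-up placeholders purely to show the fitting step.
      import numpy as np

      def fit_group_model(descriptor, log_k):
          slope, intercept = np.polyfit(descriptor, log_k, 1)
          return lambda alpha: slope * np.asarray(alpha) + intercept

      # Hypothetical (alpha, log k) pairs for one meta-substituent group.
      alpha = np.array([14.1, 15.0, 15.8, 16.7, 17.5])
      log_k = np.array([-11.6, -11.9, -12.2, -12.5, -12.8])
      predict = fit_group_model(alpha, log_k)
      print(predict([16.2]))          # predicted log k for an unmeasured congener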

  18. Using Machine Learning in Adversarial Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren Leon Davis

    Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers’ response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers’ reaction, with the goal of computing optimal moving target defense. One important challenge is to construct a realistic model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.

  19. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR). The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.

  20. Efficient estimation of the maximum metabolic productivity of batch systems

    DOE PAGES

    St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.

    2017-01-31

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.

  1. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation resulting from the use of a finite-word-length (FWL) block-floating-point representation scheme was analyzed. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced, and a method for computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.

  2. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
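
    As a concrete illustration of the conjugate Bayesian counting models described above, the sketch below (Python) performs a Gamma-Poisson update of the event rate on a single edge and flags counts that fall in the tail of the posterior predictive (negative binomial) distribution. The prior hyperparameters, the flagging threshold, and the synthetic count stream are illustrative assumptions, not the thesis's exact models.

    from scipy.stats import nbinom

    a, b = 1.0, 1.0          # Gamma(shape=a, rate=b) prior on the Poisson rate (assumed)
    p_threshold = 1e-3       # predictive tail probability below which a count is flagged

    def score_and_update(a, b, count):
        """Score the new count against the current posterior predictive
        (negative binomial with n=a, p=b/(b+1)), then update the posterior."""
        tail_p = nbinom.sf(count - 1, a, b / (b + 1.0))   # P(next count >= observed count)
        return a + count, b + 1.0, tail_p

    stream = [3, 2, 4, 1, 3, 2, 45]   # synthetic per-interval edge counts; the last is a burst
    for c in stream:
        a, b, score = score_and_update(a, b, c)
        if score < p_threshold:
            print(f"count {c} flagged as anomalous (tail probability {score:.2e})")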

  3. Active Ambiguity Reduction: An Experiment Design Approach to Tractable Qualitative Reasoning.

    DTIC Science & Technology

    1987-04-20

    Shankar A. Rajamoney and Gerald F. DeJong, Artificial Intelligence Research Group, Coordinated Science Laboratory. Only fragmentary citation text is available for this record; it references "Representations of Knowledge in a Mechanics Problem-Solver" (Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA), the International Joint Conference on Artificial Intelligence (Tokyo, Japan, 1979), and [de Kleer84] J. de Kleer and J. S. Brown, "A Qualitative Physics Based on ..."

  4. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    PubMed

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  5. Estimating true evolutionary distances under the DCJ model.

    PubMed

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  6. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
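
    The abstract contrasts the authors' unconstrained dependence formulation with the standard constrained formulation. As background for that comparison, the sketch below (Python) solves the conventional doubly-constrained entropy-maximizing (gravity) trip-distribution model by iterative proportional fitting; it is not the dependence-coefficient model of the paper, and the cost matrix and deterrence parameter are assumed values.

    import numpy as np

    def entropy_trip_distribution(origins, destinations, cost, beta=0.1, iters=200):
        """Doubly-constrained entropy-maximizing model:
        T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
        with balancing factors A_i, B_j found by iterative proportional fitting."""
        deterrence = np.exp(-beta * cost)
        A = np.ones(len(origins))
        B = np.ones(len(destinations))
        for _ in range(iters):
            A = 1.0 / (deterrence @ (B * destinations))
            B = 1.0 / (deterrence.T @ (A * origins))
        return (A * origins)[:, None] * (B * destinations)[None, :] * deterrence

    O = np.array([400.0, 600.0])             # trips produced at each origin
    D = np.array([300.0, 500.0, 200.0])      # trips attracted to each destination
    c = np.array([[1.0, 2.0, 3.0],
                  [2.5, 1.0, 2.0]])          # travel cost matrix (assumed units)
    T = entropy_trip_distribution(O, D, c)
    print(np.round(T, 1))                    # row and column sums match O and D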

  7. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP

    PubMed Central

    Staras, Kevin

    2016-01-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture. PMID:27760125

  8. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
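
    As a minimal illustration of MAP stimulus decoding under a point-process encoding model with a concave log likelihood, the sketch below (Python) decodes a one-dimensional stimulus time series from Poisson spike counts generated by an exponential-nonlinearity (GLM-style) encoding model, using gradient ascent under an independent Gaussian prior. The encoding weights, prior, and step size are simplified assumptions rather than the paper's full framework.

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 50, 8                         # time bins, neurons
    w = rng.normal(0, 1, size=N)         # per-neuron stimulus weights (assumed known encoder)
    b = np.full(N, 0.5)                  # baseline log firing rates
    prior_var = 1.0                      # independent Gaussian prior on the stimulus

    # Simulate a smooth stimulus and the resulting Poisson spike counts.
    x_true = np.convolve(rng.normal(0, 1, T), np.ones(5) / 5, mode="same")
    y = rng.poisson(np.exp(np.outer(w, x_true) + b[:, None]))

    # MAP decoding: the log posterior is concave in x, so gradient ascent converges.
    x = np.zeros(T)
    for _ in range(500):
        lam = np.exp(np.outer(w, x) + b[:, None])
        grad = w @ (y - lam) - x / prior_var     # likelihood term plus Gaussian prior term
        x += 0.01 * grad

    print("correlation with true stimulus:", round(np.corrcoef(x, x_true)[0, 1], 3))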

  9. Transient probabilities for queues with applications to hospital waiting list management.

    PubMed

    Joy, Mark; Jones, Simon

    2005-08-01

    In this paper we study queuing systems within the NHS. Recently imposed government performance targets lead NHS executives to investigate and instigate alternative management strategies, thereby imposing structural changes on the queues. Under such circumstances, it is most unlikely that such systems are in equilibrium. It is crucial, in our opinion, to recognise this state of affairs in order to make a balanced assessment of the role of queue management in the modern NHS. From a mathematical perspective it should be emphasised that measures of the state of a queue based upon the assumption of statistical equilibrium (a pervasive methodology in the study of queues) are simply wrong in the above scenario. To base strategic decisions around such ideas is therefore highly questionable, and it is one of the purposes of this paper to offer alternatives: we present some recent research whose results generate performance measures and measures of risk, for example, of waiting times growing unacceptably large. We emphasise that these results concern the transient behaviour of the queueing model; there is no assumption of statistical equilibrium. We also demonstrate that our results are computationally tractable.
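
    The paper's specific results are not reproduced in the abstract, but transient (non-equilibrium) queue-length probabilities of simple Markovian queues can be computed directly. The sketch below (Python) uses uniformization for an M/M/1 queue truncated at a finite capacity; the arrival and service rates, capacity, and horizon are arbitrary assumptions chosen only to illustrate the calculation.

    import numpy as np
    from scipy.stats import poisson

    def transient_mm1_probs(lam, mu, K, t, p0):
        """Transient state probabilities of an M/M/1/K queue at time t via
        uniformization: p(t) = sum_n Poisson(Lambda*t; n) * p0 @ P^n."""
        Q = np.zeros((K + 1, K + 1))              # generator of the birth-death chain
        for i in range(K):
            Q[i, i + 1] = lam                     # arrival
            Q[i + 1, i] = mu                      # service completion
        np.fill_diagonal(Q, -Q.sum(axis=1))
        Lambda = -Q.diagonal().min()              # uniformization rate
        P = np.eye(K + 1) + Q / Lambda            # discrete-time transition kernel
        p, term = np.zeros(K + 1), np.array(p0, dtype=float)
        n, weight = 0, poisson.pmf(0, Lambda * t)
        while weight > 1e-12 or n < Lambda * t:
            p += weight * term
            term = term @ P
            n += 1
            weight = poisson.pmf(n, Lambda * t)
        return p

    probs = transient_mm1_probs(lam=0.9, mu=1.0, K=50, t=20.0, p0=np.eye(51)[0])
    print("P(more than 20 in the queue at t=20):", round(probs[21:].sum(), 4))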

  10. Black holes in quasi-topological gravity and conformal couplings

    NASA Astrophysics Data System (ADS)

    Chernicoff, Mariano; Fierro, Octavio; Giribet, Gaston; Oliva, Julio

    2017-02-01

    Lovelock theory of gravity provides a tractable model to investigate the effects of higher-curvature terms in the context of AdS/CFT. Yielding second order, ghost-free field equations, this theory represents a minimal setup in which higher-order gravitational couplings in asymptotically Anti-de Sitter (AdS) spaces, including black holes, can be solved analytically. This, however, has an obvious limitation: in dimensions lower than seven, the contribution from cubic or higher curvature terms is merely topological. Therefore, in order to go beyond quadratic order and study higher terms in AdS5 analytically, one is compelled to look for other toy models. One such model is the so-called quasi-topological gravity, which, despite being a higher-derivative theory, provides a tractable setup with R^3 and R^4 terms. In this paper, we investigate AdS5 black holes in quasi-topological gravity. We consider the theory conformally coupled to matter and in the presence of Abelian gauge fields. We show that charged black holes in AdS5 which, in addition, exhibit a backreaction of the matter fields on the geometry can be found explicitly in this theory. These solutions generalize the black hole solution of quasi-topological gravity and exist in a region of the parameter space consistent with the constraints coming from causality and other consistency conditions. They have finite conserved charges and exhibit non-trivial thermodynamical properties.

  11. A Scalable Computational Framework for Establishing Long-Term Behavior of Stochastic Reaction Networks

    PubMed Central

    Khammash, Mustafa

    2014-01-01

    Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191

  12. Anharmonic interatomic force constants and thermal conductivity from Grüneisen parameters: An application to graphene

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Gan, Chee Kwan

    2017-07-01

    Phonon-mediated thermal conductivity, which is of great technological relevance, arises fundamentally from the anharmonicity of interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.

  13. Genomic Sequence and Experimental Tractability of a New Decapod Shrimp Model, Neocaridina denticulata

    PubMed Central

    Kenny, Nathan J.; Sin, Yung Wa; Shen, Xin; Zhe, Qu; Wang, Wei; Chan, Ting Fung; Tobe, Stephen S.; Shimeld, Sebastian M.; Chu, Ka Hou; Hui, Jerome H. L.

    2014-01-01

    The speciose Crustacea is the largest subphylum of arthropods on the planet after the Insecta. To date, however, the only publicly available sequenced crustacean genome is that of the water flea, Daphnia pulex, a member of the Branchiopoda. While Daphnia is a well-established ecotoxicological model, a previous study showed that one-third of the genes contained in its genome are lineage-specific and could not be identified in any other metazoan genomes. To better understand the genomic evolution of crustaceans and arthropods, we have sequenced the genome of a novel shrimp model, Neocaridina denticulata, and tested its experimental malleability. A library of 170-bp nominal fragment size was constructed from DNA of a starved single adult and sequenced using the Illumina HiSeq2000 platform. Core eukaryotic genes, the mitochondrial genome, developmental patterning genes (such as Hox) and microRNA processing pathway genes are all present in this animal, suggesting it has not undergone massive genomic loss. Comparison with the published genome of Daphnia pulex has allowed us to reveal 3750 genes that are indeed specific to the lineage containing malacostracans and branchiopods, rather than Daphnia-specific (E-value: 10^-6). We also show the experimental tractability of N. denticulata, which, together with the genomic resources presented here, make it an ideal model for a wide range of further aquacultural, developmental, ecotoxicological, food safety, genetic, hormonal, physiological and reproductive research, allowing better understanding of the evolution of crustaceans and other arthropods. PMID:24619275

  14. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    NASA Astrophysics Data System (ADS)

    Jha, Pradeep Kumar

    Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present scheme predicts results which are in good agreement with published experimental results and significantly reduces the computational cost involved in modelling turbulent diffusion flames, both in terms of storage and processing time.

  15. Theoretical study of the hyperfine parameters of OH

    NASA Technical Reports Server (NTRS)

    Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.

    1991-01-01

    In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except the oxygen spin density, b_F(O), are sufficiently tractable to allow attention to be concentrated on the computational requirements for an accurate determination of b_F(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields b_F values which are in fair agreement with the FCI results.

  16. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-01-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J 2 perturbation and lunar gravity included for a 500 revolution transfer.
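
    The optimization itself (differential dynamic programming) is beyond an abstract-sized example, but the change of independent variable is easy to illustrate. The sketch below (Python) propagates two-body motion with a small tangential thrust using a Sundman-type relation dt/dθ = r²/h, so that the integration variable θ is an orbit angle advancing by roughly 2π per revolution. The initial state, thrust level, and this particular form of the transformation are assumptions for illustration, not the paper's exact formulation.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

    def eom_orbit_angle(theta, y, thrust_acc=1e-7):
        """Derivatives of the state [r(3), v(3), t] with respect to an orbit
        angle theta, using the Sundman-type factor dt/dtheta = r**2 / |r x v|."""
        r, v, t = y[:3], y[3:6], y[6]
        rnorm = np.linalg.norm(r)
        accel = -MU * r / rnorm**3 + thrust_acc * v / np.linalg.norm(v)  # gravity + tangential thrust
        dt_dtheta = rnorm**2 / np.linalg.norm(np.cross(r, v))
        return np.concatenate((v, accel, [1.0])) * dt_dtheta

    y0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0, 0.0])     # circular LEO state plus t=0 (assumed)
    sol = solve_ivp(eom_orbit_angle, (0.0, 100 * 2 * np.pi), y0, rtol=1e-9, atol=1e-9)
    print("elapsed time [days]:", sol.y[6, -1] / 86400.0,
          "final radius [km]:", np.linalg.norm(sol.y[:3, -1]))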

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
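
    As an illustration of the Chebyshev-based tightening mentioned above, the sketch below (Python) turns a per-node voltage chance constraint Pr(v ≤ v_max) ≥ 1 − ε, with v depending linearly on the forecast errors, into a deterministic constraint using the one-sided Chebyshev (Cantelli) bound, which holds for any error distribution with the given mean and covariance. The sensitivity matrix, error statistics, and ε are placeholder values, not numbers from the paper.

    import numpy as np

    def cantelli_tightened_margin(v_nom, A, err_mean, err_cov, v_max, eps=0.05):
        """Deterministic surrogate for Pr(v <= v_max) >= 1 - eps when v = v_nom + A @ e
        and only the mean/covariance of e are known: require
        mean(v) + sqrt((1 - eps) / eps) * std(v) <= v_max  (Cantelli bound)."""
        k = np.sqrt((1.0 - eps) / eps)
        v_mean = v_nom + A @ err_mean
        v_std = np.sqrt(np.einsum("ij,jk,ik->i", A, err_cov, A))   # sqrt of diag(A Sigma A^T)
        return v_max - (v_mean + k * v_std)        # nonnegative => constraint certified

    # Toy 3-node feeder with 2 uncertain PV injections (all values assumed).
    v_nom = np.array([1.00, 1.02, 1.03])
    A = np.array([[0.01, 0.00],
                  [0.02, 0.01],
                  [0.01, 0.03]])
    margin = cantelli_tightened_margin(v_nom, A, np.zeros(2), np.diag([0.5, 0.8]), v_max=1.05)
    print("per-node margins (negative means curtailment or control is needed):", margin.round(4))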

  18. Identifying High Potential Well Targets with 3D Seismic and Mineralogy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R. J.

    2015-10-30

    Seismic reflection is the primary tool used in petroleum exploration and production, but its use in geothermal exploration is less standard, in part due to cost but also due to the challenges in identifying the highly-permeable zones essential for economic hydrothermal systems [e.g. Louie et al., 2011; Majer, 2003]. Newer technology, such as wireless sensors and low-cost high performance computing, has helped reduce the cost and effort needed to conduct 3D surveys. The second difficulty, identifying permeable zones, has been less tractable so far. Here we report on the use of seismic attributes from a 3D seismic survey to identify and map permeable zones in a hydrothermal area.

  19. Pathway Design, Engineering, and Optimization.

    PubMed

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, and existing or even engineered enzymes with new functions, into tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by different complications, ranging from an inherent failure of the pathway to cell perturbations. Pursuing ways to overcome these shortcomings, a wide variety of strategies have been developed. This chapter will review the computational algorithms and experimental tools used to design efficient metabolic routes, and to construct and optimize biochemical pathways to produce chemicals of high interest.

  20. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-06-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J 2 perturbation and lunar gravity included for a 500 revolution transfer.

  1. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  2. The geometric signature: Quantifying landslide-terrain types from digital elevation models

    USGS Publications Warehouse

    Pike, R.J.

    1988-01-01

    Topography of various types and scales can be fingerprinted by computer analysis of altitude matrices (digital elevation models, or DEMs). The critical analytic tool is the geometric signature, a set of measures that describes topographic form well enough to distinguish among geomorphically disparate landscapes. Different surficial processes create topography with diagnostic forms that are recognizable in the field. The geometric signature abstracts those forms from contour maps or their DEMs and expresses them numerically. This multivariate characterization enables once-intractable problems to be addressed. The measures that constitute a geometric signature express different but complementary attributes of topographic form. Most parameters used here are statistical estimates of central tendency and dispersion for five major categories of terrain geometry: altitude, altitude variance spectrum, slope between slope reversals, and slope and its curvature at fixed slope lengths. As an experimental application of geometric signatures, two mapped terrain types associated with different processes of shallow landsliding in Marin County, California, were distinguished consistently by a 17-variable description of topography from 21×21 DEMs (30-m grid spacing). The small matrix is a statistical window that can be used to scan large DEMs by computer, thus potentially automating the mapping of contrasting terrain types. The two types in Marin County host either (1) slow slides: earth flows and slump-earth flows, or (2) rapid flows: debris avalanches and debris flows. The signature approach should adapt to terrain taxonomy and mapping in other areas, where conditions differ from those in Central California. © 1988 International Association for Mathematical Geology.
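
    As a rough illustration of computing signature-style measures over a DEM window (the paper's specific 17-variable signature is not reproduced here), the sketch below (Python) derives altitude, slope, and curvature statistics from a 21×21 elevation matrix by finite differences. The synthetic surface, the 30-m spacing, and the particular statistics are assumptions.

    import numpy as np

    def window_signature(dem, spacing=30.0):
        """Central tendency and dispersion of altitude, slope, and a simple
        curvature proxy for a single DEM window (a crude subset of a geometric signature)."""
        gy, gx = np.gradient(dem, spacing)                    # altitude derivatives
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))       # slope in degrees
        gyy, _ = np.gradient(gy, spacing)
        _, gxx = np.gradient(gx, spacing)
        curvature = gxx + gyy                                 # Laplacian as a curvature proxy
        return {
            "altitude_mean": dem.mean(), "altitude_std": dem.std(),
            "slope_mean": slope.mean(), "slope_std": slope.std(),
            "curvature_mean": curvature.mean(), "curvature_std": curvature.std(),
        }

    rng = np.random.default_rng(0)
    dem = np.cumsum(np.cumsum(rng.normal(0, 1, (21, 21)), axis=0), axis=1)  # synthetic 21x21 window
    print(window_signature(dem))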

  3. The physical hydrogeology of ore deposits

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Appold, M.S.

    2012-01-01

    Hydrothermal ore deposits represent a convergence of fluid flow, thermal energy, and solute flux that is hydrogeologically unusual. From the hydrogeologic perspective, hydrothermal ore deposition represents a complex coupled-flow problem—sufficiently complex that physically rigorous description of the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes (THMC modeling) continues to challenge our computational ability. Though research into these coupled behaviors has found only a limited subset to be quantitatively tractable, it has yielded valuable insights into the workings of hydrothermal systems in a wide range of geologic environments including sedimentary, metamorphic, and magmatic. Examples of these insights include the quantification of likely driving mechanisms, rates and paths of fluid flow, ore-mineral precipitation mechanisms, longevity of hydrothermal systems, mechanisms by which hydrothermal fluids acquire their temperature and composition, and the controlling influence of permeability and other rock properties on hydrothermal fluid behavior. In this communication we review some of the fundamental theory needed to characterize the physical hydrogeology of hydrothermal systems and discuss how this theory has been applied in studies of Mississippi Valley-type, tabular uranium, porphyry, epithermal, and mid-ocean ridge ore-forming systems. A key limitation in the computational state-of-the-art is the inability to describe fluid flow and transport fully in the many ore systems that show evidence of repeated shear or tensional failure with associated dynamic variations in permeability. However, we discuss global-scale compilations that suggest some numerical constraints on both mean and dynamically enhanced crustal permeability. Principles of physical hydrogeology can be powerful tools for investigating hydrothermal ore formation and are becoming increasingly accessible with ongoing advances in modeling software.

  4. Three-dimensional circulation dynamics of along-channel flow in stratified estuaries

    NASA Astrophysics Data System (ADS)

    Musiak, Jeffery Daniel

    Estuaries are vital because they are the major interface between humans and the oceans and provide valuable habitat for a wide range of organisms. Therefore it is important to model estuarine circulation to gain a better comprehension of the mechanics involved and of how people affect estuaries. To this end, this dissertation combines analysis of data collected in the Columbia River estuary (CRE) with novel data processing and modeling techniques to further the understanding of estuaries that are strongly forced by riverflow and tides. The primary hypothesis tested in this work is that the three-dimensional (3-D) variability in along-channel currents in a strongly forced estuary can be largely accounted for by including the lateral variations in density and bathymetry but neglecting the secondary, or lateral, flow. Of course, the forcing must also include riverflow and oceanic tides. Incorporating this simplification and the modeling ideas put forth by others with new modeling techniques and new ideas on estuarine circulation will allow me to create a semi-analytical quasi 3-D profile model. This approach was chosen because it is of intermediate complexity: purely analytical models, if tractable, are too simple to be useful, while 3-D numerical models can have excellent resolution but require large amounts of time, computer memory and computing power. Validation of the model will be accomplished using velocity and density data collected in the Columbia River Estuary and by comparison to analytical solutions. Components of the modeling developed here include: (1) development of a 1-D barotropic model for tidal wave propagation in frictionally dominated systems with strong topography. This model can have multiple tidal constituents and multiply connected channels. (2) Development and verification of a new quasi 3-D semi-analytical velocity profile model applicable to estuarine systems which are strongly forced by both oceanic tides and riverflow. This model includes diurnal and semi-diurnal tidal and non-linearly generated overtide circulation and residual circulation driven by riverflow, baroclinic forcing, surface wind stress and non-linear tidal forcing. (3) Demonstration that much of the lateral variation in along-channel currents is caused by variations in along-channel density forcing and bathymetry.

  5. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    Only fragmentary snippet text is available for this record: it refers to the magnitude of reduction in estimator error required to make solving the exact optimal design problem tractable; it contrasts suboptimal approaches for designing a sequence of experiments, namely batch design with no feedback and greedy (myopic) design; and it notes that Equation 1 is difficult to solve directly but can be expressed in an equivalent form using the principle of dynamic programming.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leistedt, Boris; Hogg, David W., E-mail: boris.leistedt@nyu.edu, E-mail: david.hogg@nyu.edu

    We present a new method for inferring photometric redshifts in deep galaxy and quasar surveys, based on a data-driven model of latent spectral energy distributions (SEDs) and a physical model of photometric fluxes as a function of redshift. This conceptually novel approach combines the advantages of both machine learning methods and template fitting methods by building template SEDs directly from the spectroscopic training data. This is made computationally tractable with Gaussian processes operating in flux-redshift space, encoding the physics of redshifts and the projection of galaxy SEDs onto photometric bandpasses. This method alleviates the need to acquire representative training data or to construct detailed galaxy SED models; it requires only that the photometric bandpasses and calibrations be known or have parameterized unknowns. The training data can consist of a combination of spectroscopic and deep many-band photometric data with reliable redshifts, which do not need to entirely spatially overlap with the target survey of interest or even involve the same photometric bands. We showcase the method on the i-magnitude-selected, spectroscopically confirmed galaxies in the COSMOS field. The model is trained on the deepest bands (from SUBARU and HST) and photometric redshifts are derived using the shallower SDSS optical bands only. We demonstrate that we obtain accurate redshift point estimates and probability distributions despite the training and target sets having very different redshift distributions, noise properties, and even photometric bands. Our model can also be used to predict missing photometric fluxes or to simulate populations of galaxies with realistic fluxes and redshifts, for example.

  7. A modified Wright-Fisher model that incorporates Ne: A variant of the standard model with increased biological realism and reduced computational complexity.

    PubMed

    Zhao, Lei; Gossmann, Toni I; Waxman, David

    2016-03-21

    The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus, apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly-used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two-allele problem and, since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
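
    The paper's exact matrix construction is not given in the abstract. As a schematic of the kind of calculation involved, the sketch below (Python) builds a standard neutral Wright-Fisher binomial transition matrix whose dimension is set by an effective-size parameter and iterates it to obtain fixation probabilities; treating Ne directly as the matrix size in this way is an illustrative simplification, not the authors' method.

    import numpy as np
    from scipy.stats import binom

    def wf_transition_matrix(Ne):
        """Neutral Wright-Fisher transition matrix on allele counts 0..2*Ne;
        entry [i, j] is the Binomial(2*Ne, i / (2*Ne)) pmf evaluated at j."""
        n = 2 * Ne
        i = np.arange(n + 1)[:, None]
        j = np.arange(n + 1)[None, :]
        return binom.pmf(j, n, i / n)

    def absorption_probabilities(Ne, generations=2000, start_count=1):
        """Iterate the chain from a given allele count and report the mass
        absorbed at loss (0 copies) and fixation (2*Ne copies)."""
        P = wf_transition_matrix(Ne)
        p = np.zeros(2 * Ne + 1)
        p[start_count] = 1.0
        for _ in range(generations):
            p = p @ P
        return p[0], p[-1]

    loss, fixation = absorption_probabilities(Ne=50)
    print(f"P(loss) ~ {loss:.4f}, P(fixation) ~ {fixation:.4f} (neutral expectation {1 / 100:.4f})")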

  8. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework.

    PubMed

    Gershman, Samuel J; Daw, Nathaniel D

    2017-01-03

    We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely involved procedural and semantic memory, the way in which knowledge about action values or world models extracted gradually from many experiences can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, related to the traces of individual events. We suggest that these two challenges are related. The computational challenge can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system.

  9. Modeling of pulsating heat pipes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Givler, Richard C.; Martinez, Mario J.

    This report summarizes the results of a computer model that describes the behavior of pulsating heat pipes (PHP). The purpose of the project was to develop a highly efficient (as compared to the heat transfer capability of solid copper) thermal groundplane (TGP) using silicon carbide (SiC) as the substrate material and water as the working fluid. The objective of this project is to develop a multi-physics model for this complex phenomenon to assist with an understanding of how PHPs operate and to be able to understand how various parameters (geometry, fill ratio, materials, working fluid, etc.) affect its performance. The physical processes describing a PHP are highly coupled. Understanding its operation is further complicated by the non-equilibrium nature of the interplay between evaporation/condensation, bubble growth and collapse or coalescence, and the coupled response of the multiphase fluid dynamics among the different channels. A comprehensive theory of operation and design tools for PHPs is still an unrealized task. In the following we first analyze, in some detail, a simple model that has been proposed to describe PHP behavior. Although it includes fundamental features of a PHP, it also makes some assumptions to keep the model tractable. In an effort to improve on current modeling practice, we constructed a model for a PHP using some unique features available in FLOW-3D, version 9.2-3 (Flow Science, 2007). We believe that this flow modeling software retains more of the salient features of a PHP and thus provides a closer representation of its behavior.

  10. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of uncertainty claims in their interaction, while satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
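
    As a small, self-contained illustration of focusing a Dempster-Shafer structure onto a single event, the sketch below (Python) computes the belief, disbelief, and residual uncertainty mass for one event from a basic mass assignment, which is the kind of ternary (belief/disbelief/uncertainty) summary used in subjective logic. The frame and the mass values are invented, and this is only a schematic of the focusing idea, not the paper's FBM definitions.

    def focus_on_event(masses, event):
        """Focus a Dempster-Shafer mass assignment onto a single event A.
        masses: dict mapping frozenset focal elements -> mass (sums to 1).
        Returns (belief, disbelief, uncertainty); plausibility(A) = belief + uncertainty."""
        belief = sum(m for s, m in masses.items() if s <= event)          # mass on subsets of A
        disbelief = sum(m for s, m in masses.items() if not (s & event))  # mass on sets disjoint from A
        return belief, disbelief, 1.0 - belief - disbelief

    masses = {                       # invented example over the frame {a, b, c}
        frozenset("a"): 0.4,
        frozenset("ab"): 0.3,
        frozenset("bc"): 0.2,
        frozenset("abc"): 0.1,
    }
    b, d, u = focus_on_event(masses, frozenset("a"))
    print(f"belief={b:.2f} disbelief={d:.2f} uncertainty={u:.2f} plausibility={b + u:.2f}")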

  11. Relay-aided free-space optical communications using α - μ distribution over atmospheric turbulence channels with misalignment errors

    NASA Astrophysics Data System (ADS)

    Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.

    2018-06-01

    In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay environment impaired by an atmospheric-turbulence-induced fading channel over the FSO link and modeled using the α - μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of a FSO link experiencing α - μ distributed atmospheric turbulence in the presence of pointing error is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition to this, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme in terms of the bivariate Fox's H function is derived. Atmospheric turbulence, misalignment errors and various binary modulation schemes for intensity modulation on the optical wireless link are considered to yield the results. Finally, we analyze each of the three performance metrics at high SNR in order to represent them in terms of elementary functions, and the achieved analytical results are supported by computer-based simulations.
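
    For reference, the sketch below (Python) evaluates the outage probability of a bare α - μ fading envelope using the standard α - μ cumulative distribution expressed through the regularized lower incomplete gamma function; it omits the pointing-error and dual-hop aspects analyzed in the paper, and the parameter values are arbitrary.

    import numpy as np
    from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

    def alpha_mu_outage(r_th, alpha, mu, r_hat=1.0):
        """Outage probability Pr(R < r_th) for an alpha-mu fading envelope with
        scale r_hat: F(r) = P(mu, mu * (r / r_hat)**alpha)."""
        return gammainc(mu, mu * (r_th / r_hat) ** alpha)

    thresholds = np.linspace(0.05, 1.0, 5)
    for a_, m_ in [(2.0, 1.0), (3.5, 2.5)]:     # two assumed turbulence conditions
        print(f"alpha={a_}, mu={m_}:", np.round(alpha_mu_outage(thresholds, a_, m_), 4))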

  12. Normalized Shape and Location of Perturbed Craniofacial Structures in the Xenopus Tadpole Reveal an Innate Ability to Achieve Correct Morphology

    PubMed Central

    Vandenberg, Laura N.; Adams, Dany S.; Levin, Michael

    2012-01-01

    Background Embryonic development can often adjust its morphogenetic processes to counteract external perturbation. The existence of self-monitoring responses during pattern formation is of considerable importance to the biomedicine of birth defects, but has not been quantitatively addressed. To understand the computational capabilities of biological tissues in a molecularly-tractable model system, we induced craniofacial defects in Xenopus embryos, then tracked tadpoles with craniofacial deformities and used geometric morphometric techniques to characterize changes in the shape and position of the craniofacial structures. Results Canonical variate analysis revealed that the shapes and relative positions of perturbed jaws and branchial arches were corrected during the first few months of tadpole development. Analysis of the relative movements of the anterior-most structures indicates that misplaced structures move along the anterior-posterior and left-right axes in ways that are significantly different from their normal movements. Conclusions Our data suggest a model in which craniofacial structures utilize a measuring mechanism to assess and adjust their location relative to other local organs. Understanding the correction mechanisms at work in this system could lead to the better understanding of the adaptive decision-making capabilities of living tissues and suggest new approaches to correct birth defects in humans. PMID:22411736

  13. Functional identification of spike-processing neural circuits.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2014-02-01

    We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

  14. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.

  15. Joint resonant CMB power spectrum and bispectrum estimation

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin

    2016-02-01

    We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
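
    As a bare-bones illustration of the shortcut described above, the sketch below (Python) draws sine and cosine feature amplitudes from a zero-mean multivariate Gaussian whose covariance is the inverse of an assumed Fisher matrix, and records the maximum combined amplitude across candidate frequencies to calibrate significance. The block-diagonal Fisher matrix here is a random stand-in, not one computed from CMB power spectrum or bispectrum estimators.

    import numpy as np

    rng = np.random.default_rng(42)
    n_freq = 20

    # Stand-in Fisher matrix: one positive-definite 2x2 block (sine, cosine) per frequency.
    fisher = np.zeros((2 * n_freq, 2 * n_freq))
    for k in range(n_freq):
        M = rng.normal(size=(2, 2))
        fisher[2 * k:2 * k + 2, 2 * k:2 * k + 2] = M @ M.T + 0.5 * np.eye(2)

    # Under the no-feature (Gaussian) hypothesis, estimated amplitudes are ~ N(0, F^{-1}).
    cov = np.linalg.inv(fisher)
    draws = rng.multivariate_normal(np.zeros(2 * n_freq), cov, size=20000)
    amps = np.hypot(draws[:, 0::2], draws[:, 1::2])   # combined amplitude per frequency
    max_amp = amps.max(axis=1)                        # look-elsewhere-corrected statistic
    print("95th / 99th percentiles of the maximum amplitude:",
          np.round(np.percentile(max_amp, [95, 99]), 3))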

  16. Novel Gbeta Mimic Kelch Proteins (Gpb1 and Gpb2) Connect G-Protein Signaling to Ras via Yeast Neurofibromin Homologs Ira1 and Ira2: A Model for Human NF1

    DTIC Science & Technology

    2008-03-01

    Only fragmentary snippet text is available for this record. It describes work in the tractable fungal model system Cryptococcus neoformans that identified two kelch-repeat homologs involved in mating (Kem1 and Kem2). To find kelch-repeat proteins involved in G-protein signaling, Cryptococcus homologues of Gpb1/2, which interact with and negatively regulate the G-protein alpha subunit Gpa2 in S. cerevisiae, were searched for by BLAST (tblastn) in the Cryptococcus serotype A genome database (Duke University Medical ...).

  17. Adaptive finite volume methods with well-balanced Riemann solvers for modeling floods in rugged terrain: Application to the Malpasset dam-break flood (France, 1959)

    USGS Publications Warehouse

    George, D.L.

    2011-01-01

    The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available at. Published in 2010 by John Wiley & Sons, Ltd.

  18. Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder S.; Kinney, Justin B.

    2016-03-01

    A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
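
    The point about mutual-information-based inference and diffeomorphic modes can be made concrete with a toy experiment (all data synthetic): rescaling the parameters of a linear sequence-function model changes the prediction only through an invertible transformation, so a plug-in mutual information estimate is essentially unchanged while a least-squares (likelihood-style) objective is not.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y, bins=25):
    """Plug-in mutual information estimate (nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy "massively parallel" experiment: N random sequences of length L; the true
# activity is a linear sequence-function model passed through an unknown
# monotone readout before being measured with noise.
N, L = 20000, 10
X = rng.integers(0, 2, size=(N, L)).astype(float)
theta_true = rng.normal(size=L)
activity = X @ theta_true
measurement = np.tanh(activity) + 0.3 * rng.normal(size=N)

# A rescaled parameter vector changes the model prediction only through an
# invertible transformation, so mutual information cannot distinguish it from
# theta_true: that direction in parameter space is a diffeomorphic mode.
for scale in (1.0, 3.0):
    pred = X @ (scale * theta_true)
    mi = mutual_information(pred, measurement)
    sse = np.sum((measurement - pred) ** 2)      # what a least-squares likelihood sees
    print(f"scale={scale:.1f}  MI={mi:.3f} nats  SSE={sse:.1f}")
```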

  19. Episodic Tremor and Slip (ETS) as a chaotic multiphysics spring

    NASA Astrophysics Data System (ADS)

    Veveakis, E.; Alevizos, S.; Poulet, T.

    2017-03-01

    Episodic Tremor and Slip (ETS) events display a rich behaviour of slow and accelerated slip with simple oscillatory to complicated chaotic time series. It is commonly believed that the fast events appearing as non volcanic tremors are signatures of deep fluid injection. The fluid source is suggested to be related to the breakdown of hydrous phyllosilicates, mainly the serpentinite group minerals such as antigorite or lizardite that are widespread in the top of the slab in subduction environments. Similar ETS sequences are recorded in different lithologies in exhumed crustal carbonate-rich thrusts where the fluid source is suggested to be the more vigorous carbonate decomposition reaction. If indeed both types of events can be understood and modelled by the same generic fluid release reaction AB(solid) ⇌A(solid) +B(fluid) , the data from ETS sequences in subduction zones reveal a geophysically tractable temporal evolution with no access to the fault zone. This work reviews recent advances in modelling ETS events considering the multiphysics instabilities triggered by the fluid release reaction and develops a thermal-hydraulic-mechanical-chemical oscillator (THMC spring) model for such mineral reactions (like dehydration and decomposition) in Megathrusts. We describe advanced computational methods for THMC instabilities and discuss spectral element and finite element solutions. We apply the presented numerical methods to field examples of this important mechanism and reproduce the temporal signature of the Cascadia and Hikurangi trench with a serpentinite oscillator.

  20. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  1. Carbon cycle confidence and uncertainty: Exploring variation among soil biogeochemical models

    DOE PAGES

    Wieder, William R.; Hartman, Melannie D.; Sulman, Benjamin N.; ...

    2017-11-09

    Emerging insights into factors responsible for soil organic matter stabilization and decomposition are being applied in a variety of contexts, but new tools are needed to facilitate the understanding, evaluation, and improvement of soil biogeochemical theory and models at regional to global scales. To isolate the effects of model structural uncertainty on the global distribution of soil carbon stocks and turnover times we developed a soil biogeochemical testbed that forces three different soil models with consistent climate and plant productivity inputs. The models tested here include a first-order, microbial implicit approach (CASA-CNP), and two recently developed microbially explicit models that can be run at global scales (MIMICS and CORPSE). When forced with common environmental drivers, the soil models generated similar estimates of initial soil carbon stocks (roughly 1,400 Pg C globally, 0–100 cm), but each model shows a different functional relationship between mean annual temperature and inferred turnover times. Subsequently, the models made divergent projections about the fate of these soil carbon stocks over the 20th century, with models either gaining or losing over 20 Pg C globally between 1901 and 2010. Single-forcing experiments with changed inputs, temperature, and moisture suggest that uncertainty associated with freeze-thaw processes as well as soil textural effects on soil carbon stabilization were larger than direct temperature uncertainties among models. Finally, the models generated distinct projections about the timing and magnitude of seasonal heterotrophic respiration rates, again reflecting structural uncertainties that were related to environmental sensitivities and assumptions about physicochemical stabilization of soil organic matter. Here, by providing a computationally tractable and numerically consistent framework to evaluate models we aim to better understand uncertainties among models and generate insights about factors regulating the turnover of soil organic matter.
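
    For orientation, a deliberately minimal single-pool, first-order soil carbon model (a toy stand-in for the microbially implicit structure, not CASA-CNP, MIMICS, or CORPSE) shows how temperature and moisture scalars on the decay rate translate directly into different stocks and turnover times; all parameter values are illustrative.

```python
import numpy as np

def soil_carbon_pool(npp, temp_c, moisture, k_base=0.03, q10=2.0,
                     years=500, dt=1.0):
    """
    Toy single-pool, first-order soil carbon model:
        dC/dt = inputs - k(T, W) * C
    with a Q10 temperature scalar and a linear moisture scalar on the decay rate.
    Returns the carbon trajectory (kgC m-2) and the turnover time 1/k (years).
    """
    k = k_base * q10 ** ((temp_c - 15.0) / 10.0) * np.clip(moisture, 0.05, 1.0)
    c, trajectory = 0.0, []
    for _ in range(int(years / dt)):
        c += dt * (npp - k * c)          # explicit Euler step
        trajectory.append(c)
    return np.array(trajectory), 1.0 / k

# Identical inputs, two mean annual temperatures -> very different stocks and
# turnover times; the testbed exposes exactly this kind of sensitivity across
# structurally different models.
for t in (5.0, 25.0):
    traj, tau = soil_carbon_pool(npp=0.5, temp_c=t, moisture=0.7)
    print(f"T = {t:4.1f} C   C after {len(traj)} yr ~ {traj[-1]:5.1f} kgC/m2   turnover ~ {tau:5.1f} yr")
```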

  2. Carbon cycle confidence and uncertainty: Exploring variation among soil biogeochemical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Hartman, Melannie D.; Sulman, Benjamin N.

    Emerging insights into factors responsible for soil organic matter stabilization and decomposition are being applied in a variety of contexts, but new tools are needed to facilitate the understanding, evaluation, and improvement of soil biogeochemical theory and models at regional to global scales. To isolate the effects of model structural uncertainty on the global distribution of soil carbon stocks and turnover times we developed a soil biogeochemical testbed that forces three different soil models with consistent climate and plant productivity inputs. The models tested here include a first-order, microbial implicit approach (CASA-CNP), and two recently developed microbially explicit models that can be run at global scales (MIMICS and CORPSE). When forced with common environmental drivers, the soil models generated similar estimates of initial soil carbon stocks (roughly 1,400 Pg C globally, 0–100 cm), but each model shows a different functional relationship between mean annual temperature and inferred turnover times. Subsequently, the models made divergent projections about the fate of these soil carbon stocks over the 20th century, with models either gaining or losing over 20 Pg C globally between 1901 and 2010. Single-forcing experiments with changed inputs, temperature, and moisture suggest that uncertainty associated with freeze-thaw processes as well as soil textural effects on soil carbon stabilization were larger than direct temperature uncertainties among models. Finally, the models generated distinct projections about the timing and magnitude of seasonal heterotrophic respiration rates, again reflecting structural uncertainties that were related to environmental sensitivities and assumptions about physicochemical stabilization of soil organic matter. Here, by providing a computationally tractable and numerically consistent framework to evaluate models we aim to better understand uncertainties among models and generate insights about factors regulating the turnover of soil organic matter.

  3. Turning intractable counting into sampling: Computing the configurational entropy of three-dimensional jammed packings.

    PubMed

    Martiniani, Stefano; Schrenk, K Julian; Stevenson, Jacob D; Wales, David J; Frenkel, Daan

    2016-01-01

    We present a numerical calculation of the total number of disordered jammed configurations Ω of N repulsive, three-dimensional spheres in a fixed volume V. To make these calculations tractable, we increase the computational efficiency of the approach of Xu et al. [Phys. Rev. Lett. 106, 245502 (2011); doi:10.1103/PhysRevLett.106.245502] and Asenjo et al. [Phys. Rev. Lett. 112, 098002 (2014); doi:10.1103/PhysRevLett.112.098002] and we extend the method to allow computation of the configurational entropy as a function of pressure. The approach that we use computes the configurational entropy by sampling the absolute volume of basins of attraction of the stable packings in the potential energy landscape. We find a surprisingly strong correlation between the pressure of a configuration and the volume of its basin of attraction in the potential energy landscape. This relation is well described by a power law. Our methodology to compute the number of minima in the potential energy landscape should be applicable to a wide range of other enumeration problems in statistical physics, string theory, cosmology, and machine learning that aim to find the distribution of the extrema of a scalar cost function that depends on many degrees of freedom.

  4. Complex computation in the retina

    NASA Astrophysics Data System (ADS)

    Deshmukh, Nikhil Rajiv

    Elucidating the general principles of computation in neural circuits is a difficult problem requiring both a tractable model circuit as well as sophisticated measurement tools. This thesis advances our understanding of complex computation in the salamander retina and its underlying circuitry and furthers the development of advanced tools to enable detailed study of neural circuits. The retina provides an ideal model system for neural circuits in general because it is capable of producing complex representations of the visual scene, and both its inputs and outputs are accessible to the experimenter. Chapter 2 describes the biophysical mechanisms that give rise to the omitted stimulus response in retinal ganglion cells described in Schwartz et al., (2007) and Schwartz and Berry, (2008). The extra response to omitted flashes is generated at the input to bipolar cells, and is separable from the characteristic latency shift of the OSR apparent in ganglion cells, which must occur downstream in the circuit. Chapter 3 characterizes the nonlinearities at the first synapse of the ON pathway in response to high contrast flashes and develops a phenomenological model that captures the effect of synaptic activation and intracellular signaling dynamics on flash responses. This work is the first attempt to model the dynamics of the poorly characterized mGluR6 transduction cascade unique to ON bipolar cells, and explains the second lobe of the biphasic flash response. Complementary to the study of neural circuits, recent advances in wafer-scale photolithography have made possible new devices to measure the electrical and mechanical properties of neurons. Chapter 4 reports a novel piezoelectric sensor that facilitates the simultaneous measurement of electrical and mechanical signals in neural tissue. This technology could reveal the relationship between the electrical activity of neurons and their local mechanical environment, which is critical to the study of mechanoreceptors, neural development, and traumatic brain injury. Chapter 5 describes advances in the development, fabrication, and testing of a prototype silicon micropipette for patch clamp physiology. Nanoscale photolithography addresses some of the limitations of traditional glass patch electrodes, such as the rapid dialysis of the cell with internal solution, and provides a platform for integration of microfluidics and electronics into the device, which can enable novel experimental methodology.

  5. Multiscale climate emulator of multimodal wave spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, Ana; Hegermiller, Christie A.; Antolinez, Jose A. A.; Camus, Paula; Vitousek, Sean; Ruggiero, Peter; Barnard, Patrick L.; Erikson, Li H.; Tomás, Antonio; Mendez, Fernando J.

    2017-02-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this problem complex and tractable only with computationally expensive numerical models. So far, the skill of statistical downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.
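
    A compact sketch of the sampling chain described above, with made-up fitted parameters (transition matrix, GEV marginals, and copula correlations are placeholders, and the categorical sea-state step is folded into the weather type for brevity): a Markov chain selects the daily weather type, and a Gaussian copula ties together GEV marginals for significant wave height and mean period.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical fitted pieces (in practice estimated from the hindcast):
n_wt = 3                                          # number of weather types
P = np.array([[0.80, 0.15, 0.05],                 # daily weather-type
              [0.10, 0.80, 0.10],                 # transition matrix
              [0.05, 0.25, 0.70]])
gev_hs = [(-0.10, 1.0, 0.3), (-0.10, 2.0, 0.5), (0.05, 3.5, 0.8)]    # (c, loc, scale) for Hs [m]
gev_tm = [(-0.10, 8.0, 1.0), (-0.10, 10.0, 1.2), (0.05, 13.0, 1.5)]  # (c, loc, scale) for Tm [s]
rho = [0.5, 0.6, 0.7]                             # Hs-Tm dependence per type

def simulate(n_days, wt0=0):
    wt, out = wt0, []
    for _ in range(n_days):
        wt = rng.choice(n_wt, p=P[wt])            # Markov chain of weather types
        # Gaussian copula: correlated normals -> uniforms -> GEV marginals
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho[wt]], [rho[wt], 1.0]])
        u = stats.norm.cdf(z)
        c, loc, sc = gev_hs[wt]
        hs = stats.genextreme.ppf(u[0], c, loc=loc, scale=sc)
        c, loc, sc = gev_tm[wt]
        tm = stats.genextreme.ppf(u[1], c, loc=loc, scale=sc)
        out.append((wt, hs, tm))
    return np.array(out)

sim = simulate(3650)
print("mean Hs by weather type:",
      [round(float(sim[sim[:, 0] == k, 1].mean()), 2) for k in range(n_wt)])
```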

  6. Multiscale Climate Emulator of Multimodal Wave Spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, A.; Hegermiller, C.; Alvarez Antolinez, J. A.; Camus, P.; Vitousek, S.; Ruggiero, P.; Barnard, P.; Erikson, L. H.; Tomas, A.; Mendez, F. J.

    2016-12-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this problem complex yet tractable only with computationally expensive numerical models. So far, the skill of statistical-downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical-downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the Southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.

  7. Probabilistic Design of a Wind Tunnel Model to Match the Response of a Full-Scale Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Stroud, W. Jefferson; Krishnamurthy, T.; Spain, Charles V.; Naser, Ahmad S.

    2005-01-01

    An approach is presented for carrying out the reliability-based design of a plate-like wing that is part of a wind tunnel model. The goal is to design the wind tunnel model to match the stiffness characteristics of the wing box of a flight vehicle while satisfying strength-based risk/reliability requirements that prevent damage to the wind tunnel model and fixtures. The flight vehicle is a modified F/A-18 aircraft. The design problem is solved using reliability-based optimization techniques. The objective function to be minimized is the difference between the displacements of the wind tunnel model and the corresponding displacements of the flight vehicle. The design variables control the thickness distribution of the wind tunnel model. Displacements of the wind tunnel model change with the thickness distribution, while displacements of the flight vehicle are a set of fixed data. The only constraint imposed is that the probability of failure is less than a specified value. Failure is assumed to occur if the stress caused by aerodynamic pressure loading is greater than the specified strength allowable. Two uncertain quantities are considered: the allowable stress and the thickness distribution of the wind tunnel model. Reliability is calculated using Monte Carlo simulation with response surfaces that provide approximate values of stresses. The response surface equations are, in turn, computed from finite element analyses of the wind tunnel model at specified design points. Because the response surface approximations were fit over a small region centered about the current design, the response surfaces were refit periodically as the design variables changed. Coarse-grained parallelism was used to simultaneously perform multiple finite element analyses. Studies carried out in this paper demonstrate that this scheme of using moving response surfaces and coarse-grained computational parallelism reduces the execution time of the Monte Carlo simulation enough to make the design problem tractable. The results of the reliability-based designs performed in this paper show that large decreases in the probability of stress-based failure can be realized with only small sacrifices in the ability of the wind tunnel model to represent the displacements of the full-scale vehicle.
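
    The core loop of the reliability calculation can be sketched with a one-variable stand-in for the finite element analysis (the stress function, scatter, and allowable below are hypothetical): fit a quadratic response surface to a handful of expensive evaluations near the current design, then run the Monte Carlo simulation on the cheap surrogate.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_stress_analysis(t):
    """Stand-in for the finite element analysis: peak stress vs. thickness t."""
    return 1200.0 / t**2 + 15.0 * t          # hypothetical closed form, illustration only

# 1. Fit a quadratic response surface to a few "FE" evaluations near the design.
t_design = 3.0
t_pts = t_design + np.array([-0.3, -0.15, 0.0, 0.15, 0.3])
s_pts = np.array([expensive_stress_analysis(t) for t in t_pts])
coeffs = np.polyfit(t_pts, s_pts, deg=2)       # cheap surrogate for the Monte Carlo

# 2. Monte Carlo on the surrogate: thickness and allowable stress are both
#    uncertain; failure occurs when stress exceeds the allowable.
n_mc = 200_000
t_samples = rng.normal(t_design, 0.05, n_mc)   # manufacturing scatter (assumed)
allowable = rng.normal(210.0, 10.0, n_mc)      # strength allowable (assumed)
stress = np.polyval(coeffs, t_samples)
p_fail = np.mean(stress > allowable)
print(f"estimated probability of failure: {p_fail:.2e}")
```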

  8. A Genetically Engineered Mouse Model of Neuroblastoma Driven by Mutated ALK and MYCN

    DTIC Science & Technology

    2015-09-01

    Through publications in scientific journals (see Journal publications). What do you plan to do during the next reporting period to accomplish the goals...We plan to elucidate the mechanism of action of the synergy between ALK and CDK inhibitors, validate CDK7 inhibition as a tractable therapeutic...delays and actions or plans to resolve them Nothing to Report Changes that had a significant impact on expenditures Nothing to Report Significant

  9. A Bayesian modification to the Jelinski-Moranda software reliability growth model

    NASA Technical Reports Server (NTRS)

    Littlewood, B.; Sofer, A.

    1983-01-01

    The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood method (ML) of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian-Jelinski-Moranda model (BJM) is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumption underlying both models, intended to represent the debugging process more accurately, is discussed.
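
    A minimal sketch of a Bayesian treatment of the Jelinski-Moranda model (a simple grid posterior with flat priors, not the reparameterization used in the paper): interfailure time i has hazard phi*(N - i + 1), and the posterior over (N, phi) yields a predictive reliability for the next failure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated interfailure times from a JM process (N_true faults, rate phi_true)
N_true, phi_true, n_obs = 40, 0.02, 25
t = rng.exponential(1.0 / (phi_true * (N_true - np.arange(n_obs))))

# Grid posterior over (N, phi) with flat priors.
N_grid = np.arange(n_obs, 200)
phi_grid = np.linspace(1e-4, 0.2, 400)
logpost = np.zeros((N_grid.size, phi_grid.size))
i_idx = np.arange(n_obs)                        # failure i has hazard phi*(N - i + 1)
for a, N in enumerate(N_grid):
    lam = phi_grid[None, :] * (N - i_idx)[:, None]      # shape (n_obs, n_phi)
    logpost[a] = np.sum(np.log(lam) - lam * t[:, None], axis=0)
post = np.exp(logpost - logpost.max())
post /= post.sum()

# Posterior predictive reliability of the next interfailure time
x = 50.0                                        # mission time of interest
remaining = N_grid[:, None] - n_obs             # faults still in the code
rel = np.sum(post * np.exp(-phi_grid[None, :] * remaining * x))
print(f"P(next failure later than {x:g} time units) = {rel:.3f}")
```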

  10. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, Sasha C.; Yang, Xiaojuan; Thornton, Peter E.

    2015-06-25

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate-carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate-carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward.

  11. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor

    USGS Publications Warehouse

    Reed, Sasha C.; Yang, Xiaojuan; Thornton, Peter E.

    2015-01-01

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate–carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate–carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward.

  12. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
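
    As a simple illustration of the SMC filtering ingredient (not the full multi-hypothesis breakup framework), the sketch below runs a bootstrap particle filter on a toy two-dimensional constant-velocity target observed through nonlinear range-only measurements from two stations; all dynamics and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, n_steps, n_particles = 1.0, 60, 5000
F = np.array([[1.0, 0.0, dt, 0.0],
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
q, r = 0.05, 2.0                                  # process / measurement noise std
stations = np.array([[0.0, 0.0], [200.0, 0.0]])   # two range-only sensors

def ranges(x):
    """Nonlinear observation: distances from the position to both stations."""
    return np.linalg.norm(x[..., None, :2] - stations, axis=-1)

# Simulated truth and measurements
x_true = np.array([100.0, 50.0, -1.0, 0.5])
truth, meas = [], []
for _ in range(n_steps):
    x_true = F @ x_true + q * np.concatenate((np.zeros(2), rng.normal(size=2)))
    truth.append(x_true.copy())
    meas.append(ranges(x_true) + r * rng.normal(size=2))

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal([100.0, 50.0, 0.0, 0.0], [10.0, 10.0, 1.0, 1.0],
                       size=(n_particles, 4))
for z in meas:
    particles = particles @ F.T
    particles[:, 2:] += q * rng.normal(size=(n_particles, 2))
    w = np.exp(-0.5 * np.sum(((z - ranges(particles)) / r) ** 2, axis=-1)) + 1e-300
    idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())   # resampling
    particles = particles[idx]

est = particles.mean(axis=0)
print("final position error:", np.linalg.norm(est[:2] - truth[-1][:2]))
```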

  13. Membrane-spanning α-helical barrels as tractable protein-design targets.

    PubMed

    Niitsu, Ai; Heal, Jack W; Fauland, Kerstin; Thomson, Andrew R; Woolfson, Derek N

    2017-08-05

    The rational (de novo) design of membrane-spanning proteins lags behind that for water-soluble globular proteins. This is due to gaps in our knowledge of membrane-protein structure, and experimental difficulties in studying such proteins compared to water-soluble counterparts. One limiting factor is the small number of experimentally determined three-dimensional structures for transmembrane proteins. By contrast, many tens of thousands of globular protein structures provide a rich source of 'scaffolds' for protein design, and the means to garner sequence-to-structure relationships to guide the design process. The α-helical coiled coil is a protein-structure element found in both globular and membrane proteins, where it cements a variety of helix-helix interactions and helical bundles. Our deep understanding of coiled coils has enabled a large number of successful de novo designs. For one class, the α-helical barrels (that is, symmetric bundles of five or more helices with central accessible channels), there are both water-soluble and membrane-spanning examples. Recent computational designs of water-soluble α-helical barrels with five to seven helices have advanced the design field considerably. Here we identify and classify analogous and more complicated membrane-spanning α-helical barrels from the Protein Data Bank. These provide tantalizing but tractable targets for protein engineering and de novo protein design. This article is part of the themed issue 'Membrane pores: from structure and assembly, to medicine and technology'. © 2017 The Author(s).

  14. Improved Hybrid Modeling of Spent Fuel Storage Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibber, Karl van

    This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The “gold standard” method for radiation transport is Monte Carlo (MC) as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy at lower computational effort for many problems that have energy, security, and economic importance.
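
    The variance-reduction idea can be illustrated with a deliberately simple toy that is not the Denovo/ADVANTG workflow: estimating a first-flight transmission probability through a thick absorber, where an importance-sampled scheme whose biased density mimics the importance (adjoint) of deep-penetrating paths scores constantly with small weights, while analog Monte Carlo almost never scores.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy deep-penetration problem: first-flight transmission through a purely
# absorbing slab of optical thickness tau (exact answer exp(-tau)).
tau = 15.0
exact = np.exp(-tau)
n = 100_000

# Analog: sample optical path lengths ~ Exp(1); score 1 if the particle crosses
# the whole slab without colliding.  Almost no histories contribute.
analog_scores = (rng.exponential(1.0, n) > tau).astype(float)

# Importance sampled: draw from a stretched exponential Exp(rate=b) that favours
# deep penetration, and carry the weight f(x)/g(x).  The choice of b stands in
# for information an adjoint (importance) calculation would provide.
b = 1.0 / tau
x = rng.exponential(1.0 / b, n)
weights = np.exp(-x) / (b * np.exp(-b * x))
is_scores = weights * (x > tau)

for name, s in [("analog", analog_scores), ("importance-sampled", is_scores)]:
    mean = s.mean()
    rel_err = s.std(ddof=1) / np.sqrt(n) / mean if mean > 0 else np.inf
    print(f"{name:>20}: estimate={mean:.3e}  exact={exact:.3e}  rel.err={rel_err:.2%}")
```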

  15. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  16. Maximum entropy principle for transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilich, F.; Da Silva, R.

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
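
    For contrast with the dependence formulation, the standard constrained entropy-maximising trip-distribution model that the abstract refers to can be solved with a few lines of iterative proportional fitting; the origin/destination totals, cost matrix, and beta below are made-up.

```python
import numpy as np

# Doubly constrained entropy-maximising trip distribution:
#   T_ij = A_i O_i B_j D_j exp(-beta * c_ij)
# with balancing factors A_i, B_j found by iterative proportional fitting.
O = np.array([400.0, 300.0, 300.0])          # trips produced at each origin
D = np.array([250.0, 450.0, 300.0])          # trips attracted to each destination
c = np.array([[1.0, 3.0, 5.0],               # travel cost matrix c_ij
              [3.0, 1.0, 2.0],
              [5.0, 2.0, 1.0]])
beta = 0.6                                   # cost-sensitivity parameter (assumed)

def gravity_ipf(O, D, c, beta, n_iter=200):
    f = np.exp(-beta * c)                    # deterrence function
    A = np.ones_like(O)
    for _ in range(n_iter):
        B = 1.0 / (f.T @ (A * O))            # enforce destination totals
        A = 1.0 / (f @ (B * D))              # enforce origin totals
    return (A * O)[:, None] * (B * D)[None, :] * f

T = gravity_ipf(O, D, c, beta)
print(np.round(T, 1))
print("row sums:", np.round(T.sum(axis=1), 1), " col sums:", np.round(T.sum(axis=0), 1))
```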

  17. Predictions of psychophysical measurements for sinusoidal amplitude modulated (SAM) pulse-train stimuli from a stochastic model.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2007-08-01

    Two approaches have been proposed to reduce the synchrony of the neural response to electrical stimuli in cochlear implants. One approach involves adding noise to the pulse-train stimulus, and the other is based on using a high-rate pulse-train carrier. Hypotheses regarding the efficacy of the two approaches can be tested using computational models of neural responsiveness prior to time-intensive psychophysical studies. In our previous work, we have used such models to examine the effects of noise on several psychophysical measures important to speech recognition. However, to date there has been no parallel analytic solution investigating the neural response to the high-rate pulse-train stimuli and their effect on psychophysical measures. This work investigates the properties of the neural response to high-rate pulse-train stimuli with amplitude modulated envelopes using a stochastic auditory nerve model. The statistics governing the neural response to each pulse are derived using a recursive method. The agreement between the theoretical predictions and model simulations is demonstrated for sinusoidal amplitude modulated (SAM) high rate pulse-train stimuli. With our approach, predicting the neural response in modern implant devices becomes tractable. Psychophysical measurements are also predicted using the stochastic auditory nerve model for SAM high-rate pulse-train stimuli. Changes in dynamic range (DR) and intensity discrimination are compared with that observed for noise-modulated pulse-train stimuli. Modulation frequency discrimination is also studied as a function of stimulus level and pulse rate. Results suggest that high rate carriers may positively impact such psychophysical measures.

  18. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.

  19. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099
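
    A bare-bones continuous-time TASEP simulator (Gillespie algorithm, single mRNA, one-codon ribosome footprint, hypothetical per-codon rates) illustrates the forward model whose parameters the paper fits from ribosome profiling data; it is a sketch of the model class, not the fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_tasep(init_rate, elong_rates, t_max=500.0, footprint=1):
    """
    Continuous-time TASEP for ribosomes on one transcript (Gillespie algorithm).
    Sites are codons; a ribosome at codon i hops forward with rate elong_rates[i]
    if the next `footprint` codons are free; initiation occurs at rate `init_rate`
    when the first `footprint` codons are free.  Returns the protein production
    rate and the time-averaged ribosome density.
    """
    L = len(elong_rates)
    occupied = np.zeros(L, dtype=bool)
    t, proteins, density_acc = 0.0, 0, 0.0
    while t < t_max:
        moves, rates = [], []
        if not occupied[:footprint].any():
            moves.append(-1)
            rates.append(init_rate)                               # initiation
        for i in np.flatnonzero(occupied):
            if i == L - 1 or not occupied[i + 1:i + 1 + footprint].any():
                moves.append(i)
                rates.append(elong_rates[i])                      # elongation / termination
        total = float(sum(rates))
        wait = rng.exponential(1.0 / total)
        density_acc += occupied.sum() * wait                      # time-weighted occupancy
        t += wait
        m = moves[rng.choice(len(moves), p=np.array(rates) / total)]
        if m == -1:
            occupied[0] = True
        elif m == L - 1:
            occupied[m] = False
            proteins += 1                                         # termination event
        else:
            occupied[m], occupied[m + 1] = False, True
    return proteins / t, density_acc / (t * L)

codon_rates = rng.uniform(2.0, 10.0, size=150)    # hypothetical per-codon elongation rates
rate, density = simulate_tasep(init_rate=0.5, elong_rates=codon_rates)
print(f"protein production rate ~ {rate:.3f} per unit time, ribosome density ~ {density:.3f}")
```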

  20. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.

    PubMed

    Gritsenko, Alexey A; Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick

    2015-08-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.

  1. Waveform Design for Wireless Power Transfer

    NASA Astrophysics Data System (ADS)

    Clerckx, Bruno; Bayguzina, Ekaterina

    2016-12-01

    Far-field Wireless Power Transfer (WPT) has attracted significant attention in recent years. Despite the rapid progress, the emphasis of the research community in the last decade has remained largely concentrated on improving the design of energy harvester (so-called rectenna) and has left aside the effect of transmitter design. In this paper, we study the design of transmit waveform so as to enhance the DC power at the output of the rectenna. We derive a tractable model of the non-linearity of the rectenna and compare with a linear model conventionally used in the literature. We then use those models to design novel multisine waveforms that are adaptive to the channel state information (CSI). Interestingly, while the linear model favours narrowband transmission with all the power allocated to a single frequency, the non-linear model favours a power allocation over multiple frequencies. Through realistic simulations, waveforms designed based on the non-linear model are shown to provide significant gains (in terms of harvested DC power) over those designed based on the linear model and over non-adaptive waveforms. We also compute analytically the theoretical scaling laws of the harvested energy for various waveforms as a function of the number of sinewaves and transmit antennas. Those scaling laws highlight the benefits of CSI knowledge at the transmitter in WPT and of a WPT design based on a non-linear rectenna model over a linear model. Results also motivate the study of a promising architecture relying on large-scale multisine multi-antenna waveforms for WPT. As a final note, results stress the importance of modeling and accounting for the non-linearity of the rectenna in any system design involving wireless power.
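
    The qualitative difference between the two rectenna models can be reproduced numerically with a truncated (2nd plus 4th order) nonlinearity and a flat channel; the coefficients, tone count, and frequencies below are illustrative assumptions, not values from the paper. Both waveforms carry the same average power, so a purely linear (2nd-order) model cannot distinguish them, while the 4th-order term rewards the in-phase multisine.

```python
import numpy as np

k2, k4 = 0.17, 0.05          # assumed truncated diode-model coefficients
P_total = 1.0                # transmit power budget (normalised)
N_tones = 8
freqs = np.linspace(900e6, 907e6, N_tones)          # 1 MHz spaced tones (assumed)
t = np.linspace(0.0, 5e-6, 200_001)                 # several beat periods

def dc_proxy(amplitudes):
    """Time-average of k2*y^2 + k4*y^4, a proxy for rectified DC output."""
    y = np.sum(amplitudes[:, None] * np.cos(2 * np.pi * freqs[:, None] * t), axis=0)
    return k2 * np.mean(y ** 2) + k4 * np.mean(y ** 4)

# Strategy 1: all power on one tone (what a purely linear model would accept).
a_single = np.zeros(N_tones)
a_single[0] = np.sqrt(2 * P_total)

# Strategy 2: the same power spread uniformly over all tones, in phase.
a_uniform = np.full(N_tones, np.sqrt(2 * P_total / N_tones))

print(f"single tone       : DC proxy = {dc_proxy(a_single):.4f}")
print(f"in-phase multisine: DC proxy = {dc_proxy(a_uniform):.4f}")
```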

  2. Emulation of recharge and evapotranspiration processes in shallow groundwater systems

    NASA Astrophysics Data System (ADS)

    Doble, Rebecca C.; Pickett, Trevor; Crosbie, Russell S.; Morgan, Leanne K.; Turnadge, Chris; Davies, Phil J.

    2017-12-01

    In shallow groundwater systems, recharge and evapotranspiration are highly sensitive to changes in the depth to water table. To effectively model these fluxes, complex functions that include soil and vegetation properties are often required. Model emulation (surrogate modelling or meta-modelling) can provide a means of incorporating detailed conceptualisation of recharge and evapotranspiration processes, while maintaining the numerical tractability and computational performance required for regional scale groundwater models and uncertainty analysis. A method for emulating recharge and evapotranspiration processes in groundwater flow models was developed, and applied to the South East region of South Australia and western Victoria, which is characterised by shallow groundwater, wetlands and coastal lakes. The soil-vegetation-atmosphere transfer (SVAT) model WAVES was used to generate relationships between net recharge (diffuse recharge minus evapotranspiration from groundwater) and depth to water table for different combinations of climate, soil and land cover types. These relationships, which mimicked previously described soil, vegetation and groundwater behaviour, were combined into a net recharge lookup table. The segmented evapotranspiration package in MODFLOW was adapted to select values of net recharge from the lookup table depending on groundwater depth, and the climate, soil and land use characteristics of each cell. The model was found to be numerically robust in steady state testing, had no major increase in run time, and would be more efficient than tightly-coupled modelling approaches. It made reasonable predictions of net recharge and groundwater head compared with remotely sensed estimates of net recharge and a standard MODFLOW comparison model. In particular, the method was better able to predict net recharge and groundwater head in areas with steep hydraulic gradients.
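
    The emulation step reduces to a table lookup at run time. The sketch below shows the idea with placeholder classes and net recharge values (not WAVES output): each model cell carries a climate/soil/land-cover class, and net recharge is interpolated from that class's curve at the cell's current water-table depth.

```python
import numpy as np

# Lookup table of net recharge (diffuse recharge minus evapotranspiration from
# groundwater) as a function of water-table depth, per class.  Values are
# illustrative placeholders.
depths = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # m below surface
net_recharge = {                                                  # mm/yr per class
    ("wet", "sand", "pasture"):  [-900, -400, -100,  20,  60,  80,  80],
    ("wet", "clay", "eucalypt"): [-1200, -700, -300, -80,  10,  40,  40],
    ("dry", "sand", "pasture"):  [-600, -250,  -60,   5,  25,  30,  30],
}

def emulated_net_recharge(cell_class, water_table_depth):
    """Linear interpolation in the lookup table, as the adapted MODFLOW
    evapotranspiration package would do cell by cell."""
    table = np.asarray(net_recharge[cell_class], dtype=float)
    return np.interp(water_table_depth, depths, table)

# Example: one model cell per class, each with its own head-derived depth
for cls, d in [(("wet", "sand", "pasture"), 0.8),
               (("wet", "clay", "eucalypt"), 3.0),
               (("dry", "sand", "pasture"), 12.0)]:
    print(cls, f"depth {d:4.1f} m -> net recharge {emulated_net_recharge(cls, d):7.1f} mm/yr")
```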

  3. Chemically frozen multicomponent boundary layer theory of salt and/or ash deposition rates from combustion gases

    NASA Technical Reports Server (NTRS)

    Rosner, D. E.; Chen, B.-K.; Fryburg, G. C.; Kohl, F. J.

    1979-01-01

    There is increased interest in, and concern about, deposition and corrosion phenomena in combustion systems containing inorganic condensible vapors and particles (salts, ash). To meet the need for a computationally tractable deposition rate theory general enough to embrace multielement/component situations of current and future gas turbine and magnetogasdynamic interest, a multicomponent chemically 'frozen' boundary layer (CFBL) deposition theory is presented and its applicability to the special case of Na2SO4 deposition from seeded laboratory burner combustion products is demonstrated. The coupled effects of Fick (concentration) diffusion and Soret (thermal) diffusion are included, along with explicit corrections for effects of variable properties and free stream turbulence. The present formulation is sufficiently general to include the transport of particles provided they are small enough to be formally treated as heavy molecules. Quantitative criteria developed to delineate the domain of validity of CFBL-rate theory suggest considerable practical promise for the present framework, which is characterized by relatively modest demands for new input information and computer time.

  4. Assessing the role of mini-applications in predicting key performance characteristics of scientific and engineering applications

    DOE PAGES

    Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...

    2014-09-28

    Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full scale applications. Each miniapp is designed to represent a key performance characteristic that does or is expected to significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, "Under what conditions does a miniapp represent a key performance characteristic in a full app?"

  5. General-purpose abductive algorithm for interpretation

    NASA Astrophysics Data System (ADS)

    Fox, Richard K.; Hartigan, Julie

    1996-11-01

    Abduction, inference to the best explanation, is an information-processing task that is useful for solving interpretation problems such as diagnosis, medical test analysis, legal reasoning, theory evaluation, and perception. The task is a generative one in which an explanation comprising domain hypotheses is assembled and used to account for given findings. The explanation is taken to be an interpretation as to why the findings have arisen within the given situation. Research in abduction has led to the development of a general-purpose computational strategy which has been demonstrated on all of the above types of problems. This abduction strategy can be performed in layers so that different types of knowledge can come together in deriving an explanation at different levels of description. Further, the abduction strategy is tractable and offers a very useful tradeoff between confidence in the explanation and completeness of the explanation. This paper will describe this computational strategy for abduction and demonstrate its usefulness for perceptual problems by examining problem-solving systems in speech recognition and natural language understanding.
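
    A minimal sketch of explanation assembly by abduction, with an invented toy knowledge base: hypotheses are added greedily by how many unexplained findings they account for, with a plausibility score breaking ties. This is only one simple instantiation of the general strategy described above.

```python
# Illustrative knowledge base: which findings each domain hypothesis explains,
# plus a crude plausibility score (all values invented).
explains = {
    "flu":     {"fever", "cough", "fatigue"},
    "strep":   {"fever", "sore_throat"},
    "allergy": {"sneezing", "fatigue"},
}
plausibility = {"flu": 0.6, "strep": 0.3, "allergy": 0.5}

def abduce(findings):
    """Greedy best-first assembly of a composite explanation for the findings."""
    remaining = set(findings)
    explanation = []
    while remaining:
        best = max(explains,
                   key=lambda h: (len(explains[h] & remaining), plausibility[h]))
        gained = explains[best] & remaining
        if not gained:
            break                        # nothing else can be accounted for
        explanation.append(best)
        remaining -= gained
    return explanation, remaining        # hypotheses chosen, findings left unexplained

print(abduce({"fever", "cough", "sore_throat", "sneezing"}))
```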

  6. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. This new method is based on the wavelet packet transform. It attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increases spectral resolution while enabling daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
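
    An illustrative sketch (synthetic signature, background, and noise; the PyWavelets package is assumed available) of the basic idea: expand each pixel spectrum in a wavelet packet basis and correlate it with the target gas signature in that coefficient space. A real detector would also whiten against the background statistics.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)

n_bands = 256
wavenumber = np.linspace(0.0, 1.0, n_bands)
gas_signature = np.exp(-0.5 * ((wavenumber - 0.35) / 0.01) ** 2)   # synthetic narrow line

def wp_coeffs(spectrum, wavelet="db4", level=4):
    """Flatten the level-`level` wavelet packet coefficients of a 1-D spectrum."""
    wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    return np.concatenate([node.data for node in wp.get_level(level, order="freq")])

target = wp_coeffs(gas_signature)

def detection_score(spectrum):
    """Normalised correlation with the gas signature in wavelet packet space."""
    c = wp_coeffs(spectrum)
    return float(c @ target / (np.linalg.norm(c) * np.linalg.norm(target)))

# Two synthetic pixels: smooth background only, and background plus a weak plume.
background = 1.0 + 0.2 * np.sin(4 * np.pi * wavenumber)
noise = 0.02 * rng.normal(size=n_bands)
print("background pixel score:", round(detection_score(background + noise), 3))
print("plume pixel score     :", round(detection_score(background + 0.3 * gas_signature + noise), 3))
```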

  7. Empirical analysis and modeling of manual turnpike tollbooths in China

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2017-03-01

    To address the low level of service satisfaction at tollbooths on many turnpikes in China, we conduct an empirical study and use a queueing model to investigate performance measures. In this paper, we collect archived data from six tollbooths of a turnpike in China. Empirical analysis of the vehicles' time-dependent arrival process and the collectors' time-dependent service times is conducted. It shows that the vehicle arrival process follows a non-homogeneous Poisson process while the collector service time follows a log-normal distribution. Further, we model the process of collecting tolls at tollbooths with a MAP/PH/1/FCFS queue for mathematical tractability and present some numerical examples.
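
    The empirical findings translate directly into a simulation sketch (the rate curve and service parameters below are illustrative, and the analytic MAP/PH machinery is replaced by plain simulation): arrivals come from a non-homogeneous Poisson process generated by thinning, service times are log-normal, and FCFS waits follow the Lindley recursion.

```python
import numpy as np

rng = np.random.default_rng(9)

def arrival_rate(t_hours):
    """Vehicles per hour, with a morning and an evening peak (illustrative)."""
    return (150.0 + 250.0 * np.exp(-((t_hours - 8.0) / 1.5) ** 2)
                  + 300.0 * np.exp(-((t_hours - 18.0) / 2.0) ** 2))

def nhpp_arrivals(t_end_hours, lam_max=800.0):
    """Thinning (Lewis-Shedler) sampler for the non-homogeneous Poisson process."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end_hours:
            return np.array(out)
        if rng.uniform() < arrival_rate(t) / lam_max:
            out.append(t)

arrivals = nhpp_arrivals(24.0)                         # one day, times in hours
mu, sigma = np.log(9.0 / 3600.0), 0.4                  # log-normal service, median 9 s
service = rng.lognormal(mu, sigma, size=arrivals.size)

# FCFS single collector: Lindley recursion for waiting times
wait = np.zeros(arrivals.size)
depart = arrivals[0] + service[0]
for i in range(1, arrivals.size):
    wait[i] = max(0.0, depart - arrivals[i])
    depart = arrivals[i] + wait[i] + service[i]

print(f"vehicles served: {arrivals.size}, mean wait: {wait.mean() * 3600:.1f} s, "
      f"95th percentile wait: {np.percentile(wait, 95) * 3600:.1f} s")
```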

  8. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species With Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2013-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634-second data point was chosen so that comparisons could be made with a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating, which is then compared to the total heating measured in flight.

  9. Integrating planning and reactive control

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Kaelbling, Leslie Pack

    1989-01-01

    Artificial intelligence research on planning is concerned with designing control systems that choose actions by manipulating explicit descriptions of the world state, the goal to be achieved, and the effects of elementary operations available to the system. Because planning shifts much of the burden of reasoning to the machine, it holds great appeal as a high-level programming method. Experience shows, however, that it cannot be used indiscriminately because even moderately rich languages for describing goals, states, and the elementary operators lead to computational inefficiencies that render the approach unsuitable for realistic applications. This inadequacy has spawned a recent wave of research on reactive control or situated activity in which control systems are modeled as reacting directly to the current situation rather than as reasoning about the future effects of alternative action sequences. While this research has confronted the issue of run-time tractability head on, in many cases it has done so by sacrificing the advantages of declarative planning techniques. Ways in which the two approaches can be unified are discussed. The authors begin by modeling reactive control systems as state machines that map a stream of sensory inputs to a stream of control outputs. These machines can be decomposed into two continuously active subsystems: the planner and the execution module. The planner computes a plan, which can be seen as a set of bits that control the behavior of the execution module. An important element of this work is the formulation of a precise semantic interpretation for the inputs and outputs of the planning system. They show that the distinction between planned and reactive behavior is largely in the eye of the beholder: systems that seem to compute explicit plans can be redescribed in situation-action terms and vice versa. They also discuss practical programming techniques that allow the advantages of declarative programming and guaranteed reactive response to be achieved simultaneously.

  10. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.

    PubMed

    Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

    The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like-illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, in our approach we fuse multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with Feedforward Neural Networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strength of each individual forecasting model, we use statistical model fusion, using Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. DL with FNN appears to deliver the most competitive predictive performance among the four considered individual models. Combining all four models in a comprehensive BMA framework allows us to further improve such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting locations of influenza peaks. The proposed approach can be viewed as a feasible alternative to forecast ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.
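
    A small sketch of the fusion step with synthetic forecasts and observations: each model is weighted by its Gaussian predictive likelihood on a held-out window, in the spirit of Bayesian model averaging, and the fused forecast is the weighted combination.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic held-out ILI series and per-model forecasts (for illustration only)
obs = 100 + 10 * np.sin(np.arange(30) / 3.0) + rng.normal(0, 3, 30)
forecasts = {
    "GLM":   obs + rng.normal(0, 6, 30),
    "LASSO": obs + rng.normal(0, 5, 30),
    "ARIMA": obs + rng.normal(0, 8, 30),
    "DL":    obs + rng.normal(0, 4, 30),
}

def bma_weights(forecasts, obs):
    """Weights proportional to each model's Gaussian predictive likelihood."""
    logliks = {}
    for name, f in forecasts.items():
        resid = obs - f
        sigma2 = resid.var()
        logliks[name] = -0.5 * np.sum(resid ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
    m = max(logliks.values())
    w = {k: np.exp(v - m) for k, v in logliks.items()}     # subtract max to avoid underflow
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

weights = bma_weights(forecasts, obs)
fused = sum(w * forecasts[k] for k, w in weights.items())
print("weights:", {k: round(v, 3) for k, v in weights.items()})
print("RMSE of fused forecast:", round(float(np.sqrt(np.mean((fused - obs) ** 2))), 2))
```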

  11. Using Genetic Buffering Relationships Identified in Fission Yeast to Elucidate the Molecular Pathology of Tuberous Sclerosis

    DTIC Science & Technology

    2015-07-01

    AWARD NUMBER: W81XWH-14-1-0169. TITLE: Using Genetic Buffering Relationships Identified in Fission Yeast to Elucidate the Molecular Pathology of Tuberous Sclerosis. DATES COVERED: 1 July 2014 - 30 June 2015. ABSTRACT: Using the genetically tractable fission yeast as a model, we sought to exploit recent advances in gene interaction

  12. FOLD-EM: automated fold recognition in medium- and low-resolution (4-15 Å) electron density maps.

    PubMed

    Saha, Mitul; Morais, Marc C

    2012-12-15

    Owing to the size and complexity of large multi-component biological assemblies, the most tractable approach to determining their atomic structure is often to fit high-resolution radiographic or nuclear magnetic resonance structures of isolated components into lower resolution electron density maps of the larger assembly obtained using cryo-electron microscopy (cryo-EM). This hybrid approach to structure determination requires that an atomic resolution structure of each component, or a suitable homolog, is available. If neither is available, then the amount of structural information regarding that component is limited by the resolution of the cryo-EM map. However, even if a suitable homolog cannot be identified using sequence analysis, a search for structural homologs should still be performed because structural homology often persists throughout evolution even when sequence homology is undetectable. As macromolecules can often be described as a collection of independently folded domains, one way of searching for structural homologs would be to systematically fit representative domain structures from a protein domain database into the medium/low resolution cryo-EM map and return the best fits. Taken together, the best fitting non-overlapping structures would constitute a 'mosaic' backbone model of the assembly that could aid map interpretation and illuminate biological function. Using the computational principles of the Scale-Invariant Feature Transform (SIFT), we have developed FOLD-EM, a computational tool that can identify folded macromolecular domains in medium to low resolution (4-15 Å) electron density maps and return a model of the constituent polypeptides in a fully automated fashion. As a by-product, FOLD-EM can also do flexible multi-domain fitting that may provide insight into conformational changes that occur in macromolecular assemblies.

  13. Moving domain computational fluid dynamics to interface with an embryonic model of cardiac morphogenesis.

    PubMed

    Lee, Juhyun; Moghadam, Mahdi Esmaily; Kung, Ethan; Cao, Hung; Beebe, Tyler; Miller, Yury; Roman, Beth L; Lien, Ching-Ling; Chi, Neil C; Marsden, Alison L; Hsiai, Tzung K

    2013-01-01

    Peristaltic contraction of the embryonic heart tube produces time- and spatial-varying wall shear stress (WSS) and pressure gradients (∇P) across the atrioventricular (AV) canal. Zebrafish (Danio rerio) are a genetically tractable system to investigate cardiac morphogenesis. The use of Tg(fli1a:EGFP) (y1) transgenic embryos allowed for delineation and two-dimensional reconstruction of the endocardium. This time-varying wall motion was then prescribed in a two-dimensional moving domain computational fluid dynamics (CFD) model, providing new insights into spatial and temporal variations in WSS and ∇P during cardiac development. The CFD simulations were validated with particle image velocimetry (PIV) across the atrioventricular (AV) canal, revealing an increase in both velocities and heart rates, but a decrease in the duration of atrial systole from early to later stages. At 20-30 hours post fertilization (hpf), simulation results revealed bidirectional WSS across the AV canal in the heart tube in response to peristaltic motion of the wall. At 40-50 hpf, the tube structure undergoes cardiac looping, accompanied by a nearly 3-fold increase in WSS magnitude. At 110-120 hpf, distinct AV valve, atrium, ventricle, and bulbus arteriosus form, accompanied by incremental increases in both WSS magnitude and ∇P, but a decrease in bi-directional flow. Laminar flow develops across the AV canal at 20-30 hpf, and persists at 110-120 hpf. Reynolds numbers at the AV canal increase from 0.07±0.03 at 20-30 hpf to 0.23±0.07 at 110-120 hpf (p < 0.05, n=6), whereas Womersley numbers remain relatively unchanged from 0.11 to 0.13. Our moving domain simulations highlight hemodynamic changes in relation to cardiac morphogenesis, thereby providing a 2-D quantitative approach to complement imaging analysis.

  14. An overview of C. elegans biology.

    PubMed

    Strange, Kevin

    2006-01-01

    The establishment of Caenorhabditis elegans as a "model organism" began with the efforts of Sydney Brenner in the early 1960s. Brenner's focus was to find a suitable animal model in which the tools of genetic analysis could be used to define molecular mechanisms of development and nervous system function. C. elegans provides numerous experimental advantages for such studies. These advantages include a short life cycle, production of large numbers of offspring, easy and inexpensive laboratory culture, forward and reverse genetic tractability, and a relatively simple anatomy. This chapter will provide a brief overview of C. elegans biology.

  15. Efforts to make and apply humanized yeast

    PubMed Central

    Laurent, Jon M.; Young, Jonathan H.; Kachroo, Aashiq H.

    2016-01-01

    Despite a billion years of divergent evolution, the baker’s yeast Saccharomyces cerevisiae has long proven to be an invaluable model organism for studying human biology. Given its tractability and ease of genetic manipulation, along with extensive genetic conservation with humans, it is perhaps no surprise that researchers have been able to expand its utility by expressing human proteins in yeast, or by humanizing specific yeast amino acids, proteins or even entire pathways. These methods are increasingly being scaled in throughput, further enabling the detailed investigation of human biology and disease-specific variations of human genes in a simplified model organism. PMID:26462863

  16. Landau-Zener extension of the Tavis-Cummings model: Structure of the solution

    DOE PAGES

    Sun, Chen; Sinitsyn, Nikolai A.

    2016-09-07

    We explore the recently discovered solution of the driven Tavis-Cummings model (DTCM). It describes the interaction of an arbitrary number of two-level systems with a bosonic mode that has linearly time-dependent frequency. We derive compact and tractable expressions for transition probabilities in terms of the well-known special functions. In this form, our formulas are suitable for fast numerical calculations and analytical approximations. As an application, we obtain the semiclassical limit of the exact solution and compare it to prior approximations. Furthermore, we reveal a connection between the DTCM and q-deformed binomial statistics.

  17. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    The present class of cellular automaton models involves a quiescent hydrodynamic lattice gas with multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is equivalent to a Poisson equation solution.

  18. Analysis of the compressibility of a finite symmetrical nucleus in terms of a simple, analytically tractable model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheswari, V.S.U.; Ramamurthy, V.S.; Satpathy, L.

    1992-12-01

    The liquid-drop-model-type expansion of the finite nuclear compressibility coefficient K_A is studied in an energy density formalism, using a leptodermous expansion of the energies. It is found that the effective curvature compressibility coefficient K_c is always negative for Skyrme-type forces. It is also shown that the unexpectedly large value of about -800 MeV of the surface compressibility coefficient K_s found by Sharma et al. is an artifact of their analysis procedure.
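
    The liquid-drop-type expansion referred to above is conventionally written in the following form (a standard form shown for orientation, not quoted from the paper):

    ```latex
    % Leptodermous (liquid-drop-type) expansion of the finite-nucleus
    % incompressibility K_A; the standard form with volume, surface,
    % curvature and Coulomb terms is assumed here for illustration.
    K_A \simeq K_{\mathrm{vol}} + K_{\mathrm{s}}\,A^{-1/3} + K_{\mathrm{c}}\,A^{-2/3}
          + K_{\mathrm{Coul}}\,\frac{Z^{2}}{A^{4/3}}
    ```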

  19. Hydra as a tractable, long-lived model system for senescence.

    PubMed

    Bellantuono, Anthony J; Bridge, Diane; Martínez, Daniel E

    2015-01-30

    Hydra represents a unique model system for the study of senescence, with the opportunity for the comparison of non-aging and induced senescence. Hydra maintains three stem cell lineages, used for continuous tissue morphogenesis and replacement. Recent work has elucidated the roles of the insulin/IGF-1 signaling target FoxO, of Myc proteins, and of PIWI proteins in Hydra stem cells. Under laboratory culture conditions, Hydra vulgaris show no signs of aging even under long-term study. In contrast, Hydra oligactis can be experimentally induced to undergo reproduction-associated senescence. This provides a powerful comparative system for future studies.

  20. Spatial organization of bacterial chromosomes

    PubMed Central

    Wang, Xindan; Rudner, David Z.

    2014-01-01

    Bacterial chromosomes are organized in stereotypical patterns that are faithfully and robustly regenerated in daughter cells. Two distinct spatial patterns were described almost a decade ago in our most tractable model organisms. In recent years, analysis of chromosome organization in a larger and more diverse set of bacteria and a deeper characterization of chromosome dynamics in the original model systems have provided a broader and more complete picture of both chromosome organization and the activities that generate the observed spatial patterns. Here, we summarize these different patterns highlighting similarities and differences and discuss the protein factors that help establish and maintain them. PMID:25460798

  1. Data-driven, Interpretable Photometric Redshifts Trained on Heterogeneous and Unrepresentative Data

    NASA Astrophysics Data System (ADS)

    Leistedt, Boris; Hogg, David W.

    2017-03-01

    We present a new method for inferring photometric redshifts in deep galaxy and quasar surveys, based on a data-driven model of latent spectral energy distributions (SEDs) and a physical model of photometric fluxes as a function of redshift. This conceptually novel approach combines the advantages of both machine learning methods and template fitting methods by building template SEDs directly from the spectroscopic training data. This is made computationally tractable with Gaussian processes operating in flux-redshift space, encoding the physics of redshifts and the projection of galaxy SEDs onto photometric bandpasses. This method alleviates the need to acquire representative training data or to construct detailed galaxy SED models; it requires only that the photometric bandpasses and calibrations be known or have parameterized unknowns. The training data can consist of a combination of spectroscopic and deep many-band photometric data with reliable redshifts, which do not need to entirely spatially overlap with the target survey of interest or even involve the same photometric bands. We showcase the method on the I-magnitude-selected, spectroscopically confirmed galaxies in the COSMOS field. The model is trained on the deepest bands (from SUBARU and HST) and photometric redshifts are derived using the shallower SDSS optical bands only. We demonstrate that we obtain accurate redshift point estimates and probability distributions despite the training and target sets having very different redshift distributions, noise properties, and even photometric bands. Our model can also be used to predict missing photometric fluxes or to simulate populations of galaxies with realistic fluxes and redshifts, for example.
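
    As a rough illustration of the flux-redshift idea (a toy sketch on assumed data, not the authors' pipeline), one can fit one Gaussian process per band to training fluxes as a function of spectroscopic redshift and then score a target object's photometry on a redshift grid:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Hypothetical training set: spectroscopic redshifts and fluxes in two bands.
    z_train = rng.uniform(0.0, 2.0, 200)
    flux_train = np.column_stack([np.exp(-z_train) + 0.05 * rng.normal(size=200),
                                  1.0 / (1.0 + z_train) + 0.05 * rng.normal(size=200)])

    # One GP per band, modelling flux as a function of redshift.
    kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=0.01)
    gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
           .fit(z_train[:, None], flux_train[:, b]) for b in range(2)]

    def photoz_posterior(obs_flux, obs_err, z_grid):
        """Unnormalized posterior over redshift for one object, assuming a flat prior."""
        logp = np.zeros_like(z_grid)
        for b, gp in enumerate(gps):
            mu, sd = gp.predict(z_grid[:, None], return_std=True)
            var = sd ** 2 + obs_err[b] ** 2
            logp += -0.5 * ((obs_flux[b] - mu) ** 2 / var + np.log(2 * np.pi * var))
        p = np.exp(logp - logp.max())
        return p / p.sum()

    z_grid = np.linspace(0.0, 2.0, 400)
    post = photoz_posterior(np.array([0.45, 0.55]), np.array([0.05, 0.05]), z_grid)
    print("MAP redshift:", z_grid[np.argmax(post)])
    ```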

  2. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that the use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and on the ability to develop models that yield accurate yet tractable results.
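
    The length-dependent Weibull strength model referred to above can be sketched as follows; the weakest-link form is the standard one, and the parameter values are purely illustrative, not taken from the paper.

    ```python
    import numpy as np

    def weibull_strength_cdf(sigma, m, sigma0, L, L0=1.0):
        """Two-parameter Weibull CDF with a weakest-link length scaling:
        P_f = 1 - exp(-(L/L0) * (sigma/sigma0)**m).  Standard form, assumed here."""
        return 1.0 - np.exp(-(L / L0) * (sigma / sigma0) ** m)

    def sample_fiber_strengths(n, m, sigma0, L, L0=1.0, rng=None):
        """Inverse-CDF sampling of strengths for n fibers of gauge length L."""
        rng = rng or np.random.default_rng()
        u = rng.uniform(size=n)
        return sigma0 * (-np.log(1.0 - u) * (L0 / L)) ** (1.0 / m)

    # Hypothetical, illustrative parameters (not the calibrated SCS-6 values).
    strengths = sample_fiber_strengths(n=10_000, m=8.0, sigma0=4000.0, L=25.0, L0=25.4)
    print("median strength [MPa]:", np.median(strengths))
    ```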

  3. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
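
    The compression idea can be illustrated with a score/MOPED-style linear compression to one summary per parameter, followed here by plain rejection sampling in the compressed space; this is a toy stand-in for the approach, not the paper's DELFI implementation.

    ```python
    import numpy as np

    def score_compress(d, mu_fid, dmu_dtheta, cov_inv):
        """Linear (score/MOPED-style) compression to one summary per parameter:
        t = dmu/dtheta^T @ C^{-1} @ (d - mu_fid), evaluated at a fiducial model."""
        return dmu_dtheta.T @ cov_inv @ (d - mu_fid)

    def rejection_lfi(observed_t, simulator, prior_sample, compress, n_sims=5000, quantile=0.01):
        """Keep the parameter draws whose compressed simulations fall closest to the data."""
        thetas = np.array([prior_sample() for _ in range(n_sims)])
        summaries = np.array([compress(simulator(th)) for th in thetas])
        dist = np.linalg.norm(summaries - observed_t, axis=1)
        keep = dist <= np.quantile(dist, quantile)
        return thetas[keep]

    # Toy model: data = theta * x + Gaussian noise, one parameter.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 50)
    cov_inv = np.eye(50) / 0.1 ** 2
    theta_true, theta_fid = 2.0, 1.5
    mu_fid, dmu = theta_fid * x, x[:, None]          # fiducial mean and its derivative
    d_obs = theta_true * x + 0.1 * rng.normal(size=50)

    compress = lambda d: score_compress(d, mu_fid, dmu, cov_inv)
    simulator = lambda th: th * x + 0.1 * rng.normal(size=50)
    posterior = rejection_lfi(compress(d_obs), simulator, lambda: rng.uniform(0, 4, 1), compress)
    print("posterior mean:", posterior.mean())
    ```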

  4. Guidance and Control System for a Satellite Constellation

    NASA Technical Reports Server (NTRS)

    Bryson, Jonathan Lamar; Cox, James; Mays, Paul Richard; Neidhoefer, James Christian; Ephrain, Richard

    2010-01-01

    A distributed guidance and control algorithm was developed for a constellation of satellites. The system repositions satellites as required, regulates satellites to desired orbits, and prevents collisions. 1. Optimal methods are used to compute nominal transfers from orbit to orbit. 2. Satellites are regulated to maintain the desired orbits once the transfers are complete. 3. A simulator is used to predict potential collisions or near-misses. 4. Each satellite computes perturbations to its controls so as to increase any unacceptable distances of nearest approach to other objects. a. The avoidance problem is recast in a distributed and locally-linear form to arrive at a tractable solution. b. Plant matrix values are approximated via simulation at each time step. c. The Linear Quadratic Gaussian (LQG) method is used to compute perturbations to the controls that will result in increased miss distances. 5. Once all danger is passed, the satellites return to their original orbits, all the while avoiding each other as above. 6. The delta-Vs are reasonable. The controller begins maneuvers as soon as practical to minimize delta-V. 7. Despite the inclusion of trajectory simulations within the control loop, the algorithm is sufficiently fast for available satellite computer hardware. 8. The required measurement accuracies are within the capabilities of modern inertial measurement devices and modern positioning devices.
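
    The control-perturbation step (item 4) relies on an LQG-type gain. A generic sketch of computing such a gain for a linearized relative-motion plant is shown below; the plant, the weights, and the use of SciPy's Riccati solver are assumptions made for illustration, not details of the flight software.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """State-feedback gain K for u = -K x minimizing the integral of
        (x'Qx + u'Ru); P solves the continuous algebraic Riccati equation."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Toy double-integrator relative-motion plant (per axis): x = [position error, velocity].
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # penalize miss distance more than relative velocity
    R = np.array([[100.0]])    # penalize delta-V strongly

    K = lqr_gain(A, B, Q, R)
    x = np.array([50.0, -0.2])   # 50 m separation error, small closing rate
    u = -K @ x                   # control perturbation (acceleration command)
    print("gain:", K, "command:", u)
    ```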

  5. Spectral functions of strongly correlated extended systems via an exact quantum embedding

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Chan, Garnet Kin-Lic

    2015-04-01

    Density matrix embedding theory (DMET) [Phys. Rev. Lett. 109, 186404 (2012), 10.1103/PhysRevLett.109.186404] introduced an approach to quantum cluster embedding methods whereby the mapping of strongly correlated bulk problems to an impurity with a finite set of bath states was rigorously formulated to exactly reproduce the entanglement of the ground state. The formalism provided similar physics to dynamical mean-field theory at a tiny fraction of the cost but was inherently limited by the construction of a bath designed to reproduce ground-state, static properties. Here, we generalize the concept of quantum embedding to dynamic properties and demonstrate accurate bulk spectral functions at similarly small computational cost. The proposed spectral DMET utilizes the Schmidt decomposition of a response vector, mapping the bulk dynamic correlation functions to that of a quantum impurity cluster coupled to a set of frequency-dependent bath states. The resultant spectral functions are obtained on the real-frequency axis, without bath discretization error, and allow for the construction of arbitrary dynamic correlation functions. We demonstrate the method on the one- (1D) and two-dimensional (2D) Hubbard model, where we obtain zero temperature and thermodynamic limit spectral functions, and show the trivial extension to two-particle Green's functions. This advance therefore extends the scope and applicability of DMET in condensed-matter problems as a computationally tractable route to correlated spectral functions of extended systems and provides a competitive alternative to dynamical mean-field theory for dynamic quantities.

  6. Privacy preserving interactive record linkage (PPIRL).

    PubMed

    Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley

    2014-01-01

    Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high quality record linkage requires a human-machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic Big Data. In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party to determine the linkage function manually and link data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption, (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allow frequent human interaction during the linkage process, is an effective human-machine hybrid system that significantly improves on the linkage center model both in terms of privacy and utility.

  7. Reinforcement learning and episodic memory in humans and animals: an integrative framework

    PubMed Central

    Gershman, Samuel J.; Daw, Nathaniel D.

    2018-01-01

    We review the psychology and neuroscience of reinforcement learning (RL), which has witnessed significant progress in the last two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, the simplicity of these tasks misses important aspects of reinforcement learning in the real world: (i) State spaces are high-dimensional, continuous, and partially observable; this implies that (ii) data are relatively sparse: indeed precisely the same situation may never be encountered twice; and also that (iii) rewards depend on long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, these theories have largely connected with procedural and semantic memory: how knowledge about action values or world models extracted gradually from many experiences can drive choice. This misses many aspects of memory related to traces of individual events, such as episodic memory. We suggest that these two gaps are related. In particular, the computational challenges can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (i) efficiently approximate value functions over complex state spaces, (ii) learn with very little data, and (iii) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system. PMID:27618944

  8. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  9. A model for solving the prescribed burn planning problem.

    PubMed

    Rachmawati, Ramya; Ozlen, Melih; Reinke, Karin J; Hearne, John W

    2015-01-01

    The increasing frequency of destructive wildfires, with a consequent loss of life and property, has led to fire and land management agencies initiating extensive fuel management programs. This involves long-term planning of fuel reduction activities such as prescribed burning or mechanical clearing. In this paper, we propose a mixed integer programming (MIP) model that determines when and where fuel reduction activities should take place. The model takes into account multiple vegetation types in the landscape, their tolerance to frequency of fire events, and keeps track of the age of each vegetation class in each treatment unit. The objective is to minimise fuel load over the planning horizon. The complexity of scheduling fuel reduction activities has led to the introduction of sophisticated mathematical optimisation methods. While these approaches can provide optimum solutions, they can be computationally expensive, particularly for fuel management planning which extends across the landscape and spans long term planning horizons. This raises the question of how much better the solutions of exact modelling approaches are compared to those of simpler heuristic approaches. To answer this question, the proposed model is run using an exact MIP (using a commercial MIP solver) and two heuristic approaches that decompose the problem into multiple single-period subproblems. The Knapsack Problem (KP), which is the first heuristic approach, solves the single-period problems using an exact MIP approach. The second heuristic approach solves the single-period subproblem using a greedy heuristic approach. The three methods are compared in terms of model tractability, computational time, and objective values. The model was tested using randomised data from 711 treatment units in the Barwon-Otway district of Victoria, Australia. Solutions for the exact MIP could be obtained only for planning horizons of up to 15 years using a standard implementation of CPLEX. Both heuristic approaches can solve significantly larger problems, involving 100-year or even longer planning horizons. Furthermore, there are no substantial differences in the solutions produced by the three approaches. It is concluded that for practical purposes a heuristic method is to be preferred to the exact MIP approach.
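
    The second (greedy) heuristic can be sketched as a single-period rule of thumb: each year, treat the eligible units with the highest fuel load until an area budget is exhausted. The unit attributes, the budget, and the fuel-accumulation rule below are hypothetical and serve only to illustrate the idea.

    ```python
    def greedy_period(units, area_budget):
        """One planning period of a greedy fuel-treatment heuristic (illustrative only).

        units: list of dicts with 'id', 'area', 'fuel_load', 'age', 'min_tfi'
        (minimum tolerable fire interval for the unit's vegetation type).
        Returns the ids of treated units and updates ages/fuel loads in place.
        """
        eligible = [u for u in units if u["age"] >= u["min_tfi"]]
        eligible.sort(key=lambda u: u["fuel_load"], reverse=True)
        treated, used = [], 0.0
        for u in eligible:
            if used + u["area"] <= area_budget:
                treated.append(u["id"])
                used += u["area"]
                u["age"], u["fuel_load"] = 0, 0.0
        # Untreated units age and accumulate fuel (toy accumulation rule).
        for u in units:
            if u["id"] not in treated:
                u["age"] += 1
                u["fuel_load"] += u["area"] * 0.1
        return treated

    units = [{"id": i, "area": 10.0, "fuel_load": float(i), "age": 5, "min_tfi": 3}
             for i in range(6)]
    for year in range(3):
        print(year, greedy_period(units, area_budget=25.0))
    ```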

  10. Large-scale symmetry-adapted perturbation theory computations via density fitting and Laplace transformation techniques: investigating the fundamental forces of DNA-intercalator interactions.

    PubMed

    Hohenstein, Edward G; Parrish, Robert M; Sherrill, C David; Turney, Justin M; Schaefer, Henry F

    2011-11-07

    Symmetry-adapted perturbation theory (SAPT) provides a means of probing the fundamental nature of intermolecular interactions. Low orders of SAPT (here, SAPT0) are especially attractive since they provide qualitative (sometimes quantitative) results while remaining tractable for large systems. The application of density fitting and Laplace transformation techniques to SAPT0 can significantly reduce the expense associated with these computations and make even larger systems accessible. We present new factorizations of the SAPT0 equations with density-fitted two-electron integrals and the first application of Laplace transformations of energy denominators to SAPT. The improved scalability of the DF-SAPT0 implementation allows it to be applied to systems with more than 200 atoms and 2800 basis functions. The Laplace-transformed energy denominators are compared to analogous partial Cholesky decompositions of the energy denominator tensor. Application of our new DF-SAPT0 program to the intercalation of DNA by proflavine has allowed us to determine the nature of the proflavine-DNA interaction. Overall, the proflavine-DNA interaction contains important contributions from both electrostatics and dispersion. The energetics of the intercalator interaction are dominated by the stacking interactions (two-thirds of the total), but contain important contributions from the intercalator-backbone interactions. It is hypothesized that the geometry of the complex will be determined by the interactions of the intercalator with the backbone, because by shifting toward one side of the backbone, the intercalator can form two long hydrogen-bonding type interactions. The long-range interactions between the intercalator and the next-nearest base pairs appear to be negligible, justifying the use of truncated DNA models in computational studies of intercalation interaction energies.
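
    The Laplace transformation of energy denominators mentioned here has the generic form below (shown for orientation, not transcribed from the paper); because the exponential factorizes over orbital indices, it decouples the denominator and enables the density-fitted factorizations described above.

    ```latex
    % Generic Laplace-quadrature representation of an orbital-energy denominator
    % (occupied i, j; virtual a, b), with fitted weights w_k and points t_k.
    \frac{1}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}
      = \int_{0}^{\infty} e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t}\,\mathrm{d}t
      \approx \sum_{k=1}^{n} w_k\, e^{-\varepsilon_a t_k}\, e^{-\varepsilon_b t_k}\,
              e^{+\varepsilon_i t_k}\, e^{+\varepsilon_j t_k}
    ```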

  11. Large-scale symmetry-adapted perturbation theory computations via density fitting and Laplace transformation techniques: Investigating the fundamental forces of DNA-intercalator interactions

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Sherrill, C. David; Turney, Justin M.; Schaefer, Henry F.

    2011-11-01

    Symmetry-adapted perturbation theory (SAPT) provides a means of probing the fundamental nature of intermolecular interactions. Low orders of SAPT (here, SAPT0) are especially attractive since they provide qualitative (sometimes quantitative) results while remaining tractable for large systems. The application of density fitting and Laplace transformation techniques to SAPT0 can significantly reduce the expense associated with these computations and make even larger systems accessible. We present new factorizations of the SAPT0 equations with density-fitted two-electron integrals and the first application of Laplace transformations of energy denominators to SAPT. The improved scalability of the DF-SAPT0 implementation allows it to be applied to systems with more than 200 atoms and 2800 basis functions. The Laplace-transformed energy denominators are compared to analogous partial Cholesky decompositions of the energy denominator tensor. Application of our new DF-SAPT0 program to the intercalation of DNA by proflavine has allowed us to determine the nature of the proflavine-DNA interaction. Overall, the proflavine-DNA interaction contains important contributions from both electrostatics and dispersion. The energetics of the intercalator interaction are dominated by the stacking interactions (two-thirds of the total), but contain important contributions from the intercalator-backbone interactions. It is hypothesized that the geometry of the complex will be determined by the interactions of the intercalator with the backbone, because by shifting toward one side of the backbone, the intercalator can form two long hydrogen-bonding type interactions. The long-range interactions between the intercalator and the next-nearest base pairs appear to be negligible, justifying the use of truncated DNA models in computational studies of intercalation interaction energies.

  12. E-RNAi: a web application for the multi-species design of RNAi reagents—2010 update

    PubMed Central

    Horn, Thomas; Boutros, Michael

    2010-01-01

    The design of RNA interference (RNAi) reagents is an essential step for performing loss-of-function studies in many experimental systems. The availability of sequenced and annotated genomes greatly facilitates RNAi experiments in an increasing number of organisms that were previously not genetically tractable. The E-RNAi web-service, accessible at http://www.e-rnai.org/, provides a computational resource for the optimized design and evaluation of RNAi reagents. The 2010 update of E-RNAi now covers 12 genomes, including Drosophila, Caenorhabditis elegans, human, emerging model organisms such as Schmidtea mediterranea and Acyrthosiphon pisum, as well as the medically relevant vectors Anopheles gambiae and Aedes aegypti. The web service calculates RNAi reagents based on the input of target sequences, sequence identifiers or by visual selection of target regions through a genome browser interface. It identifies optimized RNAi target-sites by ranking sequences according to their predicted specificity, efficiency and complexity. E-RNAi also facilitates the design of secondary RNAi reagents for validation experiments, evaluation of pooled siRNA reagents and batch design. Results are presented online, as a downloadable HTML report and as tab-delimited files. PMID:20444868

  13. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
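
    The casualty expectation underlying such risk numbers is commonly written in the following form (a standard expression, not quoted from the paper), where the sum runs over surviving fragments, rho_i is the population density under the impact-point distribution of fragment i, and A_C,i is its casualty area:

    ```latex
    % Expected number of casualties from N surviving fragments:
    % rho_i = population density under the impact-point distribution of fragment i,
    % A_{C,i} = casualty area of fragment i.
    E_C \approx \sum_{i=1}^{N} \rho_i \, A_{C,i}
    ```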

  14. Compositional and enumerative designs for medical language representation.

    PubMed Central

    Rassinoux, A. M.; Miller, R. A.; Baud, R. H.; Scherrer, J. R.

    1997-01-01

    Medical language is in essence highly compositional, allowing complex information to be expressed from more elementary pieces. Embedding the expressive power of medical language into formal systems of representation is recognized in the medical informatics community as a key step towards sharing such information among medical record, decision support, and information retrieval systems. Accordingly, such representation requires managing both the expressiveness of the formalism and its computational tractability, while coping with the level of detail expected by clinical applications. These desiderata can be supported by enumerative as well as compositional approaches, as argued in this paper. These principles have been applied in recasting a frame-based system for general medical findings developed during the 1980s. The new system captures the precise meaning of a subset of over 1500 medical terms for general internal medicine identified from the Quick Medical Reference (QMR) lexicon. In order to evaluate the adequacy of this formal structure in reflecting the deep meaning of the QMR findings, a validation process was implemented. It consists of automatically rebuilding the semantic representation of the QMR findings by analyzing them through the RECIT natural language analyzer, whose semantic components have been adjusted to this frame-based model for the understanding task. PMID:9357700

  15. Compositional and enumerative designs for medical language representation.

    PubMed

    Rassinoux, A M; Miller, R A; Baud, R H; Scherrer, J R

    1997-01-01

    Medical language is in essence highly compositional, allowing complex information to be expressed from more elementary pieces. Embedding the expressive power of medical language into formal systems of representation is recognized in the medical informatics community as a key step towards sharing such information among medical record, decision support, and information retrieval systems. Accordingly, such representation requires managing both the expressiveness of the formalism and its computational tractability, while coping with the level of detail expected by clinical applications. These desiderata can be supported by enumerative as well as compositional approaches, as argued in this paper. These principles have been applied in recasting a frame-based system for general medical findings developed during the 1980s. The new system captures the precise meaning of a subset of over 1500 medical terms for general internal medicine identified from the Quick Medical Reference (QMR) lexicon. In order to evaluate the adequacy of this formal structure in reflecting the deep meaning of the QMR findings, a validation process was implemented. It consists of automatically rebuilding the semantic representation of the QMR findings by analyzing them through the RECIT natural language analyzer, whose semantic components have been adjusted to this frame-based model for the understanding task.

  16. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    NASA Technical Reports Server (NTRS)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  17. Unifying perspective: Solitary traveling waves as discrete breathers in Hamiltonian lattices and energy criteria for their stability

    NASA Astrophysics Data System (ADS)

    Cuevas-Maraver, Jesús; Kevrekidis, Panayotis G.; Vainchtein, Anna; Xu, Haitao

    2017-09-01

    In this work, we provide two complementary perspectives for the (spectral) stability of solitary traveling waves in Hamiltonian nonlinear dynamical lattices, of which the Fermi-Pasta-Ulam and the Toda lattice are prototypical examples. One is as an eigenvalue problem for a stationary solution in a cotraveling frame, while the other is as a periodic orbit modulo shifts. We connect the eigenvalues of the former with the Floquet multipliers of the latter and using this formulation derive an energy-based spectral stability criterion. It states that a sufficient (but not necessary) condition for a change in wave stability is that the functional dependence of the energy (Hamiltonian) H of the model on the wave velocity c changes its monotonicity. Moreover, near the critical velocity where the change of stability occurs, we provide an explicit leading-order computation of the unstable eigenvalues, based on the second derivative of the Hamiltonian H''(c0) evaluated at the critical velocity c0. We corroborate this conclusion with a series of analytically and numerically tractable examples and discuss its parallels with a recent energy-based criterion for the stability of discrete breathers.
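
    Restated compactly (following the abstract's wording, with notation assumed here), the criterion reads:

    ```latex
    % Energy-based criterion (restated from the abstract): a change of spectral
    % stability can occur where H(c) changes monotonicity, i.e. at a velocity c_0 with
    \left.\frac{\mathrm{d}H}{\mathrm{d}c}\right|_{c = c_0} = 0,
    % and the leading-order size of the emerging unstable eigenvalue pair near c_0
    % is controlled by the second derivative H''(c_0).
    ```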

  18. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.

  19. Dielectric Screening Meets Optimally Tuned Density Functionals.

    PubMed

    Kronik, Leeor; Kümmel, Stephan

    2018-04-17

    A short overview of recent attempts at merging two independently developed methods is presented. These are the optimal tuning of a range-separated hybrid (OT-RSH) functional, developed to provide an accurate first-principles description of the electronic structure and optical properties of gas-phase molecules, and the polarizable continuum model (PCM), developed to provide an approximate but computationally tractable description of a solvent in terms of an effective dielectric medium. After a brief overview of the OT-RSH approach, its combination with the PCM as a potentially accurate yet low-cost approach to the study of molecular assemblies and solids, particularly in the context of photocatalysis and photovoltaics, is discussed. First, solvated molecules are considered, with an emphasis on the challenge of balancing eigenvalue and total energy trends. Then, it is shown that the same merging of methods can also be used to study the electronic and optical properties of molecular solids, with a similar discussion of the pros and cons. Tuning of the effective scalar dielectric constant is considered as one recent approach that mitigates some of the difficulties in merging the two methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Photofragmentation of Gas-Phase Lanthanide Cyclopentadienyl Complexes: Experimental and Time-Dependent Excited-State Molecular Dynamics

    PubMed Central

    2015-01-01

    Unimolecular gas-phase laser-photodissociation reaction mechanisms of open-shell lanthanide cyclopentadienyl complexes, Ln(Cp)3 and Ln(TMCp)3, are analyzed from experimental and computational perspectives. The most probable pathways for the photoreactions are inferred from photoionization time-of-flight mass spectrometry (PI-TOF-MS), which provides the sequence of reaction intermediates and the distribution of final products. Time-dependent excited-state molecular dynamics (TDESMD) calculations provide insight into the electronic mechanisms for the individual steps of the laser-driven photoreactions for Ln(Cp)3. Computational analysis correctly predicts several key reaction products as well as the observed branching between two reaction pathways: (1) ligand ejection and (2) ligand cracking. Simulations support our previous assertion that both reaction pathways are initiated via a ligand-to-metal charge-transfer (LMCT) process. For the more complex chemistry of the tetramethylcyclopentadienyl complexes Ln(TMCp)3, TDESMD is less tractable, but computational geometry optimization reveals the structures of intermediates deduced from PI-TOF-MS, including several classic “tuck-in” structures and products of Cp ring expansion. The results have important implications for metal–organic catalysis and laser-assisted metal–organic chemical vapor deposition (LCVD) of insulators with high dielectric constants. PMID:24910492

  1. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the values of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, as a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Covalent Ligand Discovery against Druggable Hotspots Targeted by Anti-cancer Natural Products.

    PubMed

    Grossman, Elizabeth A; Ward, Carl C; Spradlin, Jessica N; Bateman, Leslie A; Huffman, Tucker R; Miyamoto, David K; Kleinman, Jordan I; Nomura, Daniel K

    2017-11-16

    Many natural products that show therapeutic activities are often difficult to synthesize or isolate and have unknown targets, hindering their development as drugs. Identifying druggable hotspots targeted by covalently acting anti-cancer natural products can enable pharmacological interrogation of these sites with more synthetically tractable compounds. Here, we used chemoproteomic platforms to discover that the anti-cancer natural product withaferin A targets C377 on the regulatory subunit PPP2R1A of the tumor-suppressor protein phosphatase 2A (PP2A) complex leading to activation of PP2A activity, inactivation of AKT, and impaired breast cancer cell proliferation. We developed a more synthetically tractable cysteine-reactive covalent ligand, JNS 1-40, that selectively targets C377 of PPP2R1A to impair breast cancer signaling, proliferation, and in vivo tumor growth. Our study highlights the utility of using chemoproteomics to map druggable hotspots targeted by complex natural products and subsequently interrogating these sites with more synthetically tractable covalent ligands for cancer therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.

  4. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  5. How Suspicion Grows: Effects of Population Size on Cooperation

    DTIC Science & Technology

    2014-09-01

    Abbreviations: STFT, suspicious tit-for-tat; TFT, tit-for-tat; TF2T, tit-for-two-tats. Executive Summary: People in a group typically...theoretic model mathematically tractable, we restrict each individual to four strategies: tit-for-two-tats (TF2T), tit-for-tat (TFT), suspicious-tit...stranger, TF2T will begin by cooperating twice, TFT by cooperating once, and STFT by defecting once. After the initial moves, in each encounter, the

  6. Toward a Greater Understanding of the Reduction of Drift Coefficients in the Presence of Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelbrecht, N. E.; Strauss, R. D.; Burger, R. A.

    2017-06-01

    Drift effects play a significant role in the transport of charged particles in the heliosphere. A turbulent magnetic field is also known to reduce the effects of particle drifts. The exact nature of this reduction, however, is not clear. This study aims to provide some insight into this reduction and proposes a relatively simple, tractable means of modeling it that provides results in reasonable agreement with numerical simulations of the drift coefficient in a turbulent magnetic field.

  7. Chemical potential driven phase transition of black holes in anti-de Sitter space

    NASA Astrophysics Data System (ADS)

    Galante, Mario; Giribet, Gaston; Goya, Andrés; Oliva, Julio

    2015-11-01

    Einstein-Maxwell theory conformally coupled to a scalar field in D dimensions may exhibit a phase transition at low temperature whose end point is an asymptotically anti-de Sitter black hole with a scalar field profile that is regular everywhere outside and on the horizon. This provides a tractable model to study the phase transition of hairy black holes in anti-de Sitter space in which the backreaction on the geometry can be solved analytically.

  8. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  9. Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland

    1998-01-01

    Given the imminent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
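
    The core of the algorithm can be sketched as follows (a simplified illustration, not the original C program): for each element, search small candidate input sets and accept one whose mutual information with the element's next state equals that state's entropy, meaning the inputs fully determine the output.

    ```python
    import numpy as np
    from itertools import combinations
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (bits) of a sequence of hashable labels."""
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def mutual_information(x_cols, y):
        """M(X;Y) = H(X) + H(Y) - H(X,Y) for a tuple of input columns X."""
        x_joint = [tuple(row) for row in np.column_stack(x_cols)]
        xy_joint = [xr + (yv,) for xr, yv in zip(x_joint, y)]
        return entropy(x_joint) + entropy(list(y)) - entropy(xy_joint)

    def reveal(inputs, outputs, target, max_k=3):
        """Return a minimal input set that fully determines element `target`:
        complete determination corresponds to M(inputs; output) == H(output)."""
        h_out = entropy(list(outputs[:, target]))
        for k in range(1, max_k + 1):
            for subset in combinations(range(inputs.shape[1]), k):
                cols = [inputs[:, j] for j in subset]
                if np.isclose(mutual_information(cols, outputs[:, target]), h_out):
                    return subset
        return None

    # Toy 3-element Boolean network: element 0 at t+1 equals (element 1 AND element 2) at t.
    rng = np.random.default_rng(0)
    states = rng.integers(0, 2, size=(64, 3))
    outputs = np.column_stack([states[:, 1] & states[:, 2], states[:, 0], states[:, 1]])
    print(reveal(states, outputs, target=0))   # expected: (1, 2)
    ```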

  10. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.

  11. To the theory of particle lifting by terrestrial and Martian dust devils

    NASA Astrophysics Data System (ADS)

    Kurgansky, M. V.

    2018-01-01

    The combined Rankine vortex model is applied to describe the radial profile of azimuthal velocity in atmospheric dust devils, and a simplified model of the turbulent surface boundary layer beneath the vortex periphery, which corresponds to the potential vortex, is proposed. Based on the results of Burggraf et al. (1971), it is accepted that the radial velocity near the ground in the potential vortex greatly exceeds the azimuthal velocity, which makes the determination of the surface shear stress tractable, including for the case of a turbulent surface boundary layer. The constructed model explains how the threshold shear velocity for aeolian transport is exceeded in typical dust-devil vortices both on Earth and on Mars.
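
    For reference, the combined Rankine vortex invoked above joins a solid-body core to a potential (free) vortex at the core radius; in LaTeX form (a standard textbook expression, with v_max denoting the maximum azimuthal velocity at the core radius r_c, not notation taken from the paper):

      v_\varphi(r) =
        \begin{cases}
          v_{\max}\, r / r_c, & r \le r_c, \\
          v_{\max}\, r_c / r, & r > r_c .
        \end{cases}

    The r > r_c branch is the potential-vortex periphery beneath which the surface boundary layer model is constructed.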

  12. Effects of Network Structure, Competition and Memory Time on Social Spreading Phenomena

    NASA Astrophysics Data System (ADS)

    Gleeson, James P.; O'Sullivan, Kevin P.; Baños, Raquel A.; Moreno, Yamir

    2016-04-01

    Online social media has greatly affected the way in which we communicate with each other. However, little is known about what fundamental mechanisms drive dynamical information flow in online social systems. Here, we introduce a generative model for online sharing behavior that is analytically tractable and that can reproduce several characteristics of empirical micro-blogging data on hashtag usage, such as (time-dependent) heavy-tailed distributions of meme popularity. The presented framework constitutes a null model for social spreading phenomena that, in contrast to purely empirical studies or simulation-based models, clearly distinguishes the roles of two distinct factors affecting meme popularity: the memory time of users and the connectivity structure of the social network.

  13. Analytical performance study of solar blind non-line-of-sight ultraviolet short-range communication links.

    PubMed

    Xu, Zhengyuan; Ding, Haipeng; Sadler, Brian M; Chen, Gang

    2008-08-15

    Motivated by recent advances in solid-state incoherent ultraviolet sources and solar blind detectors, we study communication link performance over a range of less than 1 km with a bit error rate (BER) below 10^-3 in a solar blind non-line-of-sight situation. The widely adopted yet complex single scattering channel model is significantly simplified by means of a closed-form expression for tractable analysis. Path loss is given as a function of transceiver geometry as well as atmospheric scattering and attenuation and is compared with experimental data for model validation. The BER performance of a shot-noise-limited receiver under this channel model is demonstrated.

  14. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    PubMed

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  15. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This mismatch makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  16. Density functional theory for d- and f-electron materials and compounds

    DOE PAGES

    Mattson, Ann E.; Wills, John M.

    2016-02-12

    Here, the fundamental requirements for a computationally tractable Density Functional Theory-based method for relativistic f- and (nonrelativistic) d-electron materials and compounds are presented. The need for basing the Kohn–Sham equations on the Dirac equation is discussed. The full Dirac scheme needs exchange-correlation functionals in terms of four-currents, but ordinary functionals, using charge density and spin-magnetization, can be used in an approximate Dirac treatment. The construction of a functional that includes the additional confinement physics needed for these materials is illustrated using the subsystem-functional scheme. If future studies show that a full Dirac, four-current based, exchange-correlation functional is needed, the subsystem-functional scheme is one of the few schemes that can still be used for constructing functional approximations.

  17. Holography as deep learning

    NASA Astrophysics Data System (ADS)

    Gan, Wen-Cong; Shu, Fu-Wen

    The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of deep neural networks (DNNs) based on deep learning is clarified by mapping them to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has an intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of a DNN. We find the entanglement structure of the DNN is of Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.

  18. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
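
    The Chebyshev-based enforcement of voltage limits mentioned above can be sketched with a one-sided (Cantelli-type) bound. Assuming, for illustration only, that a bus voltage is approximated affinely as v = v_0 + a^T ξ, where the forecast-error vector ξ has mean μ and covariance Σ, a sufficient condition for Pr{v ≤ v^max} ≥ 1 - ε is, in LaTeX form,

      v_0 + a^{\top}\mu + \sqrt{\frac{1-\epsilon}{\epsilon}}\,\sqrt{a^{\top}\Sigma\,a} \;\le\; v^{\max},

    which is a second-order cone constraint in the decision variables entering v_0 and a, and is therefore convex and computationally tractable. This is a generic reformulation consistent with the abstract, not necessarily the paper's exact expression.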

  19. Stochastic dynamics of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2010-05-01

    We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the Kwa Zulu-Natal Province, South Africa, with possibly notable epidemiological implications.

  20. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. It means that for FDTD simulation, the time step should be small enough to resolve the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented a code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From these results, we found that using a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  1. Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.

    2011-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also contain significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced Quantum-Kinetic chemistry model, and to include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made to computational fluid dynamics (CFD) solutions. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be quite a bit of variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate more high-fidelity radiative heat transfer solutions are discussed.

  2. Translation of Land Surface Model Accuracy and Uncertainty into Coupled Land-Atmosphere Prediction

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A.; Kumar, Sujay; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Zhou, Shuija

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface heat and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (US-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on (a) the spinup of the land surface used as initial conditions, and (b) the heat and moisture states and fluxes of the coupled WRF simulations is then assessed in terms of ambient weather and land-atmosphere coupling, along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data, are also discussed.

  3. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization.

    PubMed

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-03-15

    Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.

  4. Refined stratified-worm-burden models that incorporate specific biological features of human and snail hosts provide better estimates of Schistosoma diagnosis, transmission, and control.

    PubMed

    Gurarie, David; King, Charles H; Yoon, Nara; Li, Emily

    2016-08-04

    Schistosoma parasites sustain a complex transmission process that cycles between a definitive human host, two free-swimming larval stages, and an intermediate snail host. Multiple factors modify their transmission and affect their control, including heterogeneity in host populations and environment, the aggregated distribution of human worm burdens, and features of parasite reproduction and host snail biology. Because these factors serve to enhance local transmission, their inclusion is important in attempting accurate quantitative prediction of the outcomes of schistosomiasis control programs. However, their inclusion raises many mathematical and computational challenges. To address these, we have recently developed a tractable stratified worm burden (SWB) model that occupies an intermediate place between simpler deterministic mean worm burden models and the very computationally-intensive, autonomous agent models. To refine the accuracy of model predictions, we modified an earlier version of the SWB by incorporating factors representing essential in-host biology (parasite mating, aggregation, density-dependent fecundity, and random egg-release) into demographically structured host communities. We also revised the snail component of the transmission model to reflect a saturable form of human-to-snail transmission. The new model allowed us to realistically simulate overdispersed egg-test results observed in individual-level field data. We further developed a Bayesian-type calibration methodology that accounted for model and data uncertainties. The new model methodology was applied to multi-year, individual-level field data on S. haematobium infections in coastal Kenya. We successfully derived age-specific estimates of worm burden distributions and worm fecundity and crowding functions for children and adults. Estimates from the new SWB model were compared with those from the older, simpler SWB with some substantial differences noted. We validated our new SWB estimates in prediction of drug treatment-based control outcomes for a typical Kenyan community. The new version of the SWB model provides a better tool to predict the outcomes of ongoing schistosomiasis control programs. It reflects parasite features that augment and perpetuate transmission, while it also readily incorporates differences in diagnostic testing and human sub-population differences in treatment coverage. Once extended to other Schistosoma species and transmission environments, it will provide a useful and efficient tool for planning control and elimination strategies.

  5. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Issen, Kathleen

    2017-06-05

    This project employed a continuum approach to formulate an elastic constitutive model for Castlegate sandstone. The resulting constitutive framework for high porosity sandstone is thermodynamically sound (i.e., it does not violate the first and second laws of thermodynamics), represents known material constitutive response, and is able to be calibrated using available mechanical response data. To authenticate the accuracy of this model, a series of validation criteria were employed, using an existing mechanical response data set for Castlegate sandstone. The resulting constitutive framework is applicable to high porosity sandstones in general, and is tractable for scientists and researchers endeavoring to solve problems of practical interest.

  6. Evolution of host innate defence: insights from C. elegans and primitive invertebrates

    PubMed Central

    Irazoqui, Javier E.; Urbach, Jonathan M.; Ausubel, Frederick M.

    2010-01-01

    The genetically tractable model organism Caenorhabditis elegans was first used to model bacterial virulence in vivo a decade ago. Since then, great strides have been made in the identification of host response pathways that are involved in the defence against infection. Strikingly, C. elegans seems to detect and respond to infection without the involvement of its Toll-like receptor homologue, in contrast to the well-established role for these proteins in innate immunity in mammals. What, therefore, do we know about host defence mechanisms in C. elegans, and what can they tell us about innate immunity in higher organisms? PMID:20029447

  7. Hydra as a tractable, long-lived model system for senescence

    PubMed Central

    Bellantuono, Anthony J.; Bridge, Diane; Martínez, Daniel E.

    2015-01-01

    Hydra represents a unique model system for the study of senescence, with the opportunity for the comparison of non-aging and induced senescence. Hydra maintains three stem cell lineages, used for continuous tissue morphogenesis and replacement. Recent work has elucidated the roles of the insulin/IGF-1 signaling target FoxO, of Myc proteins, and of PIWI proteins in Hydra stem cells. Under laboratory culture conditions, Hydra vulgaris show no signs of aging even under long-term study. In contrast, Hydra oligactis can be experimentally induced to undergo reproduction-associated senescence. This provides a powerful comparative system for future studies. PMID:26136619

  8. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
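
    A common way to smooth the rank function, consistent in spirit with the model described above though not necessarily the authors' exact surrogate, replaces the rank of a matrix X (e.g., a matrix of similar nonlocal patches) by a Gaussian-smoothed count of its nonzero singular values; in LaTeX form,

      \operatorname{rank}(X) \;\approx\; n - \sum_{i=1}^{n} \exp\!\left(-\frac{\sigma_i^2(X)}{2\delta^2}\right),

    which converges to rank(X) as δ → 0 and is smooth for δ > 0, so it can be handled within an alternating-minimization scheme that splits the reconstruction into tractable subproblems.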

  9. Chapter 4. New model systems for the study of developmental evolution in plants.

    PubMed

    Kramer, Elena M

    2009-01-01

    The number of genetically tractable plant model systems is rapidly increasing, thanks to the decreasing cost of sequencing and the wide amenability of plants to stable transformation and other functional approaches. In this chapter, I discuss emerging model systems from throughout the land plant phylogeny and consider how their unique attributes are contributing to our understanding of development, evolution, and ecology. These new models are being developed using two distinct strategies: in some cases, they are selected because of their close relationship to the established models, while in others, they are chosen with the explicit intention of exploring distantly related plant lineages. Such complementary approaches are yielding exciting new results that shed light on both micro- and macroevolutionary processes in the context of developmental evolution.

  10. Choosing Meteorological Input for the Global Modeling Initiative Assessment of High Speed Aircraft

    NASA Technical Reports Server (NTRS)

    Douglas, A. R.; Prather, M. P.; Hall, T. M.; Strahan, S. E.; Rasch, P. J.; Sparling, L. C.; Coy, L.; Rodriquez, J. M.

    1998-01-01

    The Global Modeling Initiative (GMI) science team is developing a three dimensional chemistry and transport model (CTM) to be used in assessment of the atmospheric effects of aviation. Requirements are that this model be documented, be validated against observations, use a realistic atmospheric circulation, and contain numerical transport and photochemical modules representing atmospheric processes. The model must also retain computational efficiency to be tractable to use for multiple scenarios and sensitivity studies. To meet these requirements, a facility model concept was developed in which the different components of the CTM are evaluated separately. The first use of the GMI model will be to evaluate the impact of the exhaust of supersonic aircraft on the stratosphere. The assessment calculations will depend strongly on the wind and temperature fields used by the CTM. Three meteorological data sets for the stratosphere are available to GMI: the National Center for Atmospheric Research Community Climate Model (CCM2), the Goddard Earth Observing System Data Assimilation System (GEOS DAS), and the Goddard Institute for Space Studies general circulation model (GISS). Objective criteria were established by the GMI team to identify the data set which provides the best representation of the stratosphere. Simulations of gases with simple chemical control were chosen to test various aspects of model transport. The three meteorological data sets were evaluated and graded based on their ability to simulate these aspects of stratospheric measurements. This paper describes the criteria used in grading the meteorological fields. The meteorological data set which has the highest score and therefore was selected for GMI is CCM2. This type of objective model evaluation establishes a physical basis for interpretation of differences between models and observations. Further, the method provides a quantitative basis for defining model errors, for discriminating between different models, and for ready re-evaluation of improved models. These in turn will lead to a higher level of confidence in assessment calculations.

  11. The Drosophila melanogaster host model

    PubMed Central

    Igboin, Christina O.; Griffen, Ann L.; Leys, Eugene J.

    2012-01-01

    The deleterious and sometimes fatal outcomes of bacterial infectious diseases are the net result of the interactions between the pathogen and the host, and the genetically tractable fruit fly, Drosophila melanogaster, has emerged as a valuable tool for modeling the pathogen–host interactions of a wide variety of bacteria. These studies have revealed that there is a remarkable conservation of bacterial pathogenesis and host defence mechanisms between higher host organisms and Drosophila. This review presents an in-depth discussion of the Drosophila immune response, the Drosophila killing model, and the use of the model to examine bacterial–host interactions. The recent introduction of the Drosophila model into the oral microbiology field is discussed, specifically the use of the model to examine Porphyromonas gingivalis–host interactions, and finally the potential uses of this powerful model system to further elucidate oral bacterial-host interactions are addressed. PMID:22368770

  12. The Drosophila melanogaster host model.

    PubMed

    Igboin, Christina O; Griffen, Ann L; Leys, Eugene J

    2012-01-01

    The deleterious and sometimes fatal outcomes of bacterial infectious diseases are the net result of the interactions between the pathogen and the host, and the genetically tractable fruit fly, Drosophila melanogaster, has emerged as a valuable tool for modeling the pathogen-host interactions of a wide variety of bacteria. These studies have revealed that there is a remarkable conservation of bacterial pathogenesis and host defence mechanisms between higher host organisms and Drosophila. This review presents an in-depth discussion of the Drosophila immune response, the Drosophila killing model, and the use of the model to examine bacterial-host interactions. The recent introduction of the Drosophila model into the oral microbiology field is discussed, specifically the use of the model to examine Porphyromonas gingivalis-host interactions, and finally the potential uses of this powerful model system to further elucidate oral bacterial-host interactions are addressed.

  13. Binding-Site Compatible Fragment Growing Applied to the Design of β2-Adrenergic Receptor Ligands.

    PubMed

    Chevillard, Florent; Rimmer, Helena; Betti, Cecilia; Pardon, Els; Ballet, Steven; van Hilten, Niek; Steyaert, Jan; Diederich, Wibke E; Kolb, Peter

    2018-02-08

    Fragment-based drug discovery is intimately linked to fragment extension approaches that can be accelerated using software for de novo design. Although computers allow for the facile generation of millions of suggestions, synthetic feasibility is often neglected. In this study we computationally extended, chemically synthesized, and experimentally assayed new ligands for the β2-adrenergic receptor (β2AR) by growing fragment-sized ligands. In order to address the synthetic tractability issue, our in silico workflow aims at derivatized products based on robust organic reactions. The study started from the predicted binding modes of five fragments. We suggested a total of eight diverse extensions that were easily synthesized, and further assays showed that four products had an improved affinity (up to 40-fold) compared to their respective initial fragment. The described workflow, which we call "growing via merging" and for which the key tools are available online, can improve early fragment-based drug discovery projects, making it a useful creative tool for medicinal chemists during structure-activity relationship (SAR) studies.

  14. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  15. Laminar soot processes

    NASA Technical Reports Server (NTRS)

    Sunderland, P. B.; Lin, K.-C.; Faeth, G. M.

    1995-01-01

    Soot processes within hydrocarbon fueled flames are important because they affect the durability and performance of propulsion systems, the hazards of unwanted fires, the pollutant and particulate emissions from combustion processes, and the potential for developing computational combustion. Motivated by these observations, the present investigation is studying soot processes in laminar diffusion and premixed flames in order to better understand the soot and thermal radiation emissions of luminous flames. Laminar flames are being studied due to their experimental and computational tractability, noting the relevance of such results to practical turbulent flames through the laminar flamelet concept. Weakly-buoyant and nonbuoyant laminar diffusion flames are being considered because buoyancy affects soot processes in flames while most practical flames involve negligible effects of buoyancy. Thus, low-pressure weakly-buoyant flames are being observed during ground-based experiments while near atmospheric pressure nonbuoyant flames will be observed during space flight experiments at microgravity. Finally, premixed laminar flames also are being considered in order to observe some aspects of soot formation for simpler flame conditions than diffusion flames. The main emphasis of current work has been on measurements of soot nucleation and growth in laminar diffusion and premixed flames.

  16. Systems neuroscience in Drosophila: Conceptual and technical advantages.

    PubMed

    Kazama, H

    2015-06-18

    The fruit fly Drosophila melanogaster is ideally suited for investigating the neural circuit basis of behavior. Due to the simplicity and genetic tractability of the fly brain, neurons and circuits are identifiable across animals. Additionally, a large set of transgenic lines has been developed with the aim of specifically labeling small subsets of neurons and manipulating them in sophisticated ways. Electrophysiology and imaging can be applied in behaving individuals to examine the computations performed by each neuron, and even the entire population of relevant neurons in a particular region, because of the small size of the brain. Moreover, a rich repertoire of behaviors that can be studied is expanding to include those requiring cognitive abilities. Thus, the fly brain is an attractive system in which to explore both computations and mechanisms underlying behavior at levels spanning from genes through neurons to circuits. This review summarizes the advantages Drosophila offers in achieving this objective. A recent neurophysiology study on olfactory behavior is also introduced to demonstrate the effectiveness of these advantages. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. To cut or not to cut? Assessing the modular structure of brain networks.

    PubMed

    Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M

    2014-05-01

    A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
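
    For reference, the modularity whose null distribution is characterized above is the standard Newman-Girvan quantity; in LaTeX form,

      Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\,\delta(c_i, c_j),

    where A is the adjacency matrix, k_i the degree of node i, m the total number of edges, and δ(c_i, c_j) = 1 when nodes i and j are assigned to the same module. High Q in a random network can arise purely from incidental edge concentration, which is what the Tracy-Widom-based test controls for.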

  18. Organelle Size Scaling of the Budding Yeast Vacuole by Relative Growth and Inheritance.

    PubMed

    Chan, Yee-Hung M; Reyes, Lorena; Sohail, Saba M; Tran, Nancy K; Marshall, Wallace F

    2016-05-09

    It has long been noted that larger animals have larger organs compared to smaller animals of the same species, a phenomenon termed scaling [1]. Julian Huxley proposed an appealingly simple model of "relative growth", in which an organ and the whole body grow with their own intrinsic rates [2], that was invoked to explain scaling in organs from fiddler crab claws to human brains. Because organ size is regulated by complex, unpredictable pathways [3], it remains unclear whether scaling requires feedback mechanisms to regulate organ growth in response to organ or body size. The molecular pathways governing organelle biogenesis are simpler than those governing organogenesis, and therefore organelle size scaling in the cell provides a more tractable case for testing Huxley's model. We ask the question: is it possible for organelle size scaling to arise if organelle growth is independent of organelle or cell size? Using the yeast vacuole as a model, we tested whether mutants defective in vacuole inheritance, vac8Δ and vac17Δ, tune vacuole biogenesis in response to perturbations in vacuole size. In vac8Δ/vac17Δ, vacuole scaling increases with the replicative age of the cell. Furthermore, vac8Δ/vac17Δ cells continued generating vacuole at roughly constant rates even when they had significantly larger vacuoles compared to wild-type. With support from computational modeling, these results suggest there is no feedback between vacuole biogenesis rates and vacuole or cell size. Rather, size scaling is determined by the relative growth rates of the vacuole and the cell, thus representing a cellular version of Huxley's model. Copyright © 2016 Elsevier Ltd. All rights reserved.
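
    Huxley's relative-growth picture invoked above amounts to two independent exponential growth laws whose rate ratio sets the scaling exponent; in LaTeX form (a textbook statement given for orientation, not an equation taken from the paper),

      \frac{1}{V}\frac{dV}{dt} = a, \qquad \frac{1}{C}\frac{dC}{dt} = b \quad\Longrightarrow\quad V \propto C^{\,a/b},

    so a constant ratio of the intrinsic growth rates of vacuole volume V and cell volume C yields power-law (allometric) size scaling without any feedback between organelle and cell size.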

  19. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist, neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.

  20. State estimation of spatio-temporal phenomena

    NASA Astrophysics Data System (ADS)

    Yu, Dan

    This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using the Kalman filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem to place multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model, while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is illustrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown to outperform the augmented two-stage Kalman filter (ASKF) and the unbiased minimum-variance (UMV) algorithm in several examples. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations into a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications have been provided to demonstrate the performance of the proposed approach.
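
    As background for the randomized projection idea mentioned above, the sketch below shows a generic randomized SVD applied to a snapshot matrix to extract a POD-type reduced basis. It is not the dissertation's RPOD or RPOD* algorithm; all names, sizes, and the toy snapshot data are illustrative assumptions.

      import numpy as np

      def randomized_pod_basis(snapshots, r, oversample=10, seed=0):
          """Approximate the leading r POD modes of a snapshot matrix using a
          random Gaussian sketch followed by an SVD of a small projected matrix."""
          rng = np.random.default_rng(seed)
          omega = rng.standard_normal((snapshots.shape[1], r + oversample))  # test matrix
          Q, _ = np.linalg.qr(snapshots @ omega)        # orthonormal basis for the range
          B = Q.T @ snapshots                           # small projected matrix
          U_small, s, _ = np.linalg.svd(B, full_matrices=False)
          return (Q @ U_small)[:, :r], s[:r]            # approximate modes, singular values

      # Example: snapshots of a travelling Gaussian pulse (a field with low-rank structure).
      x = np.linspace(0.0, 1.0, 500)
      snapshots = np.array([np.exp(-50.0 * (x - 0.3 - 0.4 * t) ** 2)
                            for t in np.linspace(0.0, 1.0, 80)]).T   # shape (500, 80)
      modes, sv = randomized_pod_basis(snapshots, r=5)
      print(modes.shape, sv)                            # (500, 5) reduced basis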

  1. 0–0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D), ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds

    PubMed Central

    2015-01-01

    The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as Green’s function based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other but for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and that (iii) the average impact of nonequilibrium correction on 0–0 energies is negligible. PMID:26574326

  2. Free energy component analysis for drug design: a case study of HIV-1 protease-inhibitor binding.

    PubMed

    Kalra, P; Reddy, T V; Jayaram, B

    2001-12-06

    A theoretically rigorous and computationally tractable methodology for the prediction of the free energies of binding of protein-ligand complexes is presented. The method formulated involves developing molecular dynamics trajectories of the enzyme, the inhibitor, and the complex, followed by a free energy component analysis that conveys information on the physicochemical forces driving the protein-ligand complex formation and enables an elucidation of drug design principles for a given receptor from a thermodynamic perspective. The complexes of HIV-1 protease with two peptidomimetic inhibitors were taken as illustrative cases. Four-nanosecond-level all-atom molecular dynamics simulations using explicit solvent without any restraints were carried out on the protease-inhibitor complexes and the free proteases, and the trajectories were analyzed via a thermodynamic cycle to calculate the binding free energies. The computed free energies were seen to be in good accord with the reported data. It was noted that the net van der Waals and hydrophobic contributions were favorable to binding while the net electrostatics, entropies, and adaptation expense were unfavorable in these protease-inhibitor complexes. The hydrogen bond between the CH2OH group of the inhibitor at the scissile position and the catalytic aspartate was found to be favorable to binding. Various implicit solvent models were also considered and their shortcomings discussed. In addition, some plausible modifications to the inhibitor residues were attempted, which led to better binding affinities. The generality of the method and the transferability of the protocol with essentially no changes to any other protein-ligand system are emphasized.
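
    The thermodynamic-cycle bookkeeping used above can be summarized, in a generic component form (an illustrative breakdown, not the paper's exact partitioning), as

      \Delta G_{\mathrm{bind}} \;=\; G_{\mathrm{complex}} - G_{\mathrm{protein}} - G_{\mathrm{ligand}}
      \;\approx\; \Delta E_{\mathrm{el}} + \Delta E_{\mathrm{vdW}} + \Delta G_{\mathrm{hyd}} - T\,\Delta S + \Delta G_{\mathrm{adapt}},

    where the electrostatic, van der Waals, hydrophobic/solvation, entropic, and adaptation-expense terms correspond to the contributions whose signs are discussed in the abstract.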

  3. Classroom virtual lab experiments as teaching tools for explaining how we understand planetary processes

    NASA Astrophysics Data System (ADS)

    Hill, C. N.; Schools, H.; Research Team Members

    2012-12-01

    This presentation will report on a classroom pilot study in which we teamed with school teachers in four middle school classes to develop and deploy course modules that connect the real world to virtual forms of laboratory experiments. The broad goal is to help students realize that seemingly complex Earth system processes can be connected to basic properties of the planet and that this can be illustrated through idealized experiments. Specifically, the presentation will describe virtual modules based on on-demand cloud computing technologies that allow students to test the notion that pole-to-equator gradients in radiative forcing together with rotation can explain characteristic patterns of flow in the atmosphere. The module developed aligns with new Massachusetts science standard requirements regarding understanding of weather and climate processes. These new standards emphasize an appreciation of differential solar heating and a qualitative understanding of the significance of rotation. In our preliminary classroom pilot studies we employed pre- and post-evaluation tests to establish that the modules had increased student knowledge of phenomenology and terms. We will describe the results of these tests as well as results from anecdotal measures of student response. This pilot study suggests that one way to help make Earth science concepts more tractable to a wider audience is through virtual experiments that distill phenomena down, but still retain enough detail that students can see the connection to the real world. Modern computer technology and developments in research models appear to provide an opportunity for more work in this area. We will describe some follow-up possibilities that we envisage.

  4. Privacy preserving interactive record linkage (PPIRL)

    PubMed Central

    Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley

    2014-01-01

    Objective: Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high quality record linkage requires a human–machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic Big Data. Methods: In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party to determine the linkage function manually and link data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption, (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. Results: We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Conclusions: Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allow frequent human interaction during the linkage process, is an effective human–machine hybrid system that significantly improves on the linkage center model both in terms of privacy and utility. PMID:24201028

  5. Modeling walker synchronization on the Millennium Bridge.

    PubMed

    Eckhardt, Bruno; Ott, Edward; Strogatz, Steven H; Abrams, Daniel M; McRobie, Allan

    2007-02-01

    On its opening day the London Millennium footbridge experienced unexpected large amplitude wobbling subsequent to the migration of pedestrians onto the bridge. Modeling the stepping of the pedestrians on the bridge as phase oscillators, we obtain a model for the combined dynamics of people and the bridge that is analytically tractable. It provides predictions for the phase dynamics of individual walkers and for the critical number of people for the onset of oscillations. Numerical simulations and analytical estimates reproduce the linear relation between pedestrian force and bridge velocity as observed in experiments. They allow prediction of the amplitude of bridge motion, the rate of relaxation to the synchronized state and the magnitude of the fluctuations due to a finite number of people.
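
    A commonly cited form of this kind of bridge-plus-walkers model (given here as a sketch; the paper's own notation and equations may differ in detail) couples a damped bridge mode X(t), with amplitude A and phase Ψ, to walker phases Θ_i; in LaTeX form,

      M\ddot{X} + B\dot{X} + KX = G\sum_{i=1}^{N}\sin\Theta_i, \qquad
      \dot{\Theta}_i = \Omega_i + C_i A \sin(\Psi - \Theta_i + \alpha),

    so that above a critical number of pedestrians the walkers phase-lock to the bridge motion and the oscillation amplitude grows, consistent with the critical-crowd-size prediction described above.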

  6. X-ray Raman scattering from molecules and solids in the framework of the Mahan-Nozières-De Dominicis model

    NASA Astrophysics Data System (ADS)

    Privalov, Timofei; Gel'mukhanov, Faris; Ågren, Hans

    2001-10-01

    We have developed a formulation of resonant x-ray Raman scattering of molecules and solids based on the Mahan-Nozières-De Dominicis model. A key step in the formulation is given by a reduction of the Keldysh-Dyson equations for the Green's function to a set of linear algebraic equations. This yields a tractable scheme that can be used to analyze resonant x-ray scattering over the whole time domain. The formalism is used to investigate the role of core-hole relaxation, interference, band filling, detuning, and size of the scattering target. Numerical applications are performed with a one-dimensional tight-binding model.

  7. Flavor instabilities in the neutrino line model

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Shalgar, Shashank

    2015-07-01

    A dense neutrino medium can experience collective flavor oscillations through nonlinear neutrino-neutrino refraction. To make this multi-dimensional flavor transport problem more tractable, all existing studies have assumed certain symmetries (e.g., the spatial homogeneity and directional isotropy in the early universe) to reduce the dimensionality of the problem. In this work we show that, if both the directional and spatial symmetries are not enforced in the neutrino line model, collective oscillations can develop in the physical regimes where the symmetry-preserving oscillation modes are stable. Our results suggest that collective neutrino oscillations in real astrophysical environments (such as core-collapse supernovae and black-hole accretion discs) can be qualitatively different from the predictions based on existing models in which spatial and directional symmetries are artificially imposed.

  8. Developing the anemone Aiptasia as a tractable model for cnidarian-dinoflagellate symbiosis: the transcriptome of aposymbiotic A. pallida.

    PubMed

    Lehnert, Erik M; Burriesci, Matthew S; Pringle, John R

    2012-06-22

    Coral reefs are hotspots of oceanic biodiversity, forming the foundation of ecosystems that are important both ecologically and for their direct practical impacts on humans. Corals are declining globally due to a number of stressors, including rising sea-surface temperatures and pollution; such stresses can lead to a breakdown of the essential symbiotic relationship between the coral host and its endosymbiotic dinoflagellates, a process known as coral bleaching. Although the environmental stresses causing this breakdown are largely known, the cellular mechanisms of symbiosis establishment, maintenance, and breakdown are still largely obscure. Investigating the symbiosis using an experimentally tractable model organism, such as the small sea anemone Aiptasia, should improve our understanding of exactly how the environmental stressors affect coral survival and growth. We assembled the transcriptome of a clonal population of adult, aposymbiotic (dinoflagellate-free) Aiptasia pallida from ~208 million reads, yielding 58,018 contigs. We demonstrated that many of these contigs represent full-length or near-full-length transcripts that encode proteins similar to those from a diverse array of pathways in other organisms, including various metabolic enzymes, cytoskeletal proteins, and neuropeptide precursors. The contigs were annotated by sequence similarity, assigned GO terms, and scanned for conserved protein domains. We analyzed the frequency and types of single-nucleotide variants and estimated the size of the Aiptasia genome to be ~421 Mb. The contigs and annotations are available through NCBI (Transcriptome Shotgun Assembly database, accession numbers JV077153-JV134524) and at http://pringlelab.stanford.edu/projects.html. The availability of an extensive transcriptome assembly for A. pallida will facilitate analyses of gene-expression changes, identification of proteins of interest, and other studies in this important emerging model system.

  9. Dynamics of West Nile virus evolution in mosquito vectors.

    PubMed

    Grubaugh, Nathan D; Ebel, Gregory D

    2016-12-01

    West Nile virus remains the most common cause of arboviral encephalitis in North America. Since it was introduced, it has undergone adaptive genetic change as it spread throughout the continent. The WNV transmission cycle is relatively tractable in the laboratory. Thus the virus serves as a convenient model system for studying the population biology of mosquito-borne flaviviruses as they undergo transmission to and from mosquitoes and vertebrates. This review summarizes the current knowledge regarding the population dynamics of this virus within mosquito vectors. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Retooling CFD for hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.

    1987-01-01

    The CFD facility requirements of hypersonic aircraft configuration design development are different from those thus far employed for reentry vehicle design, because (1) the airframe and the propulsion system must be fully integrated to achieve the desired performance; (2) the vehicle must be reusable, with minimum refurbishment requirements between flights; and (3) vehicle performance must be optimized for a wide range of Mach numbers. An evaluation is presently made of flow resolution within shock waves, the tractability of transition and turbulence phenomena, chemical reaction modeling, and hypersonic boundary layer transition, with state-of-the-art CFD.

  11. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
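
    The workflow described, fitting a closed-form firing-rate expression directly to measured f-I points rather than integrating the model equations for every candidate parameter set, can be sketched as below. The softplus rate function and the synthetic data are stand-ins; the paper's actual closed-form expression derived from the AdEx approximation is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit a closed-form firing-rate function to an f-I curve (no ODE integration).
    # The softplus form below is a generic smoothed threshold-linear stand-in.
    def rate(I, gain, I_th, smooth):
        """Firing rate (Hz) versus injected current I (pA)."""
        return gain * smooth * np.logaddexp(0.0, (I - I_th) / smooth)  # ~gain*(I - I_th) above threshold

    # Synthetic "recorded" f-I data; in practice these come from current-clamp steps.
    I_steps = np.linspace(0, 500, 26)                                  # pA
    f_true = rate(I_steps, gain=0.12, I_th=180.0, smooth=25.0)
    f_obs = np.clip(f_true + np.random.default_rng(1).normal(0, 1.0, I_steps.size), 0, None)

    popt, _ = curve_fit(rate, I_steps, f_obs, p0=[0.1, 150.0, 20.0])
    print("fitted gain, threshold, smoothness:", np.round(popt, 3))
    ```

    Because each evaluation of the objective is just a closed-form function call, a fit like this runs far faster than one that integrates the differential equations for every trial parameter set, which is the speed-up the abstract emphasizes.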

  12. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    NASA Astrophysics Data System (ADS)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope sediment size distributions in landscape evolution models. Overall, this work highlights the need for additional field data sets as well as improved theoretical models, but also demonstrates progress in predicting the size distribution of sediments produced on hillslopes and supplied to channels.

  13. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    NASA Astrophysics Data System (ADS)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

    Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture in its omission of pore scale effects while also assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in their ability to treat tractable domain sizes (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo which not only enables mesh refinement, but also refinement of the model (pore scale or continuum Darcy scale) in a dynamic way such that the appropriate model is used only when and where it is needed. Explicit flux matching provides coupling between the scales.

  14. Ranking Specific Sets of Objects.

    PubMed

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking on the whole power set of objects that jointly satisfies properties such as dominance and independence. However, in many applications certain elements of the power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out by hard constraints or because they do not satisfy some background theory. In this paper, we treat the computational problem of deciding whether, given a ranking on the elements, an order satisfying different variants of dominance and independence exists on a given subset of the power set. We show that this problem is tractable for partial rankings and NP-complete for total rankings.
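
    To make the decision problem concrete, the brute-force sketch below checks every total order on a small restricted family of sets against one common textbook formulation of dominance and independence (Gärdenfors-style). The axiom variants studied in the paper may differ in detail, and brute force is of course exponential; the point is only to illustrate the problem statement.

    ```python
    from itertools import permutations

    # Given a ranking on elements and a restricted family of sets, is there a
    # total order on the family satisfying dominance and independence?
    elements_rank = {"a": 3, "b": 2, "c": 1}          # higher value = better element
    family = [frozenset("a"), frozenset("b"), frozenset("ab"), frozenset("bc")]

    def better(x, y):
        return elements_rank[x] > elements_rank[y]

    def satisfies(order):
        pos = {s: i for i, s in enumerate(order)}     # smaller index = preferred
        pref = lambda A, B: pos[A] < pos[B]
        # Dominance: adding an element better (worse) than all of A improves (worsens) A.
        for A in family:
            for x in elements_rank:
                Ax = A | {x}
                if x in A or Ax not in pos:
                    continue
                if all(better(x, y) for y in A) and not pref(Ax, A):
                    return False
                if all(better(y, x) for y in A) and not pref(A, Ax):
                    return False
        # Independence: strict preference survives adding the same new element to both sets.
        for A in family:
            for B in family:
                if A == B or not pref(A, B):
                    continue
                for x in elements_rank:
                    if x in A or x in B:
                        continue
                    Ax, Bx = A | {x}, B | {x}
                    if Ax in pos and Bx in pos and not pref(Ax, Bx):
                        return False
        return True

    found = next((o for o in permutations(family) if satisfies(o)), None)
    print("consistent total order:", found)
    ```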

  15. Cryptic binding sites on proteins: definition, detection, and druggability.

    PubMed

    Vajda, Sandor; Beglov, Dmitri; Wakefield, Amanda E; Egbert, Megan; Whitty, Adrian

    2018-05-22

    Many proteins in their unbound structures lack surface pockets appropriately sized for drug binding. Hence, a variety of experimental and computational tools have been developed for the identification of cryptic sites that are not evident in the unbound protein but form upon ligand binding, and can provide tractable drug target sites. The goal of this review is to discuss the definition, detection, and druggability of such sites, and their potential value for drug discovery. Novel methods based on molecular dynamics simulations are particularly promising and yield a large number of transient pockets, but it has been shown that only a minority of such sites are generally capable of binding ligands with substantial affinity. Based on recent studies, current methodology can be improved by combining molecular dynamics with fragment docking and machine learning approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. ISMB Conference Funding to Support Attendance of Early Researchers and Students

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaasterland, Terry

    ISMB Conference Funding for Students and Young Scientists Historical Description The Intelligent Systems for Molecular Biology (ISMB) conference has provided a general forum for disseminating the latest developments in bioinformatics on an annual basis for the past 22 years. ISMB is a multidisciplinary conference that brings together scientists from computer science, molecular biology, mathematics and statistics. The goal of the ISMB meeting is to bring together biologists and computational scientists in a focus on actual biological problems, i.e., not simply theoretical calculations. The combined focus on “intelligent systems” and actual biological data makes ISMB a unique and highly important meeting. 21 years of experience in holding the conference has resulted in a consistently well-organized, well attended, and highly respected annual conference. "Intelligent systems" include any software which goes beyond straightforward, closed-form algorithms or standard database technologies, and encompasses those that view data in a symbolic fashion, learn from examples, consolidate multiple levels of abstraction, or synthesize results to be cognitively tractable to a human, including the development and application of advanced computational methods for biological problems. Relevant computational techniques include, but are not limited to: machine learning, pattern recognition, knowledge representation, databases, combinatorics, stochastic modeling, string and graph algorithms, linguistic methods, robotics, constraint satisfaction, and parallel computation. Biological areas of interest include molecular structure, genomics, molecular sequence analysis, evolution and phylogenetics, molecular interactions, metabolic pathways, regulatory networks, developmental control, and molecular biology generally. Emphasis is placed on the validation of methods using real data sets, on practical applications in the biological sciences, and on development of novel computational techniques. The ISMB conferences are distinguished from many other conferences in computational biology or artificial intelligence by an insistence that the researchers work with real molecular biology data, not theoretical or toy examples; and from many other biological conferences by providing a forum for technical advances as they occur, which otherwise may be shunned until a firm experimental result is published. The resulting intellectual richness and cross-disciplinary diversity provides an important opportunity for both students and senior researchers. ISMB has become the premier conference series in this field with refereed, published proceedings, establishing an infrastructure to promote the growing body of research.

  17. Development of Three-Dimensional Flow Code Package to Predict Performance and Stability of Aircraft with Leading Edge Ice Contamination

    NASA Technical Reports Server (NTRS)

    Strash, D. J.; Summa, J. M.

    1996-01-01

    In the work reported herein, a simplified, uncoupled, zonal procedure is utilized to assess the capability of numerically simulating icing effects on a Boeing 727-200 aircraft. The computational approach combines potential flow plus boundary layer simulations by VSAERO for the un-iced aircraft forces and moments with Navier-Stokes simulations by NPARC for the incremental forces and moments due to iced components. These are compared with wind tunnel force and moment data, supplied by the Boeing Company, examining longitudinal flight characteristics. Grid refinement improved the local flow features over previously reported work with no appreciable difference in the incremental ice effect. The computed lift curve slope with and without empennage ice matches the experimental value to within 1%, and the zero lift angle agrees to within 0.2 of a degree. The computed slope of the un-iced and iced aircraft longitudinal stability curve is within about 2% of the test data. This work demonstrates the feasibility of a zonal method for the icing analysis of complete aircraft or isolated components within the linear angle of attack range. In fact, this zonal technique has allowed for the viscous analysis of a complete aircraft with ice which is currently not otherwise considered tractable.

  18. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor.

    PubMed

    Reed, Sasha C; Yang, Xiaojuan; Thornton, Peter E

    2015-10-01

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate-carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate-carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward. No claim to original US government works New Phytologist © 2015 New Phytologist Trust.

  19. Stationary moments, diffusion limits, and extinction times for logistic growth with random catastrophes.

    PubMed

    Schlomann, Brandon H

    2018-06-06

    A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation. Copyright © 2018 Elsevier Ltd. All rights reserved.
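
    A Monte Carlo sketch of the model class described (deterministic logistic growth punctuated by catastrophes arriving as a Poisson process) is given below. The beta-distributed crash size and all parameter values are illustrative; the sketch only estimates the stationary mean numerically and does not reproduce the paper's analytic expressions.

    ```python
    import numpy as np

    # Logistic growth with density-independent catastrophes arriving as a Poisson process.
    # Between catastrophes: dn/dt = r*n*(1 - n/K).  At each catastrophe the population is
    # multiplied by a random surviving fraction with mean f_mean.
    rng = np.random.default_rng(42)
    r, K = 1.0, 1000.0          # growth rate, carrying capacity (illustrative)
    lam = 0.2                   # catastrophe rate per unit time (illustrative)
    f_mean = 0.3                # mean surviving fraction after a catastrophe

    def simulate(T=2000.0, dt=0.01, n0=100.0):
        n, t = n0, 0.0
        next_cat = rng.exponential(1.0 / lam)
        samples = []
        while t < T:
            n += r * n * (1.0 - n / K) * dt                      # deterministic logistic step
            t += dt
            if t >= next_cat:                                    # catastrophe: random crash
                n *= rng.beta(1.0, (1.0 - f_mean) / f_mean)      # beta(1, b) has mean f_mean
                next_cat += rng.exponential(1.0 / lam)
            samples.append(n)
        return np.array(samples)

    traj = simulate()
    burn = len(traj) // 5                                        # discard transient
    print(f"estimated stationary mean ~ {traj[burn:].mean():.1f} (K = {K}, rate = {lam})")
    ```

    Shrinking the catastrophes while increasing their rate (holding the stationary mean fixed) is the numerical analogue of the diffusion limit discussed in the abstract.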

  20. Isolated pores dissected from human two-pore channel 2 are functional

    PubMed Central

    Penny, Christopher J.; Rahman, Taufiq; Sula, Altin; Miles, Andrew J.; Wallace, B. A.; Patel, Sandip

    2016-01-01

    Multi-domain voltage-gated ion channels appear to have evolved through sequential rounds of intragenic duplication from a primordial one-domain precursor. Whereas modularity within one-domain symmetrical channels is established, little is known about the roles of individual regions within more complex asymmetrical channels where the domains have undergone substantial divergence. Here we isolated and characterised both of the divergent pore regions from human TPC2, a two-domain channel that holds a key intermediate position in the evolution of voltage-gated ion channels. In HeLa cells, each pore localised to the ER and caused Ca2+ depletion, whereas an ER-targeted pore mutated at a residue that inactivates full-length TPC2 did not. Additionally, one of the pores expressed at high levels in E. coli. When purified, it formed a stable, folded tetramer. Liposomes reconstituted with the pore supported Ca2+ and Na+ uptake that was inhibited by known blockers of full-length channels. Computational modelling of the pore corroborated cationic permeability and drug interaction. Therefore, despite divergence, both pores are constitutively active in the absence of their partners and retain several properties of the wild-type pore. Such symmetrical ‘pore-only’ proteins derived from divergent channel domains may therefore provide tractable tools for probing the functional architecture of complex ion channels. PMID:27941820

  1. Isolated pores dissected from human two-pore channel 2 are functional.

    PubMed

    Penny, Christopher J; Rahman, Taufiq; Sula, Altin; Miles, Andrew J; Wallace, B A; Patel, Sandip

    2016-12-12

    Multi-domain voltage-gated ion channels appear to have evolved through sequential rounds of intragenic duplication from a primordial one-domain precursor. Whereas modularity within one-domain symmetrical channels is established, little is known about the roles of individual regions within more complex asymmetrical channels where the domains have undergone substantial divergence. Here we isolated and characterised both of the divergent pore regions from human TPC2, a two-domain channel that holds a key intermediate position in the evolution of voltage-gated ion channels. In HeLa cells, each pore localised to the ER and caused Ca2+ depletion, whereas an ER-targeted pore mutated at a residue that inactivates full-length TPC2 did not. Additionally, one of the pores expressed at high levels in E. coli. When purified, it formed a stable, folded tetramer. Liposomes reconstituted with the pore supported Ca2+ and Na+ uptake that was inhibited by known blockers of full-length channels. Computational modelling of the pore corroborated cationic permeability and drug interaction. Therefore, despite divergence, both pores are constitutively active in the absence of their partners and retain several properties of the wild-type pore. Such symmetrical 'pore-only' proteins derived from divergent channel domains may therefore provide tractable tools for probing the functional architecture of complex ion channels.

  2. Coordinated Scheduling for Interdependent Electric Power and Natural Gas Infrastructures

    DOE PAGES

    Zlotnik, Anatoly; Roald, Line; Backhaus, Scott; ...

    2016-03-24

    The extensive installation of gas-fired power plants in many parts of the world has led electric systems to depend heavily on reliable gas supplies. The use of gas-fired generators for peak load and reserve provision causes high intraday variability in withdrawals from high-pressure gas transmission systems. Such variability can lead to gas price fluctuations and supply disruptions that affect electric generator dispatch, electricity prices, and threaten the security of power systems and gas pipelines. These infrastructures function on vastly different spatio-temporal scales, which prevents current practices for separate operations and market clearing from being coordinated. Here in this article, we apply new techniques for control of dynamic gas flows on pipeline networks to examine day-ahead scheduling of electric generator dispatch and gas compressor operation for different levels of integration, spanning from separate forecasting and simulation to combined optimal control. We formulate multiple coordination scenarios and develop tractable physically accurate computational implementations. These scenarios are compared using an integrated model of test networks for power and gas systems with 24 nodes and 24 pipes, respectively, which are coupled through gas-fired generators. The analysis quantifies the economic efficiency and security benefits of gas-electric coordination and dynamic gas system operation.

  3. Non-periodic homogenization of 3-D elastic media for the seismic wave equation

    NASA Astrophysics Data System (ADS)

    Cupillard, Paul; Capdeville, Yann

    2018-05-01

    Because seismic waves have a limited frequency spectrum, the velocity structure of the Earth that can be extracted from seismic records has a limited resolution. As a consequence, one obtains smooth images from waveform inversion, although the Earth holds discontinuities and small scales of various natures. Within the last decade, the non-periodic homogenization method shed light on how seismic waves interact with small geological heterogeneities and `see' upscaled properties. This theory enables us to compute long-wave equivalent density and elastic coefficients of any media, with no constraint on the size, the shape and the contrast of the heterogeneities. In particular, the homogenization leads to the apparent, structure-induced anisotropy. In this paper, we implement this method in 3-D and show 3-D tests for the very first time. The non-periodic homogenization relies on an asymptotic expansion of the displacement and the stress involved in the elastic wave equation. Limiting ourselves to the order 0, we show that the practical computation of an upscaled elastic tensor basically requires (i) to solve an elastostatic problem and (ii) to low-pass filter the strain and the stress associated with the obtained solution. The elastostatic problem consists in finding the displacements due to local unit strains acting in all directions within the medium to upscale. This is solved using a parallel, highly optimized finite-element code. As for the filtering, we rely on the finite-element quadrature to perform the convolution in the space domain. We end up with an efficient numerical tool that we apply on various 3-D models to test the accuracy and the benefit of the homogenization. In the case of a finely layered model, our method agrees with results derived from Backus. In a more challenging model composed by a million of small cubes, waveforms computed in the homogenized medium fit reference waveforms very well. Both direct phases and complex diffracted waves are accurately retrieved in the upscaled model, although it is smooth. Finally, our upscaling method is applied to a realistic geological model. The obtained homogenized medium holds structure-induced anisotropy. Moreover, full seismic wavefields in this medium can be simulated with a coarse mesh (no matter what the numerical solver is), which significantly reduces computation costs usually associated with discontinuities and small heterogeneities. These three tests show that the non-periodic homogenization is both accurate and tractable in large 3-D cases, which opens the path to the correct account of the effect of small-scale features on seismic wave propagation for various applications and to a deeper understanding of the apparent anisotropy.
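
    For the finely layered benchmark mentioned above, the classical 1-D Backus average is the reference solution: thickness-weighted harmonic and arithmetic averages of isotropic layer moduli yield an effective vertically transverse isotropic (VTI) medium. The sketch below implements those standard Backus (1962) formulas; it is not the authors' 3-D non-periodic scheme, and the layer values are illustrative.

    ```python
    import numpy as np

    # Classical 1-D Backus average: isotropic layers (Lame parameters lam, mu, thickness h)
    # upscaled to an effective VTI medium described by five elastic constants.
    def backus_vti(h, lam, mu):
        h, lam, mu = map(np.asarray, (h, lam, mu))
        w = h / h.sum()                                   # thickness weights
        avg = lambda q: np.sum(w * q)                     # thickness-weighted mean
        C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
        C44 = 1.0 / avg(1.0 / mu)
        C66 = avg(mu)
        C13 = C33 * avg(lam / (lam + 2 * mu))
        C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + C33 * avg(lam / (lam + 2 * mu)) ** 2
        return dict(C11=C11, C13=C13, C33=C33, C44=C44, C66=C66)

    # Two alternating layers of equal thickness (illustrative moduli, Pa).
    h = np.array([1.0, 1.0])
    lam = np.array([10e9, 30e9])
    mu = np.array([5e9, 20e9])
    ceff = backus_vti(h, lam, mu)
    print({k: f"{v / 1e9:.2f} GPa" for k, v in ceff.items()})
    # C11 != C33 and C44 != C66: anisotropy induced purely by the layering.
    ```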

  4. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
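
    The integrating-factor idea can be illustrated on a single conductance-based integrate-and-fire neuron: between spikes the voltage equation is linear, so with the conductance frozen over one step the update is an exact exponential relaxation toward the instantaneous steady state and stays stable even in high-conductance regimes. The sketch below shows only this single-neuron update; the network machinery, spike-spike corrections, and event clustering of the paper are not reproduced, and all parameters are illustrative.

    ```python
    import numpy as np

    # Exponential (integrating-factor style) update for a conductance-based LIF neuron.
    rng = np.random.default_rng(3)
    C, gL, EL = 1.0, 0.05, -70.0            # capacitance (nF), leak (uS), leak reversal (mV)
    EE, Vth, Vreset = 0.0, -50.0, -60.0     # excitatory reversal, threshold, reset (mV)
    dt, T = 0.1, 1000.0                     # time-step and duration (ms)

    V, spikes = EL, []
    for step in range(int(T / dt)):
        gE = max(0.0, 0.04 + 0.02 * rng.normal())        # fluctuating excitatory conductance (uS)
        gtot = gL + gE
        Vinf = (gL * EL + gE * EE) / gtot                # instantaneous steady-state voltage
        V = Vinf + (V - Vinf) * np.exp(-gtot * dt / C)   # exact step for frozen conductance
        if V >= Vth:
            spikes.append(step * dt)
            V = Vreset

    print(f"firing rate ~ {1000.0 * len(spikes) / T:.1f} Hz over {T:.0f} ms")
    ```

    Unlike explicit Euler, this exponential step cannot overshoot the steady state no matter how large the total conductance becomes, which is why integrating-factor schemes tolerate the stiff, strongly fluctuating regimes the abstract targets.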

  5. The BioC-BioGRID corpus: full text articles annotated for curation of protein–protein and genetic interactions

    PubMed Central

    Kim, Sun; Chatr-aryamontri, Andrew; Chang, Christie S.; Oughtred, Rose; Rust, Jennifer; Wilbur, W. John; Comeau, Donald C.; Dolinski, Kara; Tyers, Mike

    2017-01-01

    A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein–protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. The purpose, characteristics and possible future uses of the BioC-BioGRID corpus are detailed in this report. Database URL: http://bioc.sourceforge.net/BioC-BioGRID.html PMID:28077563

  6. Q&A: How do gene regulatory networks control environmental responses in plants?

    PubMed

    Sun, Ying; Dinneny, José R

    2018-04-11

    A gene regulatory network (GRN) describes the hierarchical relationship between transcription factors, associated proteins, and their target genes. Studying GRNs allows us to understand how a plant's genotype and environment are integrated to regulate downstream physiological responses. Current efforts in plants have focused on defining the GRNs that regulate functions such as development and stress response and have been performed primarily in genetically tractable model plant species such as Arabidopsis thaliana. Future studies will likely focus on how GRNs function in non-model plants and change over evolutionary time to allow for adaptation to extreme environments. This broader understanding will inform efforts to engineer GRNs to create tailored crop traits.

  7. Epithelial Patterning, Morphogenesis, and Evolution: Drosophila Eggshell as a Model.

    PubMed

    Osterfield, Miriam; Berg, Celeste A; Shvartsman, Stanislav Y

    2017-05-22

    Understanding the mechanisms driving tissue and organ formation requires knowledge across scales. How do signaling pathways specify distinct tissue types? How does the patterning system control morphogenesis? How do these processes evolve? The Drosophila egg chamber, where EGF and BMP signaling intersect to specify unique cell types that construct epithelial tubes for specialized eggshell structures, has provided a tractable system to ask these questions. Work there has elucidated connections between scales of development, including across evolutionary scales, and fostered the development of quantitative modeling tools. These tools and general principles can be applied to the understanding of other developmental processes across organisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Canadian Association of Neurosciences Review: learning at a snail's pace.

    PubMed

    Parvez, Kashif; Rosenegger, David; Martens, Kara; Orr, Michael; Lukowiak, Ken

    2006-11-01

    While learning and memory are related, they are distinct processes each with different forms of expression and underlying molecular mechanisms. An invertebrate model system, Lymnaea stagnalis, is used to study memory formation of a non-declarative memory. We have done so because: (1) We have discovered the neural circuit that mediates an interesting and tractable behaviour; (2) This behaviour can be operantly conditioned and intermediate-term and long-term memory can be demonstrated; and (3) It is possible to demonstrate that a single neuron in the model system is a necessary site of memory formation. This article reviews how Lymnaea has been used in the study of behavioural and molecular mechanisms underlying consolidation, reconsolidation, extinction and forgetting.

  9. Planetary Dynamos

    NASA Astrophysics Data System (ADS)

    Gaur, Vinod K.

    The article begins with a reference to the first rational approaches to explaining the earth's magnetic field, notably Elsasser's application of magneto-hydrodynamics, followed by brief outlines of the characteristics of planetary magnetic fields and of the potentially insightful homopolar dynamo in illuminating the basic issues: theoretical requirements of asymmetry and finite conductivity in sustaining the dynamo process. It concludes with sections on Dynamo modeling and, in particular, the Geo-dynamo, but not before some of the evocative physical processes mediated by the Lorentz force and the behaviour of a flux tube embedded in a perfectly conducting fluid, using Alfvén's theorem, are explained, as well as the traditional intermediate approaches to investigating dynamo processes using the more tractable Kinematic models.

  10. Macrophage–Microbe Interactions: Lessons from the Zebrafish Model

    PubMed Central

    Yoshida, Nagisa; Frickel, Eva-Maria; Mostowy, Serge

    2017-01-01

    Macrophages provide front line defense against infections. The study of macrophage–microbe interplay is thus crucial for understanding pathogenesis and infection control. Zebrafish (Danio rerio) larvae provide a unique platform to study macrophage–microbe interactions in vivo, from the level of the single cell to the whole organism. Studies using zebrafish allow non-invasive, real-time visualization of macrophage recruitment and phagocytosis. Furthermore, the chemical and genetic tractability of zebrafish has been central to decipher the complex role of macrophages during infection. Here, we discuss the latest developments using zebrafish models of bacterial and fungal infection. We also review novel aspects of macrophage biology revealed by zebrafish, which can potentiate development of new therapeutic strategies for humans. PMID:29250076

  11. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted-on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to re-play the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.
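
    The "model as a sequence of linear solves" abstraction can be illustrated with a toy chain of solves whose adjoint is obtained by replaying the recorded solves backwards with transposed operators. This is a generic sketch of the idea only; the function names are hypothetical and none of this is the libadjoint API.

    ```python
    import numpy as np

    # Forward model: A x_i = B x_{i-1}, i = 1..N, with x_0 = m; functional J(m) = c^T x_N.
    # The adjoint replays the chain of solves in reverse with transposed operators.
    rng = np.random.default_rng(0)
    n, N = 5, 4
    A = np.eye(n) + 0.1 * rng.normal(size=(n, n))    # invertible forward operator (assumed)
    B = 0.3 * rng.normal(size=(n, n))
    c = rng.normal(size=n)
    m = rng.normal(size=n)

    def forward(m):
        x = m.copy()
        for _ in range(N):
            x = np.linalg.solve(A, B @ x)            # each solve would be "annotated" on a tape
        return c @ x

    def adjoint_gradient():
        lam = c.copy()
        for _ in range(N):                           # replay the tape in reverse
            lam = B.T @ np.linalg.solve(A.T, lam)    # one transposed solve per forward solve
        return lam                                   # dJ/dm

    grad = adjoint_gradient()
    d = rng.normal(size=n)                           # check a directional derivative
    eps = 1e-6
    fd = (forward(m + eps * d) - forward(m - eps * d)) / (2 * eps)
    print("adjoint:", grad @ d, " finite difference:", fd)
    ```

    In a real library the "tape" records which operators and right-hand sides each solve used, so the reverse pass can be assembled automatically; that bookkeeping, rather than per-statement differentiation of the whole codebase, is what makes the approach lighter than whole-model AD.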

  12. Micropublications: a semantic model for claims, evidence, arguments and annotations in biomedical communications

    PubMed Central

    2014-01-01

    Background Scientific publications are documentary representations of defeasible arguments, supported by data and repeatable methods. They are the essential mediating artifacts in the ecosystem of scientific communications. The institutional “goal” of science is publishing results. The linear document publication format, dating from 1665, has survived transition to the Web. Intractable publication volumes; the difficulty of verifying evidence; and observed problems in evidence and citation chains suggest a need for a web-friendly and machine-tractable model of scientific publications. This model should support: digital summarization, evidence examination, challenge, verification and remix, and incremental adoption. Such a model must be capable of expressing a broad spectrum of representational complexity, ranging from minimal to maximal forms. Results The micropublications semantic model of scientific argument and evidence provides these features. Micropublications support natural language statements; data; methods and materials specifications; discussion and commentary; challenge and disagreement; as well as allowing many kinds of statement formalization. The minimal form of a micropublication is a statement with its attribution. The maximal form is a statement with its complete supporting argument, consisting of all relevant evidence, interpretations, discussion and challenges brought forward in support of or opposition to it. Micropublications may be formalized and serialized in multiple ways, including in RDF. They may be added to publications as stand-off metadata. An OWL 2 vocabulary for micropublications is available at http://purl.org/mp. A discussion of this vocabulary along with RDF examples from the case studies, appears as OWL Vocabulary and RDF Examples in Additional file 1. Conclusion Micropublications, because they model evidence and allow qualified, nuanced assertions, can play essential roles in the scientific communications ecosystem in places where simpler, formalized and purely statement-based models, such as the nanopublications model, will not be sufficient. At the same time they will add significant value to, and are intentionally compatible with, statement-based formalizations. We suggest that micropublications, generated by useful software tools supporting such activities as writing, editing, reviewing, and discussion, will be of great value in improving the quality and tractability of biomedical communications. PMID:26261718

  13. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for one such application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
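
    The most basic clustering scheme described can be sketched on a synthetic system of interacting binary sites (for example, protonation states): the exact Boltzmann average runs over all 2^N microstates, while the clustered version treats each cluster exactly and drops inter-cluster couplings. The energy function, the fixed partition into clusters, and all numbers below are illustrative assumptions.

    ```python
    import numpy as np
    from itertools import product

    # Energy of binary sites s_i in {0, 1}:  E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j.
    rng = np.random.default_rng(7)
    N, beta = 12, 1.0
    h = rng.normal(0.0, 1.0, N)
    J = np.triu(rng.normal(0.0, 0.3, (N, N)), 1)      # pairwise couplings (upper triangle)

    def avg_occupancy(sites, h, J):
        """Exact Boltzmann average of each listed site over all of its microstates."""
        sites = list(sites)
        Z, acc = 0.0, np.zeros(len(sites))
        for bits in product([0, 1], repeat=len(sites)):
            s = np.array(bits)
            E = h[sites] @ s + s @ J[np.ix_(sites, sites)] @ s
            w = np.exp(-beta * E)
            Z += w
            acc += w * s
        return acc / Z

    exact = avg_occupancy(range(N), h, J)             # full sum over 2^12 microstates

    clusters = [range(0, 4), range(4, 8), range(8, 12)]    # fixed partition (assumed)
    approx = np.concatenate([avg_occupancy(c, h, J) for c in clusters])

    print("max |exact - clustered| occupancy error:", float(np.abs(exact - approx).max()))
    ```

    The clustered sum costs three sums of 2^4 terms instead of one sum of 2^12, and the error it introduces is exactly the quantity the paper's analytic bound is designed to control.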

  14. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.
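
    One classic member of the tractable classes mentioned above is the tree-structured binary CSP: after enforcing directional arc consistency from the leaves toward the root, assignment in root-to-leaf order never needs to backtrack. The small sketch below illustrates that well-known case (Freuder-style width-1 reasoning) as an example of search-free solving in general, not the paper's framework of general procedures and decision sets.

    ```python
    # Tree-structured binary CSP made backtrack-free by directional arc consistency.
    domains = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}, "D": {1, 2, 3}}
    parent = {"B": "A", "C": "A", "D": "B"}          # tree rooted at A
    order = ["A", "B", "C", "D"]                     # parents before children

    def ok(parent_val, child_val):
        return parent_val < child_val                # constraint: child exceeds its parent

    # Directional arc consistency: revise each parent against its child, children last-to-first.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {pv for pv in domains[p]
                      if any(ok(pv, cv) for cv in domains[child])}

    # Backtrack-free assignment: one pass in order, no choice is ever undone.
    assignment = {}
    for var in order:
        candidates = (domains[var] if var not in parent
                      else {v for v in domains[var] if ok(assignment[parent[var]], v)})
        assignment[var] = min(candidates)            # any supported value would do
    print(assignment)                                # e.g. {'A': 1, 'B': 2, 'C': 2, 'D': 3}
    ```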

  15. INVESTIGATING DIFFERENCES IN BRAIN FUNCTIONAL NETWORKS USING HIERARCHICAL COVARIATE-ADJUSTED INDEPENDENT COMPONENT ANALYSIS.

    PubMed

    Shi, Ran; Guo, Ying

    2016-12-01

    Human brains perform tasks via complex functional networks consisting of separated brain regions. A popular approach to characterize brain functional networks in fMRI studies is independent component analysis (ICA), which is a powerful method to reconstruct latent source signals from their linear mixtures. In many fMRI studies, an important goal is to investigate how brain functional networks change according to specific clinical and demographic variabilities. Existing ICA methods, however, cannot directly incorporate covariate effects in ICA decomposition. Heuristic post-ICA analysis to address this need can be inaccurate and inefficient. In this paper, we propose a hierarchical covariate-adjusted ICA (hc-ICA) model that provides a formal statistical framework for estimating covariate effects and testing differences between brain functional networks. Our method provides a more reliable and powerful statistical tool for evaluating group differences in brain functional networks while appropriately controlling for potential confounding factors. We present an analytically tractable EM algorithm to obtain maximum likelihood estimates of our model. We also develop a subspace-based approximate EM that runs significantly faster while retaining high accuracy. To test the differences in functional networks, we introduce a voxel-wise approximate inference procedure which eliminates the need of computationally expensive covariance matrix estimation and inversion. We demonstrate the advantages of our methods over the existing method via simulation studies. We apply our method to an fMRI study to investigate differences in brain functional networks associated with post-traumatic stress disorder (PTSD).

  16. Sepsis reconsidered: Identifying novel metrics for behavioral landscape characterization with a high-performance computing implementation of an agent-based model.

    PubMed

    Cockrell, Chase; An, Gary

    2017-10-07

    Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response ABM (IIRABM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we developed two novel methods for characterizing the behavior of an RDS: Probabilistic Basins of Attraction (PBoA) and Stochastic Trajectory Analysis (STA). Computationally generated behavioral landscapes demonstrated attractor structures around stochastic regions of behavior that could be described in a complementary fashion through use of PBoA and STA. The stochasticity of the boundaries of the attractors highlights the challenge for correlative attempts to characterize and classify clinical sepsis. HPC simulations of models like the IIRABM can be used to generate approximations of the behavior space of sepsis to both establish "boundaries of futility" with respect to existing investigatory approaches and apply system engineering principles to investigate the general dynamic properties of sepsis to provide a pathway for developing control strategies. The issues that bedevil the study and treatment of sepsis, namely clinical data sparseness and inadequate experimental sampling of system behavior space, are fundamental to nearly all biomedical research, manifesting in the "Crisis of Reproducibility" at all levels.
HPC-augmented simulation-based research offers an investigatory strategy more consistent with that seen in the physical sciences (which combine experiment, theory and simulation), and an opportunity to utilize the leading advances in HPC, namely deep machine learning and evolutionary computing, to form the basis of an iterative scientific process to meet the full promise of Precision Medicine (right drug, right patient, right time). Copyright © 2017. Published by Elsevier Ltd.

  17. Impact of Calibrated Land Surface Model Parameters on the Accuracy and Uncertainty of Land-Atmosphere Coupling in WRF Simulations

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Harrison, Ken; Zhou, Shujia

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  18. An Analysis of Waves Underlying Grid Cell Firing in the Medial Entorhinal Cortex.

    PubMed

    Bonilla-Quintana, Mayte; Wedgwood, Kyle C A; O'Dea, Reuben D; Coombes, Stephen

    2017-08-25

    Layer II stellate cells in the medial entorhinal cortex (MEC) express hyperpolarisation-activated cyclic-nucleotide-gated (HCN) channels that allow for rebound spiking via an I_h current in response to hyperpolarising synaptic input. A computational modelling study by Hasselmo (Philos. Trans. R. Soc. Lond. B, Biol. Sci. 369:20120523, 2013) showed that an inhibitory network of such cells can support periodic travelling waves with a period that is controlled by the dynamics of the I_h current. Hasselmo has suggested that these waves can underlie the generation of grid cells, and that the known difference in I_h resonance frequency along the dorsal to ventral axis can explain the observed size and spacing between grid cell firing fields. Here we develop a biophysical spiking model within a framework that allows for analytical tractability. We combine the simplicity of integrate-and-fire neurons with a piecewise linear caricature of the gating dynamics for HCN channels to develop a spiking neural field model of MEC. Using techniques primarily drawn from the field of nonsmooth dynamical systems we show how to construct periodic travelling waves, and in particular the dispersion curve that determines how wave speed varies as a function of period. This exhibits a wide range of long wavelength solutions, reinforcing the idea that rebound spiking is a candidate mechanism for generating grid cell firing patterns. Importantly we develop a wave stability analysis to show how the maximum allowed period is controlled by the dynamical properties of the I_h current. Our theoretical work is validated by numerical simulations of the spiking model in both one and two dimensions.

  19. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
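
    Both inference frameworks discussed above build on the same site-specific mutation-selection substitution process, in which a neutral mutation rate is scaled by the relative fixation factor S / (1 - exp(-S)) implied by site-specific scaled fitnesses (the Halpern-Bruno form). The Python sketch below assembles such a rate matrix for one site from toy fitness values; the fitnesses and the uniform mutation rate are illustrative assumptions, not estimates produced by either inference framework.

      # Site-specific mutation-selection rate matrix from scaled fitnesses
      # (toy inputs; a uniform neutral mutation rate is assumed).
      import numpy as np

      def fixation_factor(S):
          """Relative fixation factor S / (1 - exp(-S)), with the S -> 0 limit of 1."""
          S = np.asarray(S, dtype=float)
          out = np.ones_like(S)
          nz = np.abs(S) > 1e-8
          out[nz] = S[nz] / (1.0 - np.exp(-S[nz]))
          return out

      def site_rate_matrix(fitness, mu=1.0):
          """Substitution rate matrix for one site, rows summing to zero."""
          fitness = np.asarray(fitness, dtype=float)
          S = fitness[None, :] - fitness[:, None]       # S[i, j] = f_j - f_i
          Q = mu * fixation_factor(S)
          np.fill_diagonal(Q, 0.0)
          np.fill_diagonal(Q, -Q.sum(axis=1))
          return Q

      # Example: a site where the first residue is strongly preferred.
      Q = site_rate_matrix([2.0, 0.0, -1.0, -1.0])
      print(np.round(Q, 3))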

  20. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion

    PubMed Central

    Ramirez Ramirez, L. Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

    Background The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, for forecasting new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Methods Four individual models are employed to forecast ILI-GOPC both one week and two weeks in advance: a generalized linear model (GLM), the least absolute shrinkage and selection operator (LASSO), an autoregressive integrated moving average (ARIMA) model, and deep learning (DL) with feedforward neural networks (FNN). The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into the surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of the individual forecasting models, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and the covariates. Results DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves predictive evaluation metrics such as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. Conclusions The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient. PMID:28464015
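
    To make the fusion step concrete, the Python sketch below combines point forecasts from several models using BMA-style weights proportional to each model's Gaussian likelihood over a recent validation window; the forecast series, the weighting scheme, and the stand-in model names are illustrative assumptions and may differ from the BMA formulation used in the study.

      # BMA-style fusion of point forecasts: weight each model by its Gaussian
      # likelihood on a recent validation window (synthetic data throughout).
      import numpy as np

      rng = np.random.default_rng(1)

      # Validation-window truth and each model's forecasts for that window
      # (stand-ins for the GLM, LASSO, ARIMA and feedforward-network models).
      y_val = rng.poisson(100, size=12).astype(float)
      forecasts_val = {
          "GLM":   y_val + rng.normal(0, 12, 12),
          "LASSO": y_val + rng.normal(0, 10, 12),
          "ARIMA": y_val + rng.normal(0, 15, 12),
          "DL":    y_val + rng.normal(0, 8, 12),
      }

      def bma_weights(y, preds):
          """Normalized weights proportional to each model's Gaussian likelihood."""
          logliks = {}
          for name, f in preds.items():
              resid = y - f
              sigma2 = np.mean(resid ** 2)
              logliks[name] = -0.5 * np.sum(np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2)
          top = max(logliks.values())
          unnorm = {k: np.exp(v - top) for k, v in logliks.items()}  # avoid underflow
          total = sum(unnorm.values())
          return {k: v / total for k, v in unnorm.items()}

      weights = bma_weights(y_val, forecasts_val)

      # One-week-ahead point forecasts from each model (synthetic) and their fusion.
      next_week = {"GLM": 112.0, "LASSO": 108.0, "ARIMA": 118.0, "DL": 105.0}
      fused = sum(weights[k] * next_week[k] for k in next_week)
      print({k: round(v, 3) for k, v in weights.items()}, "fused:", round(fused, 1))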
