Sample records for deterministic mechanical model

  1. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  2. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of share prices. The main distinguishing feature of the financial Bohmian model is the possibility of taking market psychology into account by describing the expectations of traders with the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model. In particular, we address the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility for replying to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  3. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  4. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  5. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure that explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors, such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena when the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  6. Role of demographic stochasticity in a speciation model with sexual reproduction

    NASA Astrophysics Data System (ADS)

    Lafuerza, Luis F.; McKane, Alan J.

    2016-03-01

    Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localized clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizable effect, rendering the deterministic modeling insufficient to understand the phenotypic distribution.

  7. Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex.

    PubMed

    Murakami, Masayoshi; Shteingart, Hanan; Loewenstein, Yonatan; Mainen, Zachary F

    2017-05-17

    The selection and timing of actions are subject to determinate influences such as sensory cues and internal state as well as to effectively stochastic variability. Although stochastic choice mechanisms are assumed by many theoretical models, their origin and mechanisms remain poorly understood. Here we investigated this issue by studying how neural circuits in the frontal cortex determine action timing in rats performing a waiting task. Electrophysiological recordings from two regions necessary for this behavior, medial prefrontal cortex (mPFC) and secondary motor cortex (M2), revealed an unexpected functional dissociation. Both areas encoded deterministic biases in action timing, but only M2 neurons reflected stochastic trial-by-trial fluctuations. This differential coding was reflected in distinct timescales of neural dynamics in the two frontal cortical areas. These results suggest a two-stage model in which stochastic components of action timing decisions are injected by circuits downstream of those carrying deterministic bias signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
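The Latin Hypercube sampling step mentioned in this record can be sketched generically. This is an illustrative stand-in, not the authors' implementation; the function name and arguments are invented:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube sample on the unit hypercube: each dimension is cut
    into n_samples equal strata, one point is drawn per stratum, and the
    strata are shuffled independently across dimensions."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random stratum order for this dimension
        for i in range(n_samples):
            # one uniform draw inside the assigned stratum
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
```

Each of the 10 points then occupies a distinct tenth of the range in every dimension, which is what gives LHS better space coverage than plain Monte Carlo at the same sample count.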

  9. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    PubMed

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
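The stochastic framework (1) can be illustrated by sampling emission times of a homogeneous Poisson process via i.i.d. exponential inter-arrival gaps. The rate and horizon below are invented toy values, not the chapter's calibrated model:

```python
import random

def primary_emission_times(rate=0.2, t_end=30.0, seed=7):
    """Sample event times of a homogeneous Poisson process with intensity
    `rate` on [0, t_end]: successive gaps are i.i.d. Exponential(rate)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # waiting time to the next emission
        if t > t_end:
            return times
        times.append(t)

emissions = primary_emission_times()
```

Secondary emission (metastases seeding metastases) would be modelled by launching a new such process from each sampled time, which is the extension the chapter develops.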

  10. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast, moderate, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations.
Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: the regulation of DNA transcription. The simulated dynamic profiles of the reagents' abundances and the estimate of the error introduced by a fully deterministic approach were used to evaluate the consistency of the computational model and of the software tool.
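As a concrete reference point for the Gillespie-like component of such hybrid schemes, here is a minimal direct-method simulation of a toy birth-death system. This is a generic textbook sketch, not MoBioS itself, and all rate constants are invented:

```python
import random

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=1):
    """Gillespie direct method for: 0 -> X (rate k_birth), X -> 0 (rate k_death*x)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a_birth = k_birth          # propensity of the birth channel
        a_death = k_death * x      # propensity of the death channel
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        # exponentially distributed waiting time to the next reaction
        t += rng.expovariate(a_total)
        # pick the firing channel with probability proportional to its propensity
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_birth_death()
```

The trajectory fluctuates around the deterministic steady state k_birth/k_death = 100, with relative noise that shrinks as copy numbers grow — the regime in which a hybrid method hands the channel over to rate equations.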

  11. Phase space flow of particles in squeezed states

    NASA Technical Reports Server (NTRS)

    Ceperley, Peter H.

    1994-01-01

    The manipulation of noise and uncertainty in squeezed states is governed by the wave nature of the quantum mechanical particles in these states. This paper uses a deterministic model of quantum mechanics in which real guiding waves control the flow of localized particles. This model will be used to examine the phase space flow of particles in typical squeezed states.

  12. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    EPA Science Inventory

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the f...

  13. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks

    PubMed Central

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-01

    The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for wireless sensor networks is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated through simulation and analytical results, and it is observed that the simulation results match well with the analytical ones. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and ours is shown to outperform it in terms of throughput, energy consumption, delay and reliability. PMID:28937632

  14. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks.

    PubMed

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-09-22

    The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for wireless sensor networks is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated through simulation and analytical results, and it is observed that the simulation results match well with the analytical ones. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and ours is shown to outperform it in terms of throughput, energy consumption, delay and reliability.

  15. A solution to the biodiversity paradox by logical deterministic cellular automata.

    PubMed

    Kalmykov, Lev V; Kalmykov, Vyacheslav L

    2015-06-01

    The paradox of biological diversity is a key problem of theoretical ecology. The paradox consists in the contradiction between the competitive exclusion principle and the observed biodiversity. The principle is important as a basis for ecological theory. Using a relatively simple model, we show a mechanism for the indefinite coexistence of complete competitors that violates the known formulations of the competitive exclusion principle. This mechanism is based on the timely recovery of limiting resources and their spatio-temporal allocation between competitors. Because of the limitations of black-box modeling, it has been difficult to formulate the exclusion principle correctly. Our white-box multiscale model of two-species competition is based on logical deterministic individual-based cellular automata. This approach provides automatic deductive inference on the basis of a system of axioms and gives direct insight into the mechanisms of the studied system; it is one of the most promising methods of artificial intelligence. We reformulate and generalize the competitive exclusion principle and explain why this formulation provides a solution to the biodiversity paradox. In addition, we propose a principle of competitive coexistence.
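A toy flavour of such a logical deterministic cellular automaton: individuals on a ring die after a fixed lifespan (freeing the limiting resource) and empty cells are recolonized by a deterministic neighbour-priority rule. The rules and parameters here are invented for illustration and are far simpler than the published model:

```python
LIFESPAN = 3  # updates an individual survives before freeing its cell

def step(lattice):
    """One synchronous deterministic update on a ring. Cells hold None or
    a (species, age) pair. Occupied cells age and die at LIFESPAN; empty
    cells are colonized by an occupied neighbour, left taking priority."""
    n = len(lattice)
    new = [None] * n
    for i, cell in enumerate(lattice):
        if cell is not None:
            species, age = cell
            if age + 1 < LIFESPAN:
                new[i] = (species, age + 1)  # survives and ages
            # else: dies; the cell stays empty (resource recovers)
        else:
            left = lattice[(i - 1) % n]
            right = lattice[(i + 1) % n]
            if left is not None:
                new[i] = (left[0], 0)
            elif right is not None:
                new[i] = (right[0], 0)
    return new

# two complete competitors seeded on opposite sides of a 12-cell ring
world = [None] * 12
world[0] = ("A", 0)
world[6] = ("B", 0)
for _ in range(4):
    world = step(world)
```

Because every rule is deterministic, each configuration follows from the axioms of the automaton, and coexistence (here, expanding waves of both species separated by recovering empty cells) can be inspected directly rather than inferred statistically.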

  16. Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions

    NASA Astrophysics Data System (ADS)

    Valentine, John S.

    2013-09-01

    By assuming that a fermion de-constitutes immediately at its source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.

  17. A deterministic model predicts the properties of stochastic calcium oscillations in airway smooth muscle cells.

    PubMed

    Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James

    2014-08-01

    The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.

  18. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, their failure mechanisms must be identified; examinations of the degradation were carried out by tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms were modelled on 3D periodic cells by finite element calculations. The breakage of an element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell: a Weibull distribution of the stress at failure was assigned to the integration points using a Monte Carlo simulation. This stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry of the deterministic modelling. In particular, the stochastic simulations showed variation in the stress and strain at failure as well as in the failure modes of the yarn.
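The Monte Carlo assignment of Weibull-distributed failure stresses can be sketched with inverse-transform sampling. The shape and scale values below are placeholders, not the paper's calibrated parameters:

```python
import math
import random

def weibull_failure_stresses(n, shape=8.0, scale=2000.0, seed=3):
    """Draw n failure stresses from a Weibull(shape, scale) distribution by
    inverse-transform sampling: sigma = scale * (-ln(1 - U))**(1/shape),
    with U ~ Uniform(0, 1). One draw would be assigned per integration point."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

stresses = weibull_failure_stresses(1000)
```

Scattering the failure threshold across integration points this way is what breaks the artificial symmetry of a mesh in which every point fails at exactly the same stress.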

  19. Modeling small cell lung cancer (SCLC) biology through deterministic and stochastic mathematical models.

    PubMed

    Salgia, Ravi; Mambetsariev, Isa; Hewelt, Blake; Achuthan, Srisairam; Li, Haiqing; Poroyko, Valeriy; Wang, Yingyu; Sattler, Martin

    2018-05-25

    Mathematical cancer models are immensely powerful tools that are based in part on the fractal nature of biological structures, such as the geometry of the lung. Cancers of the lung provide an opportune model to develop and apply algorithms that capture changes and disease phenotypes. We reviewed mathematical models that have been developed for biological sciences and applied them in the context of small cell lung cancer (SCLC) growth, mutational heterogeneity, and mechanisms of metastasis. The ultimate goal is to develop the stochastic and deterministic nature of this disease, to link this comprehensive set of tools back to its fractalness and to provide a platform for accurate biomarker development. These techniques may be particularly useful in the context of drug development research, such as combination with existing omics approaches. The integration of these tools will be important to further understand the biology of SCLC and ultimately develop novel therapeutics.

  20. A stochastic spatiotemporal model of a response-regulator network in the Caulobacter crescentus cell cycle

    NASA Astrophysics Data System (ADS)

    Li, Fei; Subramanian, Kartik; Chen, Minghan; Tyson, John J.; Cao, Yang

    2016-06-01

    The asymmetric cell division cycle in Caulobacter crescentus is controlled by an elaborate molecular mechanism governing the production, activation and spatial localization of a host of interacting proteins. In previous work, we proposed a deterministic mathematical model for the spatiotemporal dynamics of six major regulatory proteins. In this paper, we study a stochastic version of the model, which takes into account molecular fluctuations of these regulatory proteins in space and time during early stages of the cell cycle of wild-type Caulobacter cells. We test the stochastic model against experimental observations of increased variability of cycle time in cells depleted of the divJ gene product. The deterministic model predicts that overexpression of the divK gene blocks cell cycle progression in the stalked stage; however, stochastic simulations suggest that a small fraction of the mutant cells do complete the cell cycle normally.

  21. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and Python software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
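The deterministic (ODE) end of such an analysis can be sketched with forward-Euler integration of a one-species transcription model driven by a time-dependent input signal. The rates and the step input here are invented toy choices, not the model of the paper:

```python
def simulate_mrna_ode(input_signal, k_tx=5.0, k_deg=0.5, dt=0.01, t_end=20.0):
    """Forward-Euler integration of dm/dt = k_tx * u(t) - k_deg * m, where
    u(t) in [0, 1] is a time-dependent input (e.g. a kinase translocation)."""
    m, t, traj = 0.0, 0.0, []
    while t < t_end:
        u = input_signal(t)
        m += dt * (k_tx * u - k_deg * m)  # one Euler step
        t += dt
        traj.append(m)
    return traj

# step input: the signal switches on at t = 5
traj = simulate_mrna_ode(lambda t: 1.0 if t >= 5.0 else 0.0)
```

After the input switches on, the mean mRNA level relaxes toward the steady state k_tx/k_deg = 10; a stochastic analysis of the same model would add copy-number fluctuations around this deterministic trajectory.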

  22. Non Kolmogorov Probability Models Outside Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2009-03-01

    This paper is devoted to the analysis of the main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, the Heisenberg principle, "deterministic" and "exact" theories, laws of chance, the notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.

  23. Modeling heterogeneous responsiveness of intrinsic apoptosis pathway

    PubMed Central

    2013-01-01

    Background Apoptosis is a cell suicide mechanism that enables multicellular organisms to maintain homeostasis and to eliminate individual cells that threaten the organism’s survival. Depending on the type of stimulus, apoptosis can be propagated by the extrinsic pathway or the intrinsic pathway. The comprehensive understanding of the molecular mechanisms of apoptotic signaling allows for the development of mathematical models aiming to elucidate dynamical and systems properties of apoptotic signaling networks. There have been extensive efforts in modeling the deterministic apoptosis network, accounting for the average behavior of a population of cells. Cellular networks, however, are inherently stochastic, and significant cell-to-cell variability in the apoptosis response has been observed at the single-cell level. Results To address the inevitable randomness in the intrinsic apoptosis mechanism, we develop a theoretical and computational modeling framework of the intrinsic apoptosis pathway at the single-cell level, accounting for both deterministic and stochastic behavior. Our deterministic model, adapted from the well-accepted Fussenegger model, shows that an additional positive feedback between the executioner caspase and the initiator caspase plays a fundamental role in yielding the desired property of bistability. We then examine the impact of intrinsic fluctuations of biochemical reactions, viewed as intrinsic noise, and of natural variation in protein concentrations, viewed as extrinsic noise, on the behavior of the intrinsic apoptosis network. Histograms of the steady-state output at varying input levels show that the intrinsic noise can elicit a wider region of bistability than that of the deterministic model. However, the system stochasticity due to intrinsic fluctuations, such as the noise of the steady-state response and the randomness of the response delay, shows that intrinsic noise is in general insufficient to produce significant cell-to-cell variations at physiologically relevant molecular numbers. Furthermore, the extrinsic noise, represented by random variations of two key apoptotic proteins, namely Cytochrome C and the inhibitor of apoptosis proteins (IAP), is modeled separately or in combination with intrinsic noise. The resultant stochasticity in the timing of the intrinsic apoptosis response shows that the fluctuating protein variations can induce cell-to-cell stochastic variability at a quantitative level agreeing with experiments. Finally, simulations illustrate that the mean abundance of the fluctuating IAP protein is positively correlated with the degree of cellular stochasticity of the intrinsic apoptosis pathway. Conclusions Our theoretical and computational study shows that the pronounced non-genetic heterogeneity in intrinsic apoptosis responses among individual cells plausibly arises from an extrinsic rather than an intrinsic origin of fluctuations. In addition, it predicts that the IAP protein could serve as a potential therapeutic target for suppressing cell-to-cell variation in intrinsic apoptosis responsiveness. PMID:23875784
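
    The bistability property highlighted above can be sketched with a generic toy model, not the adapted Fussenegger model itself: a caspase whose production is boosted by its own activity through a Hill-type positive feedback settles into either a low or a high steady state depending on where it starts. All parameter values below are illustrative assumptions.

```python
def caspase_rate(x, basal=0.05, vmax=1.0, K=0.5, n=4, deg=1.0):
    """dx/dt for a species that activates its own production via a Hill
    term; parameters are illustrative, not taken from the Fussenegger model."""
    return basal + vmax * x**n / (K**n + x**n) - deg * x

def steady_state(x0, dt=0.01, steps=20000):
    """Relax to a steady state with forward Euler integration."""
    x = x0
    for _ in range(steps):
        x += dt * caspase_rate(x)
    return x

low = steady_state(0.0)    # sub-threshold start settles in the "off" state
high = steady_state(2.0)   # supra-threshold start settles in the "on" state
print(low, high)
```

    The two distinct stable levels reached from different initial conditions are the bistable signature that the additional positive feedback provides in the deterministic model.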

  4. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffness reductions of 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton(Registered Trademark) HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  5. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  6. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  7. Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

    PubMed Central

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-01-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277

  8. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
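
    The complementarity of the two formulations can be sketched with a generic SIR-type system, a simplification of the paper's vector-host Zika model, with illustrative parameters. With R0 = 2, the deterministic ODEs predict an outbreak from a single infective, while the embedded jump chain of the CTMC still goes extinct with probability close to 1/R0.

```python
import random

beta, gamma, N = 0.5, 0.25, 1000   # transmission, recovery, population; R0 = 2

def deterministic_peak(days=200, dt=0.01):
    """Forward-Euler integration of the mean-field SIR equations."""
    S, I, peak = N - 1.0, 1.0, 1.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N
        S -= dt * new_inf
        I += dt * (new_inf - gamma * I)
        peak = max(peak, I)
    return peak

def ctmc_dies_out(rng):
    """Embedded jump chain of the CTMC (waiting times do not affect the
    extinction probability); True if I hits 0 before reaching 50 cases."""
    S, I = N - 1, 1
    while 0 < I < 50:
        rate_inf, rate_rec = beta * S * I / N, gamma * I
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1
        else:
            I -= 1
    return I == 0

rng = random.Random(1)
p_ext = sum(ctmc_dies_out(rng) for _ in range(2000)) / 2000
print(deterministic_peak(), p_ext)   # large deterministic outbreak; p_ext near 0.5
```

    The deterministic model cannot produce the minor-outbreak extinctions that the CTMC captures, which is exactly why the record pairs the two approaches.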

  9. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infection persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infection disappears and the disease dies out. In addition, stochastic noises around the endemic equilibrium are added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
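
    The ℛ0 threshold can be illustrated with a single-group toy model in which a fraction p of the population is vaccinated (the paper treats the full multi-group MSIR system; parameters here are illustrative): an epidemic takes off only when the effective reproduction number ℛ0(1 - p) exceeds 1.

```python
def epidemic_grows(R0, p, gamma=0.2, days=400, dt=0.01):
    """Single-group SIR with a vaccinated (immune) fraction p, integrated by
    forward Euler; returns True if prevalence ever exceeds twice the seed."""
    beta = R0 * gamma
    S, I = (1.0 - p) - 1e-4, 1e-4   # population proportions
    peak = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I
        S -= dt * new_inf
        I += dt * (new_inf - gamma * I)
        peak = max(peak, I)
    return peak > 2e-4

print(epidemic_grows(R0=3.0, p=0.2))   # effective R = 2.4 > 1: outbreak
print(epidemic_grows(R0=3.0, p=0.8))   # effective R = 0.6 < 1: dies out
```

    Raising the vaccinated fraction past 1 - 1/ℛ0 pushes the system across the threshold, which is the deterministic half of the stability result above.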

  10. Detailed numerical investigation of the dissipative stochastic mechanics based neuron model.

    PubMed

    Güler, Marifi

    2008-10-01

    Recently, a physical approach for the description of neuronal dynamics under the influence of ion channel noise was proposed in the realm of dissipative stochastic mechanics (Güler, Phys Rev E 76:041918, 2007). Led by the presence of a multiple number of gates in an ion channel, the approach establishes a viewpoint that ion channels are exposed to two kinds of noise: the intrinsic noise, associated with the stochasticity in the movement of gating particles between the inner and the outer faces of the membrane, and the topological noise, associated with the uncertainty in accessing the permissible topological states of open gates. Renormalizations of the membrane capacitance and of a membrane voltage dependent potential function were found to arise from the mutual interaction of the two noisy systems. The formalism therein was scrutinized using a special membrane with some tailored properties giving the Rose-Hindmarsh dynamics in the deterministic limit. In this paper, the resultant computational neuron model of the above approach is investigated in detail numerically for its dynamics using time-independent input currents. The following are the major findings obtained. The intrinsic noise gives rise to two significant coexisting effects: it initiates spiking activity even in some range of input currents for which the corresponding deterministic model is quiet and causes bursting in some other range of input currents for which the deterministic model fires tonically. The renormalization corrections are found to augment the above behavioral transitions from quiescence to spiking and from tonic firing to bursting, and, therefore, the bursting activity is found to take place in a wider range of input currents for larger values of the correction coefficients. Some findings concerning the diffusive behavior in the voltage space are also reported.

  11. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
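
    The deterministic inference idea can be sketched in the simplest setting, a haploid two-allele model without drift, mutation or delay (much simpler than the paper's delay-deterministic model): under constant selection s, the logit of the variant frequency grows linearly, so s is recovered from a least-squares slope.

```python
import math

def simulate_freqs(s, x0=0.05, gens=50):
    """Deterministic haploid selection: x' = x(1 + s) / (1 + s*x)."""
    xs, x = [x0], x0
    for _ in range(gens):
        x = x * (1 + s) / (1 + s * x)
        xs.append(x)
    return xs

def infer_s(xs):
    """logit(x) increases by exactly log(1 + s) per generation in this model,
    so the selection coefficient follows from a least-squares slope."""
    logits = [math.log(x / (1 - x)) for x in xs]
    n = len(logits)
    t_mean, l_mean = (n - 1) / 2, sum(logits) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in enumerate(logits))
             / sum((t - t_mean) ** 2 for t in range(n)))
    return math.exp(slope) - 1

s_hat = infer_s(simulate_freqs(s=0.1))
print(s_hat)   # recovers s = 0.1 in this noise-free setting
```

    Genetic drift, finite sampling and, as the paper stresses, the stochastic timing of new mutations all perturb this clean picture, which is what motivates corrections such as the delay-deterministic model.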

  12. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  13. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water-table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
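
    Two-point probability models of this kind are commonly built on Rosenblueth's point-estimate idea, assumed here as a stand-in for the paper's exact formulation: each uncertain input is represented by just two values, its mean plus and minus one standard deviation, and the deterministic model is evaluated at all 2^n sign combinations to estimate the output mean and variance. The three-input model below is hypothetical.

```python
from itertools import product

def two_point_estimate(model, means, stds):
    """Rosenblueth-style two-point estimate for independent, symmetric
    inputs: evaluate the model at the 2**n sign combinations of
    mean +/- one standard deviation, each carrying weight 2**-n."""
    n = len(means)
    w = 0.5 ** n
    vals = [model([m + sgn * s for m, s, sgn in zip(means, stds, signs)])
            for signs in product((-1.0, 1.0), repeat=n)]
    mean = w * sum(vals)
    var = w * sum(v * v for v in vals) - mean ** 2
    return mean, var

# Hypothetical response depending on three uncertain inputs, standing in
# for hydraulic conductivity, storage coefficient and the source-sink term.
model = lambda p: p[0] * p[1] + p[2]
m, v = two_point_estimate(model, means=[2.0, 3.0, 1.0], stds=[0.2, 0.3, 0.1])
print(m, v)
```

    For this bilinear toy model the first two output moments come out exact (mean 7.0, variance 0.7336), and in general the method needs only 2^n deterministic runs instead of a full Monte Carlo sweep, which is the computational saving such cascaded schemes exploit.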

  14. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water-table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  15. Unifying Complexity and Information

    NASA Astrophysics Data System (ADS)

    Ke, Da-Guan

    2013-04-01

    Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a role as fundamental as that of the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability remain unclear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of H for random processes, clearly redefining the boundary between the different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.

  16. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models that include some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although they contain nonlinear dynamics and are thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness, instead of being universally present, emerges in the measurement process from the deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence plays no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed nonlinear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.
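
    The gambler's-ruin step of the argument is easy to sketch: under the fair-game (no-drift) condition, the weight of one basis state performs an unbiased random walk between the absorbing barriers 0 and 1, and a standard martingale argument gives a win probability equal to the initial weight, i.e. Born's rule. The walk below is an illustration of that scheme only, not of the paper's nonlinear dynamics.

```python
import random

def collapse(weight, rng, step=0.01):
    """Unbiased random walk of a basis-state weight between the absorbing
    barriers 0 and 1; returns True if this outcome 'wins' the collapse."""
    w = weight
    while 0.0 < w < 1.0:
        w += step if rng.random() < 0.5 else -step
    return w >= 1.0

rng = random.Random(42)
trials = 2000
wins = sum(collapse(0.3, rng) for _ in range(trials))
print(wins / trials)   # close to the initial weight 0.3, as Born's rule requires
```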

  17. The role of the therapeutic relationship in the association between interpersonal behaviors and outcome: Comparison of two competing models.

    PubMed

    Dolev, Tohar; Zilcha-Mano, Sigal

    2018-01-08

    The patient-therapist relationship may be the mechanism behind the effect of pretreatment interpersonal patient behaviors on the outcome of psychotherapy for depression, or the factor determining for whom interpersonal behaviors affect outcome. We seek to establish which of these two alternatives receives empirical support. We conducted a secondary analysis of the findings from the Treatment for Depression Collaborative Research Program to examine two alternative models: first, a deterministic model, in which clients' ability to create satisfactory interpersonal relationships affects their ability to build a strong therapeutic relationship, which in turn affects outcome; and second, a compensation model, in which patients in a treatment focusing on interpersonal mechanisms of change (and not in placebo) who compensate for their maladaptive pretreatment interpersonal behaviors by building a strong therapeutic relationship benefit from treatment more than do patients who cannot build such a relationship. The compensation model, rather than the deterministic model, was supported, suggesting that the interpersonal behavior-outcome association is significantly moderated by the therapeutic relationship in interpersonal psychotherapy but not in placebo. Findings support an optimistic view whereby patients seeking treatment for maladaptive interpersonal behaviors can achieve good outcomes if work on interpersonal relationships is conducted in the presence of a strong therapeutic relationship.

  18. Who's flying the plane: serotonin levels, aggression and free will.

    PubMed

    Siegel, Allan; Douard, John

    2011-01-01

    The present paper addresses the philosophical problem raised by current causal neurochemical models of impulsive violence and aggression: to what extent we can hold violent criminal offenders responsible for their conduct if that conduct is the result of deterministic biochemical processes in the brain. This question is currently receiving a great deal of attention among neuroscientists, legal scholars and philosophers. We examine our current knowledge of neuroscience to assess the possible roles of deterministic factors which induce impulsive aggression, and the extent to which this behavior can be controlled by neural conditioning mechanisms. Neural conditioning mechanisms, we suggest, may underlie what we consider the basis of responsible (though not necessarily moral) behavior: the capacity to give and take reasons. The models we first examine are based in part upon the role played by the neurotransmitter serotonin in the regulation of violence and aggression. Collectively, these results would appear to argue in favor of the view that low brain serotonin levels induce impulsive aggression that overrides mechanisms related to rational decision-making processes. We next present an account of responsibility as based on the capacity to exercise a certain kind of reason-responsive control over one's conduct. The problem with such accounts of responsibility, however, is that they fail to specify a neurobiological realization of such mechanisms of control. We present a neurobiological, and weakly deterministic, framework for understanding how persons can exercise guidance control over their conduct. This framework is based upon classical conditioning of neurons in the prefrontal cortex, which supports a decision-making mechanism through which the prefrontal cortex controls the brain sites that express aggressive behavior, including the hypothalamus and the midbrain periaqueductal gray.
The authors support the view that, in many circumstances, neural conditioning mechanisms provide the basis for the control of human aggression in spite of the presence of brain serotonin levels that might otherwise favor the expression of impulsive aggressive behavior. Indeed if those neural conditioning mechanisms underlie the human capacity to exercise control, they may be the neural realization of reason-responsiveness generally. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Who's flying the plane: Serotonin levels, aggression and free will

    PubMed Central

    Siegel, Allan; Douard, John

    2010-01-01

    The present paper addresses the philosophical problem raised by current causal neurochemical models of impulsive violence and aggression: to what extent we can hold violent criminal offenders responsible for their conduct if that conduct is the result of deterministic biochemical processes in the brain. This question is currently receiving a great deal of attention among neuroscientists, legal scholars and philosophers. We examine our current knowledge of neuroscience to assess the possible roles of deterministic factors which induce impulsive aggression, and the extent to which this behavior can be controlled by neural conditioning mechanisms. Neural conditioning mechanisms, we suggest, may underlie what we consider the basis of responsible (though not necessarily moral) behavior: the capacity to give and take reasons. The models we first examine are based in part upon the role played by the neurotransmitter serotonin in the regulation of violence and aggression. Collectively, these results would appear to argue in favor of the view that low brain serotonin levels induce impulsive aggression that overrides mechanisms related to rational decision-making processes. We next present an account of responsibility as based on the capacity to exercise a certain kind of reason-responsive control over one's conduct. The problem with such accounts of responsibility, however, is that they fail to specify a neurobiological realization of such mechanisms of control. We present a neurobiological, and weakly deterministic, framework for understanding how persons can exercise guidance control over their conduct. This framework is based upon classical conditioning of neurons in the prefrontal cortex, which supports a decision-making mechanism through which the prefrontal cortex controls the brain sites that express aggressive behavior, including the hypothalamus and the midbrain periaqueductal gray.
The authors support the view that, in many circumstances, neural conditioning mechanisms provide the basis for the control of human aggression in spite of the presence of brain serotonin levels that might otherwise favor the expression of impulsive aggressive behavior. Indeed if those neural conditioning mechanisms underlie the human capacity to exercise control, they may be the neural realization of reason-responsiveness generally. PMID:21112635

  20. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. A regime with exactly three attractors appears only when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions enlarges both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are then only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in both the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction, through an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
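
    The deterministic side of this analysis can be sketched with a minimal symmetric two-patch model with a strong Allee threshold a (an illustrative caricature with assumed parameters, not the authors' exact system): with weak dispersal, a high-density and a low-density patch coexist, while stronger dispersal synchronizes the patches, here into a global expansion.

```python
def simulate(D, a=0.3, r=1.0, T=200.0, dt=0.01):
    """Two patches obeying dx/dt = r*x*(x/a - 1)*(1 - x) + D*(y - x),
    i.e. logistic growth with an Allee threshold a plus linear dispersal D."""
    x, y = 1.0, 0.0            # occupied patch, empty patch
    for _ in range(int(T / dt)):
        fx = r * x * (x / a - 1.0) * (1.0 - x)
        fy = r * y * (y / a - 1.0) * (1.0 - y)
        x, y = x + dt * (fx + D * (y - x)), y + dt * (fy + D * (x - y))
    return x, y

print(simulate(D=0.01))   # weak dispersal: high/low equilibrium persists
print(simulate(D=0.2))    # stronger dispersal: both patches near carrying capacity
```

    Whether synchronization ends in expansion or extinction depends on the parameters; the stochastic counterpart described above additionally turns the weak-dispersal coexistence into a metastable state.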

  1. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions covering the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA, and of the resulting non-elementary functions, has been justified in the deterministic case, it is not clear when the stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
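
    The deterministic half of this story is the classical Michaelis-Menten reduction, which can be sketched directly (with illustrative rate constants; the stochastic comparison in the paper additionally requires Gillespie simulations on top of this): the reduced equation tracks the full mass-action system whenever total enzyme is scarce.

```python
def full_substrate(S0=10.0, ET=0.1, k1=10.0, km1=1.0, k2=1.0, T=20.0, dt=1e-4):
    """Full mass-action kinetics S + E <-> C -> P; returns S(T)."""
    S, C = S0, 0.0
    for _ in range(int(T / dt)):
        dS = -k1 * S * (ET - C) + km1 * C
        dC = k1 * S * (ET - C) - (km1 + k2) * C
        S, C = S + dt * dS, C + dt * dC
    return S

def qssa_substrate(S0=10.0, ET=0.1, k1=10.0, km1=1.0, k2=1.0, T=20.0, dt=1e-4):
    """Deterministic QSSA reduction dS/dt = -Vmax*S/(Km + S); returns S(T)."""
    Vmax, Km = k2 * ET, (km1 + k2) / k1
    S = S0
    for _ in range(int(T / dt)):
        S -= dt * Vmax * S / (Km + S)
    return S

print(full_substrate(), qssa_substrate())   # nearly identical here (ET << S0 + Km)
```

    The paper's point is that agreement of this deterministic pair over a range of initial conditions is also a good predictor of when Hill or Michaelis-Menten propensities can be trusted inside a stochastic simulation.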

  2. Effects of delay and noise in a negative feedback regulatory motif

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo; Dies, Marta

    2009-03-01

    The small copy number of the molecules involved in gene regulation can induce nontrivial stochastic phenomena such as noise-induced oscillations. An often neglected aspect of regulation dynamics is the delay involved in transcription and translation. Delays introduce analytical and computational complications because the dynamics is non-Markovian. We study the interplay of noise and delays in a negative feedback model of the p53 core regulatory network. Recent experiments have found pronounced oscillations in the concentrations of proteins p53 and Mdm2 in individual cells subjected to DNA damage. Similar oscillations occur in the Hes1 and NF-κB systems, and in circadian rhythms. Several mechanisms have been proposed to explain this oscillatory behaviour, such as deterministic limit cycles, with and without delay, or noise-induced excursions in excitable models. We consider a generic delayed Master Equation incorporating the activation of Mdm2 by p53 and the Mdm2-promoted degradation of p53. In the deterministic limit and for large delays, the model shows a Hopf bifurcation. Via exact stochastic simulations, we find strong noise-induced oscillations well outside the limit-cycle region. We propose that this may be a generic mechanism for oscillations in gene regulatory systems.
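
    The deterministic limit of such a delayed negative-feedback loop can be sketched with a generic delayed-repression equation (a stand-in for the p53/Mdm2 loop; parameters are illustrative, not fitted): forward-Euler integration with a history buffer gives a steady state at zero delay and sustained oscillations once the delay is past the Hopf bifurcation.

```python
def simulate_delayed_repression(tau, beta=10.0, K=1.0, n=4, gamma=1.0,
                                dt=0.01, t_end=100.0, x0=0.5):
    # Forward-Euler integration of
    #   dx/dt = beta / (1 + (x(t - tau)/K)**n) - gamma * x(t)
    # with constant history x(t) = x0 for t <= 0.
    steps = int(t_end / dt)
    lag = int(round(tau / dt))
    xs = [x0]
    for i in range(steps):
        x_del = xs[i - lag] if i >= lag else x0
        dx = beta / (1.0 + (x_del / K) ** n) - gamma * xs[-1]
        xs.append(xs[-1] + dt * dx)
    return xs

def late_amplitude(xs, frac=0.3):
    # Peak-to-trough amplitude over the last `frac` of the trajectory.
    tail = xs[int(len(xs) * (1.0 - frac)):]
    return max(tail) - min(tail)
```

    With these parameters the linearized Hopf threshold sits near tau ~ 0.6, so tau = 5 oscillates strongly while tau = 0 relaxes to a fixed point.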

  3. Deterministic Creation of Macroscopic Cat States

    PubMed Central

    Lombardo, Daniel; Twamley, Jason

    2015-01-01

    Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157

  4. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
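
    A minimal worked example of deterministic inference of selection (a generic haploid selection model, not the authors' delay-deterministic model): under x' = x(1+s)/(1+sx), the logit of the variant frequency grows linearly with slope log(1+s), so s can be recovered by least squares from a time-resolved trajectory.

```python
import math

def propagate(x0, s, generations):
    # Deterministic haploid selection: x' = x (1 + s) / (1 + s x).
    xs = [x0]
    for _ in range(generations):
        x = xs[-1]
        xs.append(x * (1.0 + s) / (1.0 + s * x))
    return xs

def infer_s(xs):
    # Under this model logit(x_t) is linear in t with slope log(1 + s),
    # so s follows from a least-squares fit to the logit trajectory.
    ys = [math.log(x / (1.0 - x)) for x in xs]
    n = len(ys)
    t_mean = (n - 1) / 2.0
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return math.exp(num / den) - 1.0
```

    In a finite population the observed trajectory would carry drift and mutation-timing noise on top of this deterministic skeleton, which is exactly where the naive deterministic fit can mislead.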

  5. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  6. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model refines the simulation results, enriching the breadth and the quality of its outcomes.

  7. A white-box model of S-shaped and double S-shaped single-species population growth

    PubMed Central

    Kalmykov, Lev V.

    2015-01-01

    Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst, and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of the initial positioning of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals, under two types of boundary conditions and two levels of fecundity. We have also compared S-shaped and J-shaped population growth. We consider this white-box modeling approach a method of artificial intelligence that works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insight into the nature of complex systems. PMID:26038717
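
    A minimal sketch of the white-box idea (a toy version, not the paper's lawn-grass model): a logical deterministic cellular automaton in which each occupied site colonizes its four von Neumann neighbours per step produces the S-shaped, habitat-limited growth curve described above.

```python
def grow(size=25, steps=60, start=(12, 12)):
    # Logical deterministic individual-based cellular automaton (toy
    # version): each occupied site colonizes its four von Neumann
    # neighbours per step; closed boundaries, no regeneration delay.
    occupied = {start}
    counts = [len(occupied)]
    for _ in range(steps):
        frontier = set(occupied)
        for (i, j) in occupied:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    frontier.add((ni, nj))
        occupied = frontier
        counts.append(len(occupied))
    return counts
```

    From a central individual the occupied diamond grows with an accelerating front, then decelerates as the boundary is reached, yielding the S-shaped count curve and saturation at the habitat size.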

  8. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    NASA Astrophysics Data System (ADS)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses incorporating tension/torsion coupling are naturally found. Once one or more elements break, the competition between the geometry and the mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines a no-longer-homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
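
    The probabilistic ELS ingredient of the hybrid approach can be sketched as follows (illustrative Weibull parameters, not those of the paper): with fibre failure strains Weibull-distributed and the load shared equally, the bundle stress at strain eps is E*eps times the surviving fraction, giving the characteristic peak and post-elastic softening.

```python
import bisect
import math
import random

def els_stress_strain(n_fibers=20000, m=3.0, eps0=1.0, E=1.0,
                      n_points=100, seed=7):
    # Equal Load Sharing (ELS) fibre bundle: fibres carry the load
    # uniformly; each fails once the strain exceeds its Weibull-
    # distributed threshold (sampled by inversion of the Weibull CDF).
    random.seed(seed)
    thresholds = sorted(
        eps0 * (-math.log(1.0 - random.random())) ** (1.0 / m)
        for _ in range(n_fibers))
    curve = []
    for k in range(n_points + 1):
        eps = 2.0 * eps0 * k / n_points
        surviving = n_fibers - bisect.bisect_left(thresholds, eps)
        curve.append((eps, E * eps * surviving / n_fibers))
    return curve
```

    For m = 3 and eps0 = E = 1 the expected curve is eps * exp(-eps**3), with a peak stress of about 0.497 at eps near 0.69, which the Monte Carlo estimate reproduces up to sampling error.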

  9. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE PAGES

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; ...

    2018-04-12

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  10. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome.

    PubMed

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; Veach, Allison; Ialonardi, Florencia; Iribarne, Oscar; Silliman, Brian

    2018-06-01

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization. © 2018 by the Ecological Society of America.

  11. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  12. Synchrony and entrainment properties of robust circadian oscillators

    PubMed Central

    Bagheri, Neda; Taylor, Stephanie R.; Meeker, Kirsten; Petzold, Linda R.; Doyle, Francis J.

    2008-01-01

    Systems theoretic tools (i.e. mathematical modelling, control, and feedback design) advance the understanding of robust performance in complex biological networks. We highlight phase entrainment as a key performance measure used to investigate dynamics of a single deterministic circadian oscillator for the purpose of generating insight into the behaviour of a population of (synchronized) oscillators. More specifically, the analysis of phase characteristics may facilitate the identification of appropriate coupling mechanisms for the ensemble of noisy (stochastic) circadian clocks. Phase also serves as a critical control objective to correct mismatch between the biological clock and its environment. Thus, we introduce methods of investigating synchrony and entrainment in both stochastic and deterministic frameworks, and as a property of a single oscillator or population of coupled oscillators. PMID:18426774
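
    A standard minimal setting for such synchrony questions is the Kuramoto mean-field model (not the authors' circadian model; parameters are illustrative): above a critical coupling strength the phase-coherence order parameter r approaches 1, while below it the population stays incoherent.

```python
import cmath
import math
import random

def kuramoto_order(K, n=200, spread=0.5, dt=0.05, steps=2000, seed=2):
    # Kuramoto mean-field model of n coupled phase oscillators with
    # Gaussian-distributed natural frequencies:
    #   d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    # where r * exp(i psi) is the population's complex order parameter.
    random.seed(seed)
    omega = [random.gauss(0.0, spread) for _ in range(n)]
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)
```

    For a frequency spread of 0.5 the critical coupling is roughly K_c ~ 0.8, so K = 2 synchronizes strongly while K = 0.1 leaves only the finite-size residual coherence of order 1/sqrt(n).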

  13. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated on simulated signals and on real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and for uncovering interesting patterns present in time series.
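
    The fractal stochastic component is characterized by its Hurst exponent H. A hedged sketch for the H = 1/2 special case (ordinary Brownian motion, which is easy to simulate exactly; this is not the paper's wavelet-shrinkage estimator) recovers H from the increment-variance scaling Var[X(t+k) - X(t)] ~ k**(2H) at two lags.

```python
import math
import random

def brownian_path(n=20000, seed=3):
    # Ordinary Brownian motion: the H = 1/2 special case of fBm.
    random.seed(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        path.append(x)
    return path

def hurst_from_increments(path, lag1=1, lag2=16):
    # For fBm, Var[X(t + k) - X(t)] ~ k**(2H); compare two lags using
    # non-overlapping increments and solve for H.
    def var_at(lag):
        diffs = [path[i + lag] - path[i]
                 for i in range(0, len(path) - lag, lag)]
        mu = sum(diffs) / len(diffs)
        return sum((d - mu) ** 2 for d in diffs) / len(diffs)
    return 0.5 * math.log(var_at(lag2) / var_at(lag1)) / math.log(lag2 / lag1)
```

    Adding a deterministic band-limited component on top of the path would bias this naive estimator, which is precisely the situation the paper's joint model is designed to handle.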

  14. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic inputs, noisy "and" gate with hierarchy (DINA-H) model and the deterministic inputs, noisy "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…

  15. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
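
    The core idea of hybrid simulation can be illustrated with the simplest non-spatial, piecewise-deterministic example (a generic toy, not the spatial solver described above): discrete stochastic production events are sampled exactly, while the deterministic part is integrated in closed form between jumps; the time-averaged level should approach lam/gamma.

```python
import math
import random

def hybrid_mean(lam=5.0, gamma=1.0, t_end=5000.0, seed=4):
    # Hybrid (piecewise-deterministic) simulation: stochastic unit
    # production events arrive at rate lam; between events the
    # deterministic component dx/dt = -gamma * x is solved exactly.
    # Returns the time-averaged level, expected to approach lam / gamma.
    random.seed(seed)
    t, x, acc = 0.0, 0.0, 0.0
    while t < t_end:
        dt = -math.log(1.0 - random.random()) / lam  # next event time
        jump = t + dt <= t_end
        if not jump:
            dt = t_end - t
        acc += x * (1.0 - math.exp(-gamma * dt)) / gamma  # integral of x
        x *= math.exp(-gamma * dt)
        t += dt
        if jump:
            x += 1.0  # one discrete production burst
    return acc / t_end
```

    A spatial hybrid solver generalizes the same splitting: the stochastic simulator handles the discrete events while a PDE solver advances the deterministic fields between them.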

  16. Noise-induced symmetry breaking far from equilibrium and the emergence of biological homochirality

    NASA Astrophysics Data System (ADS)

    Jafarpour, Farshid; Biancalani, Tommaso; Goldenfeld, Nigel

    2017-03-01

    The origin of homochirality, the observed single-handedness of biological amino acids and sugars, has long been attributed to autocatalysis, a frequently assumed precursor for early life self-replication. However, the stability of homochiral states in deterministic autocatalytic systems relies on cross-inhibition of the two chiral states, an unlikely scenario for early life self-replicators. Here we present a theory for a stochastic individual-level model of autocatalytic prebiotic self-replicators that are maintained out of thermal equilibrium. Without chiral inhibition, the racemic state is the global attractor of the deterministic dynamics, but intrinsic multiplicative noise stabilizes the homochiral states. Moreover, we show that this noise-induced bistability is robust with respect to diffusion of molecules of opposite chirality, and systems of diffusively coupled autocatalytic chemical reactions synchronize their final homochiral states when the self-replication is the dominant production mechanism for the chiral molecules. We conclude that nonequilibrium autocatalysis is a viable mechanism for homochirality, without imposing additional nonlinearities such as chiral inhibition.
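
    Stripped to its stochastic core, autocatalysis without chiral inhibition behaves like a neutral Moran process (a drastic simplification of the authors' model): the deterministic mean-field dynamics preserve the racemic mixture, yet intrinsic multiplicative noise always drives a finite population to one of the two homochiral absorbing states.

```python
import random

def moran_chirality(n_molecules=100, seed=11, max_steps=1_000_000):
    # Neutral Moran caricature of autocatalysis without chiral inhibition:
    # each event copies the handedness of a random molecule onto another
    # random molecule. The mean-field (deterministic) dynamics leave the
    # racemic state unchanged, but noise fixes the system at all-D or
    # all-L in O(n_molecules**2) events.
    random.seed(seed)
    d = n_molecules // 2  # start racemic: half D, half L
    for _ in range(max_steps):
        if d == 0 or d == n_molecules:
            break  # homochiral absorbing state reached
        parent_is_d = random.random() < d / n_molecules
        victim_is_d = random.random() < d / n_molecules
        d += (1 if parent_is_d else 0) - (1 if victim_is_d else 0)
    return d
```

    Which handedness wins is equiprobable by symmetry; only the loss of the racemic state is certain, which is the noise-induced bistability the abstract describes.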

  17. Deterministic particle transport in a ratchet flow.

    PubMed

    Beltrame, Philippe; Makhoul, Mounia; Joelson, Maminirina

    2016-01-01

    This study is motivated by the problem of pumping particles through a periodically modulated channel. We focus on a simplified deterministic model of small-inertia particles within the Stokes flow framework, which we call the "ratchet flow." A path-following method is employed in parameter space in order to retrace the scenario by which bounded periodic solutions give way to particle transport. Depending on whether the magnitude of the particle drag is moderate or large, two main transport mechanisms are identified, in which the role of the parity symmetry of the flow differs. For large drag, transport is induced by flow asymmetry, while for moderate drag, since the full transport-solution bifurcation structure already exists for symmetric settings, flow asymmetry only makes the transport effective. We analyze the scenarios of current reversal for each mechanism, as well as the role of synchronization. In particular, we show that, for large drag, the particle drift is similar to phase slip in a synchronization problem.

  18. Monte Carlo Study Elucidates the Type 1/Type 2 Choice in Apoptotic Death Signaling in Healthy and Cancer Cells

    PubMed Central

    Raychaudhuri, Subhadip; Raychaudhuri, Somkanya C

    2013-01-01

    Apoptotic cell death is coordinated through two distinct (type 1 and type 2) intracellular signaling pathways. How the type 1/type 2 choice is made remains a central problem in the biology of apoptosis and has implications for apoptosis-related diseases and therapy. We study the problem of type 1/type 2 choice in silico utilizing a kinetic Monte Carlo model of cell death signaling. Our results show that the type 1/type 2 choice is linked to deterministic versus stochastic cell death activation, elucidating a unique regulatory control of the apoptotic pathways. Consistent with previous findings, our results indicate that the caspase 8 activation level is a key regulator of the choice between the deterministic type 1 and stochastic type 2 pathways, irrespective of cell type. Expression levels of signaling molecules downstream also regulate the type 1/type 2 choice. A simplified model of DISC clustering elucidates the mechanism of increased active caspase 8 generation and type 1 activation in cancer cells having increased sensitivity to death receptor activation. We demonstrate that rapid deterministic activation of the type 1 pathway can selectively target such cancer cells, especially if XIAP is also inhibited, while inherent cell-to-cell variability would allow normal cells to stay protected. PMID:24709706

  19. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
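
    The essential reason small-volume ODEs fail can be shown without any simulation: for a nonlinear (Hill-type) rate f and a fluctuating Poisson copy number X, E[f(X)] differs from f(E[X]), yet the deterministic ODE uses f(E[X]). The sketch below (illustrative parameters, not the paper's channel model) evaluates both for a Poisson mean of 10, where f(E[X]) = 0.5 but E[f(X)] is visibly smaller.

```python
import math

def hill(x, K=10.0, n=2):
    # Nonlinear (Hill-type) activation rate, as in calcium-regulated gating.
    return x ** n / (K ** n + x ** n)

def expected_hill(lam, K=10.0, n=2, kmax=100):
    # E[hill(X)] for a Poisson copy number X with mean lam, computed by
    # direct summation over the probability mass function.
    pmf = math.exp(-lam)        # P(X = 0)
    total = 0.0
    for k in range(kmax + 1):
        if k > 0:
            pmf *= lam / k      # P(X = k) from P(X = k - 1)
        total += pmf * hill(k, K, n)
    return total
```

    The gap between E[f(X)] and f(E[X]) shrinks as the mean copy number grows, which is why the deterministic description becomes valid only in unrealistically large subspace volumes.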

  20. Chemo-mechanical modeling of tumor growth in elastic epithelial tissue

    NASA Astrophysics Data System (ADS)

    Bratsun, Dmitry A.; Zakharov, Andrey P.; Pismen, Len

    2016-08-01

    We propose a multiscale chemo-mechanical model of the cancer tumor development in the epithelial tissue. The epithelium is represented by an elastic 2D array of polygonal cells with its own gene regulation dynamics. The model allows the simulation of the evolution of multiple cells interacting via the chemical signaling or mechanically induced strain. The algorithm includes the division and intercalation of cells as well as the transformation of normal cells into a cancerous state triggered by a local failure of the spatial synchronization of the cellular rhythms driven by transcription/translation processes. Both deterministic and stochastic descriptions of the system are given for chemical signaling. The transformation of cells means the modification of their respective parameters responsible for chemo-mechanical interactions. The simulations reproduce a distinct behavior of invasive and localized carcinoma. Generally, the model is designed in such a way that it can be readily modified to take account of any newly understood gene regulation processes and feedback mechanisms affecting chemo-mechanical properties of cells.

  1. A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.

    PubMed

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

    In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "tracks". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach or detach from various regions along the tracks. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from statistical mechanics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory we show that the model admits a unique steady-state, and that every solution converges to this steady-state. Furthermore, we show that the model entrains (or phase locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop off in mRNA translation.
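
    The mean-field picture can be sketched with the Ribosome Flow Model, a dynamic mean-field approximation of TASEP (here unidirectional and without the Langmuir attachment/detachment terms, with illustrative rates): explicit-Euler integration from two different initial conditions converges to the same unique steady state, consistent with the contraction result described above.

```python
def rfm_step(x, rates, alpha, beta, dt):
    # One explicit-Euler step of a unidirectional mean-field exclusion
    # model: site occupancies x_i in [0, 1], hop rate into site i+1
    # proportional to x_i * (1 - x_(i+1)).
    n = len(x)
    new = []
    flow_in = alpha * (1.0 - x[0])                        # initiation
    for i in range(n):
        if i < n - 1:
            flow_out = rates[i] * x[i] * (1.0 - x[i + 1])  # elongation
        else:
            flow_out = beta * x[i]                         # termination
        new.append(x[i] + dt * (flow_in - flow_out))
        flow_in = flow_out
    return new

def steady_state(x0, rates=(1.0, 1.0, 1.0, 1.0), alpha=0.8, beta=0.6,
                 dt=0.01, steps=50000):
    x = list(x0)
    for _ in range(steps):
        x = rfm_step(x, rates, alpha, beta, dt)
    return x
```

    At steady state the flux is the same across every transition, so the entry flux alpha*(1 - x_1) must equal the exit flux beta*x_n, which the converged profile satisfies.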

  2. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  3. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  4. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality, driven by the slowest car.
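A minimal sketch of a deterministic traffic rule of this kind (integer positions and velocities on a ring, acceleration capped by the gap to the car ahead, no random braking). The update rule and parameter values are illustrative, not the paper's exact model:

```python
def step(cars, length, vmax=5):
    """One deterministic update of cars on a circular road.

    cars: list of (position, velocity) sorted by position.
    Each car accelerates by 1 up to vmax but never moves farther than the
    number of free cells ahead, so cars cannot collide or overtake.
    """
    n = len(cars)
    new = []
    for i, (x, v) in enumerate(cars):
        x_ahead = cars[(i + 1) % n][0]
        gap = (x_ahead - x - 1) % length      # free cells to the next car
        v = min(v + 1, vmax, gap)             # fully deterministic rule
        new.append(((x + v) % length, v))
    new.sort()                                # restore cyclic order after wrap
    return new

# Four cars on a ring of 100 cells accelerate into free flow.
cars = sorted([(0, 0), (25, 0), (50, 0), (75, 0)])
for _ in range(10):
    cars = step(cars, 100)
```

At low density every car reaches `vmax` (free flow); packing more cars in makes the gap term dominate, which is the jammed phase the abstract refers to.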

  5. Influence of the feedback loops in the trp operon of B. subtilis on the system dynamic response and noise amplitude.

    PubMed

    Zamora-Chimal, Criseida; Santillán, Moisés; Rodríguez-González, Jesús

    2012-10-07

    In this paper we introduce a mathematical model for the tryptophan operon regulatory pathway in Bacillus subtilis. This model considers the transcription-attenuation and the enzyme-inhibition regulatory mechanisms. Special attention is paid to the estimation of all the model parameters from reported experimental data. With the aid of this model we investigate, from a mathematical-modeling point of view, whether the existing multiplicity of regulatory feedback loops is advantageous in some sense, regarding the dynamic response and the biochemical noise in the system. The tryptophan operon dynamic behavior is studied by means of deterministic numeric simulations, while the biochemical noise is analyzed with the aid of stochastic simulations. The model's feasibility is tested by comparing its stochastic and deterministic results with experimental reports. Our results for the wild-type and for a couple of mutant bacterial strains suggest that the enzyme-inhibition feedback loop dynamically accelerates the operon response and plays a major role in the reduction of biochemical noise. Also, the transcription-attenuation feedback loop makes the trp operon sensitive to changes in the endogenous tryptophan level and increases the amplitude of the biochemical noise. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Thermostatted kinetic equations as models for complex systems in physics and life sciences.

    PubMed

    Bianca, Carlo

    2012-12-01

    Statistical mechanics is a powerful method for understanding equilibrium thermodynamics. An equivalent theoretical framework for nonequilibrium systems has remained elusive. The thermodynamic forces driving the system away from equilibrium introduce energy that must be dissipated if nonequilibrium steady states are to be obtained. Historically, further terms were introduced, collectively called a thermostat, whose original application was to generate constant-temperature equilibrium ensembles. This review surveys kinetic models coupled with time-reversible deterministic thermostats for the modeling of large systems composed of both inert matter particles and living entities. The introduction of deterministic thermostats makes it possible to model the onset of the nonequilibrium stationary states that are typical of most real-world complex systems. The first part of the paper focuses on a general presentation of the main physical and mathematical definitions and tools: nonequilibrium phenomena, the Gauss least-constraint principle and Gaussian thermostats. The second part provides a review of a variety of thermostatted mathematical models in physics and the life sciences, including the Kac, Boltzmann and Jager-Segel models and the thermostatted (continuous and discrete) kinetic models for active particles. Applications refer to semiconductor devices, nanosciences, biological phenomena, vehicular traffic, social and economic systems, and crowd and swarm dynamics. Copyright © 2012 Elsevier B.V. All rights reserved.
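As a concrete illustration of the Gaussian (isokinetic) thermostat obtained from the Gauss least-constraint principle (this is the standard textbook construction, not the specific thermostatted kinetic models reviewed in the paper): constraining the kinetic energy to stay constant adds a friction-like term to otherwise Newtonian dynamics,

```latex
\dot{\mathbf{q}}_i = \frac{\mathbf{p}_i}{m}, \qquad
\dot{\mathbf{p}}_i = \mathbf{F}_i - \alpha\,\mathbf{p}_i, \qquad
\alpha = \frac{\sum_i \mathbf{F}_i \cdot \mathbf{p}_i}{\sum_i \mathbf{p}_i \cdot \mathbf{p}_i},
```

so that \( \frac{d}{dt}\sum_i \mathbf{p}_i^2/2m = \sum_i \mathbf{p}_i \cdot (\mathbf{F}_i - \alpha\,\mathbf{p}_i)/m = 0 \): the multiplier \( \alpha \) removes exactly the energy injected by the forces, giving time-reversible deterministic dynamics that can settle into a nonequilibrium steady state.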

  7. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
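The reduction the abstract describes, a Master-equation model whose mean follows a deterministic rate equation to first order, can be illustrated with a minimal birth-death sketch (rates, names, and values below are illustrative, not taken from the paper):

```python
import math
import random

def gillespie_birth_death(k, gamma, n0, t_end, rng):
    """Gillespie's direct method for one species produced at constant
    rate k and degraded at rate gamma*n (a minimal Master-equation model)."""
    t, n = 0.0, n0
    while True:
        a_total = k + gamma * n               # total propensity
        t += rng.expovariate(a_total)         # exponential waiting time
        if t > t_end:
            return n
        if rng.random() * a_total < k:
            n += 1                            # production event
        else:
            n -= 1                            # degradation event

def deterministic_mean(k, gamma, n0, t):
    """First-order (macroscopic) approximation: dn/dt = k - gamma*n."""
    return k / gamma + (n0 - k / gamma) * math.exp(-gamma * t)
```

Averaging many stochastic realizations approaches `deterministic_mean`, with relative fluctuations shrinking as the system size grows, consistent with the van Kampen expansion result the abstract cites for single-steady-state systems.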

  8. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via an eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and being responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization's skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient at restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.

  9. Crosstalk in an FDM Laboratory Setup and the Athena X-IFU End-to-End Simulator

    NASA Astrophysics Data System (ADS)

    den Hartog, R.; Kirsch, C.; de Vries, C.; Akamatsu, H.; Dauser, T.; Peille, P.; Cucchetti, E.; Jackson, B.; Bandler, S.; Smith, S.; Wilms, J.

    2018-04-01

    The impact of various crosstalk mechanisms on the performance of the Athena X-IFU instrument has been assessed with detailed end-to-end simulations. For the crosstalk in the electrical circuit, a detailed model has been developed. In this contribution, we test this model against measurements made with an FDM laboratory setup and discuss the assumption of deterministic crosstalk in the context of the weak link effect in the detectors. We conclude that crosstalk levels predicted by the model are conservative with respect to the observed levels.

  10. Direct computational approach to lattice supersymmetric quantum mechanics

    NASA Astrophysics Data System (ADS)

    Kadoh, Daisuke; Nakayama, Katsumasa

    2018-07-01

    We study lattice supersymmetric models numerically using the transfer-matrix approach. This method consists only of deterministic processes and has no statistical uncertainties. We improve it by performing a scale transformation of variables such that the Witten index is correctly reproduced from the lattice model; the other prescriptions are described in detail. Compared to previous Monte Carlo results, we can estimate the effective masses, the SUSY Ward identity and the cut-off dependence of the results with high precision. This kind of information is useful in improving lattice formulations of supersymmetric models.

  11. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
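The link-type-dependent path-loss modeling mentioned above is commonly done with a log-distance law, PL(d) = PL0 + 10·n·log10(d/d0), where separate exponents n are fitted for LOS and NLOS links. A sketch with illustrative constants (the reference loss, exponent, and walking-cycle distance below are assumptions, not measured BAN values):

```python
import math

def path_loss_db(d, d0=1.0, pl0_db=40.0, n=2.0):
    """Log-distance path loss in dB.

    d0: reference distance (m); pl0_db: loss at d0; n: path-loss exponent.
    All defaults are illustrative; measured links would use fitted values.
    """
    return pl0_db + 10.0 * n * math.log10(d / d0)

# Deterministic space-time variation: a hypothetical inter-node distance
# modulated by a walking cycle, as in biomechanical mobility modeling.
trace = [path_loss_db(1.0 + 0.3 * math.sin(2 * math.pi * t / 10))
         for t in range(10)]
```

Swapping the exponent `n` per link state is what makes the loss model sensitive to the LOS/NLOS identification the abstract describes.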

  13. Probability and Locality: Determinism Versus Indeterminism in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Dickson, William Michael

    1995-01-01

    Quantum mechanics is often taken to be necessarily probabilistic. However, this view of quantum mechanics appears to be more the result of historical accident than of careful analysis. Moreover, quantum mechanics in its usual form faces serious problems. Although the mathematical core of quantum mechanics--quantum probability theory--does not face conceptual difficulties, the application of quantum probability to the physical world leads to problems. In particular, quantum mechanics seems incapable of describing our everyday macroscopic experience. Therefore, several authors have proposed new interpretations--including (but not limited to) modal interpretations, spontaneous localization interpretations, the consistent histories approach, and the Bohm theory--each of which deals with quantum-mechanical probabilities differently. Each of these interpretations promises to describe our macroscopic experience and, arguably, each succeeds. Is there any way to compare them? Perhaps, if we turn to another troubling aspect of quantum mechanics, non-locality. Non-locality is troubling because prima facie it threatens the compatibility of quantum mechanics with special relativity. This prima facie threat is mitigated by the no-signalling theorems in quantum mechanics, but nonetheless one may find a 'conflict of spirit' between nonlocality in quantum mechanics and special relativity. Do any of these interpretations resolve this conflict of spirit? There is a strong relation between how an interpretation deals with quantum-mechanical probabilities and how it deals with non-locality. The main argument here is that only a completely deterministic interpretation can be completely local. That is, locality together with the empirical predictions of quantum mechanics (specifically, its strict correlations) entails determinism. 
But even with this entailment in hand, comparison of the various interpretations requires a look at each, to see how non-locality arises, or in the case of deterministic interpretations, whether it arises. The result of this investigation is that, at the least, deterministic interpretations are no worse off with respect to special relativity than indeterministic interpretations. This conclusion runs against a common view that deterministic interpretations, specifically the Bohm theory, have more difficulty with special relativity than other interpretations.

  14. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 2 PHARMACOKINETIC MODELING

    EPA Science Inventory

    The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...

  15. Dynamical Motor Control Learned with Deep Deterministic Policy Gradient

    PubMed Central

    Shi, Haibo; Sun, Yaoru; Li, Jie

    2018-01-01

    Conventional models of motor control exploit the spatial representation of the controlled system to generate control commands. Typically, the control command is gained with the feedback state of a specific instant in time, which behaves like an optimal regulator or spatial filter to the feedback state. Yet, recent neuroscience studies found that the motor network may constitute an autonomous dynamical system and the temporal patterns of the control command can be contained in the dynamics of the motor network, that is, the dynamical system hypothesis (DSH). Inspired by these findings, here we propose a computational model that incorporates this neural mechanism, in which the control command could be unfolded from a dynamical controller whose initial state is specified with the task parameters. The model is trained in a trial-and-error manner in the framework of deep deterministic policy gradient (DDPG). The experimental results show that the dynamical controller successfully learns the control policy for arm reaching movements, while the analysis of the internal activities of the dynamical controller provides the computational evidence to the DSH of the neural coding in motor cortices. PMID:29666634

  17. Chaos-order transition in foraging behavior of ants.

    PubMed

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-06-10

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants' physical abilities, and ants' knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is represented not by random walks but by deterministic walks (as generated by deterministic dynamical systems, e.g., maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal.

  19. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  20. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.

  1. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
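The combinatorial point, that a deterministic repeating-unit program scales with the denominator of the target proportion, can be checked directly. The helper below is a simple illustration of the thesis, not the paper's method:

```python
from fractions import Fraction

def repeating_unit(a, b):
    """Smallest repeating unit realizing a cell-type proportion a:b exactly.

    Returns (unit size, cells of the first type per unit): reduce a/(a+b)
    to lowest terms and take its denominator and numerator.
    """
    f = Fraction(a, a + b)
    return f.denominator, f.numerator
```

A 1:3 proportion needs a unit of only 4 cells, while 10:33 needs 43 cells per unit, matching the abstract's example of how large denominators make the deterministic program costly.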

  2. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro

  3. Quantum Locality, Rings a Bell?: Bell's Inequality Meets Local Reality and True Determinism

    NASA Astrophysics Data System (ADS)

    Sánchez-Kuntz, Natalia; Nahmad-Achar, Eduardo

    2018-01-01

    By assuming a deterministic evolution of quantum systems and taking realism into account, we carefully build a hidden variable theory for Quantum Mechanics (QM) based on the notion of ontological states proposed by 't Hooft (The cellular automaton interpretation of quantum mechanics, arXiv:1405.1548v3, 2015; Springer Open 185, https://doi.org/10.1007/978-3-319-41285-6, 2016). We view these ontological states as the ones embedded with realism and compare them to the (usual) quantum states that represent superpositions, viewing the latter as mere information of the system they describe. Such a deterministic model puts forward conditions for the applicability of Bell's inequality: the usual inequality cannot be applied to the usual experiments. We build a Bell-like inequality that can be applied to the EPR scenario and show that this inequality is always satisfied by QM. In this way we show that QM can indeed have a local interpretation, and thus meet with the causal structure imposed by the Theory of Special Relativity in a satisfying way.

  4. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than the stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
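For context, the classical deterministic estimate that bounds of this kind refine is the spectral threshold for SIS dynamics on a static network: an epidemic dies out when the effective infection rate is below 1/λmax of the adjacency matrix. A pure-Python power-iteration sketch of that computation (illustrative only, not the paper's inequality construction):

```python
def spectral_radius(adj, iters=200):
    """Approximate the largest eigenvalue of a nonnegative adjacency
    matrix (list of lists) by power iteration."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # infinity-norm growth factor
        if lam == 0.0:
            return 0.0                     # empty graph
        v = [x / lam for x in w]           # renormalize
    return lam

def sis_threshold(adj):
    """Classical SIS epidemic threshold estimate: beta/delta < 1/lambda_max."""
    return 1.0 / spectral_radius(adj)
```

On a 4-node ring the spectral radius is 2, giving a threshold of 0.5; denser deterministic connectivity raises λmax and lowers the threshold.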

  5. A micromechanical interpretation of the temperature dependence of Beremin model parameters for French RPV steel

    NASA Astrophysics Data System (ADS)

    Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier

    2010-11-01

    The local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equ. ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary-volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental measurements of inter-phase and inter-granular stress heterogeneities. The fracture probability of the elementary volume can then be computed using a randomly attributed defect-size distribution based on a realistic carbide repartition. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic fracture stress. Repeating the process allows computation of Weibull parameters on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e., surface energy plus the energy dissipated plastically when blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.

  6. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time, measured for a large sample (say, >0.1 L), are reproducible and deterministic, while for a small sample (say, <1 mL) these values are irreproducible and stochastic. Such behaviors of MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of stochastic MSZW and induction time were derived. The average values of stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. These different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
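
A minimal sketch of the volume effect, assuming nucleation is a Poisson process with rate proportional to volume and that detection requires a fixed number density of crystals (so the required count also scales with volume). The waiting time for the k-th nucleus is then Gamma-distributed, and its relative spread shrinks like 1/sqrt(k) as the sample grows. The rate J and the detection density are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
J = 100.0      # assumed nucleation rate per unit volume, nuclei/(mL*s)
n_det = 10.0   # assumed crystal number density needed for detection, nuclei/mL

def induction_times(vol_ml, runs=5000):
    rate = J * vol_ml                   # total nucleation rate in the sample
    k = max(1, int(n_det * vol_ml))     # nuclei required before detection
    # waiting time for the k-th event of a Poisson process ~ Gamma(k, 1/rate)
    return rng.gamma(k, 1.0 / rate, size=runs)

for vol in (0.5, 500.0):                # small (stochastic) vs large (deterministic)
    t = induction_times(vol)
    print(f"V={vol:g} mL: mean={t.mean():.3f} s, "
          f"relative spread={t.std() / t.mean():.1%}")
```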

  7. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
The failed region patterns between the methods are similar; however, differences arise because stress redistribution after element elimination causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows increased postprocessing capability with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic method due to differences in force distribution.

  8. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large processor counts and memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
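
The budget constraint in the second algorithm can be illustrated with a generic greedy refinement loop (a stand-in, not the actual CADIS/FW-CADIS implementation): repeatedly split the cell carrying the largest detail score until the specified maximum element count would be exceeded.

```python
import heapq

def refine_mesh(cells, max_cells):
    """Greedily split the cell with the largest detail score until the
    element budget is reached. cells: (score, size) pairs; a split is
    assumed to halve both the score and the size of the parent cell."""
    heap = [(-score, size) for score, size in cells]
    heapq.heapify(heap)
    n = len(heap)
    while n + 1 <= max_cells:
        neg, size = heapq.heappop(heap)        # most detailed cell
        heapq.heappush(heap, (neg / 2.0, size / 2.0))
        heapq.heappush(heap, (neg / 2.0, size / 2.0))
        n += 1                                 # one split adds one cell
    return [(-neg, size) for neg, size in heap]

mesh = refine_mesh([(8.0, 1.0), (1.0, 1.0)], max_cells=6)
print(len(mesh), "cells")   # never exceeds the specified maximum
```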

  9. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially from the initial community after EVO amendment and eventually converged to a new community state that clustered closely with the initial state; however, the composition and structure of the two states were significantly different. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contrary to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, few successional studies are available to support the different cases in the conceptual framework, and further well-replicated, explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.

  10. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
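
The CTM itself is easy to state: form the second-order difference plot of the series (successive first differences plotted against each other) and report the fraction of points falling within a radius r of the origin. A minimal sketch, with the radius and the test signals chosen arbitrarily for illustration:

```python
import numpy as np

def ctm(x, r):
    """Central tendency measure: fraction of points in the second-order
    difference plot (d[t+1] vs d[t]) lying within radius r of the origin."""
    d = np.diff(x)
    return float(np.mean(np.hypot(d[:-1], d[1:]) < r))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0 * np.pi, 2000)
smooth = np.sin(t)                       # deterministic signal
noise = rng.standard_normal(t.size)      # purely stochastic signal

print(f"CTM(sine)  = {ctm(smooth, 0.1):.3f}")   # near 1: differences cluster
print(f"CTM(noise) = {ctm(noise, 0.1):.3f}")    # small for white noise
```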

  12. Literature review of models on tire-pavement interaction noise

    NASA Astrophysics Data System (ADS)

    Li, Tan; Burdisso, Ricardo; Sandu, Corina

    2018-04-01

    Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. 
Finally, the modeling trend and future direction in this area are given.

  13. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    NASA Astrophysics Data System (ADS)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), including 628 deaths, across 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and analyze the steady-state conditions. The stochastic approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states through random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed with the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
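
To illustrate why the two formulations share a threshold while only the stochastic one yields an extinction probability, the sketch below runs a generic continuous-time SIR outbreak (Gillespie simulation) and compares the simulated minor-outbreak fraction with the branching-process value 1/R0. The rates, population size, and the 20-case take-off cutoff are invented for illustration and are not the MERS model of this record.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, gamma, N = 0.3, 0.2, 1000   # assumed illustrative rates and population
R0 = beta / gamma                 # threshold quantity, same in both models

def outbreak_dies_out(i0=1, t_max=1000.0):
    """One CTMC SIR run; True if the chain hits 0 infected before
    reaching 20 simultaneous cases (taken here as 'take-off')."""
    s, i, t = N - i0, i0, 0.0
    while 0 < i < 20 and t < t_max:
        rate_inf = beta * s * i / N
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1    # infection event
        else:
            i -= 1                 # recovery event
    return i == 0

p_ext = np.mean([outbreak_dies_out() for _ in range(2000)])
print(f"R0={R0:.2f}: simulated extinction prob {p_ext:.2f}, "
      f"branching-process approximation {1.0 / R0:.2f}")
```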

  14. Modeling of Magnetoelastic Nanostructures with a Fully-coupled Mechanical-Micromagnetic Model and Its Applications

    NASA Astrophysics Data System (ADS)

    Liang, Cheng-Yen

    Micromagnetic simulations of magnetoelastic nanostructures traditionally rely on either the Stoner-Wohlfarth model or the Landau-Lifshitz-Gilbert (LLG) model, assuming uniform strain (and/or uniform magnetization). While the uniform strain assumption is reasonable when modeling magnetoelastic thin films, this constant-strain approach becomes increasingly inaccurate for smaller in-plane nanoscale structures. In this dissertation, a fully coupled finite element micromagnetic method is developed. The method couples micromagnetics, elastodynamics, and piezoelectric effects; the dynamics of magnetization, the non-uniform strain distribution, and the electric fields are solved iteratively. This more sophisticated modeling technique is critical for guiding the design of nanoscale strain-mediated multiferroic elements such as those needed in multiferroic systems. We study magnetic property changes (e.g., hysteresis, coercive field, and spin states) due to strain effects in nanostructures. In addition, a multiferroic memory device is studied: electric-field-driven magnetization switching, achieved by applying voltage to patterned electrodes, is simulated in a nickel memory device. The deterministic control law for magnetization switching in a nanoring with an electric field applied to the patterned electrodes is investigated. Using the patterned electrodes, we show that the strain-induced anisotropy can be controlled, which changes the magnetization deterministically in the nanoring.

  15. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  16. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as season of the year, day of the week or time of the day. In deterministic radial distribution load flow studies the load is taken as constant, but load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and a deterministic radial load flow is solved with these values. The probabilistic solution is then reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
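
The Monte Carlo recipe above can be sketched with a toy single-feeder voltage-drop formula standing in for a full radial load-flow solver; the load means, standard deviations, and line impedances are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def two_bus_voltage(p, q, r=0.05, x=0.1, v_src=1.0):
    """Toy deterministic 'load flow': linearized voltage drop on a single
    feeder (per unit), standing in for a full radial distribution solver."""
    return v_src - (r * p + x * q) / v_src

# Probable realistic load: random draws around the mean demand (p.u.)
p = rng.normal(0.8, 0.10, 10000)
q = rng.normal(0.4, 0.05, 10000)
v = two_bus_voltage(p, q)   # one deterministic solve per sample

print(f"deterministic V (mean load):  {two_bus_voltage(0.8, 0.4):.4f} p.u.")
print(f"Monte Carlo mean V:           {v.mean():.4f} p.u.")
print(f"Monte Carlo 5th-percentile V: {np.percentile(v, 5):.4f} p.u.")
```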

  17. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. 
Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.
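
A toy version of such a mixed deterministic-stochastic forecast: within an event the eruption rate is assumed to decay exponentially (the deterministic physics), while the event duration is drawn from a stand-in catalog of past events (the stochastic part), giving a probability distribution of erupted volume. All constants and the duration distribution are invented for illustration, not Kīlauea values.

```python
import numpy as np

rng = np.random.default_rng(6)
q0, tau = 5.0, 8.0   # assumed initial eruption rate (m^3/s) and decay time (h)

# Stand-in for the catalog of ~600 previously observed event durations (h)
past_durations = rng.gamma(2.0, 6.0, 600)

def erupted_volume(duration_h):
    """Deterministic part: integrate q0 * exp(-t/tau) over the event."""
    return q0 * tau * 3600.0 * (1.0 - np.exp(-duration_h / tau))

# Stochastic part: sample durations from past observations, then propagate
volumes = erupted_volume(rng.choice(past_durations, size=10000))
lo, mid, hi = np.percentile(volumes, [5, 50, 95])
print(f"forecast volume: median {mid:.0f} m^3, 90% interval [{lo:.0f}, {hi:.0f}] m^3")
```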

  18. Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco

    2017-11-01

    Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveforms modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent progresses in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.

  19. Causality as an emergent macroscopic phenomenon: The Lee-Wick O(N) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinstein, Benjamin; O'Connell, Donal; Wise, Mark B.

    2009-05-15

    In quantum mechanics the deterministic property of classical physics is an emergent phenomenon appropriate only on macroscopic scales. Lee and Wick introduced Lorentz invariant quantum theories where causality is an emergent phenomenon appropriate for macroscopic time scales. In this paper we analyze a Lee-Wick version of the O(N) model. We argue that in the large-N limit this theory has a unitary and Lorentz invariant S matrix and is therefore free of paradoxes in scattering experiments. We discuss some of its acausal properties.

  20. Transfer of non-Gaussian quantum states of mechanical oscillator to light

    NASA Astrophysics Data System (ADS)

    Filip, Radim; Rakhubovsky, Andrey A.

    2015-11-01

    Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.

  1. Stochastic models for regulatory networks of the genetic toggle switch.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2006-05-30

    Bistability arises within a wide range of biological systems from the lambda phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
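
The Poisson-random-variable construction can be sketched for a symmetric toggle switch: take the production and degradation rates from the ODE model dx/dt = a/(1 + y^n) - x (and symmetrically for y), and replace each deterministic increment rate*dt with a Poisson(rate*dt) count, i.e. a tau-leap. The parameters are illustrative, not those of the paper's SOS or quorum-sensing networks.

```python
import numpy as np

rng = np.random.default_rng(5)
a, hill_n, dt = 50.0, 2.0, 0.01   # assumed production rate, Hill exponent, step

def toggle_tau_leap(steps=5000):
    """ODE toggle switch with Poisson-distributed reaction counts per step."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        prod_x = a / (1.0 + y ** hill_n)   # deterministic ODE rates
        prod_y = a / (1.0 + x ** hill_n)
        # replace rate*dt with Poisson(rate*dt) to inject molecular noise
        x += rng.poisson(prod_x * dt) - rng.poisson(x * dt)
        y += rng.poisson(prod_y * dt) - rng.poisson(y * dt)
        x, y = max(x, 0.0), max(y, 0.0)
    return x, y

# Independent runs settle at random into one of the two stable states,
# which is what produces bimodal population distributions
finals = [toggle_tau_leap() for _ in range(20)]
high_x = sum(1 for x, y in finals if x > y)
print(f"{high_x}/20 runs ended x-high, {20 - high_x}/20 ended y-high")
```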

  3. The Non-Signalling theorem in generalizations of Bell's theorem

    NASA Astrophysics Data System (ADS)

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. 
We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.

  4. Tularosa Basin Play Fairway Analysis Data and Models

    DOE Data Explorer

    Nash, Greg

    2017-07-11

    This submission includes raster datasets for each layer of evidence used for the weights-of-evidence analysis, as well as for the deterministic play fairway analysis (PFA). Several of the raster datasets represent heat, permeability, and groundwater data. Additionally, the final deterministic PFA model is provided along with a certainty model. All of these datasets are best used with an ArcGIS software package, specifically the Spatial Data Modeler.

  5. A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.

    PubMed

    Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S

    2017-09-01

    We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
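The branching-process extinction calculation described above can be sketched for the simplest single-type case (not the paper's multitype Galton-Watson model; the R0 values below are hypothetical): with a Poisson offspring distribution, the extinction probability is the smallest fixed point of the probability generating function.

```python
import math

def extinction_probability(r0, tol=1e-12, max_iter=10_000):
    """Extinction probability of a Galton-Watson process whose offspring
    count is Poisson(r0): the smallest fixed point of q = exp(r0*(q - 1)),
    found by iterating from q = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_next = math.exp(r0 * (q - 1.0))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

q_sub = extinction_probability(0.8)    # subcritical: extinction is certain
q_super = extinction_probability(2.0)  # supercritical: extinction probability < 1
```

For a multitype process, as used in the paper, the scalar fixed-point iteration generalizes to a vector of generating functions, one per host or vector type.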

  6. Performance Analysis of Stop-Skipping Scheduling Plans in Rail Transit under Time-Dependent Demand

    PubMed Central

    Cao, Zhichao; Yuan, Zhenzhou; Zhang, Silin

    2016-01-01

    Stop-skipping is a key method for alleviating congestion in rail transit, although such schedules are sometimes difficult to implement. Several mechanisms have been proposed and analyzed in the literature, but very few performance comparisons are available. This study incorporates the estimation of train-choice behavior into the model, accounting for passengers' perception. If a passenger's train path can be identified, this information can be used to improve the stop-skipping schedule service. The five proposed stop-skipping schedules are characterized by multiple performance measures, and quantitative analysis illustrates the different effects of the well-known deterministic and stochastic forms. The novel forms are formulated in the context of a single line rather than a transit network. We analyzed four deterministic forms based on the well-known A/B stop-skipping operating strategy. A stochastic form was innovatively modeled as a binary integer programming problem. We present a performance analysis of the proposed model to demonstrate that stop-skipping can feasibly be used to improve passenger service and enhance the flexibility of train operations under demand variations, together with an explicit parametric discussion. PMID:27420087

  7. Performance Analysis of Stop-Skipping Scheduling Plans in Rail Transit under Time-Dependent Demand.

    PubMed

    Cao, Zhichao; Yuan, Zhenzhou; Zhang, Silin

    2016-07-13

    Stop-skipping is a key method for alleviating congestion in rail transit, although such schedules are sometimes difficult to implement. Several mechanisms have been proposed and analyzed in the literature, but very few performance comparisons are available. This study incorporates the estimation of train-choice behavior into the model, accounting for passengers' perception. If a passenger's train path can be identified, this information can be used to improve the stop-skipping schedule service. The five proposed stop-skipping schedules are characterized by multiple performance measures, and quantitative analysis illustrates the different effects of the well-known deterministic and stochastic forms. The novel forms are formulated in the context of a single line rather than a transit network. We analyzed four deterministic forms based on the well-known A/B stop-skipping operating strategy. A stochastic form was innovatively modeled as a binary integer programming problem. We present a performance analysis of the proposed model to demonstrate that stop-skipping can feasibly be used to improve passenger service and enhance the flexibility of train operations under demand variations, together with an explicit parametric discussion.
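The binary decision structure behind such models can be illustrated with a toy instance, brute-force enumeration standing in for an integer-programming solver (all demand figures and cost weights below are hypothetical, not taken from the paper):

```python
from itertools import product

# Hypothetical boarding demand at each of six stations, and cost weights.
demand = [40, 10, 55, 8, 35, 12]   # passengers wanting to board at each station
WAIT_PENALTY = 300                 # extra wait (s) per passenger at a skipped stop
DWELL_SAVING = 45                  # dwell time (s) saved per skipped stop
THROUGH_PAX = 200                  # on-board passengers who benefit from each skip

def pattern_cost(pattern):
    """Total passenger-seconds of one 0/1 stop pattern (1 = stop, 0 = skip)."""
    cost = 0.0
    for stops_here, boarders in zip(pattern, demand):
        if not stops_here:
            cost += boarders * WAIT_PENALTY       # skipped boarders wait longer
            cost -= THROUGH_PAX * DWELL_SAVING    # riders on board save dwell time
    return cost

# Brute-force enumeration of all 2**6 binary stop plans.
best = min(product([0, 1], repeat=len(demand)), key=pattern_cost)
```

With six stations there are only 64 patterns; a realistic line or network instance requires a proper binary integer programming solver, as in the paper.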

  8. Three-dimensional reconstructions come to life--interactive 3D PDF animations in functional morphology.

    PubMed

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering.

  9. Three-Dimensional Reconstructions Come to Life – Interactive 3D PDF Animations in Functional Morphology

    PubMed Central

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering. PMID:25029366

  10. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and may differ by orders of magnitude. Sub-scale heterogeneity is introduced within every block and can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure, and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful application of the model was achieved for the well-known MADE site.
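A minimal sketch of the proposed structure, assuming three deterministic blocks with log-normal sub-scale heterogeneity whose expectation preserves each block mean (all conductivity values and variances below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical block-mean conductivities (m/s), e.g. from pumping tests,
# differing by orders of magnitude (the deterministic, large-scale structure).
block_means = [1e-3, 1e-5, 1e-4]
cells_per_block = 100
sigma2 = 0.5   # variance of log-conductivity within each block

# Log-normal sub-scale heterogeneity superimposed on each block:
# ln K ~ N(ln K_block - sigma2/2, sigma2), so that E[K] = K_block.
field = np.concatenate([
    np.exp(rng.normal(np.log(km) - sigma2 / 2.0, np.sqrt(sigma2), cells_per_block))
    for km in block_means
])

block_averages = field.reshape(len(block_means), cells_per_block).mean(axis=1)
```

A bimodal alternative would simply draw each cell's log-conductivity from a two-component mixture instead of a single normal distribution.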

  11. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes derived from the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 is below unity, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 exceeds unity, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
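The sharp R0 threshold of the deterministic model can be illustrated with a one-variable caricature (this is not the paper's actual bacterial-viral system; the equation and parameters are illustrative only):

```python
def pathogen_load(r0, b0=0.01, dt=0.01, steps=20_000):
    """Forward-Euler integration of a one-variable caricature
    dB/dt = (r0 - 1)*B - B**2: clearance when r0 < 1,
    persistence at B* = r0 - 1 when r0 > 1."""
    b = b0
    for _ in range(steps):
        b += dt * ((r0 - 1.0) * b - b * b)
    return b

cleared = pathogen_load(0.5)    # below the threshold: the load decays to zero
persists = pathogen_load(2.0)   # above the threshold: the load settles at r0 - 1
```

The stochastic counterpart, as the abstract notes, differs precisely in that even the supercritical case retains a positive extinction probability.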

  12. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    PubMed

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small and calcium concentrations so low that one extra molecule diffusing in by chance can make a nontrivial (percentage-wise) difference in concentration. These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
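The discrete stochastic events such a simulator must handle can be illustrated, in serial form, with Gillespie's stochastic simulation algorithm for a single calcium-buffer binding reaction (a sketch only; NTW's parallel Time Warp machinery is not represented, and the rate constants are hypothetical):

```python
import random

def gillespie_buffer(ca=50, buf=100, bound=0, kon=0.005, koff=0.1,
                     t_end=50.0, seed=1):
    """Gillespie stochastic simulation of the buffering reaction
    Ca + B <-> CaB.  Propensities: a1 = kon*Ca*B (binding) and
    a2 = koff*CaB (unbinding); rate constants are hypothetical."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a1 = kon * ca * buf
        a2 = koff * bound
        a0 = a1 + a2
        if a0 == 0.0:
            break
        dt = rng.expovariate(a0)      # exponential waiting time to next reaction
        if t + dt > t_end:
            break
        t += dt
        if rng.random() * a0 < a1:    # choose which reaction fires
            ca -= 1; buf -= 1; bound += 1
        else:
            ca += 1; buf += 1; bound -= 1
    return ca, buf, bound

ca_end, buf_end, bound_end = gillespie_buffer()
```

Each event changes molecule counts by whole units, which is exactly the discreteness that a deterministic concentration model averages away in small compartments.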

  13. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contrary to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support the different cases in the conceptual framework, and further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  14. Chaotic dynamics and control of deterministic ratchets.

    PubMed

    Family, Fereydoon; Larrondo, H A; Zarlenga, D G; Arizmendi, C M

    2005-11-30

    Deterministic ratchets, in the inertial and also in the overdamped limit, have very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, while changing some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes it especially interesting for technological applications such as biological particle separation. In this work we review different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean-velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder, and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters.
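A minimal sketch of a deterministic inertial rocking ratchet, using a commonly studied asymmetric potential (the parameter values below are illustrative, not taken from the paper):

```python
import math

def mean_velocity(a, omega=0.67, gamma=0.1, m=1.0, periods=200,
                  steps_per_period=400):
    """Drift velocity of a deterministic inertial ratchet
    m*x'' = -gamma*x' - V'(x) + a*cos(omega*t), with the asymmetric
    potential V(x) = [sin(2*pi*x) + 0.25*sin(4*pi*x)] / (2*pi),
    integrated with a semi-implicit Euler scheme."""
    def force(x):  # -V'(x)
        return -(math.cos(2.0 * math.pi * x) + 0.5 * math.cos(4.0 * math.pi * x))

    dt = (2.0 * math.pi / omega) / steps_per_period
    x = v = t = 0.0
    n = periods * steps_per_period
    for _ in range(n):
        v += dt * (-gamma * v + force(x) + a * math.cos(omega * t)) / m
        x += dt * v
        t += dt
    return x / (n * dt)   # mean velocity over the whole run

v_drift = mean_velocity(0.08)
```

Sweeping the drive amplitude `a` reveals the current reversals discussed in the abstract; near certain parameter values the mean velocity depends sensitively on initial conditions through the fractal basin structure.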

  15. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sébastien; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. For more complex models, such as the multi-strain dynamics describing the virus-host interaction in dengue fever, even recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.

  16. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580

  17. Chaotic Ising-like dynamics in traffic signals

    PubMed Central

    Suzuki, Hideyuki; Imura, Jun-ichi; Aihara, Kazuyuki

    2013-01-01

    The green and red lights of a traffic signal can be viewed as the up and down states of an Ising spin. Moreover, traffic signals in a city interact with each other, if they are controlled in a decentralised way. In this paper, a simple model of such interacting signals on a finite-size two-dimensional lattice is shown to have Ising-like dynamics that undergoes a ferromagnetic phase transition. Probabilistic behaviour of the model is realised by chaotic billiard dynamics that arises from coupled non-chaotic elements. This purely deterministic model is expected to serve as a starting point for considering statistical mechanics of traffic signals. PMID:23350034

  18. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of drafting seismic codes, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main strength of this approach is that it is presently the standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects and source characteristics, such as the duration of strong motion and directivity, that could significantly influence the expected motion at the site are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
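The Poissonian PSHA recipe summarized above reduces, for a single source, to integrating the exceedance probability over the magnitude distribution. A sketch with a toy attenuation law (every coefficient below is hypothetical, chosen only to make the structure explicit):

```python
import math

def normal_sf(z):
    """Standard-normal survival function via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def annual_exceedance(pga, nu=0.2, b=1.0, m_min=5.0, m_max=7.5,
                      dist_km=20.0, sigma_ln=0.6, n_bins=200):
    """Classical PSHA integral for one source: annual rate of PGA > pga.
    Truncated Gutenberg-Richter magnitudes, and a toy attenuation law
    ln(median PGA) = -1.0 + 1.2*m - 1.3*ln(dist) with lognormal scatter."""
    beta = b * math.log(10.0)
    norm = 1.0 - math.exp(-beta * (m_max - m_min))
    dm = (m_max - m_min) / n_bins
    lam = 0.0
    for i in range(n_bins):
        m = m_min + (i + 0.5) * dm
        pdf = beta * math.exp(-beta * (m - m_min)) / norm
        ln_median = -1.0 + 1.2 * m - 1.3 * math.log(dist_km)
        p_exceed = normal_sf((math.log(pga) - ln_median) / sigma_ln)
        lam += nu * pdf * p_exceed * dm
    return lam

lam_02g = annual_exceedance(0.2 * 9.81)        # PGA threshold in m/s^2
prob_50yr = 1.0 - math.exp(-lam_02g * 50.0)    # Poissonian 50-year probability
```

A deterministic assessment would instead fix a scenario (magnitude, source geometry, duration) and model the ground motion physically, which is exactly the trade-off the abstract discusses.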

  19. Insights into the deterministic skill of air quality ensembles ...

    EPA Pesticide Factsheets

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati

  20. The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)

    PubMed Central

    Smith, Philip L.; Ratcliff, Roger; McKoon, Gail

    2015-01-01

    Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
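The distinction at issue is easy to see in simulation: in a diffusion model, within-trial noise produces variable RTs and errors even with a fixed drift rate. A minimal Euler-Maruyama first-passage sketch (parameter values are illustrative, not taken from the article):

```python
import random

def diffusion_trial(drift, a=1.0, z=0.5, sigma=1.0, dt=0.001, rng=random):
    """One trial of a Wiener diffusion decision process: evidence starts at z,
    accumulates at rate `drift` plus within-trial Gaussian noise, and is
    absorbed at a (correct response) or 0 (error).  Returns (choice, RT)."""
    x, t = z, 0.0
    step_sd = sigma * dt ** 0.5
    while 0.0 < x < a:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (1 if x >= a else 0), t

rng = random.Random(7)
trials = [diffusion_trial(1.5, rng=rng) for _ in range(500)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Setting `sigma` near zero turns this into the near-deterministic growth model criticized in the comment: every trial with the same drift then produces essentially the same response and RT, and all variability must be attributed to across-trial drift variation.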

  1. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum amplitude and power spectral density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
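The two agreement metrics mentioned above can be sketched as follows (a toy comparison on synthetic waveforms; the actual PNNL metrics and signals are not reproduced here):

```python
import numpy as np

def agreement_metrics(sim, meas, fs):
    """Two simple metrics comparing a simulated and a measured waveform:
    relative difference of maximum amplitudes, and relative L2 difference
    of the periodogram power spectral densities."""
    amp = abs(sim.max() - meas.max()) / max(abs(sim.max()), abs(meas.max()))
    psd_sim = np.abs(np.fft.rfft(sim)) ** 2 / (fs * sim.size)
    psd_meas = np.abs(np.fft.rfft(meas)) ** 2 / (fs * meas.size)
    psd = np.linalg.norm(psd_sim - psd_meas) / np.linalg.norm(psd_sim)
    return amp, psd

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
simulated = np.sin(2 * np.pi * 50 * t) * np.exp(-4 * t)  # model echo
measured = 0.9 * simulated                               # "field" signal, 10% weaker
amp_diff, psd_diff = agreement_metrics(simulated, measured, fs=1000.0)
```

A uniform 10% amplitude deficit shows up as a 0.10 relative amplitude difference and a 0.19 relative PSD difference (power scales with the square of amplitude).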

  2. The Solar System Large Planets influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later, reconstructions of solar activity confirmed the grand minima Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. The expectation of a new Maunder-type period is based on the properties of solar variability: if solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a random solar variability can only explain the past. This investigation is based on the simple idea that if solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis to identify stationary periods, coincidence periods, and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus, and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071, and a model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.
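The period-identification step can be sketched with an ordinary periodogram standing in for the paper's wavelet spectrum analysis (synthetic data; the 83-year cycle below is a made-up test signal, not a claimed solar period):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual irradiance-like series containing one hidden cycle.
years = np.arange(1000, 2000)
true_period = 83.0   # yr, hypothetical
series = (1361.0 + 0.5 * np.sin(2.0 * np.pi * years / true_period)
          + 0.1 * rng.standard_normal(years.size))

# Periodogram of the demeaned series; the strongest non-DC bin gives the
# dominant stationary period.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)   # cycles per year
peak = power[1:].argmax() + 1                # skip the DC bin
dominant_period = 1.0 / freqs[peak]
```

Unlike this global Fourier view, a wavelet spectrum also localizes the oscillations in time, which is what allows phase relations between TSI, sunspots, and the barycenter orbit to be compared.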

  3. Mechanisms resulting in accreted ice roughness

    NASA Technical Reports Server (NTRS)

    Bilanin, Alan J.; Chua, Kiat

    1992-01-01

    Icing tests conducted on rotating cylinders in BF Goodrich's Icing Research Facility indicate that a regular, deterministic icing roughness pattern is typical. The roughness pattern is similar to kernels of corn on a cob for cylinders of diameter typical of a cob. An analysis is undertaken to determine the mechanisms which result in this roughness and to ascertain its surface scale and amplitude. Since roughness, and the resulting augmentation of the convective heat transfer coefficient, has been determined to most strongly control the accreted ice in ice prediction codes, the ability to predict a priori the location, amplitude, and surface scale of roughness would greatly augment the capabilities of current ice accretion models.

  4. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the adapted model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
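A minimal sketch of the two-stage idea, with random tanh features fitted by least squares standing in for the patent's scaled equation-error training (all data and model choices below are illustrative, not the patented method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic process: a known linear characteristic plus an unknown nonlinearity.
x = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y = (1.5 * x + 0.8 * np.sin(3.0 * x)).ravel()   # "measured" process output

# Stage 1 (deterministic): analytic model of the known characteristic.
analytic = (1.5 * x).ravel()
residual = y - analytic

# Stage 2: augment with a small neural network that captures the unknown
# residual characteristic (random tanh features, least-squares output weights).
n_hidden = 32
w1 = rng.normal(0.0, 2.0, (1, n_hidden))
b1 = rng.normal(0.0, 1.0, n_hidden)
z = np.tanh(x @ w1 + b1)
w2, *_ = np.linalg.lstsq(z, residual, rcond=None)

model_out = analytic + z @ w2
rmse = float(np.sqrt(np.mean((model_out - y) ** 2)))
rmse_analytic_only = float(np.sqrt(np.mean(residual ** 2)))

# The remaining residual spread could feed a Gaussian likelihood for the
# stochastic qualification stage described in the abstract.
sigma_hat = rmse
```

The point of the hybrid structure is visible in the two error figures: the neural augmentation removes most of the error the analytic model alone leaves behind.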

  5. Stochastic and deterministic model of microbial heat inactivation.

    PubMed

    Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2010-03-01

    Microbial inactivation is described by a model based on the changing survival probabilities of individual cells or spores. It is presented in a stochastic and discrete form for small groups, and as a continuous deterministic model for larger populations. If the underlying mortality probability function remains constant throughout the treatment, the model generates first-order ("log-linear") inactivation kinetics. Otherwise, it produces survival patterns that include Weibullian ("power-law") with upward or downward concavity, tailing with a residual survival level, complete elimination, flat "shoulder" with linear or curvilinear continuation, and sigmoid curves. In both forms, the same algorithm or model equation applies to isothermal and dynamic heat treatments alike. Constructing the model does not require assuming a kinetic order or knowledge of the inactivation mechanism. The general features of its underlying mortality probability function can be deduced from the experimental survival curve's shape. Once identified, the function's coefficients, the survival parameters, can be estimated directly from the experimental survival ratios by regression. The model is testable in principle but matching the estimated mortality or inactivation probabilities with those of the actual cells or spores can be a technical challenge. The model is not intended to replace current models to calculate sterility. Its main value, apart from connecting the various inactivation patterns to underlying probabilities at the cellular level, might be in simulating the irregular survival patterns of small groups of cells and spores. In principle, it can also be used for nonthermal methods of microbial inactivation and their combination with heat.
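The model's two faces, a continuous Weibullian survival curve for large populations and a discrete per-cell mortality simulation for small groups, can be sketched as follows. This is an illustrative sketch built from the abstract, not the authors' code; the hazard is derived from the Weibullian expression log10 S(t) = -b·t^n:

```python
import math
import random

def weibull_log_survival(t, b, n):
    """Deterministic (large-population) form: log10 S(t) = -b * t**n."""
    return -b * t ** n

def stochastic_survivors(n_cells, b, n, t_end, dt=0.01, seed=1):
    """Discrete stochastic form for a small group: each cell dies in
    (t, t + dt] with a hazard derived from the same mortality function."""
    rng = random.Random(seed)
    alive = n_cells
    t = 0.0
    while t < t_end and alive > 0:
        hazard = math.log(10.0) * b * n * max(t, 1e-9) ** (n - 1)
        p_die = 1.0 - math.exp(-hazard * dt)
        alive -= sum(1 for _ in range(alive) if rng.random() < p_die)
        t += dt
    return alive
```

With b = 1 and n = 1 the mortality function is constant and the model reduces to first-order ("log-linear") kinetics; the surviving fraction of a large simulated group then approaches 10^(-b·t^n).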

  6. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease condition for months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. The extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
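A minimal sketch of the deterministic/SDE pair described here, assuming a logistic tumor growth term and a constant chemotherapy/immune kill rate c (illustrative parameter names and values, not the paper's actual system):

```python
import math
import random

def simulate_tumor(r=0.3, K=1e6, c=0.1, sigma=0.0, T0=1e3,
                   dt=0.01, steps=20_000, seed=7):
    """Euler-Maruyama for dT = [r*T*(1 - T/K) - c*T] dt + sigma*T dW.
    sigma = 0 recovers the deterministic model."""
    rng = random.Random(seed)
    T = T0
    for _ in range(steps):
        drift = r * T * (1.0 - T / K) - c * T
        dW = rng.gauss(0.0, math.sqrt(dt))
        T = max(T + drift * dt + sigma * T * dW, 0.0)
    return T
```

The deterministic equilibrium is T* = K(1 - c/r); with sigma > 0 trajectories fluctuate around it, and in the CTMC analogue the tumor population can go extinct, which is what the extinction-probability estimate quantifies.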

  7. Impacts of Considering Climate Variability on Investment Decisions in Ethiopia

    NASA Astrophysics Data System (ADS)

    Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.

    2005-12-01

    In Ethiopia, climate extremes, inducing droughts or floods, are not unusual. Monitoring the effects of these extremes, and climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study involves adding climate variability to a deterministic, mean climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined, including a baseline simulation and three investment strategies: simulations of irrigation investment, roads investment, and a combination investment of both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented into the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, allowing decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.
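The transformation described, from a deterministic mean-climate projection to a stochastic ensemble via year-to-year climate-yield factors, can be sketched as follows (hypothetical growth rates and factor values; the study used nine fixed historical climate sequences rather than resampling):

```python
import random

def project_output(initial, growth, factors):
    """One trajectory: deterministic growth re-scaled each year by a
    climate-yield factor (1.0 = mean climate)."""
    out, path = initial, []
    for f in factors:
        out *= (1.0 + growth) * f
        path.append(out)
    return path

def ensemble(initial, growth, historical_factors, members=9, years=12, seed=2):
    """Ensemble of 12-year trajectories built from resampled historical
    factors, yielding a distribution of economic indicators."""
    rng = random.Random(seed)
    return [project_output(initial, growth,
                           [rng.choice(historical_factors) for _ in range(years)])
            for _ in range(members)]
```

With all factors equal to 1.0 the stochastic run collapses back to the deterministic mean-climate projection, which is why the deterministic model tends to overestimate welfare whenever bad climate years depress the factors below 1.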

  8. Probabilistic Design Methodology and its Application to the Design of an Umbilical Retract Mechanism

    NASA Technical Reports Server (NTRS)

    Onyebueke, Landon; Ameye, Olusesan

    2002-01-01

    A lot has been learned from past experience with structural and machine element failures. The understanding of failure modes and the application of an appropriate design analysis method can lead to improved structural and machine element safety as well as serviceability. To apply Probabilistic Design Methodology (PDM), all uncertainties are modeled as random variables with selected distribution types, means, and standard deviations. It is quite difficult to achieve a robust design without considering the randomness of the design parameters, which is the case in the Deterministic Design Approach. The US Navy has a fleet of submarine-launched ballistic missiles. An umbilical plug joins the missile to the submarine in order to provide electrical and cooling-water connections. As the missile leaves the submarine, an umbilical retract mechanism retracts the umbilical plug clear of the advancing missile after disengagement during launch and restrains the plug in the retracted position. The design of the current retract mechanism was based on the deterministic approach, which puts emphasis on the factor of safety. A new umbilical retract mechanism that is simpler in design, lighter in weight, more reliable, easier to adjust, and more cost-effective has become desirable, since this will increase the performance and efficiency of the system. This paper reports on a recent project performed at Tennessee State University for the US Navy that involved the application of PDM to the design of an umbilical retract mechanism, and demonstrates how the use of PDM led to the minimization of weight and cost and the maximization of reliability and performance.
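The core PDM computation, replacing a single factor of safety with probability distributions for load and strength, can be sketched with a Monte Carlo failure-probability estimate (illustrative normal distributions and parameter values, not the mechanism's actual data):

```python
import math
import random

def failure_probability(mu_s=400.0, sd_s=40.0, mu_l=250.0, sd_l=50.0,
                        n=200_000, seed=3):
    """Monte Carlo estimate of P(load > strength), with load and strength
    modeled as independent normal random variables."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(mu_l, sd_l) > rng.gauss(mu_s, sd_s))
    return fails / n
```

For normal load and strength the answer is available in closed form, Φ(-(μ_S - μ_L)/√(σ_S² + σ_L²)), which makes the Monte Carlo estimate easy to verify; real PDM applications use the same idea with non-normal distributions and many correlated variables.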

  9. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
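The common starting point of N4SID/PO-MOESP/CCA-style methods, building a block-Hankel matrix from the data and reading the system order off its singular values, can be sketched in 1D (a toy second-order 1D system for illustration, not the 2D CRSD Roesser case treated in the paper):

```python
import numpy as np

def block_hankel(y, rows):
    """Block-Hankel matrix of an output sequence: the constrained data
    structure shared by N4SID / PO-MOESP / CCA-type subspace methods."""
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

# Toy free response of a 2nd-order system x_{k+1} = A x_k, y_k = C x_k
A = np.array([[0.8, 0.2], [-0.1, 0.7]])
C = np.array([1.0, 0.5])
x = np.array([1.0, -1.0])
y = []
for _ in range(60):
    y.append(float(C @ x))
    x = A @ x

s = np.linalg.svd(block_hankel(y, rows=10), compute_uv=False)
# singular values beyond the true order (2) collapse to numerical zero
```

In the noiseless case the numerical rank of the Hankel matrix equals the system order; with deterministic-stochastic inputs the gap in the singular-value spectrum is used instead, and the paper's algorithm applies the same idea separately to the horizontal and vertical dynamics.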

  10. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces governed by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are treated as unknowns. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from maps of posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three theoretical, simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.

  11. DCBRP: a deterministic chain-based routing protocol for wireless sensor networks.

    PubMed

    Marhoon, Haydar Abdulameer; Mahmuddin, M; Nor, Shahrudin Awang

    2016-01-01

    Wireless sensor networks (WSNs) are a promising area for both researchers and industry because of their various applications. A sensor node expends the majority of its energy on communication with other nodes, so the routing protocol plays an important role in delivering network data while minimizing energy consumption as much as possible. The chain-based routing approach is superior to other approaches, but chain-based routing protocols still expend substantial energy in the chain head (CH) node and also suffer from bottleneck issues. This paper proposes a novel routing protocol, the Deterministic Chain-Based Routing Protocol (DCBRP), consisting of three mechanisms: a backbone construction mechanism, a chain head selection (CHS) mechanism, and a next-hop connection mechanism. The CHS mechanism is presented in detail and evaluated against CCM and TSCP using the ns-3 simulator. The results show that DCBRP outperforms both CCM and TSCP in terms of end-to-end delay (by 19.3% and 65%, respectively), CH energy consumption (by 18.3% and 23.0%), overall energy consumption (by 23.7% and 31.4%), network lifetime (by 22% and 38%), and the energy*delay metric (by 44.85% and 77.54%). DCBRP can be used in any deterministic node-deployment application, such as smart cities or smart agriculture, to reduce energy depletion and prolong the lifetime of WSNs.

  12. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    PubMed Central

    Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano

    2009-01-01

    Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345

  13. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, in which the states of a system are described by continuous real values. When we treat concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough for their changes to be regarded as continuous and described deterministically. For a system with small numbers of molecules, however, changes in their numbers are distinctly discrete and molecular noise becomes significant. In such cases, models based on deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression in circadian rhythm generation, which is known to involve small numbers of molecules. The system is therefore appropriately modeled by stochastic equations and analyzed by stochastic simulation methodologies. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and Gillespie's stochastic differential equation method, to the interlocked feedback model. To this end, we first reformulate the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, compare the results with the dynamics obtained from the original deterministic model, and characterize how the dynamics depend on the simulation methodology.
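Gillespie's direct method, one of the two simulation approaches applied here, can be sketched on a minimal birth-death reaction pair (a stand-in for the full interlocked feedback model, with illustrative rate constants):

```python
import random

def gillespie_birth_death(k_on=10.0, k_off=0.5, t_end=200.0, seed=42):
    """Gillespie's direct method for mRNA production and decay:
    0 -> M at rate k_on;  M -> 0 at rate k_off * M."""
    rng = random.Random(seed)
    t, m, samples = 0.0, 0, []
    while t < t_end:
        a_prod, a_deg = k_on, k_off * m
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)        # waiting time to next reaction
        if rng.random() * a_total < a_prod:  # choose which reaction fires
            m += 1
        else:
            m -= 1
        samples.append(m)
    return samples
```

The corresponding deterministic ODE, dm/dt = k_on - k_off·m, settles at k_on/k_off = 20; the stochastic trajectory fluctuates around that value with Poisson-like spread, which is exactly the molecular noise that matters at low copy numbers.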

  14. Understanding agent-based models of financial markets: A bottom-up approach based on order parameters and phase diagrams

    NASA Astrophysics Data System (ADS)

    Lye, Ribin; Tan, James Peng Lung; Cheong, Siew Ann

    2012-11-01

    We describe a bottom-up framework, based on the identification of appropriate order parameters and determination of phase diagrams, for understanding progressively refined agent-based models and simulations of financial markets. We illustrate this framework by starting with a deterministic toy model, whereby N independent traders buy and sell M stocks through an order book that acts as a clearing house. The price of a stock increases whenever it is bought and decreases whenever it is sold. Price changes are updated by the order book before the next transaction takes place. In this deterministic model, all traders base their buy decisions on a call utility function and their sell decisions on a put utility function. We then make the agent-based model more realistic by either having a fraction fb of traders buy a random stock on offer, or a fraction fs of traders sell a random stock in their portfolio. Based on our simulations, we find that it is possible to identify useful order parameters from the steady-state price distributions of all three models. Using these order parameters as a guide, we find three phases in the phase diagram of the deterministic model: (i) the dead market; (ii) the boom market; and (iii) the jammed market. Comparing the phase diagrams of the stochastic models against that of the deterministic model, we find that the primary effect of stochasticity is to eliminate the dead market phase.

  15. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between stochasticity and determinism. Analysis of correlations in cell cycle duration within lineages of mammalian cells revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is driven by the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implications for the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  16. Size-dependent piezoelectric energy-harvesting analysis of micro/nano bridges subjected to random ambient excitations

    NASA Astrophysics Data System (ADS)

    Radgolchin, Moeen; Moeenfard, Hamid

    2018-02-01

    The construction of self-powered micro-electro-mechanical units by converting the mechanical energy of the systems into electrical power has attracted much attention in recent years. While power harvesting from deterministic external excitations is state of the art, it has been much more difficult to derive mathematical models for scavenging electrical energy from ambient random vibrations, due to the stochastic nature of the excitations. The current research concerns analytical modeling of micro-bridge energy harvesters based on random vibration theory. Since classical elasticity fails to accurately predict the mechanical behavior of micro-structures, strain gradient theory is employed as a powerful tool to increase the accuracy of the random vibration modeling of the micro-harvester. Equations of motion of the system in the time domain are derived using the Lagrange approach. These are then utilized to determine the frequency and impulse responses of the structure. Assuming the energy harvester to be subjected to a combination of broadband and limited-band random support motion and transverse loading, closed-form expressions for mean, mean square, correlation and spectral density of the output power are derived. The suggested formulation is further exploited to investigate the effect of the different design parameters, including the geometric properties of the structure as well as the properties of the electrical circuit on the resulting power. Furthermore, the effect of length scale parameters on the harvested energy is investigated in detail. It is observed that the predictions of classical and even simple size-dependent theories (such as couple stress) appreciably differ from the findings of strain gradient theory on the basis of random vibration. 
This study presents a first model of micro-scale harvesters under stochastic excitations using a size-dependent approach, and can be considered a reliable foundation for future research on micro/nano harvesters subjected to non-deterministic loads.

  17. Toward quantum-like modeling of financial processes

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-05-01

    We apply methods of quantum mechanics for mathematical modeling of price dynamics at the financial market. We propose to describe behavioral financial factors (e.g., expectations of traders) by using the pilot wave (Bohmian) model of quantum mechanics. Trajectories of prices are determined by two financial potentials: classical-like V(q) ("hard" market conditions, e.g., natural resources) and quantum-like U(q) (behavioral market conditions). On the one hand, our Bohmian model is a quantum-like model for the financial market, cf. with works of W. Segal, I. E. Segal, E. Haven, E. W. Piotrowski, J. Sladkowski. On the other hand, (since Bohmian mechanics provides the possibility to describe individual price trajectories) it belongs to the domain of extended research on deterministic dynamics for financial assets (C.W.J. Granger, W.A. Barnett, A. J. Benhabib, W.A. Brock, C. Sayers, J. Y. Campbell, A. W. Lo, A. C. MacKinlay, A. Serletis, S. Kuchta, M. Frank, R. Gencay, T. Stengos, M. J. Hinich, D. Patterson, D. A. Hsieh, D. T. Caplan, J.A. Scheinkman, B. LeBaron and many others).
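The pilot-wave dynamics invoked here are usually written as a Schrödinger-type equation for the price wave together with a Newton-like trajectory equation. The following is the generic Bohmian formalism with symbols as commonly used (q is the price, m a "market mass" parameter), not necessarily the paper's exact notation:

```latex
% Pilot wave \psi(q,t) encodes behavioral factors; polar form \psi = R e^{iS/\hbar}.
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial q^{2}} + V(q)\,\psi,
\qquad
p = \frac{\partial S}{\partial q},
% Price trajectories feel both the "hard" potential V and the quantum-like
% (behavioral) potential U derived from the pilot wave:
m\,\frac{d^{2}q}{dt^{2}}
  = -\frac{\partial}{\partial q}\bigl(V(q) + U(q)\bigr),
\qquad
U(q) = -\frac{\hbar^{2}}{2m}\,\frac{1}{R}\,\frac{\partial^{2}R}{\partial q^{2}}.
```

The "hard" potential V(q) models classical market conditions, while U(q), computed from the pilot wave, injects the behavioral (market-psychology) component into otherwise deterministic price trajectories.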

  18. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop forecast consensus in the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2. This is to address the role of different physical mechanisms known to control error growth in ERP on the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecast SST from CFS, 11 low-resolution CFST126, and 11 high-resolution CFST382. Thus, we develop a multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort has led to the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 for the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than that of the best participating single model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is in the probabilistic forecast, owing to an increase in the ensemble spread, which reduces the error due to over-confident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all participating models falling in each category. CGMME thus adds value to both deterministic and probabilistic forecasts compared to the raw SMEs, and this improved skill likely flows from the larger spread and the improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered experimental operational ER forecasts of the ISM for the last few years.
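The ranking-based tercile forecast described above reduces to counting ensemble members per climatological category; a minimal sketch:

```python
def tercile_probabilities(ensemble, terciles):
    """Probabilistic categorical forecast: fraction of ensemble members
    below, within, and above the climatological tercile bounds (lo, hi)."""
    lo, hi = terciles
    n = len(ensemble)
    below = sum(1 for x in ensemble if x < lo) / n
    above = sum(1 for x in ensemble if x > hi) / n
    return below, 1.0 - below - above, above
```

The larger the multi-model ensemble, the smoother these category probabilities, which is one reason pooling SMEs into CGMME reduces the over-confidence of any single configuration.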

  19. A Mechanistic Model of the Actin Cycle

    PubMed Central

    Bindschadler, M.; Osborn, E. A.; Dewey, C. F.; McGrath, J. L.

    2004-01-01

    We have derived a broad, deterministic model of the steady-state actin cycle that includes its major regulatory mechanisms. Ours is the first model to solve the complete nucleotide profile within filaments, a feature that determines the dynamics and geometry of actin networks at the leading edges of motile cells, and one that has challenged investigators developing models to interpret steady-state experiments. We arrived at the nucleotide profile through analytic and numerical approaches that completely agree. Our model reproduces behaviors seen in numerous experiments with purified proteins, but allows a detailed inspection of the concentrations and fluxes that might exist in these experiments. These inspections provide new insight into the mechanisms that determine the rate of actin filament treadmilling. Specifically, we find that mechanisms for enhancing Pi release from the ADP·Pi intermediate on filaments, for increasing the off rate of ADP-bound subunits at pointed ends, and the multiple, simultaneous functions of profilin, make unique and essential contributions to increased treadmilling. In combination, these mechanisms have a theoretical capacity to increase treadmilling to levels limited only by the amount of available actin. This limitation arises because as the cycle becomes more dynamic, it tends toward the unpolymerized state. PMID:15111391

  20. The threshold of a stochastic delayed SIR epidemic model with temporary immunity

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Chen, Qingmei; Jiang, Daqing

    2016-05-01

    This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
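A minimal sketch of the kind of stochastic SIR dynamics studied here, using an Euler-Maruyama discretization with multiplicative noise on the transmission term (an illustrative form without the delay and temporary-immunity terms of the actual model):

```python
import math
import random

def sir_infected(beta, gamma, sigma=0.0, s0=0.99, i0=0.01,
                 dt=0.01, steps=5_000, seed=11):
    """Euler-Maruyama for an SIR model with noisy transmission:
    dI = (beta*S*I - gamma*I) dt + sigma*S*I dW.  Returns I at the end."""
    rng = random.Random(seed)
    s, i = s0, i0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        new_inf = beta * s * i * dt + sigma * s * i * dW
        s = max(s - new_inf, 0.0)
        i = max(i + new_inf - gamma * i * dt, 0.0)
    return i
```

In threshold results of this type, the white-noise intensity lowers the persistence threshold below the deterministic basic reproduction number R0 = beta/gamma, so sufficiently strong noise can drive the epidemic extinct even when R0 > 1, consistent with the paper's conclusion.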

  1. A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State

    PubMed Central

    2016-01-01

    Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden-state transition model, and we propose a heuristic algorithm for constructing such a model. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden-state transition model. The STAMN uses neuroactivity decay to realize short-term memory, connection weights between nodes to represent long-term memory, and presynaptic potentials together with a synchronized activation mechanism to perform identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare its performance with that of other methods. PMID:27891146

  2. Equation-based model for the stock market

    NASA Astrophysics Data System (ADS)

    Xavier, Paloma O. C.; Atman, A. P. F.; de Magalhães, A. R. Bosco

    2017-09-01

    We propose a stock market model which is investigated in the form of difference and differential equations whose variables correspond to the demand or supply of each agent and to the price. In the model, agents are driven by the behavior of their trust contact network as well as by fundamental analysis. By means of the deterministic version of the model, the connection between these driving mechanisms and the price is analyzed: imitation behavior promotes market instability, finitude of resources is associated with stock index stability, and high sensitivity to the fair price provokes price oscillations. Long-range correlations in the price time series and heavy-tailed distributions of returns are observed for the version of the model that considers different proposals for stochasticity of microeconomic and macroeconomic origin.
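The qualitative finding about sensitivity to the fair price can be illustrated with a stripped-down deterministic difference equation (a hypothetical one-agent form; the actual model couples many agents through a trust network): moderate sensitivity converges to the fair price, while high sensitivity produces growing oscillations.

```python
def simulate_market(p0=120.0, fair=100.0, sensitivity=0.2, steps=200):
    """Each step, aggregate excess demand is a fundamentalist pull toward
    the fair price (buy below it, sell above it); the price follows the
    excess demand."""
    p, history = p0, [p0]
    for _ in range(steps):
        excess_demand = sensitivity * (fair - p)
        p += excess_demand
        history.append(p)
    return history
```

For sensitivity below 2 the deviation from the fair price shrinks geometrically each step; above 2 it flips sign and grows, i.e. the price oscillates with increasing amplitude.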

  3. A queuing model for road traffic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections, as in the deterministic Godunov scheme.
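The deterministic Godunov scheme the queuing model is inspired by computes intercell fluxes from upstream demand and downstream supply; a sketch with the Greenshields density-flow fundamental diagram (illustrative, in normalized units):

```python
def greenshields_flow(rho, v_free=1.0, rho_max=1.0):
    """Density-flow fundamental diagram q(rho) = v_free*rho*(1 - rho/rho_max)."""
    return v_free * rho * (1.0 - rho / rho_max)

def godunov_flux(rho_up, rho_down, v_free=1.0, rho_max=1.0):
    """Godunov numerical flux between two road cells: the minimum of
    upstream demand and downstream supply (critical density rho_max/2
    for this concave diagram)."""
    rho_c = rho_max / 2.0
    demand = greenshields_flow(min(rho_up, rho_c), v_free, rho_max)
    supply = greenshields_flow(max(rho_down, rho_c), v_free, rho_max)
    return min(demand, supply)
```

A whole road is then a chain of cells whose densities are updated from these fluxes, which is the structure the stochastic M/G/c/c sections mirror when concatenated.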

  4. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  5. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  6. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
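The ODE-versus-CME comparison in this record can be illustrated on the simplest possible reaction scheme. The sketch below uses a toy linear birth-death process (production rate k, degradation rate g), not the paper's autoregulatory gene-expression model: it contrasts the deterministic fixed point of dn/dt = k - g*n with the mean of trajectories sampled from the CME via Gillespie's stochastic simulation algorithm, which agree for this linear scheme.

```python
import random

def ode_fixed_point(k, g):
    # dn/dt = k - g*n has the stable fixed point n* = k/g
    return k / g

def gillespie_birth_death(k, g, t_end, seed=0):
    """Sample one trajectory of the birth-death CME with Gillespie's SSA."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        a_birth, a_death = k, g * n
        a_total = a_birth + a_death
        # waiting time to the next reaction is exponential with rate a_total
        t += rng.expovariate(a_total)
        if t >= t_end:
            return n
        if rng.random() * a_total < a_birth:
            n += 1
        else:
            n -= 1

k, g = 50.0, 1.0
samples = [gillespie_birth_death(k, g, t_end=20.0, seed=s) for s in range(200)]
mean_n = sum(samples) / len(samples)
print(ode_fixed_point(k, g), round(mean_n, 1))
```

For nonlinear schemes like the autoregulatory circuit in the paper, this agreement between fixed point and stochastic mean is exactly what breaks down at low copy numbers.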

  7. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
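For orientation, the sketch below shows the single analysis step of the standard perturbed-observation EnKF that the mean-field construction builds on. A scalar state observed directly (H = 1) is an assumption made here for brevity; it is not the paper's setting, and the deterministic PDE/quadrature approximation is not reproduced.

```python
import random

def enkf_update(ensemble, y_obs, obs_std, seed=0):
    """One EnKF analysis step for a scalar state observed directly (H = 1).

    Each member is nudged toward a perturbed observation, with the
    Kalman gain estimated from the ensemble spread."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_std ** 2)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_std) - x) for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(0.0, 2.0) for _ in range(500)]  # forecast ensemble ~ N(0, 4)
posterior = enkf_update(prior, y_obs=3.0, obs_std=1.0)
post_mean = sum(posterior) / len(posterior)
print(round(post_mean, 2))
```

With prior variance 4 and observation variance 1, the Kalman gain is about 0.8, so the posterior mean should land near 0.8 times the observation.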

  8. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

Data-driven fault detection plays an important role in industrial systems due to its applicability when physical models are unknown. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
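The core JITL idea, building a small local model around each query instead of one global model, can be sketched as follows. This toy version (a nearest-neighbour local mean on a scalar process, with an invented threshold) only shows the mechanism; it is not the paper's JITL-DD method.

```python
def jitl_predict(history, query_x, k=5):
    """Just-in-time learning sketch: predict from a local model fitted to
    the k historical samples nearest the query, built fresh per query."""
    neighbours = sorted(history, key=lambda xy: abs(xy[0] - query_x))[:k]
    return sum(y for _, y in neighbours) / k

def detect_fault(history, query_x, measured_y, threshold):
    # flag a fault when the measurement deviates too far from the local model
    residual = abs(measured_y - jitl_predict(history, query_x))
    return residual > threshold

# nonlinear process y = x**2 sampled as "historical" normal-operation data
history = [(x / 10.0, (x / 10.0) ** 2) for x in range(-30, 31)]
print(detect_fault(history, 1.5, 2.25, threshold=0.5))   # normal reading
print(detect_fault(history, 1.5, 4.50, threshold=0.5))   # faulty reading
```

Because the local model is rebuilt at every query from the most recent data, the scheme adapts online without retraining a global model, which is the property the abstract highlights.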

  9. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  10. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
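The fitness-variance argument above can be made concrete with a two-line calculation. The sketch below assumes an illustrative linear damage-to-growth-rate mapping and invented numbers (not the paper's model): deterministic asymmetric partitioning creates growth-rate variance between the two daughters that symmetric deterministic partitioning lacks.

```python
def daughter_growth_rates(damage, asymmetry):
    """Partition a mother's damage between two daughters.

    asymmetry = 0.5 is symmetric; 1.0 sends all damage to the old daughter.
    Growth rate is assumed, purely for illustration, to fall linearly
    with inherited damage."""
    old = damage * asymmetry
    new = damage * (1.0 - asymmetry)
    rate = lambda d: max(0.0, 1.0 - d)
    return rate(old), rate(new)

def variance(pair):
    m = sum(pair) / 2.0
    return sum((x - m) ** 2 for x in pair) / 2.0

sym = daughter_growth_rates(0.8, 0.5)   # both daughters inherit 0.4
asym = daughter_growth_rates(0.8, 0.9)  # old daughter takes 0.72
print(variance(sym), variance(asym))
```

The asymmetric split yields nonzero variance in daughter growth rates, which is the extra fitness variance that, per the abstract, makes natural selection more efficient.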

  11. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  12. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

Several deterministic air quality models evaluate and predict frequently occurring pollutant concentrations well but are, in general, incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by extremes and by sustained average pollutant concentrations. Hence, a model should predict not only the 'extreme' but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at a traffic intersection, the Income Tax Office (ITO) in Delhi, where the traffic is heterogeneous, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance measure is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India.
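The statistical half of such a hybrid scheme rests on the log-logistic quantile function, which is what lets the model reach the extreme percentiles. The sketch below checks the quantile formula against synthetic samples; the scale and shape parameters are invented for illustration, and no GFLSM predictions or observed CO data are used.

```python
import random

def loglogistic_quantile(p, alpha, beta):
    """Quantile of the log-logistic distribution: Q(p) = alpha*(p/(1-p))**(1/beta).
    alpha is the median (scale), beta the shape; small beta means a heavy tail."""
    return alpha * (p / (1.0 - p)) ** (1.0 / beta)

def loglogistic_sample(rng, alpha, beta):
    # inverse-transform sampling through the quantile function
    return loglogistic_quantile(rng.random(), alpha, beta)

rng = random.Random(0)
alpha, beta = 4.0, 3.0   # illustrative CO scale (ppm) and shape
data = sorted(loglogistic_sample(rng, alpha, beta) for _ in range(100000))
empirical_p95 = data[int(0.95 * len(data))]
print(round(loglogistic_quantile(0.95, alpha, beta), 2), round(empirical_p95, 2))
```

In the hybrid setup, the deterministic line-source model supplies the middle of the distribution while a fit of this quantile function supplies the upper percentiles used for regulatory exceedance estimates.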

  13. Deterministic chaos in entangled eigenstates

    NASA Astrophysics Data System (ADS)

    Schlegel, K. G.; Förster, S.

    2008-05-01

We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential with entanglement and three energy eigenvalues, that the maximum Lyapunov exponents of a representative ensemble of trajectories develop at large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic oscillator and the Rabi oscillator.
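The maximum-Lyapunov-exponent diagnostic used in this record can be sketched with the standard two-trajectory (Benettin-style) estimator. The logistic map at r = 4 serves here only as a stand-in test system with a known exponent ln 2; the Bohmian trajectories of the paper are not reproduced.

```python
import math

def max_lyapunov(f, x0, steps=5000, d0=1e-9):
    """Benettin-style estimate of the largest Lyapunov exponent: evolve a
    companion trajectory and renormalise its separation back to d0 after
    every step, averaging the logarithmic growth factors."""
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:          # restart a numerically collapsed perturbation
            d = d0
            y = x + d0
        total += math.log(d / d0)
        y = x + (y - x) * (d0 / d)   # rescale separation back to d0
    return total / steps

logistic = lambda x: 4.0 * x * (1.0 - x)
lam = max_lyapunov(logistic, 0.2)
print(round(lam, 3))  # the analytic exponent for r = 4 is ln 2 = 0.693...
```

A narrow positive distribution of such estimates over an ensemble of initial conditions is exactly the signature of chaos the abstract describes.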

  14. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

A next generation reusable launch vehicle (RLV) will require thermally efficient and lightweight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature- and pressure-dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and the thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses under varying temperature and pressure. The results from the structural sizing study show that combined deterministic and reliability optimization techniques can obtain alternate, lighter designs than deterministic optimization methods alone.

  15. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models while maintaining their computational tractability. Comparative case studies with deterministic approaches are conducted in low and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
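The value of stochastic over deterministic commitment can be seen in a deliberately tiny two-stage example. Everything below (unit sizes, costs, scenarios) is invented for illustration; it is not the paper's multi-timescale SCUC/SCED model, and no progressive hedging is needed at this scale, since expected costs can be enumerated directly.

```python
def dispatch_cost(commit_slow, net_load):
    """Recourse cost for one scenario: a slow unit (50 MW at $20/MWh,
    $3500 commitment fee) must be committed in advance; a fast unit
    (60 MW at $80/MWh) and load shedding ($1000/MWh) provide recourse."""
    fee = 3500.0 if commit_slow else 0.0
    slow = min(net_load, 50.0) if commit_slow else 0.0
    fast = min(net_load - slow, 60.0)
    shed = net_load - slow - fast
    return fee + 20.0 * slow + 80.0 * fast + 1000.0 * shed

scenarios = [10.0, 40.0, 130.0]          # equally likely net loads after wind

# deterministic commitment: optimise against the mean forecast only
mean_load = sum(scenarios) / len(scenarios)
det = min((True, False), key=lambda c: dispatch_cost(c, mean_load))

# stochastic commitment: optimise the expected cost over all scenarios
expected = lambda c: sum(dispatch_cost(c, s) for s in scenarios) / len(scenarios)
sto = min((True, False), key=expected)

print(det, sto, round(expected(det)), round(expected(sto)))
```

The mean forecast hides the high-wind-shortfall scenario, so the deterministic plan skips the slow unit and pays heavily for shedding in expectation, while the stochastic plan commits it. This is the same mechanism, at toy scale, behind the reliability gains the case studies report.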

  16. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

such as the weighted sum method, weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum...different groups. They can be termed as deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most

  17. Dynamic analysis of a stochastic rumor propagation model

    NASA Astrophysics Data System (ADS)

    Jia, Fangju; Lv, Guangying

    2018-01-01

The rapid development of the Internet, and especially the emergence of social networks, has brought rumor propagation into a new media era. In this paper, we are concerned with a stochastic rumor propagation model. Sufficient conditions for extinction and for persistence in the mean of the rumor are established, and the threshold between the two regimes is obtained. Compared with the corresponding deterministic model, this threshold, which is affected by the white noise, is smaller than the basic reproduction number R0 of the deterministic system.
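The deterministic/stochastic comparison in this record can be illustrated with a minimal Euler-Maruyama integration. The scalar model below, dI = I(beta*(1-I) - gamma)dt + sigma*I dW, is a generic logistic-type caricature of rumor spreading (a simplification made here, not the paper's model); setting sigma = 0 recovers the deterministic ODE, whose endemic level is 1 - gamma/beta.

```python
import random

def simulate(beta, gamma, sigma, i0=0.01, dt=1e-3, steps=20000, seed=0):
    """Euler-Maruyama sketch of dI = I*(beta*(1-I) - gamma)*dt + sigma*I*dW.
    sigma = 0 reduces the scheme to forward Euler for the deterministic ODE."""
    rng = random.Random(seed)
    i = i0
    for _ in range(steps):
        drift = i * (beta * (1.0 - i) - gamma)
        i += drift * dt + sigma * i * rng.gauss(0.0, 1.0) * dt ** 0.5
        i = min(max(i, 0.0), 1.0)   # keep the fraction in [0, 1]
    return i

deterministic = simulate(beta=2.0, gamma=1.0, sigma=0.0)
noisy = simulate(beta=2.0, gamma=1.0, sigma=0.5)
print(round(deterministic, 3))  # ODE endemic level is 1 - gamma/beta = 0.5
```

In multiplicative-noise models of this kind the persistence threshold typically shifts below the deterministic R0, which is the qualitative effect the abstract reports.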

  18. Double diffusivity model under stochastic forcing

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Amit K.; Aifantis, Elias C.

    2017-05-01

The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model pans out as a system of coupled Fick-type diffusion equations to represent "regular" and "high" diffusivity paths with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-order diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics based formulation of diffusion theory. While the model has been remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals, whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale.
This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into account all boundary layer fluctuations. Our stochastic-ILG diffusion calculation confirms rapprochement between theory and experiment, thereby benchmarking a new generation of gradient-based continuum models that conform closer to real-life fluctuating environments.

  19. Transient Resetting: A Novel Mechanism for Synchrony and Its Biological Examples

    PubMed Central

    Li, Chunguang; Chen, Luonan; Aihara, Kazuyuki

    2006-01-01

    The study of synchronization in biological systems is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels. In this paper, by using simple dynamical systems theory, we present a novel mechanism, named transient resetting, for the synchronization of uncoupled biological oscillators with stimuli. This mechanism not only can unify and extend many existing results on (deterministic and stochastic) stimulus-induced synchrony, but also may actually play an important role in biological rhythms. We argue that transient resetting is a possible mechanism for the synchronization in many biological organisms, which might also be further used in the medical therapy of rhythmic disorders. Examples of the synchronization of neural and circadian oscillators as well as a chaotic neuron model are presented to verify our hypothesis. PMID:16933980

  20. Integrating urban recharge uncertainty into standard groundwater modeling practice: A case study on water main break predictions for the Barton Springs segment of the Edwards Aquifer, Austin, Texas

    NASA Astrophysics Data System (ADS)

    Sinner, K.; Teasley, R. L.

    2016-12-01

Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to determine stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling

  1. Bayesian deterministic decision making: a normative account of the operant matching law and heavy-tailed reward history dependency of choices.

    PubMed

    Saito, Hiroshi; Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato

    2014-01-01

The decision making behaviors of humans and animals adapt to, and then satisfy, an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been one landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn the choice probabilities of the respective alternatives and decide stochastically with those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy. To answer this question, we propose several deterministic Bayesian decision making models that hold certain incorrect beliefs about the environment. We claim that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward history dependency of choices and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.

  2. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  3. Stochastic dynamics of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2010-05-01

    We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the Kwa Zulu-Natal Province, South Africa, with possibly notable epidemiological implications.

  4. Ikeda-like chaos on a dynamically filtered supercontinuum light source

    NASA Astrophysics Data System (ADS)

    Chembo, Yanne K.; Jacquot, Maxime; Dudley, John M.; Larger, Laurent

    2016-08-01

    We demonstrate temporal chaos in a color-selection mechanism from the visible spectrum of a supercontinuum light source. The color-selection mechanism is governed by an acousto-optoelectronic nonlinear delayed-feedback scheme modeled by an Ikeda-like equation. Initially motivated by the design of a broad audience live demonstrator in the framework of the International Year of Light 2015, the setup also provides a different experimental tool to investigate the dynamical complexity of delayed-feedback dynamics. Deterministic hyperchaos is analyzed here from the experimental time series. A projection method identifies the delay parameter, for which the chaotic strange attractor originally evolving in an infinite-dimensional phase space can be revealed in a two-dimensional subspace.

  5. Mechanisms for the target patterns formation in a stochastic bistable excitable medium

    NASA Astrophysics Data System (ADS)

    Verisokin, Andrey Yu.; Verveyko, Darya V.; Postnov, Dmitry E.

    2018-04-01

We study the features of formation and evolution of the spatiotemporal chaotic regime generated by autonomous pacemakers in excitable deterministic and stochastic bistable active media, using the example of the FitzHugh-Nagumo biological neuron model under discrete medium conditions. Two possible mechanisms for the formation of autonomous pacemakers have been studied: (1) a temporal external force applied to a small region of the medium, and (2) the geometry of the solution region (the medium contains regions with Dirichlet or Neumann boundaries). We explore the conditions for the emergence of pacemakers inducing target patterns in a stochastic bistable excitable system and propose an algorithm for their analysis.

  6. Computational Thermomechanical Modelling of Early-Age Silicate Composites

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.; Kozák, V.

    2009-09-01

Strains and stresses in early-age silicate composites, which are widely used in civil engineering, especially in fresh concrete mixtures, are, in addition to those caused by exterior mechanical loads, the result of complicated non-deterministic physical and chemical processes. Their numerical prediction at the macro-scale level requires a non-trivial physical analysis based on thermodynamic principles, making use of micro-structural information from both theoretical and experimental research. The paper introduces a computational model, based on a nonlinear system of macroscopic equations of evolution supplied with certain effective material characteristics coming from the micro-scale analysis, and sketches the algorithm for its numerical analysis.

  7. Behavior dynamics: One perspective

    PubMed Central

    Marr, M. Jackson

    1992-01-01

    Behavior dynamics is a field devoted to analytic descriptions of behavior change. A principal source of both models and methods for these descriptions is found in physics. This approach is an extension of a long conceptual association between behavior analysis and physics. A theme common to both is the role of molar versus molecular events in description and prediction. Similarities and differences in how these events are treated are discussed. Two examples are presented that illustrate possible correspondence between mechanical and behavioral systems. The first demonstrates the use of a mechanical model to describe the molar properties of behavior under changing reinforcement conditions. The second, dealing with some features of concurrent schedules, focuses on the possible utility of nonlinear dynamical systems to the description of both molar and molecular behavioral events as the outcome of a deterministic, but chaotic, process. PMID:16812655

  8. Robust planning of dynamic wireless charging infrastructure for battery electric buses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhaocai; Song, Ziqi

Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system; and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.

  9. Robust planning of dynamic wireless charging infrastructure for battery electric buses

    DOE PAGES

    Liu, Zhaocai; Song, Ziqi

    2017-10-01

Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system; and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.

  10. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at the representation of meteorological uncertainty but neglects uncertainty of the hydrological model as well as its initial conditions. Complementary approaches use probabilistic data assimilation techniques to receive a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data, as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% for CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of rank histogram and produces a narrower ensemble spread.
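    The CRPS skill score cited above is computed from the continuous ranked probability score of each forecast. For an empirical ensemble the standard kernel (energy) form suffices; a minimal sketch, not tied to the River Main setup:

```python
def crps(ensemble, obs):
    """Continuous Ranked Probability Score of an ensemble forecast against a
    scalar observation, kernel form: CRPS = E|X - y| - 0.5 * E|X - X'|.
    Lower is better; for a one-member "ensemble" it reduces to the absolute
    error of a deterministic forecast."""
    m = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / m
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (2.0 * m * m)
    return term1 - term2
```

    A spread ensemble centered on the observation scores better than any of its individual members used deterministically.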

  11. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. 
Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333

  12. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserves in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impact. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. 
Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme of proposed dynamic reserve policies such that the market efficiency is improved, 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  13. Multidisciplinary insight into clonal expansion of HTLV-1-infected cells in adult T-cell leukemia via modeling by deterministic finite automata coupled with high-throughput sequencing.

    PubMed

    Farmanbar, Amir; Firouzi, Sanaz; Park, Sung-Joon; Nakai, Kenta; Uchimaru, Kaoru; Watanabe, Toshiki

    2017-01-31

    Clonal expansion of leukemic cells leads to onset of adult T-cell leukemia (ATL), an aggressive lymphoid malignancy with a very poor prognosis. Infection with human T-cell leukemia virus type-1 (HTLV-1) is the direct cause of ATL onset, and integration of HTLV-1 into the human genome is essential for clonal expansion of leukemic cells. Therefore, monitoring clonal expansion of HTLV-1-infected cells via isolation of integration sites assists in analyzing infected individuals from early infection to the final stage of ATL development. However, because of the complex nature of clonal expansion, the underlying mechanisms have yet to be clarified. Combining computational/mathematical modeling with experimental and clinical data of integration site-based clonality analysis derived from next generation sequencing technologies provides an appropriate strategy to achieve a better understanding of ATL development. As a comprehensive interdisciplinary project, this study combined three main aspects: wet laboratory experiments, in silico analysis and empirical modeling. We analyzed clinical samples from HTLV-1-infected individuals with a broad range of proviral loads using a high-throughput methodology that enables isolation of HTLV-1 integration sites and accurate measurement of the size of infected clones. We categorized clones into four size groups, "very small", "small", "big", and "very big", based on the patterns of clonal growth and observed clone sizes. We propose an empirical formal model based on deterministic finite state automata (DFA) analysis of real clinical samples to illustrate patterns of clonal expansion. Through the developed model, we have translated biological data of clonal expansion into the formal language of mathematics and represented the observed clonality data with DFA. Our data suggest that combining experimental data (absolute size of clones) with DFA can describe the clonality status of patients. 
This kind of modeling provides a basic understanding as well as a unique perspective for clarifying the mechanisms of clonal expansion in ATL.
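    As a toy illustration of representing clonality trajectories with a DFA, the sketch below uses the four size categories named in the abstract as automaton states and accepts trajectories that never shrink. The transition rule is our own illustrative assumption, not the authors' model:

```python
CATEGORIES = ["very_small", "small", "big", "very_big"]

def expansion_dfa(trajectory):
    """DFA over clone-size categories. The state is the last category seen,
    plus an implicit 'reject' sink; a trajectory is accepted iff the clone
    never moves to a smaller category (a simple 'clonal expansion' pattern)."""
    state = CATEGORIES[0]
    for cat in trajectory:
        if CATEGORIES.index(cat) >= CATEGORIES.index(state):
            state = cat      # stay or grow: deterministic transition
        else:
            return False     # shrink: jump to the reject sink
    return True
```

    Richer clinical patterns would simply use a larger transition table over the same state set.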

  14. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard assessment for nuclear facilities described in SSG-9, which shows the utility of the deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show part of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  15. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks.

    PubMed

    Laomettachit, Teeraphan; Chen, Katherine C; Baumann, William T; Tyson, John J

    2016-01-01

    To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.
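    The deterministic/stochastic contrast the authors exploit can be seen on the simplest "standard component", a synthesis-degradation step. The rate constants below are arbitrary illustrations, not parameters of the yeast model:

```python
import random

def deterministic_ss(k_syn, k_deg, x0, t_end, dt=0.001):
    """Euler integration of dx/dt = k_syn - k_deg*x (deterministic version)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += (k_syn - k_deg * x) * dt
    return x

def gillespie(k_syn, k_deg, x0, t_end, rng):
    """Exact stochastic simulation of the same birth-death process:
    molecules are made at rate k_syn and degraded at rate k_deg*x."""
    x, t = x0, 0.0
    while True:
        total = k_syn + k_deg * x
        t += rng.expovariate(total)
        if t > t_end:
            return x
        x += 1 if rng.random() < k_syn / total else -1
```

    The deterministic run settles at the mean k_syn/k_deg; the stochastic runs fluctuate around it, which is exactly the cell-to-cell variability a deterministic model cannot express.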

  16. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks

    PubMed Central

    Laomettachit, Teeraphan; Chen, Katherine C.; Baumann, William T.

    2016-01-01

    To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a “standard component” modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with “standard components” can capture in quantitative detail many essential properties of cell cycle control in budding yeast. PMID:27187804

  17. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
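    The "freeze the sequence, adjust the times" step can be sketched with a single uniform separation time; real separation criteria are pairwise and aircraft-dependent, so this is only an illustrative simplification:

```python
def schedule(sequence, ready, sep):
    """Assign runway times in the given (frozen) order: each aircraft goes at
    its ready time or `sep` seconds after its predecessor, whichever is later."""
    times, prev = {}, None
    for ac in sequence:
        t = ready[ac] if prev is None else max(ready[ac], times[prev] + sep)
        times[ac] = t
        prev = ac
    return times

def fcfs(ready, sep):
    """First-come-first-serve baseline: sequence aircraft by ready time."""
    return schedule(sorted(ready, key=ready.get), ready, sep)
```

    A deterministic optimizer would search over sequences before freezing one; FCFS simply takes the ready-time order.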

  18. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
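    The kind of cross-check performed in the case study can be miniaturized with a linear production-decay equation (a generic stand-in, not the auxin-transport model itself): the analytical approach yields a closed-form expression that the deterministic numerical simulation should reproduce.

```python
import math

def analytic(c0, p, d, t):
    """dc/dt = p - d*c  =>  c(t) = p/d + (c0 - p/d)*exp(-d*t)."""
    return p / d + (c0 - p / d) * math.exp(-d * t)

def euler(c0, p, d, t, n=200000):
    """Deterministic numerical simulation of the same equation (Euler steps)."""
    dt, c = t / n, c0
    for _ in range(n):
        c += (p - d * c) * dt
    return c
```

    A stochastic simulation of the same system (e.g. with Gillespie's algorithm) would additionally report the variability around this mean trajectory.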

  19. Observations, theoretical ideas and modeling of turbulent flows: Past, present and future

    NASA Technical Reports Server (NTRS)

    Chapman, G. T.; Tobak, M.

    1985-01-01

    Turbulence was analyzed in a historical context featuring the interactions between observations, theoretical ideas, and modeling within three successive movements. These are identified as predominantly statistical, structural and deterministic. The statistical movement is criticized for its failure to deal with the structural elements observed in turbulent flows. The structural movement is criticized for its failure to embody observed structural elements within a formal theory. The deterministic movement is described as having the potential of overcoming these deficiencies by allowing structural elements to exhibit chaotic behavior that is nevertheless embodied within a theory. Four major ideas of this movement are described: bifurcation theory, strange attractors, fractals, and the renormalization group. A framework for the future study of turbulent flows is proposed, based on the premises of the deterministic movement.

  20. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico

    2010-09-01

    Models on dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well known biological features, we found a rich dynamical structure including limit cycles, symmetry breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researchers. Based on these findings we will investigate the model structures further including seasonality.
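    Positive Lyapunov exponents, the chaos indicator mentioned above, are estimated numerically. As a self-contained illustration (using the logistic map rather than the dengue model), the largest exponent of a 1-D map is the long-run average of ln|f'(x)| along the orbit:

```python
import math

def lyapunov_logistic(r, x0=0.2, burn=1000, n=100000):
    """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
    time average of ln|f'(x)| = ln|r*(1-2x)| along the orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n
```

    For r = 4 the exact value is ln 2; in periodic regimes the estimate is negative, signalling regular rather than chaotic dynamics.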

  1. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguiar, Maira; Ballesteros, Sebastien; Stollenwerk, Nico

    Models on dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well known biological features, we found a rich dynamical structure including limit cycles, symmetry breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researchers. Based on these findings we will investigate the model structures further including seasonality.

  2. The threshold of a stochastic delayed SIR epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing

    2016-11-01

    In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
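    The role of the white-noise term can be sketched with an Euler-Maruyama discretization of a plain stochastic SIR model; unlike the paper's model this sketch has no delay or vaccination, and the parameters are illustrative only:

```python
import math, random

def sir_em(beta, gamma, sigma, s0, i0, t_end, dt, rng):
    """Euler-Maruyama integration of
    dS = -beta*S*I dt - sigma*S*I dW,
    dI =  beta*S*I dt + sigma*S*I dW - gamma*I dt.
    With sigma = 0 this collapses to a deterministic Euler scheme."""
    s, i = s0, i0
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))
        inc = beta * s * i * dt + sigma * s * i * dw   # stochastic incidence
        s, i = max(s - inc, 0.0), max(i + inc - gamma * i * dt, 0.0)
    return s, i
```

    For sigma = 0 the trajectory is independent of the random seed, recovering the deterministic model with R0 = beta/gamma.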

  3. The deterministic optical alignment of the HERMES spectrograph

    NASA Astrophysics Data System (ADS)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  4. On the deterministic and stochastic use of hydrologic models

    USGS Publications Warehouse

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
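    The reintroduction step can be sketched as a bootstrap: resample calibration residuals (with replacement) onto the deterministic series so the output becomes stochastic. This is a generic illustration of the idea, not the paper's exact procedure:

```python
import random

def stochastify(simulated, residuals, n_reps, rng):
    """Turn one deterministic simulated series into an ensemble by adding
    residuals resampled from the calibration period to each time step."""
    return [[y + rng.choice(residuals) for y in simulated]
            for _ in range(n_reps)]
```

    Zero-mean residuals leave the ensemble mean near the deterministic series while restoring the spread that trusting the simulation as deterministic would hide.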

  5. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
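    One simple member of this family of approximations replaces n_t by the solution of dn/dt = -n(n-1)/2 (time in coalescent units). The sketch below compares it with direct simulation of the lineage-count process; the specific choice of approximation is illustrative, not necessarily the one analyzed in the paper:

```python
import math, random

def n_approx(n0, t):
    """Solution of dn/dt = -n(n-1)/2 with n(0) = n0:
    n(t) = n0 / (n0 - (n0 - 1) * exp(-t/2))."""
    return n0 / (n0 - (n0 - 1) * math.exp(-t / 2.0))

def n_simulated(n0, t, rng):
    """Exact lineage count: while k lineages remain, the next coalescence
    occurs after an Exp(k*(k-1)/2) waiting time."""
    k, elapsed = n0, 0.0
    while k > 1:
        elapsed += rng.expovariate(k * (k - 1) / 2.0)
        if elapsed > t:
            break
        k -= 1
    return k
```

    For moderate sample sizes the deterministic curve tracks the simulated mean closely, which is what licenses substituting it into coalescent formulas.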

  6. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
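    The core of ordinal comparison with goal softening fits in a few lines: rank designs with a cheap, noisy evaluation and keep a softened "good enough" set, which with high probability contains truly good designs even under large noise. Everything below is an illustrative toy, not the thesis's blade-manufacturing model:

```python
import random

def ordinal_select(true_cost, noise_sd, s, rng):
    """Evaluate each design once with a noisy crude model and keep the s
    designs with the best (lowest) observed cost (goal softening).
    Ordinal comparison: the *order* of designs stabilizes much faster
    than accurate *value* estimates would."""
    observed = {d: c + rng.gauss(0.0, noise_sd) for d, c in true_cost.items()}
    return sorted(observed, key=observed.get)[:s]
```

    The selected subset is then the only part of the design space that receives expensive high-fidelity evaluation.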

  7. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R² measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
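    A two-parameter structure of this kind can be sketched directly: a hyperbolic value function plus a logistic choice rule whose temperature captures choice variability. Parameter names and values here are illustrative, not the paper's fitted estimates:

```python
import math

def hyperbolic_value(amount, delay, k):
    """Mazur's hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def p_choose_delayed(now_amt, later_amt, delay, k, beta):
    """Random utility choice rule: P(choose delayed) is logistic in the
    utility difference; beta (choice consistency) is the inverse of the
    choice-variability parameter."""
    dv = hyperbolic_value(later_amt, delay, k) - now_amt
    return 1.0 / (1.0 + math.exp(-beta * dv))
```

    Unlike indifference-point fitting, every observed choice contributes to the likelihood of (k, beta), and beta -> 0 corresponds to purely random responding.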

  8. Micro-level dynamics of the online information propagation: A user behavior model based on noisy spiking neurons.

    PubMed

    Lymperopoulos, Ilias N; Ioannou, George D

    2016-10-01

    We develop and validate a model of the micro-level dynamics underlying the formation of macro-level information propagation patterns in online social networks. In particular, we address the dynamics at the level of the mechanism regulating a user's participation in an online information propagation process. We demonstrate that this mechanism can be realistically described by the dynamics of noisy spiking neurons driven by endogenous and exogenous, deterministic and stochastic stimuli representing the influence modulating one's intention to be an information spreader. Depending on the dynamically changing influence characteristics, time-varying propagation patterns emerge reflecting the temporal structure, strength, and signal-to-noise ratio characteristics of the stimulation driving the online users' information sharing activity. The proposed model constitutes an overarching, novel, and flexible approach to the modeling of the micro-level mechanisms whereby information propagates in online social networks. As such, it can be used for a comprehensive understanding of the online transmission of information, a process integral to the sociocultural evolution of modern societies. The proposed model is highly adaptable and suitable for the study of the propagation patterns of behavior, opinions, and innovations among others. Copyright © 2016 Elsevier Ltd. All rights reserved.
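    A noisy spiking neuron driven by deterministic and stochastic stimuli can be sketched with a leaky integrate-and-fire unit; the threshold, time constant, and drive values below are generic illustrations, not the paper's calibrated model:

```python
import math, random

def lif_spike_times(drive, sigma, t_end=1.0, dt=0.0005, tau=0.02, v_th=1.0, rng=None):
    """Leaky integrate-and-fire neuron: dv = (-v/tau + drive) dt + sigma dW.
    Crossing v_th emits a spike (here standing in for 'the user shares the
    information') and resets the membrane potential."""
    rng = rng or random.Random(0)
    v, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        v += (-v / tau + drive) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if v >= v_th:
            spikes.append(step * dt)
            v = 0.0
    return spikes
```

    With subthreshold deterministic drive the unit is silent, but sufficient noise produces irregular spiking: the stochastic stimulus alone can trigger participation.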

  9. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  10. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model that can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
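    The idea of determinism with a jump stochastic parameter can be sketched as a circle map whose frequency parameter occasionally jumps at random times. The map form and parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import math
import random

def circle_map_with_jumps(n, omega=0.2, k=0.9, jump_prob=0.01,
                          jump_scale=0.05, seed=0):
    """Circle map theta_{n+1} = theta_n + omega + (K/2pi) sin(2pi theta_n)
    (mod 1), where the parameter omega makes rare stochastic jumps: a sketch
    of deterministic dynamics wandering within a one-parameter family."""
    rng = random.Random(seed)
    theta, orbit = 0.1, []
    for _ in range(n):
        theta = (theta + omega
                 + k / (2 * math.pi) * math.sin(2 * math.pi * theta)) % 1.0
        if rng.random() < jump_prob:   # rare jump to a nearby deterministic system
            omega += jump_scale * rng.gauss(0.0, 1.0)
        orbit.append(theta)
    return orbit
```

Between jumps the orbit is fully deterministic; the jump process moves the system to a neighbouring member of the parameter family, as in the abstract's description.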

  11. Noise-induced extinction for a ratio-dependent predator-prey model with strong Allee effect in prey

    NASA Astrophysics Data System (ADS)

    Mandal, Partha Sarathi

    2018-04-01

    In this paper, we study a stochastically forced ratio-dependent predator-prey model with a strong Allee effect in the prey population. In the deterministic case, we show that the model exhibits either a stable interior equilibrium point or a limit cycle corresponding to the co-existence of both species. We investigate a probabilistic mechanism of noise-induced extinction in the zone of the stable interior equilibrium point. Computational methods based on the stochastic sensitivity function technique are applied to analyse the dispersion of random states near the stable interior equilibrium point. This method allows us to construct a confidence domain and estimate the threshold value of the noise intensity for a transition from coexistence to extinction.

  12. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations occurring while the machine runs in steady-state conditions are considered; run-up or run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to extract the angle-deterministic signal directly from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case, not only can the angle-deterministic part be extracted, but the main sources of excitation (i.e., motor shaft imbalance, epicycloidal gear meshing, and air pressure forces) affecting the user's hand during operation can also be separated.

  13. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense of the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, homogenization was found to be applicable only to panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel, so a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near the constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. 
    EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling the number of designs required to find the optimum was reduced drastically while the accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. To estimate the error in the probability-of-failure estimate, the bootstrap method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
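    The Monte Carlo probability-of-failure estimation with bootstrap error assessment described above can be sketched on a toy limit state. The capacity and load distributions below are hypothetical stand-ins, not the ITPS failure modes.

```python
import random

def failure_probability(n=5000, seed=2):
    """Crude Monte Carlo estimate of P(capacity < load) for a toy limit state,
    with a bootstrap standard error for the estimate (illustrative workflow,
    hypothetical distributions)."""
    rng = random.Random(seed)
    # failure indicator: hypothetical Gaussian capacity vs. Gaussian load
    fails = [1 if rng.gauss(10, 2) < rng.gauss(6, 2) else 0 for _ in range(n)]
    p_hat = sum(fails) / n
    # bootstrap: resample the indicator sample to gauge estimator error
    boots = []
    for _ in range(200):
        sample = [fails[rng.randrange(n)] for _ in range(n)]
        boots.append(sum(sample) / n)
    mean_b = sum(boots) / len(boots)
    se = (sum((b - mean_b) ** 2 for b in boots) / (len(boots) - 1)) ** 0.5
    return p_hat, se
```

In practice the indicator evaluation would call a surrogate model of the structural response rather than drawing directly from closed-form distributions, which is where the surrogate-based cost reduction enters.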

  14. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account the major propagation-controlling factors, including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both the channel delay spread and angular spread for tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
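    For a tapped delay line channel of the kind mentioned above, the RMS delay spread is the power-weighted standard deviation of the tap delays. A small sketch with hypothetical tap values (not derived from the paper's tunnel model):

```python
def rms_delay_spread(delays_ns, powers):
    """RMS delay spread of a tapped-delay-line channel: the power-weighted
    standard deviation of the tap delays (generic helper)."""
    total = sum(powers)
    mean = sum(d * p for d, p in zip(delays_ns, powers)) / total
    var = sum(((d - mean) ** 2) * p for d, p in zip(delays_ns, powers)) / total
    return var ** 0.5

# hypothetical multipath profile: strong direct tap plus weaker delayed taps
taps = [(0.0, 1.0), (10.0, 0.3), (25.0, 0.1)]
spread = rms_delay_spread([d for d, _ in taps], [p for _, p in taps])
```

A deterministic model like the one in the paper would supply the tap delays and powers from the tunnel geometry and wall properties instead of from assumed values.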

  15. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments

    PubMed Central

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2018-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account the major propagation-controlling factors, including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both the channel delay spread and angular spread for tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments. PMID:29457801

  16. Neo-deterministic seismic hazard assessment in North Africa

    NASA Astrophysics Data System (ADS)

    Mourabit, T.; Abou Elenean, K. M.; Ayadi, A.; Benouar, D.; Ben Suleman, A.; Bezzeghoud, M.; Cheddadi, A.; Chourak, M.; ElGabry, M. N.; Harbi, A.; Hfaiedh, M.; Hussein, H. M.; Kacem, J.; Ksentini, A.; Jabour, N.; Magrin, A.; Maouche, S.; Meghraoui, M.; Ousadou, F.; Panza, G. F.; Peresan, A.; Romdhane, N.; Vaccari, F.; Zuccolo, E.

    2014-04-01

    North Africa is one of the most earthquake-prone areas of the Mediterranean. Many devastating earthquakes, some of them tsunami-triggering, have inflicted heavy loss of life and considerable economic damage on the region. In order to mitigate the destructive impact of earthquakes, the regional seismic hazard in North Africa is assessed using the neo-deterministic, multi-scenario methodology (NDSHA), based on the computation of synthetic seismograms with the modal summation technique at a regular grid of 0.2° × 0.2°. This is the first study aimed at producing NDSHA maps of North Africa covering five countries: Morocco, Algeria, Tunisia, Libya, and Egypt. The key input data for the NDSHA algorithm are earthquake sources, seismotectonic zonation, and structural models. In preparing the input data, it was essential to go beyond national borders and adopt a coherent strategy over the whole area. Thanks to the collaborative efforts of the teams involved, it has been possible to properly merge the earthquake catalogues available for each country and to define with homogeneous criteria the seismogenic zones, the characteristic focal mechanism associated with each of them, and the structural models used to model wave propagation from the sources to the sites. As a result, reliable seismic hazard maps are produced in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration.

  17. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is the randomness of the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ-frequency-range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models, such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798

  18. Bell Nonlocality, Signal Locality and Unpredictability (or What Bohr Could Have Told Einstein at Solvay Had He Known About Bell Experiments)

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.; Wiseman, Howard M.

    2012-10-01

    The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.

  19. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  20. Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations

    DTIC Science & Technology

    2015-02-17

    of diminishing returns from vaccination will generally take place at smaller vaccine allocations V compared to the deterministic model. Optimal...take place and small r0 values where it does not is illustrated in Fig. 4C. As r0 is decreased, the region between the two instances of switching...approximately distribute vaccine in proportion to population size. For large r0 (r0 ≳ 2.9), two switches take place. In the deterministic optimal solution, a

  1. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and the deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting, and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable to daily river discharge forecast-error data using the GARCH family of time series models. We concentrated on verifying whether a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose, we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; we then fitted GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again compared the models' performance.
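    The GARCH(1,1) conditional-variance recursion underlying such error models can be written down directly. The coefficients below are illustrative assumptions, not values fitted to the Hron or Morava data.

```python
def garch11_variance(errors, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional variance recursion of a GARCH(1,1) model,
    sigma2_t = omega + alpha * e_{t-1}**2 + beta * sigma2_{t-1},
    applied to a forecast-error series (coefficients hypothetical)."""
    # start the recursion at the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1 - alpha - beta)]
    for e in errors[:-1]:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2
```

The recursion makes the heteroscedasticity explicit: a large forecast error at time t-1 raises the predicted error variance at time t, which is what the paper tests for in the model-error series.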

  2. Size, time, and asynchrony matter: the species-area relationship for parasites of freshwater fishes.

    PubMed

    Zelmer, Derek A

    2014-10-01

    The tendency to attribute species-area relationships to "island biogeography" effectively bypasses the examination of specific mechanisms that act to structure parasite communities. Positive covariation between fish size and infrapopulation richness should not be examined within the typical extinction-based paradigm, but rather should be addressed from the standpoint of differences in colonization potential among individual hosts. Although most mechanisms producing the aforementioned pattern constitute some variation of passive sampling, the deterministic aspects of the accumulation of parasite individuals by fish hosts makes untenable the suggestion that infracommunities of freshwater fishes are stochastic assemblages. At the component community level, application of extinction-dependent mechanisms might be appropriate, given sufficient time for colonization, but these structuring forces likely act indirectly through their effects on the host community to increase the probability of parasite persistence. At all levels, the passive sampling hypothesis is a relevant null model. The tendency for mechanisms that produce species-area relationships to produce nested subset patterns means that for most systems, the passive sampling hypothesis can be addressed through the application of appropriate null models of nested subset structure.

  3. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize the available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. 
These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.

  4. From Weakly Chaotic Dynamics to Deterministic Subdiffusion via Copula Modeling

    NASA Astrophysics Data System (ADS)

    Nazé, Pierre

    2018-03-01

    Copula modeling consists in finding a probabilistic distribution, called a copula, whose coupling with the marginal distributions of a set of random variables produces their joint distribution. The present work aims to use this technique to connect the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion. More precisely, we decompose the jumps distribution of the Geisel-Thomae map into a bivariate one and determine the marginal and copula distributions, respectively, by infinite ergodic theory and statistical inference techniques. We therefore verify that the characteristic tail distribution of subdiffusion is an extreme value copula coupling Mittag-Leffler distributions. We also present a method to calculate the exact copula and joint distributions in the case where the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion are already known. Numerical simulations and consistency with the dynamical aspects of the map support our results.

  5. Necessary conditions for the emergence of homochirality via autocatalytic self-replication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stich, Michael; Ribó, Josep M.; Blackmond, Donna G., E-mail: blackmond@scripps.edu

    We analyze a recent proposal for spontaneous mirror symmetry breaking based on the coupling of first-order enantioselective autocatalysis and direct production of the enantiomers that invokes a critical role for intrinsic reaction noise. For isolated systems, the racemic state is the unique stable outcome for both stochastic and deterministic dynamics when the system is in compliance with the constraints dictated by the thermodynamics of chemical reaction processes. In open systems, the racemic outcome also results for both stochastic and deterministic dynamics when driving the autocatalysis unidirectionally by external reagents. Nonracemic states can result in the latter only if the reverse reactions are strictly zero: these are kinetically controlled outcomes for small populations and volumes, and can be simulated by stochastic dynamics. However, the stability of the thermodynamic limit proves that the racemic outcome is the unique stable state for strictly irreversible externally driven autocatalysis. These findings contradict the suggestion that the inhibition requirement of the Frank autocatalytic model for the emergence of homochirality may be relaxed in a noise-induced mechanism.

  6. Simulation of the M13 life cycle I: Assembly of a genetically-structured deterministic chemical kinetic simulation.

    PubMed

    Smeal, Steven W; Schmitt, Margaret A; Pereira, Ronnie Rodrigues; Prasad, Ashok; Fisk, John D

    2017-01-01

    To expand the quantitative, systems level understanding and foster the expansion of the biotechnological applications of the filamentous bacteriophage M13, we have unified the accumulated quantitative information on M13 biology into a genetically-structured, experimentally-based computational simulation of the entire phage life cycle. The deterministic chemical kinetic simulation explicitly includes the molecular details of DNA replication, mRNA transcription, protein translation and particle assembly, as well as the competing protein-protein and protein-nucleic acid interactions that control the timing and extent of phage production. The simulation reproduces the holistic behavior of M13, closely matching experimentally reported values of the intracellular levels of phage species and the timing of events in the M13 life cycle. The computational model provides a quantitative description of phage biology, highlights gaps in the present understanding of M13, and offers a framework for exploring alternative mechanisms of regulation in the context of the complete M13 life cycle. Copyright © 2016 Elsevier Inc. All rights reserved.
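    A deterministic chemical kinetic simulation of the kind described reduces to integrating rate equations for the molecular species. A toy transcription/translation example (hypothetical rate constants, not the M13 parameters) illustrates the approach.

```python
def simulate_expression(t_end=50.0, dt=0.01, k_tx=1.0, k_tl=2.0,
                        d_m=0.2, d_p=0.1):
    """Forward-Euler integration of a minimal transcription/translation
    kinetic model: dm/dt = k_tx - d_m*m, dp/dt = k_tl*m - d_p*p.
    A toy deterministic kinetic simulation in the spirit of the M13 model,
    with hypothetical rates."""
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        dm = k_tx - d_m * m          # transcription minus mRNA decay
        dp = k_tl * m - d_p * p      # translation minus protein decay
        m += dt * dm
        p += dt * dp
    return m, p
```

The full M13 simulation couples many such equations, adding the protein-protein and protein-nucleic acid interactions that set the timing and extent of phage production.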

  7. From qualitative data to quantitative models: analysis of the phage shock protein stress response in Escherichia coli

    PubMed Central

    2011-01-01

    Background Bacteria have evolved a rich set of mechanisms for sensing and adapting to adverse conditions in their environment. These are crucial for their survival, which requires them to react to extracellular stresses such as heat shock, ethanol treatment or phage infection. Here we focus on studying the phage shock protein (Psp) stress response in Escherichia coli induced by a phage infection or other damage to the bacterial membrane. This system has not yet been theoretically modelled or analysed in silico. Results We develop a model of the Psp response system, and illustrate how such models can be constructed and analyzed in light of available sparse and qualitative information in order to generate novel biological hypotheses about their dynamical behaviour. We analyze this model using tools from Petri-net theory and study its dynamical range that is consistent with currently available knowledge by conditioning model parameters on the available data in an approximate Bayesian computation (ABC) framework. Within this ABC approach we analyze stochastic and deterministic dynamics. This analysis allows us to identify different types of behaviour and these mechanistic insights can in turn be used to design new, more detailed and time-resolved experiments. Conclusions We have developed the first mechanistic model of the Psp response in E. coli. This model allows us to predict the possible qualitative stochastic and deterministic dynamic behaviours of key molecular players in the stress response. Our inferential approach can be applied to stress response and signalling systems more generally: in the ABC framework we can condition mathematical models on qualitative data in order to delimit e.g. parameter ranges or the qualitative system dynamics in light of available end-point or qualitative information. PMID:21569396

  8. Nonlinear modeling of chaotic time series: Theory and applications

    NASA Astrophysics Data System (ADS)

    Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.

  9. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    PubMed

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
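    The two frameworks compared in the record can be contrasted on a birth-death process: the deterministic ODE settles at x* = k_birth/k_death, while a stochastic (Gillespie) realisation of the CME fluctuates around it. The sketch below is a generic Python analogue with hypothetical rates, not the authors' MATLAB code.

```python
import random

def gillespie_birth_death(k_birth=10.0, k_death=0.5, t_end=20.0, seed=3):
    """Gillespie stochastic simulation of a birth-death process
    (0 -> X at rate k_birth, X -> 0 at rate k_death*x). The deterministic
    ODE counterpart, dx/dt = k_birth - k_death*x, settles at
    x* = k_birth/k_death (= 20 for these hypothetical rates)."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        a1, a2 = k_birth, k_death * x    # propensities of the two reactions
        a0 = a1 + a2
        t += rng.expovariate(a0)         # exponential waiting time to next event
        if rng.random() * a0 < a1:
            x += 1                       # birth
        else:
            x -= 1                       # death
    return x
```

Averaging many such realisations recovers the deterministic mean, which is the basic correspondence the MATLAB functions in the paper let users explore.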

  10. Modeling disease transmission near eradication: An equation free approach

    NASA Astrophysics Data System (ADS)

    Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan

    2015-01-01

    Although disease transmission in the near-eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long time intervals to compute these deterministic quantities, we create a data-driven, deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near-eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield more than a twofold speedup compared to direct simulation and, owing to its lower dimensionality, could be beneficial for systems-level tasks such as designing eradication or monitoring campaigns.
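    The coarse projective integration mentioned above alternates short bursts of fine simulation with large extrapolation jumps. A generic one-variable sketch (not the compartmental polio model) shows the idea:

```python
def projective_euler(f, x0, dt, inner_steps, jump, n_outer):
    """Coarse projective integration sketch: take a few fine Euler steps
    with right-hand side f, estimate the coarse time derivative from the
    last two states, then extrapolate ('project') forward by a large jump."""
    x = x0
    for _ in range(n_outer):
        prev = x
        for _ in range(inner_steps):    # short burst of fine simulation
            prev, x = x, x + dt * f(x)
        slope = (x - prev) / dt         # estimated coarse derivative
        x += jump * slope               # projective (extrapolation) step
    return x
```

In the EF framework the "fine simulation" would be the stochastic disease model restricted and lifted through the chosen coarse variables; here a simple ODE stands in for it.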

  11. A deterministic model of electron transport for electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Over the last decades, significant improvements in the spatial resolution of electron probe microanalysis (EPMA) have been obtained through instrumental enhancements. In contrast, the quantification procedures have remained essentially unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials, using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum-entropy closure and present first numerical results for three different test cases (homogeneous, thin film, and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.

  12. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010, we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This may be a general, yet unappreciated, mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  13. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique over 120 days, with a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
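    A minimal sketch of the BMA calibration idea, with synthetic data standing in for the training sample (two Gaussian members, no bias correction, EM for the weights and a common variance — all simplifications relative to the full method):

```python
import math
import random

# p(y | f1, f2) = w1*N(y; f1, s2) + w2*N(y; f2, s2), fitted by EM.
random.seed(0)
truth = [random.gauss(15.0, 3.0) for _ in range(200)]
f1 = [y + random.gauss(0.0, 1.0) for y in truth]   # accurate member
f2 = [y + random.gauss(0.0, 3.0) for y in truth]   # noisier member

def norm_pdf(y, mu, s2):
    return math.exp(-(y - mu) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

w, s2 = [0.5, 0.5], 4.0
for _ in range(50):                     # EM iterations
    # E-step: responsibility of each member for each verification case
    z = []
    for y, a, b in zip(truth, f1, f2):
        p1, p2 = w[0] * norm_pdf(y, a, s2), w[1] * norm_pdf(y, b, s2)
        z.append((p1 / (p1 + p2), p2 / (p1 + p2)))
    # M-step: update the weights and the common variance
    w = [sum(r[0] for r in z) / len(z), sum(r[1] for r in z) / len(z)]
    s2 = sum(r[0] * (y - a) ** 2 + r[1] * (y - b) ** 2
             for r, y, a, b in zip(z, truth, f1, f2)) / len(z)

print(w, s2)   # the more accurate member receives the larger weight
```

The BMA deterministic-style forecast is then the weighted mean `w[0]*f1 + w[1]*f2`; the full mixture gives the calibrated probabilistic forecast.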

  14. Ensemble assimilation of ARGO temperature profile, sea surface temperature, and altimetric satellite data into an eddy permitting primitive equation model of the North Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.

    2015-07-01

    Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations, in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement due to the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
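    The analysis step of a stochastic (perturbed-observation) EnKF, the core operation in the scheme described above, can be sketched as follows (dimensions, covariances, and observation values are illustrative, not the ocean model's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 5, 2, 60            # state dim, obs dim, ensemble size
H = np.zeros((m, n)); H[0, 0] = H[1, 3] = 1.0   # observe components 0 and 3
R = 0.1 * np.eye(m)           # observation-error covariance

Xf = rng.normal(0.0, 1.0, (n, N)) + 2.0          # forecast ensemble
y = np.array([2.5, 1.5])                         # observations

Xm = Xf.mean(axis=1, keepdims=True)
A = Xf - Xm                                      # ensemble anomalies
Pf = A @ A.T / (N - 1)                           # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

# perturbed observations give each member its own innovation
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
Xa = Xf + K @ (Y - H @ Xf)                       # analysis ensemble

spread_f = A.std()
spread_a = (Xa - Xa.mean(axis=1, keepdims=True)).std()
print(Xa.mean(axis=1), spread_f, spread_a)   # spread shrinks after analysis
```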

  15. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
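    The second problem's distance damage function, a complementary cumulative lognormal in the range variable, can be sketched directly (the median radius and log-standard deviation are invented, not weapon data):

```python
import math

def damage_probability(r, r50=1000.0, sigma=0.3):
    """P(damage | range r) = 1 - Phi((ln r - ln r50) / sigma).

    r50 is the range at which damage probability is 0.5; sigma is the
    lognormal shape parameter. Both values here are purely illustrative.
    """
    z = (math.log(r) - math.log(r50)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # 1 - Phi(z)

print(damage_probability(1000.0))  # exactly 0.5 at the median radius
print(damage_probability(500.0))   # near 1 well inside
print(damage_probability(2000.0))  # near 0 well outside
```

As sigma shrinks toward zero this function approaches the deterministic "Cookie Cutter" step of the first problem.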

  16. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problems in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation is non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman ‘there's plenty of room at the bottom’. 
This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
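    The non-deterministic exploration of Thue rewriting paths can be mimicked in silico with a breadth-first search over all one-step rewrites (the rule set below is invented for illustration, not the paper's DNA encoding):

```python
# Each DNA replication step in the NUTM corresponds to following every
# applicable rewrite in parallel; here we enumerate those branches explicitly.
rules = [("ab", "b"), ("ba", "b"), ("bb", "a")]   # hypothetical Thue rules

def successors(s):
    """All strings reachable by one application of one rule at one position."""
    out = set()
    for lhs, rhs in rules:
        start = 0
        while (i := s.find(lhs, start)) != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            start = i + 1
    return out

def reachable(start, target, max_steps=10):
    """Breadth-first search over all non-deterministic rewrite paths."""
    frontier, seen = {start}, {start}
    for _ in range(max_steps):
        if target in frontier:
            return True
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return target in frontier

print(reachable("abab", "a"))   # True: e.g. abab -> bab -> bb -> a
```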

  17. Classification and unification of the microscopic deterministic traffic models.

    PubMed

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
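    A minimal optimal-velocity (OV) model, the family that the generalized model reduces to, can be simulated in a few lines (ring road, classic tanh optimal-velocity function, illustrative parameters):

```python
import math

# dv_i/dt = a * (V(h_i) - v_i), with h_i the headway to the car ahead.
N, L = 20, 100.0            # cars, ring-road length
a, dt = 1.0, 0.05           # driver sensitivity, time step

def V(h):
    """Classic tanh optimal-velocity function (illustrative constants)."""
    return 0.5 * (math.tanh(h - 2.0) + math.tanh(2.0))

x = [i * L / N for i in range(N)]   # uniformly spaced initial positions
v = [0.0] * N
for _ in range(4000):
    h = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]   # headways
    v = [vi + dt * a * (V(hi) - vi) for vi, hi in zip(v, h)]
    x = [(xi + dt * vi) % L for xi, vi in zip(x, v)]

print(v[0], V(L / N))   # speeds relax to the optimal velocity at headway L/N
```

With identical drivers and a uniform start, the flow relaxes to the homogeneous ground state; instability of that state at higher densities is what produces stop-and-go waves in OV-type models.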

  18. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
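    The conditional-averaging idea behind separating the deterministic and Gaussian contributions can be sketched on a synthetic series (a linear drift with additive noise stands in for the BPTI projections):

```python
import random

# Simulate x_{n+1} = x_n + f(x_n) + noise with f(x) = -0.1*x, then
# bin-average the increments to recover the deterministic part f.
random.seed(2)
xs, x = [], 0.0
for _ in range(200000):
    xs.append(x)
    x = x - 0.1 * x + random.gauss(0.0, 0.2)

bins = {}
for x0, x1 in zip(xs[:-1], xs[1:]):
    bins.setdefault(round(x0, 1), []).append(x1 - x0)   # 0.1-wide bins
drift = {b: sum(d) / len(d) for b, d in bins.items() if len(d) > 1000}
print(drift[0.0], drift[0.5])   # approx 0.0 and -0.05: the deterministic part
```

A nonlinear deterministic part, as reported for the dominant eigenvectors, would show up here as curvature in the binned drift rather than a straight line.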

  19. Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.

    PubMed

    Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong

    2018-08-01

    Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on the overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of the main factors, including different loads and surface roughness values, on the sliding friction coefficient under boundary lubrication conditions. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients in different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of friction coefficients. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
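    A heavily simplified sketch of the molecular-mechanical decomposition (all constants are invented; the real contact area is taken as load over hardness, so the load dependence cancels, echoing the weak load effect noted above):

```python
def friction_coefficient(load, tau0=25e6, hardness=1e9, beta=0.05, k_mech=0.02):
    """Toy decomposition: mu = molecular (shear over real contact) + mechanical.

    tau0: interfacial shear stress (Pa), hardness (Pa), beta and k_mech
    dimensionless -- all values are illustrative, not the paper's.
    """
    Ar = load / hardness                     # plastic real contact area (m^2)
    mu_molecular = tau0 * Ar / load + beta   # adhesive/shear contribution
    mu_mechanical = k_mech                   # deformation contribution (constant toy)
    return mu_molecular + mu_mechanical

print(friction_coefficient(10.0))   # tau0/hardness + beta + k_mech
```

Because Ar scales linearly with load in this plastic-contact caricature, the predicted coefficient is load-independent; in the full model the elastic/plastic transition and roughness make the balance between the two terms shift with Rq.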

  20. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. 
    This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
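    The Monte Carlo estimate of a probability of failure, the basic operation underlying both the MCS and the reliability constraints in RBDO, can be sketched with an invented normal load-resistance limit state:

```python
import random

# Failure when load S exceeds resistance R: limit state g = R - S < 0,
# with R ~ N(30, 4) and S ~ N(20, 3) -- illustrative values, not Kevlar data.
random.seed(0)
n = 200000
failures = sum(
    1 for _ in range(n)
    if random.gauss(30.0, 4.0) - random.gauss(20.0, 3.0) < 0.0
)
pf = failures / n
# First-order check: R - S ~ N(10, 5), so beta = 10/5 = 2 and
# the analytic failure probability is Phi(-2) ~ 0.0228.
print(pf)
```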

  1. A Variational Approach to the Analysis of Dissipative Electromechanical Systems

    PubMed Central

    Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek

    2014-01-01

    We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our novel technique, we can take advantage of the analytic approach from mechanics, and we can apply these powerful analytical methods to electrical and to electromechanical systems. We can analyse systems that include non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221

  2. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  3. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
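    Time-correlated input-power noise of the kind described above can be modeled as an Ornstein-Uhlenbeck process, one standard choice (parameter values are illustrative, not the test-case values):

```python
import math
import random

# OU process: dp = -(p - mu)/tau dt + sigma dW, with correlation time tau.
random.seed(3)
mu, tau, sigma, dt = 1.0, 5.0, 0.1, 0.01
p, traj = mu, []
for _ in range(500000):
    # exact one-step OU update (no discretization bias)
    a = math.exp(-dt / tau)
    p = mu + (p - mu) * a + sigma * math.sqrt((1 - a * a) * tau / 2) * random.gauss(0, 1)
    traj.append(p)

mean = sum(traj) / len(traj)
var = sum((x - mean) ** 2 for x in traj) / len(traj)
print(mean, var)   # stationary variance should approach sigma^2 * tau / 2
```

In an augmented-state EnKF, `p` would be appended to the state vector and `tau` is the correlation-length parameter whose inference the study proposes.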

  4. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. 
In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
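    The backward-induction idea behind the dynamic programming algorithm can be sketched for a toy pumped-storage plant facing known hourly prices (prices, capacity, and efficiency below are invented):

```python
# Each hour the operator may pump one unit up (buying energy at efficiency
# eta), release one unit (selling at eta), or idle; maximize total cash.
prices = [10, 8, 5, 7, 20, 25, 18, 12]   # deterministic hourly power prices
levels = range(5)                        # reservoir holds 0..4 units
eta = 0.8                                # efficiency applied to pump and turbine

value = {l: 0.0 for l in levels}         # terminal value: leftover water worthless
for price in reversed(prices):           # backward induction over hours
    new = {}
    for l in levels:
        best = value[l]                                    # idle
        if l < 4:
            best = max(best, -price / eta + value[l + 1])  # pump (buy)
        if l > 0:
            best = max(best, price * eta + value[l - 1])   # release (sell)
        new[l] = best
    value = new

print(value[0])   # optimal cash starting from an empty reservoir: 25.4
```

Here buying at the three cheapest pre-peak hours (8, 5, 7) and selling at the three peak hours (20, 25, 18) is optimal; a stochastic-inflow version would take expectations over inflow scenarios at each backward step.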

  5. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  6. Hybrid stochastic and deterministic simulations of calcium blips.

    PubMed

    Rüdiger, S; Shuai, J W; Huisinga, W; Nagaiah, C; Warnecke, G; Parker, I; Falcke, M

    2007-09-15

    Intracellular calcium release is a prime example of the role of stochastic effects in cellular systems. Recent models consist of deterministic reaction-diffusion equations coupled to stochastic transitions of calcium channels. The resulting dynamics spans multiple time and spatial scales, which complicates large-scale computer simulations. In this article, we introduce a novel hybrid scheme that is especially tailored to accurately trace events with essential stochastic variations, while deterministic concentration variables are efficiently and accurately traced at the same time. We use finite elements to efficiently resolve the extreme spatial gradients of concentration variables close to a channel. We describe the algorithmic approach and demonstrate its efficiency compared to conventional methods. Our single-channel model matches experimental data and results in intriguing dynamics if calcium is used as the charge carrier. Random openings of the channel accumulate in bursts of calcium blips that may be central to the understanding of cellular calcium dynamics.
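    The hybrid idea, stochastic channel transitions coupled to deterministic concentration dynamics, can be caricatured in a few lines (a two-state channel and a single well-mixed concentration, not the authors' finite-element scheme; all rates invented):

```python
import random

random.seed(4)
k_open, k_close = 0.5, 2.0      # channel switching rates (1/s)
J, k_pump = 5.0, 1.0            # release flux when open, removal rate
dt, c, state = 1e-3, 0.0, 0     # state: 0 = closed, 1 = open

trace = []
for _ in range(20000):          # 20 s of simulated time
    # stochastic part: channel transition with probability rate*dt
    rate = k_open if state == 0 else k_close
    if random.random() < rate * dt:
        state = 1 - state
    # deterministic part: dc/dt = J*state - k_pump*c
    c += dt * (J * state - k_pump * c)
    trace.append(c)

print(max(trace))   # brief "blips" of elevated calcium while the channel is open
```

The full scheme replaces the single ODE with reaction-diffusion PDEs solved by finite elements, but the coupling structure is the same: the channel state gates a source term in the deterministic equations.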

  7. Detecting and disentangling nonlinear structure from solar flux time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.

    1992-01-01

    Interest in solar activity has grown in the past two decades for many reasons. Most importantly for flight dynamics, solar activity changes the atmospheric density, which has important implications for spacecraft trajectory and lifetime prediction. Building upon the previously developed Rayleigh-Benard nonlinear dynamic solar model, which exhibits many dynamic behaviors observed in the Sun, this work introduces new chaotic solar forecasting techniques. Our attempt to use recently developed nonlinear chaotic techniques to model and forecast solar activity has uncovered highly entangled dynamics. Numerical techniques are presented for decoupling additive and multiplicative white noise from deterministic dynamics, and the falloff of the power spectra at high frequencies is examined as a possible means of distinguishing deterministic chaos from noise that is spectrally white or colored. The power spectral techniques presented are less cumbersome than current methods for identifying deterministic chaos, which require more computationally intensive calculations, such as those involving Lyapunov exponents and attractor dimensions.

  8. Stochastic Analysis and Probabilistic Downscaling of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.

    2017-12-01

    Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. 
The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.

  9. Time series analysis for minority game simulations of financial markets

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando F.; Francisco, Gerson; Machado, Birajara S.; Muruganandam, Paulsamy

    2003-04-01

    The recently introduced minority game (MG) model provides promising insights into the evolution of prices, indices and rates in financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbation. When the interval between two successive perturbations is sufficiently large, one can find low-dimensional chaos in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the SP500 index: stochastic, nonlinear and (unit root) stationary.
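A minimal minority game can be simulated in a few lines; the population size, memory length, and scoring rule below are the standard textbook choices rather than the specific configuration analyzed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 101, 3, 2, 500           # agents, memory bits, strategies/agent, rounds
P = 2 ** M                            # number of possible histories

# Each agent holds S random strategies: a map from history index to {-1, +1}
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))
history = 0
attendance = []

for t in range(T):
    # Each agent plays its currently best-scoring strategy
    best = scores.argmax(axis=1)
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    attendance.append(A)
    minority = -np.sign(A)            # minority side wins (N odd, so A != 0)
    # Virtual scoring: reward every strategy that would have picked the minority
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | (1 if minority > 0 else 0)) % P

attendance = np.array(attendance)
```

The `attendance` series (the aggregate action per round) is the quantity whose fluctuations are compared against price increments in studies of this kind.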

  10. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'
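The flavor of such direct stochastic algorithms can be conveyed by a generic cellular-automaton sketch of nucleation and grain growth (random nucleation sites, probabilistic capture of liquid neighbors); the grid size, capture probability, and update rule are illustrative assumptions, not the Nastac-Stefanescu model itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_nuclei = 64, 12

# 0 = liquid; grains labelled 1..n_nuclei seeded at random nucleation sites
grid = np.zeros((n, n), dtype=int)
sites = rng.choice(n * n, size=n_nuclei, replace=False)
grid[np.unravel_index(sites, grid.shape)] = np.arange(1, n_nuclei + 1)

def step(grid, p_capture, rng):
    """One stochastic growth step: a liquid cell adjacent to a solid
    neighbour is captured by that grain with probability p_capture."""
    new = grid.copy()
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grid, (dx, dy), axis=(0, 1))
        capture = (new == 0) & (nb > 0) & (rng.random(grid.shape) < p_capture)
        new[capture] = nb[capture]
    return new

fs = []                               # fraction solid vs. "time"
for _ in range(300):
    grid = step(grid, 0.3, rng)
    fs.append((grid > 0).mean())
```

The fraction-solid curve plays the role of the cooling-curve input, while the final labelled grid is the simulated microstructure that can be compared against metallographic sections; grain impingement is automatic because captured cells are never overwritten.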

  11. Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.

    PubMed

    Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M

    2016-11-17

    A multiscale modeling approach has been developed for the extended time scale long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.

  12. Removal rate model for magnetorheological finishing of glass.

    PubMed

    Degroote, Jessica E; Marino, Anne E; Wilson, John P; Bishop, Amy L; Lambropoulos, John C; Jacobs, Stephen D

    2007-11-10

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to show the whole MRF material removal model compared with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  13. Toward a Probabilistic Phenological Model for Wheat Growing Degree Days (GDD)

    NASA Astrophysics Data System (ADS)

    Rahmani, E.; Hense, A.

    2017-12-01

    Are there deterministic relations between phenological and climate parameters? The answer is surely 'No'. This answer motivated us to treat the problem probabilistically. We therefore developed a probabilistic phenological model, which has the advantage of providing additional information in terms of uncertainty. To that aim, we turned to survival analysis, a statistical framework that deals with death in biological organisms and failure in mechanical systems. In the survival analysis literature, death or failure is considered an event. By event, in this research we mean the ripening date of wheat, and we assume only one event in this special case. By time, we mean the growing duration from sowing to ripening, the lifetime of the wheat, which is a function of GDD. More precisely, we perform a probabilistic forecast of wheat ripening, with probability values between 0 and 1. Here, the survivor function gives the probability that not-yet-ripened wheat survives longer than a specific time, or survives to the end of its lifetime as a ripened crop. The survival function at each station is determined by fitting a normal distribution to GDD as a function of growth duration. Verification of the resulting models is done using the CRPS skill score (CRPSS). The positive values of CRPSS indicate the large superiority of the probabilistic phenological survival model over the deterministic models. These results demonstrate that considering uncertainties in modeling is beneficial, meaningful and necessary. We believe that probabilistic phenological models have the potential to help reduce the vulnerability of agricultural production systems to climate change, thereby increasing food security.
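The survivor-function construction described here (a normal distribution fitted to GDD, with S(t) = 1 - CDF giving the probability the crop is not yet ripe) can be sketched directly; the GDD sample values below are hypothetical illustrations, not data from the study's stations:

```python
import math

# Hypothetical ripening GDD totals (degree-days) observed at one station
gdd_samples = [1480, 1510, 1535, 1550, 1562, 1580, 1604, 1625, 1650, 1671]

# Fit a normal distribution by moments (sample mean and standard deviation)
mu = sum(gdd_samples) / len(gdd_samples)
var = sum((x - mu) ** 2 for x in gdd_samples) / (len(gdd_samples) - 1)
sigma = math.sqrt(var)

def survivor(t):
    """S(t) = P(ripening GDD > t): probability the crop is not yet ripe
    once t growing degree days have accumulated (1 - normal CDF)."""
    z = (t - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# A probabilistic (rather than yes/no) ripening forecast at 1600 degree-days:
p_not_ripe = survivor(1600)
```

By construction S decreases from 1 toward 0 as degree-days accumulate, which is exactly the probability-valued forecast the deterministic phenological models cannot provide.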

  14. A deterministic width function model

    NASA Astrophysics Data System (ADS)

    Puente, C. E.; Sivakumar, B.

    Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.

  15. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at considerably elevated risk of developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. 
The model will be driven by Monte Carlo simulations, repeatedly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
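The coupling pattern described (sample input distributions, push each sample through a deterministic model, regress the outputs to a likelihood) can be sketched abstractly. Everything below is a hypothetical stand-in: the growth function, the logistic regression, and the distribution parameters are invented for illustration and are not the Kassemi et al. model, the JESS chemistry, or the RSFM's fitted regression:

```python
import math
import random

random.seed(7)

def deterministic_growth(rs, volume):
    """Stand-in for the deterministic stone-growth model: maps relative
    supersaturation and urine volume to a maximum stone width (mm).
    Hypothetical closed form, for illustration only."""
    return max(0.0, 2.0 * rs - 0.5 * volume)

def p_symptomatic(width_mm):
    """Stand-in for the multivariate regression: logistic in stone width."""
    return 1.0 / (1.0 + math.exp(-(width_mm - 3.0)))

n = 20_000
hits = 0
for _ in range(n):
    rs = random.gauss(2.0, 0.5)        # sampled relative supersaturation
    volume = random.gauss(1.5, 0.3)    # sampled daily urine volume (L)
    width = deterministic_growth(rs, volume)
    hits += random.random() < p_symptomatic(width)

likelihood = hits / n                  # Monte Carlo likelihood estimate; its own
                                       # sampling error scales as sqrt(p(1-p)/n)
```

The point of the pattern is that the spread of the inputs propagates through the deterministic core, so the output is a distribution (with quantifiable uncertainty) rather than a single number.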

  16. Grizzly Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Zhang, Yongfeng; Chakraborty, Pritam

    2014-09-01

    This report summarizes work during FY 2014 to develop capabilities to predict embrittlement of reactor pressure vessel steel, and to assess the response of embrittled reactor pressure vessels to postulated accident conditions. This work has been conducted at three length scales. At the engineering scale, 3D fracture mechanics capabilities have been developed to calculate stress intensities and fracture toughnesses, to perform a deterministic assessment of whether a crack would propagate at the location of an existing flaw. This capability has been demonstrated on several types of flaws in a generic reactor pressure vessel model. Models have been developed at the scale of fracture specimens to determine how irradiation affects the fracture toughness of the material. Verification work has been performed on a previously developed model to determine the sensitivity of the model to specimen geometry and size effects, and the effects of irradiation on the parameters of this model have been investigated. At lower length scales, work has continued in an ongoing effort to understand how irradiation and thermal aging affect the microstructure and mechanical properties of reactor pressure vessel steel. Previously developed atomistic kinetic Monte Carlo models have been further developed and benchmarked against experimental data. Initial work has been performed to develop models of nucleation in a phase field model. Additional modeling work has also been performed to improve the fundamental understanding of the formation mechanisms and stability of matrix defects.
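The engineering-scale deterministic assessment reduces to comparing a computed stress intensity against a fracture toughness. A textbook sketch of that comparison, using the standard mode-I relation K_I = Y sigma sqrt(pi a), is shown below; the stress, flaw depth, and toughness values are illustrative, not numbers from the report:

```python
import math

def stress_intensity(sigma_mpa, a_m, Y=1.0):
    """Mode-I stress intensity K_I = Y * sigma * sqrt(pi * a), in MPa*sqrt(m),
    for a crack of depth a (m) under stress sigma (MPa); Y is a geometry factor."""
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

def would_propagate(sigma_mpa, a_m, K_Ic, Y=1.0):
    """Deterministic assessment: the flaw propagates if K_I exceeds the toughness."""
    return stress_intensity(sigma_mpa, a_m, Y) > K_Ic

# Illustrative case: 10 mm flaw, 200 MPa stress, embrittled toughness 40 MPa*sqrt(m)
flag = would_propagate(200.0, 0.010, 40.0)
```

Embrittlement enters this picture by lowering K_Ic, which is why the lower-length-scale modeling of irradiation effects feeds directly into the engineering-scale go/no-go assessment.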

  17. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. 
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  18. Nanopore Current Oscillations: Nonlinear Dynamics on the Nanoscale.

    PubMed

    Hyland, Brittany; Siwy, Zuzanna S; Martens, Craig C

    2015-05-21

    In this Letter, we describe theoretical modeling of an experimentally realized nanoscale system that exhibits the general universal behavior of a nonlinear dynamical system. In particular, we consider the description of voltage-induced current fluctuations through a single nanopore from the perspective of nonlinear dynamics. We briefly review the experimental system and its observed behavior, and then present a simple phenomenological nonlinear model that reproduces the qualitative behavior of the experimental data. The model consists of a two-dimensional deterministic nonlinear bistable oscillator experiencing both dissipation and random noise. The multidimensionality of the model and the interplay between deterministic and stochastic forces are both required to obtain a qualitatively accurate description of the physical system.
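The model class named here (a two-dimensional bistable oscillator with dissipation and random noise) can be sketched generically as a damped double-well Langevin system; the potential, damping, and noise strength below are illustrative stand-ins, not the fitted parameters from the Letter:

```python
import math
import random

random.seed(3)

def simulate(steps=100_000, dt=1e-3, gamma=0.5, noise=0.8):
    """Two-dimensional (position x, velocity v) dynamics in a double-well
    potential U(x) = x^4/4 - x^2/2 with damping and Gaussian white noise."""
    x, v = 1.0, 0.0
    xs = []
    s = noise * math.sqrt(dt)             # Euler-Maruyama noise scale
    for _ in range(steps):
        force = x - x ** 3                # bistable force: wells at x = +/-1
        v += (force - gamma * v) * dt + s * random.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
    return xs

xs = simulate()
# Noise-induced hopping between the two wells shows up as sign changes of x
crossings = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
```

Without noise the trajectory settles into one well; without the second dimension (the velocity) there is no oscillation. Both ingredients together produce the intermittent two-state switching that mimics the observed current fluctuations.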

  19. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  20. The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2011-09-01

    Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidences. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic for infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size, the stochastic system can be well described by the deterministic skeleton.

  1. Estimate Tsunami Flow Conditions and Large-Debris Tracks for the Design of Coastal Infrastructures along Coastlines of the U.S. Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.

    2017-12-01

    The increasing potential tsunami hazards pose great challenges for infrastructures along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios - two deterministic scenarios, XXL1 and L1, and a 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard - are simulated using a combination of a depth-averaged shallow water model for offshore propagation and a Boussinesq-type model for onshore inundation. We speculate on the methods and procedure to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of model results, particularly the inundation depth and flow speed for a new building, which will also be designated as a tsunami vertical evacuation shelter, at Newport, Oregon. We show that the ASCE 7-16 consistent hazards are between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we utilize the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios. 
A two-step study is carried out: first, tracks of massless particles are computed; then, large vessels with assigned mass are simulated, accounting for drag force, inertial force, ship grounding, and mooring. The simulation results show that none of the large vessels would impact the building site in any of the tested scenarios.

  2. The Stochastic Modelling of Endemic Diseases

    NASA Astrophysics Data System (ADS)

    Susvitasari, Kurnia; Siswantining, Titin

    2017-01-01

    Epidemics have been studied for a long time, but genuine progress was hardly forthcoming until the end of the 19th century (Bailey, 1975). Both deterministic and stochastic models were used to describe them. Then, from 1927 to 1939, Kermack and McKendrick introduced a generalization of these models, including variables such as the rates of infection and recovery. The purpose of this project is to investigate the behaviour of the models when we set the basic reproduction number, R0. This quantity is defined as the expected number of contacts made by a typical infective with susceptibles in the population. According to epidemic threshold theory, when R0 ≤ 1, a minor epidemic occurs with probability one in both approaches, but when R0 > 1, the deterministic and stochastic models have different interpretations. In the deterministic approach, a major epidemic occurs with probability one when R0 > 1, and the model predicts that the disease will settle down to an endemic equilibrium. Stochastic models, on the other hand, indicate that a minor epidemic is still possible; if one occurs, the epidemic dies out quickly. Moreover, if we let the population size be large and a major epidemic occurs, then it will take off, reach the endemic level, and move randomly around the deterministic equilibrium.
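The stochastic threshold behavior described (major outbreaks only possible when R0 > 1, minor die-outs possible even then) is easy to demonstrate with a minimal event-driven SIR simulation; the population size and rates below are arbitrary illustrative choices:

```python
import random

def sir_gillespie(n, i0, beta, gamma, rng):
    """Stochastic SIR final size: at each event the next transition is an
    infection or a recovery with probability proportional to its rate
    (Gillespie's direct method with event times not tracked)."""
    s, i = n - i0, i0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1      # infection event
        else:
            i -= 1                   # recovery event
    return n - s                     # total number ever infected

rng = random.Random(11)
n, i0, gamma = 1000, 1, 1.0

# R0 = beta/gamma = 2 > 1: major outbreaks occur with probability ~ 1 - 1/R0,
# but minor outbreaks that die out quickly remain possible.
finals_super = [sir_gillespie(n, i0, 2.0, gamma, rng) for _ in range(200)]
# R0 = beta/gamma = 0.5 < 1: only minor outbreaks occur.
finals_sub = [sir_gillespie(n, i0, 0.5, gamma, rng) for _ in range(200)]
```

The bimodal final-size distribution in the supercritical runs (a cluster of tiny die-outs plus a cluster of large outbreaks) is exactly the feature the deterministic model cannot reproduce.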

  3. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate the change in behavior of the very recently developed stochastic model of Hazra et al.
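The numerical ingredient common to studies of this kind is an Euler-Maruyama scheme for a delay equation with parametric noise. The toy equation below, dx(t) = -a x(t - tau) dt + sigma x(t) dW(t), is an illustrative stand-in with made-up parameters, not the Wilmot-Smith et al. dynamo equations:

```python
import math
import random

rng = random.Random(5)

# Euler-Maruyama for a delayed negative-feedback equation with multiplicative
# (parametric) Gaussian white noise:  dx = -a * x(t - tau) dt + sigma * x dW
a, tau, sigma = 1.0, 1.5, 0.1
dt, steps = 0.01, 5000
lag = int(round(tau / dt))

x = [1.0] * (lag + 1)                    # constant history on [-tau, 0]
for _ in range(steps):
    xt, xlag = x[-1], x[-1 - lag]
    dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment, variance dt
    x.append(xt - a * xlag * dt + sigma * xt * dw)
```

With a*tau below the deterministic oscillation threshold of pi/2, the delay term produces damped oscillations, and the parametric noise perturbs the amplitude without destabilizing the trajectory at this noise level.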

  4. A model for seasonal phytoplankton blooms.

    PubMed

    Huppert, Amit; Blasius, Bernd; Olinky, Ronen; Stone, Lewi

    2005-10-07

    We analyse a generic bottom-up nutrient phytoplankton model to help understand the dynamics of seasonally recurring algae blooms. The deterministic model displays a wide spectrum of dynamical behaviours, from simple cyclical blooms which trigger annually, to irregular chaotic blooms in which both the time between outbreaks and their magnitudes are erratic. Unusually, despite the persistent seasonal forcing, it is extremely difficult to generate blooms that are both annually recurring and also chaotic or irregular (i.e. in amplitude) even though this characterizes many real time-series. Instead the model has a tendency to 'skip' with outbreaks often being suppressed from 1 year to the next. This behaviour is studied in detail and we develop analytical expressions to describe the model's flow in phase space, yielding insights into the mechanism of the bloom recurrence. We also discuss how modifications to the equations through the inclusion of appropriate functional forms can generate more realistic dynamics.
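A generic bottom-up nutrient-phytoplankton skeleton with seasonal nutrient forcing can be sketched as below; the functional forms (Michaelis-Menten uptake, linear losses) and all parameter values are illustrative assumptions, not the paper's calibrated model:

```python
import math

def simulate(years=20, dt=1e-3):
    """Euler integration of a seasonally forced nutrient (n) - phytoplankton (p)
    system; returns the phytoplankton time series."""
    n, p = 1.0, 0.1
    t = 0.0
    series = []
    for _ in range(int(years / dt)):
        inflow = 0.2 * (1.0 + 0.8 * math.sin(2 * math.pi * t))  # seasonal input
        uptake = n * p / (0.5 + n)                              # Michaelis-Menten
        n += (inflow - uptake - 0.1 * n) * dt
        p += (0.7 * uptake - 0.3 * p) * dt
        t += dt
        series.append(p)
    return series

p_series = simulate()
```

Depending on the parameters, systems of this form produce annually recurring peaks, skipped years, or irregular outbreak amplitudes, which is the range of behaviors the bifurcation analysis in the paper maps out.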

  5. Economic analysis of interventions to improve village chicken production in Myanmar.

    PubMed

    Henning, J; Morton, J; Pym, R; Hla, T; Sunn, K; Meers, J

    2013-07-01

    A cost-benefit analysis using deterministic and stochastic modelling was conducted to identify the net benefits for households that adopt (1) vaccination of individual birds against Newcastle disease (ND) or (2) improved management of chick rearing by providing coops for the protection of chicks from predation and chick starter feed inside a creep feeder to support chicks' nutrition in village chicken flocks in Myanmar. Partial budgeting was used to assess the additional costs and benefits associated with each of the two interventions tested relative to neither strategy. In the deterministic model, over the first 3 years after the introduction of the interventions, the cumulative sum of the net differences from neither strategy was 13,189Kyat for ND vaccination and 77,645Kyat for improved chick management (effective exchange rate in 2005: 1000Kyat=1$US). Both interventions were also profitable after discounting over a 10-year period; Net Present Values for ND vaccination and improved chick management were 30,791 and 167,825Kyat, respectively. The Benefit-Cost Ratio for ND vaccination was very high (28.8). This was lower for improved chick management, due to greater costs of the intervention, but still favourable at 4.7. Using both interventions concurrently yielded a Net Present Value of 470,543Kyat and a Benefit-Cost Ratio of 11.2 over the 10-year period in the deterministic model. Using the stochastic model, for the first 3 years following the introduction of the interventions, the mean cumulative sums of the net difference were similar to those values obtained from the deterministic model. Sensitivity analysis indicated that the cumulative net differences were strongly influenced by grower bird sale income, particularly under improved chick management. The effects of the strategies on odds of households selling and consuming birds after 7 months, and numbers of birds being sold or consumed after this period also influenced profitability. 
Cost variations for equipment used under improved chick management were not markedly associated with profitability. Net Present Values and Benefit-Cost Ratios discounted over a 10-year period were also similar to the deterministic model when mean values obtained through stochastic modelling were used. In summary, the study showed that ND vaccination and improved chick management can improve the viability and profitability of village chicken production in Myanmar. Copyright © 2013 Elsevier B.V. All rights reserved.
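The discounting arithmetic behind the Net Present Values and Benefit-Cost Ratios reported above is standard; a minimal sketch follows, with hypothetical yearly cash flows (not the paper's Myanmar data) and an assumed 10% discount rate:

```python
def npv(cash_flows, rate):
    """Net Present Value of yearly cash flows, year 0 first (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of discounted benefits to discounted costs."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical 10-year figures (Kyat) for an intervention, for illustration only:
benefits = [0, 9000, 11000, 12000, 12000, 12000, 12000, 12000, 12000, 12000]
costs = [3000, 500, 500, 500, 500, 500, 500, 500, 500, 500]
net = [b - c for b, c in zip(benefits, costs)]

value = npv(net, 0.10)                          # NPV of the net cash flows
bcr = benefit_cost_ratio(benefits, costs, 0.10)
```

A stochastic version of the analysis replaces the fixed cash-flow entries with draws from distributions and reports the distribution of `value` and `bcr` rather than single numbers.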

  6. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. 
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.« less

  7. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle to cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low dimensional deterministic structure, which implies some degree of predictability and potential for real time control. These deterministic effects are typically more pronounced near critical stability limits (e.g. near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  8. Probabilistic description of infant head kinematics in abusive head trauma.

    PubMed

    Lintern, T O; Nash, M P; Kelly, P; Bloomfield, F H; Taberner, A J; Nielsen, P M F

    2017-12-01

    Abusive head trauma (AHT) is a potentially fatal result of child abuse, but the mechanisms by which injuries occur are often unclear. To investigate the contention that shaking alone can elicit the injuries observed, effective computational models are necessary. The aim of this study was to develop a probabilistic model describing infant head kinematics in AHT. A deterministic model incorporating an infant's mechanical properties, subjected to different shaking motions, was developed in OpenSim. A Monte Carlo analysis was used to simulate the range of infant kinematics produced as a result of varying both the mechanical properties and the type of shaking motions. By excluding physically unrealistic shaking motions, worst-case shaking scenarios were simulated and compared to existing injury criteria for a newborn, a 4.5 month-old, and a 12 month-old infant. In none of the three cases were head kinematics observed to exceed previously-estimated subdural haemorrhage injury thresholds. The results of this study provide no biomechanical evidence to demonstrate how shaking by a human alone can cause the injuries observed in AHT, suggesting either that additional factors, such as impact, are required, or that the current estimates of injury thresholds are incorrect.
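    The study design above (sample uncertain inputs, propagate them through a deterministic model, compare against injury criteria) can be illustrated with a minimal Monte Carlo sketch. This is not the authors' OpenSim model: the sinusoidal-shaking formula, the parameter ranges, and the 10,000 rad/s^2 threshold below are all illustrative assumptions.

```python
import math
import random

# Illustrative Monte Carlo sketch, NOT the authors' OpenSim model: sample
# hypothetical shaking amplitude and frequency, compute the peak angular
# acceleration of purely sinusoidal head motion, and count how often an
# assumed injury threshold is exceeded.

def peak_angular_acceleration(amplitude_rad, freq_hz):
    """Peak of d^2/dt^2 [A*sin(2*pi*f*t)], i.e. A*(2*pi*f)^2."""
    return amplitude_rad * (2.0 * math.pi * freq_hz) ** 2

def exceedance_fraction(n_trials=10_000, threshold=10_000.0, seed=1):
    """Fraction of sampled shaking scenarios whose peak angular
    acceleration (rad/s^2) exceeds the assumed threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        amplitude = rng.uniform(0.5, 1.5)  # rad; assumed range
        freq = rng.uniform(2.0, 5.0)       # Hz; assumed plausible shaking rates
        if peak_angular_acceleration(amplitude, freq) > threshold:
            hits += 1
    return hits / n_trials
```

    With these assumed ranges the peak acceleration (at most 1.5*(2*pi*5)^2, about 1.5e3 rad/s^2) never reaches the assumed threshold, mirroring the paper's qualitative finding that shaking kinematics fall below estimated injury thresholds.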

  9. Variational principles for stochastic fluid dynamics

    PubMed Central

    Holm, Darryl D.

    2015-01-01

    This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
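    The quadratic covariation drift term mentioned above is the standard Stratonovich-to-Itô correction; for a scalar SDE it reads (a textbook identity, not a formula taken from the paper):

```latex
dX_t = b(X_t)\,dt + \sigma(X_t)\circ dW_t
\quad\Longleftrightarrow\quad
dX_t = \Bigl( b(X_t) + \tfrac{1}{2}\,\sigma(X_t)\,\sigma'(X_t) \Bigr)\,dt + \sigma(X_t)\,dW_t .
```

    The extra drift \(\tfrac{1}{2}\sigma\sigma'\) is exactly what "disguises" the Stratonovich circulation and helicity properties in the Itô representation.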

  10. Panel summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.

    1987-04-01

    The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.
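    The classical deterministic convection-dispersion equation underlying approaches (1), (4), and (5) has a well-known one-dimensional analytical solution for a constant-concentration inlet, the Ogata-Banks solution. A minimal sketch, offered as a generic worked example rather than any specific model from the report:

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks (1961) solution of the 1-D convection-dispersion equation
    dC/dt = D d2C/dx2 - v dC/dx with C(0, t) = c0 and C(x, 0) = 0:
    a worked instance of the deterministic transport models compared above.

    x : distance from the inlet; t : time (> 0)
    v : pore-water velocity; D : dispersion coefficient
    """
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))
```

    Well behind the advancing front the concentration approaches the inlet value c0, and well ahead of it the concentration is essentially zero, which is a convenient sanity check for numerical solvers of the same equation.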

  11. Evaluation of Seismicity West of the Lut Block for Deterministic Seismic Hazard Assessment of Shahdad, Iran

    NASA Astrophysics Data System (ADS)

    Ney, B.; Askari, M.

    2009-04-01

    Deterministic seismic hazard assessment has been carried out for the city of Shahdad, Iran, and four map sheets (Kerman, Bam, Nakhil Ab, Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. The assessment was performed for this region of eastern Iran based on the available geological, seismological, and geophysical information, and a seismic zoning map of the region has been constructed. First, a seismotectonic map of the study region within a radius of 100 km was prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it was used as the base map for delineating potential seismic sources. The minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude of each source were then determined. According to the results, the peak ground acceleration at Shahdad, estimated using the Fukushima and Tanaka (1990) attenuation relationship, is 0.58 g; it is associated with movement on the Nayband fault, at a distance of 2.4 km from the site, with a maximum magnitude of Ms = 7.5.

  12. Markov Logic Networks in the Analysis of Genetic Data

    PubMed Central

    Sakhanenko, Nikita A.

    2010-01-01

    Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249

  13. Demographic noise can reverse the direction of deterministic selection

    PubMed Central

    Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.

    2016-01-01

    Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085

  14. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

    We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction fjk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: A unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: All oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.
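    A minimal sketch of the deterministic (infinite-population) limit described above, integrated with an explicit Euler step; the commuting matrix f[j][k], the rates, and the step size are illustrative assumptions, and the paper's van Kampen expansion of the stochastic fluctuations is not reproduced here.

```python
# Deterministic SIR dynamics on a network of n cities coupled by commuting:
# f[j][k] is the fraction of city k's residents present in city j, so each
# column of f sums to 1. One explicit Euler step of the mean-field equations.

def sir_network_step(S, I, R, f, beta, gamma, dt):
    n = len(S)
    N = [S[k] + I[k] + R[k] for k in range(n)]
    Sn, In, Rn = S[:], I[:], R[:]
    for k in range(n):
        # Force of infection on city-k residents, summed over the cities
        # they visit, using the infected fraction actually present there.
        lam = sum(
            f[j][k] * beta
            * sum(f[j][m] * I[m] for m in range(n))
            / sum(f[j][m] * N[m] for m in range(n))
            for j in range(n)
        )
        Sn[k] += dt * (-lam * S[k])
        In[k] += dt * (lam * S[k] - gamma * I[k])
        Rn[k] += dt * (gamma * I[k])
    return Sn, In, Rn
```

    Because the three increments cancel in each city, the Euler step conserves every city's population exactly, which is a quick correctness check before adding demographic noise on top.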

  15. Chaos emerging in soil failure patterns observed during tillage: Normalized deterministic nonlinear prediction (NDNP) and its application.

    PubMed

    Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V

    2017-03-01

    Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction considering autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, and the results obtained here are expected to apply to granular materials in general, which feature, from global to nano scales, in seismology, geotechnology, soil mechanics, and particle technology; the results and discussion presented here are applicable across these research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.
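    The core of deterministic nonlinear prediction is nearest-neighbour forecasting in a delay-embedded state space: if the dynamics are deterministic, states that were close in the past have close futures. A minimal sketch of that core step; the autocorrelation-based normalization (NDNP) proposed in the paper is not reproduced here, and the embedding parameters are illustrative.

```python
import math

# Nearest-neighbour prediction in a delay-embedded space: the basic building
# block of deterministic nonlinear prediction (illustrative sketch only).

def delay_embed(x, dim, tau):
    """Delay-coordinate vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    return [tuple(x[i + j * tau] for j in range(dim))
            for i in range(len(x) - (dim - 1) * tau)]

def nn_predict(train, horizon, dim=3, tau=1):
    """Forecast the series `horizon` steps past the end of `train` by finding
    the nearest earlier embedded state and reading off its observed future."""
    emb = delay_embed(train, dim, tau)
    query = emb[-1]
    best, best_d = None, float("inf")
    # only consider neighbours whose future stays inside the training series
    last_usable = len(emb) - 1 - horizon
    for i in range(last_usable):
        d = math.dist(emb[i], query)
        if d < best_d:
            best, best_d = i, d
    future_index = best + (dim - 1) * tau + horizon
    return train[future_index]
```

    On a noiseless deterministic series the neighbour match is exact and so is the forecast; on noise-contaminated data the forecast skill as a function of horizon is what diagnoses underlying determinism.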

  16. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of a combined failure stress criterion is carried out by (1) performing the probabilistic evaluation first and then the optimization, and (2) performing the optimization first and then the probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.

  17. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that ply thickness, fiber volume ratio, fiber longitudinal modulus, dynamic load, and loading rate are the dominant uncertainties, in that order.

  18. Symmetry aspects in emergent quantum mechanics

    NASA Astrophysics Data System (ADS)

    Elze, Hans-Thomas

    2009-06-01

    We discuss an explicit realization of the dissipative dynamics anticipated in the proof of 't Hooft's existence theorem, which states that 'For any quantum system there exists at least one deterministic model that reproduces all its dynamics after prequantization'. - There is an energy-parity symmetry hidden in the Liouville equation, which mimics the Kaplan-Sundrum protective symmetry for the cosmological constant. This symmetry may be broken by the coarse-graining inherent in physics at scales much larger than the Planck length. We correspondingly modify classical ensemble theory by incorporating dissipative fluctuations (information loss) - which are caused by discrete spacetime continually 'measuring' matter. In this way, aspects of quantum mechanics, such as the von Neumann equation, including a Lindblad term, arise dynamically and expectations of observables agree with the Born rule. However, the resulting quantum coherence is accompanied by an intrinsic decoherence and continuous localization mechanism. Our proposal leads towards a theory that is linear and local at the quantum mechanical level, but the relation to the underlying classical degrees of freedom is nonlocal.

  19. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks

    PubMed Central

    Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.

    2015-01-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406

  20. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.

    PubMed

    Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M

    2015-09-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.

  1. From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities

    NASA Astrophysics Data System (ADS)

    Kunjwal, Ravi; Spekkens, Robert W.

    2018-05-01

    The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It significantly extends previous techniques that worked only for logical proofs, which are based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment, to the case of statistical proofs, which are based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics.

  2. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use Albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared as relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5, is conducted. On this basis, a convergence study with regards to geometrical requirements when using deterministic methods with 2-D homogenous models is conducted and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  3. Ion pump as Brownian motor: theory of electroconformational coupling and proof of ratchet mechanism for Na,K-ATPase action

    NASA Astrophysics Data System (ADS)

    Tsong, Tian Yow; Chang, Cheng-Hung

    2003-04-01

    This article reviews some concepts of the Brownian Ratchet which are relevant to our discussion of mechanisms of action of Na,K-ATPase, a universal ion pump and an elemental motor protein of the biological cell. Under wide ranges of ionic compositions it can hydrolyze an ATP and use the γ-phosphorous bond energy of ATP to pump 3 Na+ out of, and 2 K+ into the cell, both being uphill transport. During the ATP-dependent pump cycle, the enzyme oscillates between E1 and E2 states. Our experiment replaces ATP with externally applied electric field of various waveforms, amplitudes, and frequencies. The field-enforced oscillation, or fluctuation, of the E1 and E2 states enables the enzyme to harvest energy from the applied field and convert it to the chemical gradient energy of cations. A theory of electroconformational coupling (TEC), which embodies all the essential features of the Brownian Ratchet, successfully simulates these experimental results. Our analysis based on a four-state TEC model indicates that the equilibrium and the rate constants of the transport system define the frequency and the amplitude of the field for the optimal activation. Waveform, frequency, and amplitude are three elements of signal. Thus, the electric signal of the ion pump is found by TEC analysis of the experimental data. Electric noise (white) superimposed on an electric signal changes the pump efficiency and produces effects similar to the stochastic resonance reported in other biological systems. The TEC concept is compared with the most commonly used Michaelis-Menten enzyme mechanism (MME) for similarities and differences. Both MME and TEC are catalytic wheels, which recycle the catalyst in each turnover. However, a MME can only catalyze reaction of descending free energy while a TEC enzyme can catalyze reaction of ascending free energy by harvesting needed energy from an off-equilibrium electric noise. The TEC mechanism is shown to be applicable to other biological motors and engines, as well. Deterministic and non-deterministic noise is examined in reference to future extension of the TEC model for biological transduction of free energy.

  4. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on associating analytical models, which describe the corresponding optimization problem, with exact global optimization software named IBBA, developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
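    Deterministic global optimization of the kind IBBA performs is typically an interval branch-and-bound: recursively split the search box, bound the objective from below over each sub-box, and prune boxes whose bound cannot beat the best point found. A toy 1-D sketch in that spirit; the objective and the naive interval bound are illustrative assumptions, not the authors' transformer model or software.

```python
# Tiny interval branch-and-bound sketch (illustrative, not IBBA itself):
# minimize f(x) = (x - 1)^2 + 0.1 on [lo, hi] with a naive interval lower
# bound for the squared term over a box.

def f(x):
    return (x - 1.0) ** 2 + 0.1

def lower_bound(lo, hi):
    """Lower bound of f over [lo, hi] from the interval extension of (x-1)^2."""
    u, v = lo - 1.0, hi - 1.0
    lo_sq = 0.0 if u <= 0.0 <= v else min(u * u, v * v)
    return lo_sq + 0.1

def interval_bnb(lo, hi, tol=1e-6):
    best = min(f(lo), f(hi))          # incumbent value
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if lower_bound(a, b) > best - tol:
            continue                  # box cannot contain a better minimum
        m = 0.5 * (a + b)
        best = min(best, f(m))
        if b - a > tol:
            boxes += [(a, m), (m, b)]
    return best
```

    Unlike local or stochastic methods, the pruning test gives a certificate: when the loop ends, no discarded box can hold a value better than the incumbent minus the tolerance.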

  5. Rare event computation in deterministic chaotic systems using genealogical particle analysis

    NASA Astrophysics Data System (ADS)

    Wouters, J.; Bouchet, F.

    2016-09-01

    In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
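    A minimal sketch of a genealogical particle algorithm, here on a Gaussian random walk standing in for the Ornstein-Uhlenbeck example; the tilting strength alpha, the particle counts, and the dynamics are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Genealogical particle sketch for the small tail probability P(X_T > a) of a
# random walk X_t (sum of standard normal steps). Particles are weighted by
# exp(alpha * increment) and cloned/killed by multinomial resampling; the bias
# is removed by the running normalizer and the final exp(-alpha * x) correction
# (increments along a surviving lineage sum to the particle's position x).

def gpa_tail_probability(n_particles, n_steps, threshold, alpha, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    log_norm = 0.0
    for _ in range(n_steps):
        old = xs
        xs = [x + rng.gauss(0.0, 1.0) for x in old]
        w = [math.exp(alpha * (xn - xo)) for xn, xo in zip(xs, old)]
        wbar = sum(w) / n_particles
        log_norm += math.log(wbar)
        xs = rng.choices(xs, weights=w, k=n_particles)  # selection step
    est = sum(math.exp(-alpha * x) for x in xs if x > threshold) / n_particles
    return math.exp(log_norm) * est
```

    With alpha = 0 this reduces to plain Monte Carlo; alpha > 0 pushes the particle ensemble toward the threshold, which is what lowers the estimator variance for over-threshold events that direct sampling almost never sees.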

  6. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  7. Deterministic multi-zone ice accretion modeling

    NASA Technical Reports Server (NTRS)

    Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael

    1991-01-01

    The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.

  8. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicate that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. 
Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related to this, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented parallel to, and of similar scale as, the basins and ranges at the surface. This structure would result in an anisotropic lower crust.

  9. Parameter estimation in spiking neural networks: a reverse-engineering approach.

    PubMed

    Rostro-Gonzalez, H; Cessac, B; Vieville, T

    2012-04-01

    This paper presents a reverse-engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims to bypass the fact that parameter estimation in SNNs with delays is a non-deterministic polynomial-time (NP)-hard problem. Here, the problem is reformulated as a linear programming (LP) problem so that it can be solved in polynomial time. Moreover, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times alone. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
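
    The spike-constraint idea behind the LP reformulation can be illustrated with a toy single-neuron, memoryless version (a deliberate simplification: delays and membrane dynamics are dropped, and all names and values here are illustrative, not the authors' formulation). Each observed spike or non-spike imposes one linear inequality on the unknown weights, and any feasible point of the resulting LP reproduces the observations:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_pre, n_obs, theta = 8, 200, 1.0

w_true = rng.uniform(-1, 1, n_pre)        # hidden synaptic weights
X = rng.integers(0, 2, (n_obs, n_pre))    # observed presynaptic spike patterns
v = X @ w_true
y = v >= theta                            # observed postsynaptic spikes
margin = 0.5 * (theta - v[~y].max())      # strict-feasibility gap for non-spikes

# One linear inequality per observation:
#   spike     ->  w . x >= theta          i.e.  -x . w <= -theta
#   no spike  ->  w . x <= theta - margin
A_ub = np.vstack([-X[y], X[~y]])
b_ub = np.concatenate([np.full(y.sum(), -theta),
                       np.full(int((~y).sum()), theta - margin)])

res = linprog(c=np.zeros(n_pre), A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1, 1)] * n_pre)   # zero objective: pure feasibility
w_hat = res.x                             # any feasible w reproduces the spikes
```

    Solving one such LP per neuron is polynomial in the number of observations; the locality noted in the abstract corresponds to each neuron's LP involving only that neuron's incoming weights.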

  10. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 waste water treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approaches reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.

  11. Comparing physically-based and statistical landslide susceptibility model outputs - a case study from Lower Austria

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Thiebes, Benni; Petschko, Helene; Glade, Thomas

    2015-04-01

    By now there is a broad consensus that, due to human-induced global change, the frequency and magnitude of heavy precipitation events are expected to increase in certain parts of the world. Given that rainfall is the most common triggering agent for landslide initiation, increased landslide activity can also be expected there. Landslides are a globally widespread phenomenon that clearly needs to be addressed. Well-known problems in modelling landslide susceptibility and hazard lead to uncertain predictions. This includes the lack of a universally applicable modelling solution for adequately assessing landslide susceptibility (which can be seen as the relative indication of the spatial probability of landslide initiation). Generally speaking, there are three major approaches for performing landslide susceptibility analysis: heuristic, statistical and deterministic models, each with different assumptions, distinctive data requirements and differently interpretable outcomes. Still, detailed comparisons of the resulting landslide susceptibility maps are rare. In this presentation, the susceptibility modelling outputs of a deterministic model (Stability INdex MAPping - SINMAP) and a statistical modelling approach (generalized additive model - GAM) are compared. SINMAP is an infinite slope stability model which requires parameterization of soil mechanical parameters. Modelling with the generalized additive model, which represents a non-linear extension of a generalized linear model, requires a high quality landslide inventory that serves as the dependent variable in the statistical approach. Both methods rely on topographical data derived from the DTM. The comparison has been carried out in a study area located in the district of Waidhofen/Ybbs in Lower Austria. For the whole district (ca. 132 km²), 1063 landslides have been mapped and partially used within the analysis and the validation of the model outputs.
The respective susceptibility maps have been reclassified to contain three susceptibility classes each. The comparison of the susceptibility maps was performed on a grid cell basis. A match of the maps was observed for grid cells located in the same susceptibility class. In contrast, a mismatch or deviation was observed for locations with different assigned susceptibility classes (up to two classes' difference). Although the modelling approaches differ significantly, more than 70% of the pixels reveal a match in the same susceptibility class. A mismatch by two classes' difference occurred in less than 2% of all pixels. Although the result looks promising and strengthens the confidence in the susceptibility zonation for this area, some of the general drawbacks related to the respective approaches still have to be addressed in further detail. Future work is heading towards an integration of probabilistic aspects into deterministic modelling.
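
    The deterministic side of such a comparison rests on the dimensionless infinite-slope factor of safety that SINMAP evaluates over the DTM. A minimal sketch of that core formula (all parameter values below are illustrative, not the study's calibration):

```python
import math

def factor_of_safety(slope_rad, wetness, cohesion=0.25, phi_deg=30.0,
                     density_ratio=0.5):
    """Dimensionless infinite-slope factor of safety (SINMAP's core).

    cohesion      : combined root/soil cohesion, made dimensionless by
                    soil depth * soil density * g
    wetness       : relative saturated thickness of the soil column, 0..1
    density_ratio : water density / wet soil density (about 0.5)
    """
    tan_phi = math.tan(math.radians(phi_deg))
    return (cohesion
            + math.cos(slope_rad) * (1.0 - wetness * density_ratio) * tan_phi
            ) / math.sin(slope_rad)

# Steeper and wetter cells are less stable (FS < 1 ~ predicted failure):
fs_dry_gentle = factor_of_safety(math.radians(20), wetness=0.1)
fs_wet_steep  = factor_of_safety(math.radians(40), wetness=1.0)
```

    SINMAP turns this into a stability index by propagating uncertainty ranges for cohesion, friction angle and wetness through the same expression over every grid cell.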

  12. Developing a stochastic conflict resolution model for urban runoff quality management: Application of info-gap and bargaining theories

    NASA Astrophysics Data System (ADS)

    Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra

    2016-02-01

    In this paper, two deterministic and stochastic multilateral, multi-issue, non-cooperative bargaining methodologies are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties of some key parameters of SWMM are analyzed using the info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on utility functions of different stakeholders. Then, to find the best solution, the bargaining model is performed considering a combination of robustness and opportuneness criteria for each scenario based on utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model considering the robustness and opportuneness criteria. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.

  13. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.

  14. A nonlinear dynamical analogue model of geomagnetic activity

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Baker, D. N.; Roberts, D. A.; Fairfield, D. H.; Buechner, J.

    1992-01-01

    Consideration is given to the solar wind-magnetosphere interaction within the framework of deterministic nonlinear dynamics. An earlier dripping faucet analog model of the low-dimensional solar wind-magnetosphere system is reviewed, and a plasma physical counterpart to that model is constructed. A Faraday loop in the magnetotail is considered, and the relationship of electric potentials on the loop to changes in the magnetic flux threading the loop is developed. This approach leads to a model of geomagnetic activity which is similar to the earlier mechanical model but described in terms of the geometry and plasma contents of the magnetotail. The model is characterized as an elementary time-dependent global convection model. The convection evolves within a magnetotail shape that varies in a prescribed manner in response to the dynamical evolution of the convection. The result is a nonlinear model capable of exhibiting a transition from regular to chaotic loading and unloading. The model's behavior under steady loading and also some elementary forms of time-dependent loading is discussed.

  15. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
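
    At the core of any dwell-time algorithm is a non-negative deconvolution: the removal map equals the tool influence function (TIF) convolved with the dwell-time map. A minimal sketch of that underlying step using plain non-negative least squares (the Gaussian TIF and all sizes are made up; the paper's algorithm additionally enforces machine-dynamics constraints and minimum equal extra removal):

```python
import numpy as np
from scipy.optimize import nnls

n = 60
x = np.arange(n)
target = 1.0 + 0.5 * np.sin(2 * np.pi * x / n)   # desired removal depth profile

# Hypothetical Gaussian tool influence function (removal rate per unit dwell)
offsets = np.arange(-10, 11)
tif = np.exp(-0.5 * (offsets / 3.0) ** 2)

# Convolution matrix A such that removal = A @ dwell
A = np.zeros((n, n))
for j in range(n):
    for k, t in enumerate(offsets):
        i = j + t
        if 0 <= i < n:
            A[i, j] += tif[k]

dwell, residual = nnls(A, target)   # dwell time >= 0 enforced by construction
```

    The positivity constraint is what makes the dwell map executable on a machine; the extra-material-removal model in the paper then shapes how the unavoidable residual is distributed.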

  16. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.
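
    The random-walk (stochastic) side of such a computation can be sketched as a Monte Carlo of tracer particles advected by the parabolic Poiseuille profile while diffusing across the tube (a generic sketch; geometry and constants are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
R, U, D = 1e-4, 1e-2, 5e-10   # tube radius (m), mean velocity (m/s), D (m^2/s)
dt, steps, n = 1e-3, 2000, 5000

# Injected plug at x = 0, particles uniform over the circular cross-section
phi = 2 * np.pi * rng.random(n)
r0 = R * np.sqrt(rng.random(n))
y, z = r0 * np.cos(phi), r0 * np.sin(phi)
x = np.zeros(n)

sigma = np.sqrt(2 * D * dt)
for _ in range(steps):
    r2 = y * y + z * z
    x += 2 * U * (1 - r2 / R**2) * dt      # parabolic axial advection
    y += sigma * rng.standard_normal(n)    # transverse molecular diffusion
    z += sigma * rng.standard_normal(n)
    r = np.hypot(y, z)
    over = r > R                           # reflect escapees at the tube wall
    if over.any():
        f = (2 * R - r[over]) / r[over]
        y[over] *= f
        z[over] *= f

mean_x = x.mean()   # plug centroid, close to U * steps * dt
```

    Binning x (or particle passage times at a detector position) yields the simulated FIA response; fitting D then amounts to wrapping this simulation in the genetic-algorithm optimizer mentioned in the abstract.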

  17. Stochastic von Bertalanffy models, with applications to fish recruitment.

    PubMed

    Lv, Qiming; Pitchford, Jonathan W

    2007-02-21

    We consider three individual-based models describing growth in stochastic environments. Stochastic differential equations (SDEs) with identical von Bertalanffy deterministic parts are formulated, with a stochastic term which decreases, remains constant, or increases with organism size, respectively. Probability density functions for hitting times are evaluated in the context of fish growth and mortality. Solving the hitting time problem analytically or numerically shows that stochasticity can have a large positive impact on fish recruitment probability. It is also demonstrated that the observed mean growth rate of surviving individuals always exceeds the mean population growth rate, which itself exceeds the growth rate of the equivalent deterministic model. The consequences of these results in more general biological situations are discussed.
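
    The constant-noise variant of such an SDE, dL = k(L_inf - L) dt + sigma dW, can be simulated with a plain Euler-Maruyama scheme to estimate hitting ("recruitment") times (all parameter values below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
k, L_inf, L0 = 0.5, 100.0, 5.0     # growth rate, asymptotic and initial length
L_rec = 30.0                       # illustrative "recruitment" length threshold
sigma = 3.0                        # size-independent noise amplitude
dt, T, n_paths = 0.01, 20.0, 2000

L = np.full(n_paths, L0)
hit = np.full(n_paths, np.inf)     # first time each path reaches L_rec
for i in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    L += k * (L_inf - L) * dt + sigma * dW   # Euler-Maruyama step
    newly = (L >= L_rec) & ~np.isfinite(hit)
    hit[newly] = (i + 1) * dt

frac_recruited = np.isfinite(hit).mean()
mean_hit_time = hit[np.isfinite(hit)].mean()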

  18. Deterministic SLIR model for tuberculosis disease mapping

    NASA Astrophysics Data System (ADS)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the most infectious diseases leading to death. Disease mapping is one method that can be used in prevention strategies, since it displays a clear picture of high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
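
    A generic deterministic SLIR compartment model of this kind reduces to four coupled ODEs; a minimal sketch with illustrative rate constants (not the study's calibrated values):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, eps, gamma = 0.4, 0.1, 0.05   # transmission, activation, recovery rates

def slir(t, state):
    s, l, i, r = state
    n = s + l + i + r
    return [-beta * s * i / n,             # S: new infections enter latency
            beta * s * i / n - eps * l,    # L: latent TB, slow activation
            eps * l - gamma * i,           # I: active, infectious TB
            gamma * i]                     # R: recovered

sol = solve_ivp(slir, (0.0, 400.0), [990.0, 0.0, 10.0, 0.0],
                rtol=1e-8, atol=1e-8)
s_end, l_end, i_end, r_end = sol.y[:, -1]
```

    Estimating relative risk per district then amounts to fitting such rates to local case counts and mapping a model-derived quantity such as the implied infection pressure.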

  19. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
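
    The random-packet-loss setting can be illustrated with a toy averaging-consensus simulation (a generic sketch, not the paper's sampled-data controller or Lyapunov-based design; the ring topology, gain, and loss probability are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_loss, eps, steps = 5, 0.3, 0.2, 400

edges = [(i, (i + 1) % n) for i in range(n)]   # ring communication topology
x = rng.uniform(-1, 1, n)                      # initial agent states
m0 = x.mean()

for _ in range(steps):
    update = np.zeros(n)
    for i, j in edges:
        if rng.random() > p_loss:      # Bernoulli packet loss on this link
            update[i] += x[j] - x[i]
            update[j] += x[i] - x[j]
        # a lost packet: the link contributes nothing this sampling instant
    x += eps * update

disagreement = x.max() - x.min()
```

    Because the surviving updates remain symmetric, the state average is preserved and the disagreement contracts whenever enough links survive; the paper's criteria make the trade-off between sampling interval, loss probability, and the Laplacian eigenvalues precise.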

  20. Optimizing Chemical Reactions with Deep Reinforcement Learning.

    PubMed

    Zhou, Zhenpeng; Li, Xiaocheng; Zare, Richard N

    2017-12-27

    Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
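
    The exploration idea — drawing the next reaction conditions from a probability distribution rather than following a deterministic rule, and scoring the run by cumulative regret — can be sketched on a toy one-dimensional "yield surface" (the surface, the Gaussian-perturbation policy, and all constants are illustrative; the paper's actual method is a deep reinforcement-learning model):

```python
import numpy as np

rng = np.random.default_rng(4)

def reaction_yield(temp):               # hypothetical yield surface, peak at 70
    return float(np.exp(-0.5 * ((temp - 70.0) / 15.0) ** 2))

best_possible = reaction_yield(70.0)
temp, sigma = 25.0, 8.0                 # initial condition, exploration width
history, regret = [], 0.0

for _ in range(60):
    probe = temp + sigma * rng.standard_normal()  # sample the next condition
    y = reaction_yield(probe)                     # "run" the reaction, record it
    regret += best_possible - y                   # accumulate instantaneous regret
    history.append((probe, y))
    if y > reaction_yield(temp):                  # keep improving conditions
        temp = probe
    sigma *= 0.95                                 # anneal the exploration width

final_yield = reaction_yield(temp)
```

    The regret bookkeeping is what allows policies to be compared quantitatively, as in the paper's 0.062 versus 0.039 comparison between deterministic and stochastic exploration.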

  1. Fractal Parameter Space of Lorenz-like Attractors: A Hierarchical Approach

    NASA Astrophysics Data System (ADS)

    Xing, Tingli; Wojcik, Jeremy; Zaks, Michael A.; Shilnikov, Andrey

    2014-12-01

    Using bi-parametric sweeping based on symbolic representation, we reveal self-similar fractal structures induced by hetero- and homoclinic bifurcations of saddle singularities in the parameter space of two systems with deterministic chaos. We start with a system displaying several homoclinic bifurcations of higher codimension: resonant saddle, orbit-flip and inclination-switch, all of which can give rise to the onset of a Lorenz-type attractor. This discloses a universal unfolding pattern for systems of autonomous ordinary differential equations possessing a two-fold symmetry ("Z2-systems") with the characteristic separatrix butterfly. The second system is the classic Lorenz model of 1963, which originated in fluid mechanics.

  2. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport - Demonstration of Approach and Results on Used Fuel Performance Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adkins, Harold; Geelhood, Ken; Koeppel, Brian

    2013-09-30

    This document addresses Oak Ridge National Laboratory milestone M2FT-13OR0822015, Demonstration of Approach and Results on Used Nuclear Fuel Performance Characterization. This report provides results of the initial demonstration of the modeling capability developed to perform preliminary deterministic evaluations of moderate-to-high burnup used nuclear fuel (UNF) mechanical performance under normal conditions of storage (NCS) and normal conditions of transport (NCT). This report also provides results from the sensitivity studies that have been performed. Finally, discussion of the long-term goals and objectives of this initiative is provided.

  3. Probabilistic evaluation of SSME structural components

    NASA Astrophysics Data System (ADS)

    Rajagopal, K. R.; Newell, J. F.; Ho, H.

    1991-05-01

    The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.

  4. Observations and modeling of the effects of waves and rotors on submeso and turbulence variability within the stable boundary layer over central Pennsylvania

    NASA Astrophysics Data System (ADS)

    Suarez Mullins, Astrid

    Terrain-induced gravity waves and rotor circulations have been hypothesized to enhance the generation of submeso motions (i.e., nonstationary shear events with spatial and temporal scales greater than the turbulence scale and smaller than the meso-gamma scale) and to modulate low-level intermittency in the stable boundary layer (SBL). Intermittent turbulence, generated by submeso motions and/or the waves, can affect the atmospheric transport and dispersion of pollutants and hazardous materials. Thus, the study of these motions and the mechanisms through which they impact the weakly to very stable SBL is crucial for improving air quality modeling and hazard predictions. In this thesis, the effects of waves and rotor circulations on submeso and turbulence variability within the SBL are investigated over the moderate terrain of central Pennsylvania using special observations from a network deployed at Rock Springs, PA and high-resolution Weather Research and Forecasting (WRF) model forecasts. The investigation of waves and rotors over central PA is important because 1) the moderate topography of this region is common to most of the eastern US and thus the knowledge acquired from this study can be of significance to a large population, 2) there has been little evidence of complex wave structures and rotors reported for this region, and 3) little is known about the waves and rotors generated by smaller and more moderate topographies. Six case studies exhibiting an array of wave and rotor structures are analyzed. Observational evidence of the presence of complex wave structures, resembling nonstationary trapped gravity waves and downslope windstorms, and complex rotor circulations, resembling trapped and jump-type rotors, is presented. These motions and the mechanisms through which they modulate the SBL are further investigated using high-resolution WRF forecasts.
First, the efficacy of the 0.444-km horizontal grid spacing WRF model to reproduce submeso and meso-gamma motions, generated by waves and rotors and hypothesized to impact the SBL, is investigated using a new wavelet-based verification methodology for assessing non-deterministic model skill in the submeso and meso-gamma range to complement standard deterministic measures. This technique allows the verification and/or intercomparison of any two nonstationary stochastic systems without many of the limitations of typical wavelet-based verification approaches (e.g., selection of noise models, testing for significance, etc.). Through this analysis, it is shown that the WRF model largely underestimates the number of small amplitude fluctuations in the small submeso range, as expected; and it overestimates the number of small amplitude fluctuations in the meso-gamma range, generally resulting in forecasts that are too smooth. Investigation of the variability for different initialization strategies shows that deterministic wind speed predictions are less sensitive to the choice of initialization strategy than temperature forecasts. Similarly, investigation of the variability for various planetary boundary layer (PBL) parameterizations reveals that turbulent kinetic energy (TKE)-based schemes have an advantage over the non-local schemes for non-deterministic motions. The larger spread in the verification scores for various PBL parameterizations than initialization strategies indicates that PBL parameterization may play a larger role modulating the variability of non-deterministic motions in the SBL for these cases. These results confirm previous findings that have shown WRF to have limited skill forecasting submeso variability for periods greater than ~20 min. The limited skill of the WRF at these scales in these cases is related to the systematic underestimation of the amplitude of observed fluctuations. 
These results are implemented in the model design and configuration for the investigation of nonstationary waves and rotor structures modulating submeso and meso-gamma motions and the SBL. Observations and WRF forecasts of two wave cases characterized by nonstationary waves and rotors are investigated to show the WRF model to have reasonable accuracy forecasting low-level temperature and wind speed in the SBL and to qualitatively produce rotors, similar to those observed, as well as some of the mechanisms modulating their development and evolution. Finally, observations and high-resolution WRF forecasts under different environmental conditions using various initialization strategies are used to investigate the impact of nonlinear gravity waves and rotor structures on the generation of intermittent turbulence and valley transport in the SBL. Evidence of the presence of elevated regions of TKE generated by the complex waves and rotors is presented and investigated using an additional four case studies, exhibiting two synoptic flow regimes and different wave and rotor structures. Throughout this thesis, terrain-induced gravity waves and rotors in the SBL are shown to synergistically interact with the surface cold pool and to enhance low-level turbulence intermittency through the development of submeso and meso-gamma motions. These motions are shown to be an important source of uncertainty for the atmospheric transport and dispersion of pollutants and hazardous materials under very stable conditions. (Abstract shortened by ProQuest.).

  5. Interactive Reliability Model for Whisker-toughened Ceramics

    NASA Technical Reports Server (NTRS)

    Palko, Joseph L.

    1993-01-01

    Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.

  6. Nonclassical point of view of the Brownian motion generation via fractional deterministic model

    NASA Astrophysics Data System (ADS)

    Gilardi-Velázquez, H. E.; Campos-Cantón, E.

    In this paper, we present a dynamical system based on the Langevin equation without a stochastic term, using fractional derivatives, that exhibits properties of Brownian motion; i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from time-series analysis we find that the behavior of the system exhibits the statistical properties of Brownian motion, such as linear growth in time of the mean square displacement and a Gaussian displacement distribution. Furthermore, we use detrended fluctuation analysis to prove the Brownian character of this motion.
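
    A loose integer-order sketch of the idea (the paper's α-fractional derivatives and exact switching law are omitted; the single coefficient 0.45 and the round-based switching term are illustrative choices for an unstable-dissipative system of this kind, not the authors' model):

```python
import numpy as np

a = 0.45                            # single damping/stiffness coefficient
dt, steps = 0.01, 50_000
state = np.array([0.1, 0.0, 0.0])   # position, velocity, acceleration
xs = np.empty(steps)

def f(v):
    # round(x) defines the switching surfaces: a piecewise-constant term
    # standing in for the stochastic forcing of the usual Langevin equation
    return np.array([v[1],
                     v[2],
                     -a * v[2] - a * v[1] - a * (v[0] - np.round(v[0]))])

for i in range(steps):              # classical 4th-order Runge-Kutta
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    xs[i] = state[0]
```

    With a < 1, each integer equilibrium is an unstable focus, so the orbit spirals out of one cell and is captured by a neighbour; over long runs the position wanders between cells, and it is on such position series that the mean-square-displacement and distribution statistics are computed.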

  7. Developing deterioration models for Wyoming bridges.

    DOT National Transportation Integrated Search

    2016-05-01

    Deterioration models for the Wyoming Bridge Inventory were developed using both stochastic and deterministic models. : The selection of explanatory variables is investigated and a new method using LASSO regression to eliminate human bias : in explana...

  8. Nonlinear modeling of chaotic time series: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casdagli, M.; Eubank, S.; Farmer, J.D.

    1990-01-01

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.
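
    The reconstruct-then-approximate recipe can be sketched on a textbook chaotic series, the logistic map (a stand-in example; the review's applications use measured data and richer function approximators than the nearest-neighbour lookup below):

```python
import numpy as np

# A chaotic but fully deterministic series: the logistic map at r = 4
N = 2000
s = np.empty(N)
s[0] = 0.3
for i in range(N - 1):
    s[i + 1] = 4.0 * s[i] * (1.0 - s[i])

train, test = s[:1500], s[1500:]

# Reconstruct a state space by delay embedding (dimension 2, lag 1) ...
states = np.column_stack([train[:-2], train[1:-1]])   # (s_t, s_{t+1})
targets = train[2:]                                   # -> s_{t+2}

# ... then approximate the dynamics: reuse the successor of the nearest
# previously seen state
def predict(a, b):
    d = (states[:, 0] - a) ** 2 + (states[:, 1] - b) ** 2
    return targets[np.argmin(d)]

errors = [abs(predict(test[i], test[i + 1]) - test[i + 2])
          for i in range(200)]
mean_abs_err = float(np.mean(errors))
```

    A linear stochastic model treats this series as nearly white noise, while the embedded forecaster is accurate over short horizons — the contrast the review draws between the two model classes.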

  9. Short-range solar radiation forecasts over Sweden

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra

    2018-04-01

    In this article the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with that of an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) under clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble model shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not compared favourably with the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in the form of forecasts of both the amount of solar power and its probabilities.

  10. Ensemble assimilation of ARGO temperature profile, sea surface temperature and Altimetric satellite data into an eddy permitting primitive equation model of the North Atlantic ocean

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2015-04-01

    Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. Before the assimilation experiments, the ensemble is diagnosed and validated by comparing the ensemble spread with the model/observation difference, as well as by rank histograms. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed with both deterministic and probabilistic metrics against observations used in the assimilation experiments and independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. For the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement from assimilation is demonstrated using these validation metrics. Finally, the deterministic and probabilistic validations are analysed jointly, and their consistency and complementarity are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.

  11. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper compares three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble's computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
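
The core of the analog-ensemble idea is simple to sketch: for each new deterministic forecast, find the k most similar past forecasts and use their verifying observations as an ensemble. The code below is a minimal one-variable sketch (absolute difference as the similarity metric, synthetic data), not the operational method compared in the paper.

```python
import numpy as np

def analog_ensemble(hist_fcst, hist_obs, new_fcst, k=20):
    """Analog-ensemble sketch: return the verifying observations of the k
    historical forecasts most similar to the new deterministic forecast."""
    idx = np.argsort(np.abs(hist_fcst - new_fcst))[:k]
    return hist_obs[idx]

# Synthetic history in which observations are exactly twice the forecast,
# so the analogs should recover the systematic bias with no explicit model.
hist_fcst = np.linspace(0.0, 10.0, 1001)
hist_obs = 2.0 * hist_fcst
ens = analog_ensemble(hist_fcst, hist_obs, new_fcst=5.0)
```

The ensemble mean gives a bias-corrected deterministic estimate, while the ensemble spread gives the probabilistic information, at a cost that is indeed negligible compared with rerunning an NWP model.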

  12. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point that has not been visited in the preceding mu steps (the deterministic tourist walk). Each trajectory generated by this dynamics has an initial non-periodic part of t steps (the transient) and a final periodic part of p steps (the attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S^(N)_{mu,d}(t, p) is the relevant quantity, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S^(infinity)_{1,d}(t, p) = [Gamma(1 + I_d^{-1}) (t + I_d^{-1}) / Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model, which even for mu = 0 presents a nontrivial cycle distribution [S^(N)_{0,rm}(p) proportional to p^{-1}]: S^(N)_{0,rm}(t, p) = Gamma(N) / {Gamma[N + 1 - (t + p)] N^(t+p)}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
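
The walk itself is easy to simulate. The sketch below reads the mu = 1 rule as "hop to the nearest point other than the one currently occupied" (an interpretive assumption on my part); under that rule every attractor in Euclidean space is a mutually-nearest pair, so the period p is always 2, matching the delta_{p,2} factor above.

```python
import numpy as np

def tourist_walk(points, start):
    """Deterministic tourist walk, mu = 1 reading: from each point, hop to
    its nearest neighbour other than itself.  Returns the transient length
    t and the attractor period p (p = 2 for generic Euclidean point sets,
    since every cycle of the nearest-neighbour map is a mutual pair)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nxt = d.argmin(axis=1)            # nearest neighbour of each point
    seen, steps, cur = {}, 0, start
    while cur not in seen:
        seen[cur] = steps
        steps += 1
        cur = nxt[cur]
    t = seen[cur]                     # steps before entering the cycle
    p = steps - seen[cur]             # cycle length
    return t, p

pts = np.random.default_rng(1).random((50, 2))
t, p = tourist_walk(pts, start=0)
```

Running this from every start point and histogramming (t, p) is a direct numerical check of the joint distribution quoted in the abstract.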

  13. Calculating the Malliavin derivative of some stochastic mechanics problems

    PubMed Central

    Hauseux, Paul; Hale, Jack S.

    2017-01-01

    The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show, in a practical and didactic way, how to calculate the Malliavin derivative (the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters) for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences, a 1D linear elastic bar, a hyperelastic bar undergoing buckling, and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
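
The MWS idea can be illustrated on the simplest possible SDE rather than the paper's FE/FD models. For dX = a dt + sigma dW with X_0 = 0, the drift sensitivity is d/da E[f(X_T)] = E[f(X_T) Q_T] with Malliavin weight Q_T = W_T / sigma, so the derivative comes from the same Monte Carlo run with no finite differences. Everything below is a toy stand-in under those assumptions.

```python
import numpy as np

def mws_drift_sensitivity(a, sigma, T, n_steps, n_paths, f, seed=0):
    """Malliavin weight sampling for dX = a dt + sigma dW, X_0 = 0:
    estimate d/da E[f(X_T)] as the Monte Carlo mean of f(X_T) * Q_T,
    where Q_T = W_T / sigma is the Malliavin weight for the drift."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    dW = np.sqrt(h) * rng.standard_normal((n_paths, n_steps))
    W_T = dW.sum(axis=1)
    X_T = a * T + sigma * W_T
    Q_T = W_T / sigma
    return float(np.mean(f(X_T) * Q_T))

# For f(x) = x the exact sensitivity is d/da E[X_T] = T
est = mws_drift_sensitivity(a=0.5, sigma=1.0, T=2.0, n_steps=50,
                            n_paths=200_000, f=lambda x: x)
```

The non-intrusive character is visible here: the weight is accumulated alongside the ordinary simulation, which is why the method pairs naturally with existing solvers.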

  14. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes, due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
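
The Monte Carlo load-resistance step recommended above is short enough to sketch directly. The distributions and numbers here are hypothetical illustrations, not values from the paper: failure probability is simply the fraction of draws where load exceeds resistance.

```python
import numpy as np

def failure_probability(load_mean, load_sd, res_mean, res_sd,
                        n=1_000_000, seed=0):
    """Monte Carlo load-resistance sketch: P(failure) = P(load > resistance)
    for independent normally distributed load and resistance."""
    rng = np.random.default_rng(seed)
    load = rng.normal(load_mean, load_sd, n)
    resistance = rng.normal(res_mean, res_sd, n)
    return float(np.mean(load > resistance))

pf = failure_probability(300.0, 40.0, 450.0, 50.0)  # e.g. stresses in MPa
```

For this normal-normal case the answer is also available in closed form (Phi of the normalized margin), which makes it a convenient sanity check before moving to distributions where only sampling works.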

  15. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    USGS Publications Warehouse

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
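
The probabilistic-input-to-probabilistic-map idea can be illustrated without the SPAN algorithms themselves. The toy below (all names and numbers are mine) propagates an uncertain per-cell decay rate through a one-dimensional flow path and reports a per-cell mean and standard deviation, i.e. a spatial map of output uncertainty of the kind the stochastic framework enables.

```python
import numpy as np

def stochastic_flow(source, decay_mean, decay_sd, n_cells=10,
                    n_draws=5000, seed=0):
    """Toy stochastic flow model (not SPAN itself): draw an uncertain decay
    rate, attenuate a source value along a chain of cells for each draw,
    and return the per-cell mean and standard deviation of delivered flow."""
    rng = np.random.default_rng(seed)
    decay = rng.normal(decay_mean, decay_sd, n_draws).clip(0.0, 1.0)
    cells = np.arange(n_cells)
    flows = source * (1.0 - decay[:, None]) ** cells   # draws x cells
    return flows.mean(axis=0), flows.std(axis=0)

mean_map, sd_map = stochastic_flow(source=100.0, decay_mean=0.2, decay_sd=0.05)
```

As in the abstract, greater input uncertainty (a larger decay_sd) directly widens the per-cell output distributions, and the sd map can be plotted alongside the mean map.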

  16. Cooperation of Deterministic Dynamics and Random Noise in Production of Complex Syntactical Avian Song Sequences: A Neural Network Model

    PubMed Central

    Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun

    2011-01-01

    How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065

  17. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
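
The tune-rates-to-minimize-a-cost idea can be shown on a deliberately tiny stand-in (the paper inverts full PDE-derived probability density functions; this toy uses only the stationary open probability of a two-state channel). All names and numbers below are illustrative assumptions.

```python
import numpy as np

def fit_opening_rate(p_open_obs, k_close):
    """Toy inversion: tune k_open so the two-state channel's stationary
    open probability k_open / (k_open + k_close) matches the 'observed'
    value, by minimising a least-squares cost over a parameter grid."""
    k_grid = np.linspace(0.01, 10.0, 10_000)
    cost = (k_grid / (k_grid + k_close) - p_open_obs) ** 2
    return float(k_grid[cost.argmin()])

k_est = fit_opening_rate(p_open_obs=0.6, k_close=2.0)  # true k_open = 3.0
```

The shape of the cost surface is also what decides identifiability here: a flat valley in the cost would mean, exactly as in the abstract, that some rates cannot be pinned down by the data.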

  18. Predicting reasoning from memory.

    PubMed

    Heit, Evan; Hayes, Brett K

    2011-02-01

    In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the presence of stimuli from other categories, there was a high correlation between reasoning and memory responses (average r = .87), and these manipulations showed similar effects on the 2 tasks. The results point to common mechanisms underlying inductive reasoning and recognition memory abilities. A mathematical model, GEN-EX (generalization from examples), derived from exemplar models of categorization, is presented, which predicts both reasoning and memory responses from pairwise similarities among the stimuli, allowing for additional influences of subtyping and deterministic responding. (c) 2010 APA, all rights reserved.
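
The common currency behind both tasks in exemplar models of this kind is summed similarity to stored items. The sketch below is a generic exemplar-evidence computation, not the GEN-EX model itself (which additionally handles subtyping and deterministic responding); the similarity scale c and the stimuli are hypothetical.

```python
import numpy as np

def summed_similarity(probe, exemplars, c=1.0):
    """Exemplar-model evidence: the strength of a probe is its summed
    exponential similarity to the stored items; the same quantity can
    drive both recognition and induction responses."""
    d = np.linalg.norm(np.asarray(exemplars) - np.asarray(probe), axis=1)
    return float(np.exp(-c * d).sum())

studied = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
old_strength = summed_similarity([0.0, 0.0], studied)   # matches a stored item
new_strength = summed_similarity([10.0, 10.0], studied) # distant lure
```

A probe identical to a studied item collects maximal similarity, so the same pairwise-similarity machinery naturally predicts correlated recognition and reasoning responses.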

  19. Experimental quantum computing without entanglement.

    PubMed

    Lanyon, B P; Barbieri, M; Almeida, M P; White, A G

    2008-11-14

    Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord, except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.
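
The canonical DQC1 task is estimating the normalized trace of a unitary: with the control qubit in |+><+| and the register maximally mixed, after a controlled-U the control's <sigma_x> + i <sigma_y> equals Tr(U)/2^n. The density-matrix simulation below is a sketch of that textbook circuit, not of the paper's optical implementation.

```python
import numpy as np

def dqc1_trace(U):
    """Simulate the DQC1 circuit: control in |+><+|, register maximally
    mixed, apply controlled-U, trace out the register, and read
    Tr(U)/2^n off the control qubit's Pauli expectation values."""
    d = U.shape[0]
    plus = np.full((2, 2), 0.5)                    # |+><+|
    rho = np.kron(plus, np.eye(d) / d)             # control (x) mixed register
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])
    rho = CU @ rho @ CU.conj().T
    rho_c = rho.reshape(2, d, 2, d).trace(axis1=1, axis2=3)  # partial trace
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    return np.trace(rho_c @ sx).real + 1j * np.trace(rho_c @ sy).real

est = dqc1_trace(np.diag([1.0, 1j, -1.0, -1j]))    # Tr(U)/4 = 0
```

Note that the register stays in the maximally mixed state throughout; the computation is read out entirely from the single control qubit, which is why the model needs only "one pure qubit".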

  20. Real-time flood forecasting

    USGS Publications Warehouse

    Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.

    2009-01-01

    Researchers at the Hydroinformatic Research and Development Team (HIRDT) of National Taiwan University undertook a project to create a real-time flood forecasting model, with the aim of predicting currents in the Tamsui River Basin. The model was based on a deterministic approach, with mathematical modeling of complex phenomena and specific parameter values operated on to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of state of the river, and is further related to another typhoon-rainfall model. The geographic information system (GIS) data, based on a precise contour model of the terrain, estimate the regions that are perilous to flooding. In response to the project's progress, the HIRDT also applied a deterministic model of unsteady flow to help river authorities issue timely warnings and take other emergency measures.

  1. Fuzzy linear model for production optimization of mining systems with multiple entities

    NASA Astrophysics Data System (ADS)

    Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar

    2011-12-01

    Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by means of fuzzy linear programming (LP) was studied. LP is among the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows LP to be an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while noting that finding the optimal production plan requires an overall analysis encompassing both LP modeling approaches.

  2. Spatio-temporal modelling of rainfall in the Murray-Darling Basin

    NASA Astrophysics Data System (ADS)

    Nowak, Gen; Welsh, A. H.; O'Neill, T. J.; Feng, Lingbing

    2018-02-01

    The Murray-Darling Basin (MDB) is a large geographical region in southeastern Australia that contains many rivers and creeks, including Australia's three longest rivers, the Murray, the Murrumbidgee and the Darling. Understanding rainfall patterns in the MDB is very important due to the significant impact major events such as droughts and floods have on agricultural and resource productivity. We propose a model for modelling a set of monthly rainfall data obtained from stations in the MDB and for producing predictions in both the spatial and temporal dimensions. The model is a hierarchical spatio-temporal model fitted to geographical data that utilises both deterministic and data-derived components. Specifically, rainfall data at a given location are modelled as a linear combination of these deterministic and data-derived components. A key advantage of the model is that it is fitted in a step-by-step fashion, enabling appropriate empirical choices to be made at each step.
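
The step-by-step fit of deterministic plus data-derived components can be sketched on synthetic monthly data: fit a deterministic annual harmonic first, then hand the residuals to the data-derived components. The function and data below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def fit_step_by_step(y, months):
    """First modelling step: least-squares fit of a deterministic annual
    harmonic (mean + sin + cos); the residuals are what the data-derived
    components of a hierarchical model would then absorb."""
    X = np.column_stack([np.ones(len(months)),
                         np.sin(2 * np.pi * months / 12.0),
                         np.cos(2 * np.pi * months / 12.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid

months = np.arange(120)                               # ten years, monthly
y = 5.0 + 2.0 * np.sin(2 * np.pi * months / 12.0)     # noise-free synthetic
beta, resid = fit_step_by_step(y, months)
```

Fitting in stages like this is what makes the empirical choices at each step inspectable, which is the advantage the abstract highlights.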

  3. Individualism in plant populations: using stochastic differential equations to model individual neighbourhood-dependent plant growth.

    PubMed

    Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W

    2008-08-01

    We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
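
A Gompertz-type SDE of this general shape is easy to simulate with Euler-Maruyama. The noise form below (multiplicative in plant weight) is an illustrative assumption; the paper's stochastic term decreases with the logarithm of plant weight and includes neighbourhood-dependent competition, which this sketch omits.

```python
import numpy as np

def gompertz_sde(w0, a, b, sigma, T, n, seed=0):
    """Euler-Maruyama for an illustrative Gompertz-type growth SDE:
        dW = W (a - b ln W) dt + sigma W dB."""
    rng = np.random.default_rng(seed)
    h = T / n
    w = np.empty(n + 1)
    w[0] = w0
    for i in range(n):
        drift = w[i] * (a - b * np.log(w[i]))
        noise = sigma * w[i] * np.sqrt(h) * rng.standard_normal()
        w[i + 1] = max(w[i] + drift * h + noise, 1e-12)  # keep weight positive
    return w

# With sigma = 0 the path is the deterministic Gompertz curve, which
# approaches the equilibrium weight exp(a / b)
w_det = gompertz_sde(w0=0.1, a=1.0, b=0.5, sigma=0.0, T=20.0, n=2000)
```

Simulating many stochastic paths (sigma > 0) from identical initial weights shows how noise alone spreads the final sizes, the first ingredient of the size hierarchies the paper analyses.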

  4. Stochastic Watershed Models for Risk Based Decision Making

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed `stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
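
The SWM recipe (a deterministic watershed model driven by stochastic inputs) can be sketched with a toy linear-reservoir "bucket" model and multiplicatively perturbed precipitation. Everything here is an illustrative stand-in: real SWMs would also perturb parameters and model errors.

```python
import numpy as np

def bucket_model(precip, k=0.1, s0=10.0):
    """Deterministic toy watershed model: a single linear reservoir whose
    outflow each step is a fixed fraction k of current storage."""
    s, q = s0, np.empty(len(precip))
    for i, p in enumerate(precip):
        s += p
        q[i] = k * s
        s -= q[i]
    return q

def swm_ensemble(precip, n=100, noise_sd=0.3, seed=0):
    """Stochastic watershed model sketch: run the deterministic model on
    lognormally perturbed precipitation realisations, yielding an
    ensemble of plausible streamflow traces."""
    rng = np.random.default_rng(seed)
    return np.stack([bucket_model(precip *
                                  rng.lognormal(0.0, noise_sd, len(precip)))
                     for _ in range(n)])

precip = np.full(50, 2.0)          # constant forcing, for illustration
ens = swm_ensemble(precip)
```

The resulting ensemble of traces, rather than a single deterministic hydrograph, is what feeds risk-based decision making.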

  5. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
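
Sensitivity analyses of this kind typically rank parameters by the normalised forward sensitivity index, (dR0/dp)(p/R0). The sketch below computes it by finite differences for a generic SIR-type reproduction number; the formula and parameter values are illustrative assumptions, not the paper's Lassa-specific R0.

```python
import numpy as np

def sensitivity_index(R0, params, name, h=1e-6):
    """Normalised forward sensitivity index (dR0/dp) * (p / R0), estimated
    by a relative finite-difference bump of the named parameter."""
    base = R0(**params)
    bumped = dict(params)
    bumped[name] *= 1.0 + h
    return (R0(**bumped) - base) / (base * h)

# Illustrative SIR-type reproduction number R0 = beta / (gamma + mu)
R0 = lambda beta, gamma, mu: beta / (gamma + mu)
params = {"beta": 0.4, "gamma": 0.1, "mu": 0.02}
s_beta = sensitivity_index(R0, params, "beta")    # analytically +1
s_gamma = sensitivity_index(R0, params, "gamma")  # analytically -gamma/(gamma+mu)
```

Ranking the absolute indices across all parameters is exactly how an analysis like the paper's identifies which control targets matter most.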

  6. A stochastic chemostat model with an inhibitor and noise independent of population sizes

    NASA Astrophysics Data System (ADS)

    Sun, Shulin; Zhang, Xiaolu

    2018-02-01

    In this paper, a stochastic chemostat model with an inhibitor is considered; the inhibitor is input from an external source and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behavior of the stochastic system around the equilibrium points of the deterministic system is studied. However, sufficiently large noise can make the microorganisms become extinct with probability one, even though the solutions of the original deterministic model may be persistent. Finally, the analytical results are illustrated by computer simulations.

  7. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction are assumed, and interfacial spallation of only the thickest segments is postulated. This simplicity allows representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amounts of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
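
The grow-then-spall iteration can be sketched directly; the paper expresses it as a closed deterministic summation series, whereas the toy below just iterates. All parameter values are hypothetical; sc stands for the mass fraction of oxygen in the oxide, so the net specimen weight change is oxygen gained in all oxide formed minus the full mass of spalled oxide.

```python
import numpy as np

def cyclic_oxidation(kp=0.01, f=0.1, sc=0.31, n_cycles=300):
    """Sketch of an interfacial cyclic-oxidation spalling model: 1/f
    equal-area segments grow parabolically each cycle, then only the
    thickest segment spalls to bare metal.  Returns the net specimen
    weight change after each cycle (hypothetical parameter values)."""
    m = int(round(1.0 / f))
    w = np.zeros(m)                   # retained oxide mass/area per segment
    spalled = 0.0                     # cumulative spalled oxide mass/area
    history = np.empty(n_cycles)
    for c in range(n_cycles):
        w = np.sqrt(w ** 2 + kp)      # parabolic growth over one cycle
        j = w.argmax()                # only the thickest segment spalls
        spalled += w[j] * f
        w[j] = 0.0
        retained = w.sum() * f        # area-weighted retained oxide
        history[c] = sc * (retained + spalled) - spalled
    return history

dw = cyclic_oxidation()
```

The trace reproduces the familiar cyclic-oxidation signature: an early net weight gain while scale accumulates, followed by steady net loss once spalling dominates.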

  8. Space, Elastic And Impeding: Two Qualities Of Space Define Energy Which Defines Elementary Particles and Their Interactions

    NASA Astrophysics Data System (ADS)

    Salvatore, Gerard Micheal

    The conceptual foundations for a deterministic quantum mechanics are presented using the Socratic method. The theory is attacked and its weaknesses elucidated; these are compared against those of the conventional theory. Directions for future research are proposed.

  9. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. It uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism with a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses accurately for both models, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.

  10. Kinetics of Thermal Unimolecular Decomposition of Acetic Anhydride: An Integrated Deterministic and Stochastic Model.

    PubMed

    Mai, Tam V-T; Duong, Minh V; Nguyen, Hieu T; Lin, Kuang C; Huynh, Lam K

    2017-04-27

    An integrated deterministic and stochastic model within the master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework was first used to characterize the temperature- and pressure-dependent behavior of the thermal decomposition of acetic anhydride over a wide range of conditions (300-1500 K and 0.001-100 atm). In particular, using the potential energy surface and molecular properties obtained from high-level electronic structure calculations at the CCSD(T)/CBS level, macroscopic thermodynamic properties and rate coefficients of the title reaction were derived with corrections for hindered internal rotation and tunneling. In excellent agreement with the scattered experimental data, the results from the deterministic and stochastic frameworks confirmed and complemented each other, revealing that the main decomposition pathway proceeds via a six-membered-ring transition state with a 0 K barrier of 35.2 kcal·mol^-1. This observation was further understood and confirmed by sensitivity analysis of the time-resolved species profiles and the derived rate coefficients with respect to the ab initio barriers. Such agreement suggests the integrated model can be used confidently over a wide range of conditions as a powerful post facto and predictive tool in detailed chemical kinetic modeling and simulation for the title reaction, and can thus be extended to complex chemical reactions.

  11. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. 
Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.

  12. Chaotic sources of noise in machine acoustics

    NASA Astrophysics Data System (ADS)

    Moon, F. C., Prof.; Broschart, Dipl.-Ing. T.

    1994-05-01

    In this paper a model is posited for deterministic, random-like noise in machines with sliding rigid parts impacting linear continuous machine structures. Such problems occur in gear transmission systems. A mathematical model is proposed to explain the random-like structure-borne and air-borne noise from such systems when the input is a periodic deterministic excitation of the quasi-rigid impacting parts. An experimental study is presented which supports the model. A thin circular plate is impacted by a chaotically vibrating mass excited by a sinusoidal moving base. The results suggest that the plate vibrations might be predicted by replacing the chaotic vibrating mass with a probabilistic forcing function. Prechaotic vibrations of the impacting mass show classical period doubling phenomena.
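    The period-doubling cascade noted for the prechaotic impact vibrations is a generic route to chaos, most easily seen in the logistic map x -> r*x*(1-x). The sketch below uses that map purely as an illustration of the phenomenon; it is not the impacting-mass model of the paper:

```python
def logistic_attractor(r, n_transient=500, n_keep=64, x0=0.5):
    """Iterate x -> r*x*(1-x), discard the transient, and return the
    distinct attractor values; their count is the period of the cycle."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    xs = set()
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        xs.add(round(x, 6))       # coarse rounding merges converged values
    return sorted(xs)

period1 = len(logistic_attractor(2.9))   # stable fixed point: 1 value
period2 = len(logistic_attractor(3.2))   # first doubling: 2-cycle
period4 = len(logistic_attractor(3.5))   # second doubling: 4-cycle
```

    Increasing r further produces cycles of period 8, 16, ... accumulating at r ≈ 3.5699, beyond which deterministic, random-like dynamics set in, the same qualitative sequence reported for the impacting mass.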

  13. Nonlinear Boltzmann equation for the homogeneous isotropic case: Some improvements to deterministic methods and applications to relaxation towards local equilibrium

    NASA Astrophysics Data System (ADS)

    Asinari, P.

    2011-03-01

    The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a great deal of attention due to aerodynamic requirements for high-altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). Interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion-formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] through two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium).

  14. Dispersal-Based Microbial Community Assembly Decreases Biogeochemical Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Stegen, James C.

    Ecological mechanisms influence relationships among microbial communities, which in turn impact biogeochemistry. In particular, microbial communities are assembled by deterministic (e.g., selection) and stochastic (e.g., dispersal) processes, and the relative balance of these two process types is hypothesized to alter the influence of microbial communities over biogeochemical function. We used an ecological simulation model to evaluate this hypothesis, defining biogeochemical function generically to represent any biogeochemical reaction of interest. We assembled receiving communities under different levels of dispersal from a source community that was assembled purely by selection. The dispersal scenarios ranged from no dispersal (i.e., selection-only) to dispersal rates high enough to overwhelm selection (i.e., homogenizing dispersal). We used an aggregate measure of community fitness to infer a given community’s biogeochemical function relative to other communities. We also used ecological null models to further link the relative influence of deterministic assembly to function. We found that increasing rates of dispersal decrease biogeochemical function by increasing the proportion of maladapted taxa in a local community. Niche breadth was also a key determinant of biogeochemical function, suggesting a tradeoff between the function of generalist and specialist species. Finally, we show that microbial assembly processes exert greater influence over biogeochemical function when there is variation in the relative contributions of dispersal and selection among communities. Taken together, our results highlight the influence of spatial processes on biogeochemical function and indicate the need to account for such effects in models that aim to predict biogeochemical function under future environmental scenarios.
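    The central mechanism, dispersal importing taxa adapted to the source environment and thereby lowering aggregate fitness in the receiving community, can be caricatured in a few lines. The trait values, Gaussian fitness kernel, and dispersal rates below are invented for illustration and are not the simulation model used in the study:

```python
import numpy as np

def community_fitness(dispersal, n=1000, seed=1):
    """Local taxa sit near the local trait optimum (0); immigrants carry
    the source optimum (2). Fitness falls off as a Gaussian in the
    distance from the local optimum, so a higher dispersal rate raises
    the share of maladapted immigrants and lowers mean fitness."""
    rng = np.random.default_rng(seed)
    immigrant = rng.random(n) < dispersal
    traits = np.where(immigrant, 2.0, 0.0) + rng.normal(0.0, 0.1, n)
    return np.exp(-traits**2).mean()

f_low = community_fitness(0.05)    # rare dispersal
f_high = community_fitness(0.60)   # homogenizing dispersal
```

    Taking mean fitness as the proxy for biogeochemical function, the homogenizing-dispersal community performs worse than the selection-dominated one, the qualitative result of the study.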

  15. Dispersal-Based Microbial Community Assembly Decreases Biogeochemical Function

    DOE PAGES

    Graham, Emily B.; Stegen, James C.

    2017-11-01

    Ecological mechanisms influence relationships among microbial communities, which in turn impact biogeochemistry. In particular, microbial communities are assembled by deterministic (e.g., selection) and stochastic (e.g., dispersal) processes, and the relative balance of these two process types is hypothesized to alter the influence of microbial communities over biogeochemical function. We used an ecological simulation model to evaluate this hypothesis, defining biogeochemical function generically to represent any biogeochemical reaction of interest. We assembled receiving communities under different levels of dispersal from a source community that was assembled purely by selection. The dispersal scenarios ranged from no dispersal (i.e., selection-only) to dispersal rates high enough to overwhelm selection (i.e., homogenizing dispersal). We used an aggregate measure of community fitness to infer a given community’s biogeochemical function relative to other communities. We also used ecological null models to further link the relative influence of deterministic assembly to function. We found that increasing rates of dispersal decrease biogeochemical function by increasing the proportion of maladapted taxa in a local community. Niche breadth was also a key determinant of biogeochemical function, suggesting a tradeoff between the function of generalist and specialist species. Finally, we show that microbial assembly processes exert greater influence over biogeochemical function when there is variation in the relative contributions of dispersal and selection among communities. Taken together, our results highlight the influence of spatial processes on biogeochemical function and indicate the need to account for such effects in models that aim to predict biogeochemical function under future environmental scenarios.

  16. Statistically Qualified Neuro-Analytic System and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  17. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexity of its geology creates numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, increases in the frequency and intensity of extreme natural events due to global warming or climate change have brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes a hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. 
Finally, the critical rainfall threshold of the slope can be obtained by the coupled analysis of rainfall, infiltration, seepage, and slope stability. Take the slope at 50k+650 on Tainan County Road No. 174 as an example: located in the Zeng-Wun river watershed in southern Taiwan, it is an active landslide triggered by typhoon events. Coordinates for the case study site are 194925, 2567208 (TWD97). The site was selected on the basis of previous reports and geological surveys. According to the Central Weather Bureau, the annual precipitation is about 2,450 mm; the highest monthly value is in August, with 630 mm, and the lowest is in November, with 13 mm. The results show that the critical rainfall threshold of the study case is around 640 mm, meaning that an alarm should be issued when the accumulated rainfall exceeds 640 mm. Our preliminary results appear to be useful for rainfall-induced landslide hazard assessments. The findings are also a good reference for establishing an early warning system for landslides and developing strategies to prevent future disasters.
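    The threshold search itself reduces to root-finding: sweep or bisect accumulated rainfall until the computed factor of safety crosses 1.0. The safety-factor curve below is a made-up linear stand-in for the coupled seepage/stability computation, calibrated only so that the crossing lands near the reported 640 mm:

```python
def critical_rainfall(fs, lo=0.0, hi=2000.0, tol=0.5):
    """Bisection for the accumulated rainfall (mm) at which a
    monotonically decreasing factor-of-safety curve crosses 1.0."""
    assert fs(lo) > 1.0 > fs(hi), "threshold must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fs(mid) > 1.0:
            lo = mid          # still stable: threshold lies above mid
        else:
            hi = mid          # unstable: threshold lies below mid
    return 0.5 * (lo + hi)

# Hypothetical linear FS curve with FS = 1.0 at exactly 640 mm
fs_toy = lambda rain_mm: 1.6 - 0.6 * rain_mm / 640.0
threshold = critical_rainfall(fs_toy)
```

    In practice each evaluation of fs() is one coupled infiltration/seepage/stability run, so a monotone bracketing search like this keeps the number of expensive simulations small.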

  18. Removal Rate Model for Magnetorheological Finishing of Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeGroote, J.E.; Marino, A.E.; Wilson, J.P.

    2007-11-14

    Magnetorheological finishing (MRF) is a deterministic subaperture polishing process. The process uses a magnetorheological (MR) fluid that consists of micrometer-sized, spherical, magnetic carbonyl iron (CI) particles, nonmagnetic polishing abrasives, water, and stabilizers. Material removal occurs when the CI and nonmagnetic polishing abrasives shear material off the surface being polished. We introduce a new MRF material removal rate model for glass. This model contains terms for the near-surface mechanical properties of glass, drag force, polishing abrasive size and concentration, chemical durability of the glass, MR fluid pH, and the glass composition. We introduce quantitative chemical predictors for the first time, to the best of our knowledge, into an MRF removal rate model. We validate individual terms in our model separately and then combine all of the terms to compare the whole MRF material removal model with experimental data. All of our experimental data were obtained using nanodiamond MR fluids and a set of six optical glasses.

  19. Vibration study of a vehicle suspension assembly with the finite element method

    NASA Astrophysics Data System (ADS)

    Cătălin Marinescu, Gabriel; Castravete, Ştefan-Cristian; Dumitru, Nicolae

    2017-10-01

    The main steps of the present work constitute a methodology for analysing various vibration effects on the mechanical suspension parts of a vehicle. A McPherson-type suspension from an existing vehicle was created using CAD software. Using the CAD model as input, a finite element model of the suspension assembly was developed. Abaqus finite element analysis software was used to pre-process, solve, and post-process the results. Geometric nonlinearities are included in the model. Severe sources of nonlinearity, such as friction and contact, are also included. The McPherson spring is modelled as a linear spring. The analysis includes several steps: preload, modal analysis, reduction of the model to 200 generalized coordinates, a deterministic external excitation, and a random excitation representing different types of roads. The vibration data used as input for the simulation were previously obtained by experimental means. The mathematical expressions used for the simulation are also presented in the paper.

  20. DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING

    EPA Science Inventory

    A deterministic model was used to model dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...

  1. Mesoscopic and continuum modelling of angiogenesis

    PubMed Central

    Spill, F.; Guerrero, P.; Alarcon, T.; Maini, P. K.; Byrne, H. M.

    2016-01-01

    Angiogenesis is the formation of new blood vessels from pre-existing ones in response to chemical signals secreted by, for example, a wound or a tumour. In this paper, we propose a mesoscopic lattice-based model of angiogenesis, in which processes that include proliferation and cell movement are considered as stochastic events. By studying the dependence of the model on the lattice spacing and the number of cells involved, we are able to derive the deterministic continuum limit of our equations and compare it to similar existing models of angiogenesis. We further identify conditions under which the use of continuum models is justified, and others for which stochastic or discrete effects dominate. We also compare different stochastic models for the movement of endothelial tip cells which have the same macroscopic, deterministic behaviour, but lead to markedly different behaviour in terms of production of new vessel cells. PMID:24615007
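    A standard way to treat proliferation and movement as stochastic events on such a lattice is Gillespie's stochastic simulation algorithm. The sketch below applies it to a single-site logistic birth-death process, an illustrative stand-in rather than the authors' angiogenesis model, and shows the trajectory settling near the deterministic fixed point n* = K(1 - d/b), the kind of continuum limit the paper derives:

```python
import numpy as np

def gillespie_logistic(n0, K, b=1.0, d=0.5, t_end=10.0, seed=0):
    """Exact stochastic simulation of a birth-death process with
    logistic competition: birth rate b*n, death rate d*n + b*n*n/K.
    For large K, relative fluctuations shrink and the trajectory
    follows the deterministic logistic equation."""
    rng = np.random.default_rng(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        birth = b * n
        death = d * n + b * n * n / K
        t += rng.exponential(1.0 / (birth + death))   # waiting time
        n += 1 if rng.random() * (birth + death) < birth else -1
    return n

n_final = gillespie_logistic(n0=50, K=2000)   # n* = 2000*(1 - 0.5) = 1000
```

    Shrinking K (the analogue of lattice spacing or cell number) makes the relative fluctuations grow, which is the regime where the paper finds that stochastic or discrete effects dominate and the continuum model breaks down.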

  2. Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol

    2015-09-01

    The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.

  3. Coexistence and chaos in complex ecologies [rapid communication]

    NASA Astrophysics Data System (ADS)

    Sprott, J. C.; Vano, J. A.; Wildenberg, J. C.; Anderson, M. B.; Noel, J. K.

    2005-02-01

    Many complex dynamical systems in ecology, economics, neurology, and elsewhere, in which agents compete for limited resources, exhibit apparently chaotic fluctuations. This Letter proposes a purely deterministic mechanism for evolving robustly but weakly chaotic systems that exhibit adaptation, self-organization, sporadic volatility, and punctuated equilibria.
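    A concrete minimal example in this spirit is the chaotic four-species competitive Lotka-Volterra system studied in related work by the same group (Vano et al.); the growth rates and interaction matrix below are quoted from memory of that literature and should be checked against the original before reuse:

```python
import numpy as np

# Growth rates and competition matrix for a reportedly chaotic
# 4-species Lotka-Volterra system (illustrative values).
r = np.array([1.0, 0.72, 1.53, 1.27])
A = np.array([[1.00, 1.09, 1.52, 0.00],
              [0.00, 1.00, 0.44, 1.36],
              [2.33, 0.00, 1.00, 0.47],
              [1.21, 0.51, 0.35, 1.00]])

def deriv(x):
    """dx_i/dt = r_i * x_i * (1 - (A @ x)_i): purely deterministic."""
    return r * x * (1.0 - A @ x)

def integrate(x0, dt=0.01, steps=20000):
    """Classical fourth-order Runge-Kutta integration."""
    x = np.array(x0, dtype=float)
    out = np.empty((steps, 4))
    for i in range(steps):
        k1 = deriv(x)
        k2 = deriv(x + 0.5 * dt * k1)
        k3 = deriv(x + 0.5 * dt * k2)
        k4 = deriv(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = x
    return out

traj = integrate([0.3, 0.3, 0.3, 0.3])
```

    All four species coexist on a bounded attractor with irregular fluctuations despite the model containing no stochastic terms, which is exactly the deterministic mechanism for apparently random dynamics that the Letter discusses.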

  4. Soft network composite materials with deterministic and bio-inspired designs

    PubMed Central

    Jang, Kyung-In; Chung, Ha Uk; Xu, Sheng; Lee, Chi Hwan; Luan, Haiwen; Jeong, Jaewoong; Cheng, Huanyu; Kim, Gwang-Tae; Han, Sang Youn; Lee, Jung Woo; Kim, Jeonghyun; Cho, Moongee; Miao, Fuxing; Yang, Yiyuan; Jung, Han Na; Flavin, Matthew; Liu, Howard; Kong, Gil Woo; Yu, Ki Jun; Rhee, Sang Il; Chung, Jeahoon; Kim, Byunggik; Kwak, Jean Won; Yun, Myoung Hee; Kim, Jin Young; Song, Young Min; Paik, Ungyu; Zhang, Yihui; Huang, Yonggang; Rogers, John A.

    2015-01-01

    Hard and soft structural composites found in biology provide inspiration for the design of advanced synthetic materials. Many examples of bio-inspired hard materials can be found in the literature; far less attention has been devoted to soft systems. Here we introduce deterministic routes to low-modulus thin film materials with stress/strain responses that can be tailored precisely to match the non-linear properties of biological tissues, with application opportunities that range from soft biomedical devices to constructs for tissue engineering. The approach combines a low-modulus matrix with an open, stretchable network as a structural reinforcement that can yield classes of composites with a wide range of desired mechanical responses, including anisotropic, spatially heterogeneous, hierarchical and self-similar designs. Demonstrative application examples in thin, skin-mounted electrophysiological sensors with mechanics precisely matched to the human epidermis and in soft, hydrogel-based vehicles for triggered drug release suggest their broad potential uses in biomedical devices. PMID:25782446

  5. Emergent evolutionism, determinism and unpredictability.

    PubMed

    Sartenaer, Olivier

    2015-06-01

    The fact that there exist in nature thoroughly deterministic systems whose future behavior cannot be predicted, no matter how advanced or fine-tuned our cognitive and technical abilities turn out to be, has been well established over the last few decades, essentially in light of two different theoretical frameworks, namely chaos theory and (some deterministic interpretations of) quantum mechanics. The prime objective of this paper is to show that there actually exists an alternative strategy for grounding the divorce between determinism and predictability, a way that is older than, and conceptually independent from, chaos theory and quantum mechanics, and which has not received much attention in the recent philosophical literature on determinism. This forgotten strategy, embedded in the doctrine called "emergent evolutionism", is nonetheless far from being a mere historical curiosity that should only draw the attention of philosophers out of their concern for comprehensiveness. It has indeed recently been revived in the works of respected scientists. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The divine clockwork: Bohr's correspondence principle and Nelson's stochastic mechanics for the atomic elliptic state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durran, Richard; Neate, Andrew; Truman, Aubrey

    2008-03-15

    We consider the Bohr correspondence limit of the Schrödinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2, which do not occur classically.

  7. Mechanisms that enhance sustainability of p53 pulses.

    PubMed

    Kim, Jae Kyoung; Jackson, Trachette L

    2013-01-01

    The tumor suppressor p53 protein shows various dynamic responses depending on the types and extent of cellular stresses. In particular, in response to DNA damage induced by γ-irradiation, cells generate a series of p53 pulses. Recent research has shown the importance of sustaining repeated p53 pulses for recovery from DNA damage. However, far too little attention has been paid to understanding how cells can sustain p53 pulses given the complexities of genetic heterogeneity and intrinsic noise. Here, we explore potential molecular mechanisms that enhance the sustainability of p53 pulses by developing a new mathematical model of the p53 regulatory system. This model can reproduce many experimental results that describe the dynamics of p53 pulses. By simulating the model both deterministically and stochastically, we found three potential mechanisms that improve the sustainability of p53 pulses: 1) the recently identified positive feedback loop between p53 and Rorα allows cells to sustain p53 pulses with high amplitude over a wide range of conditions, 2) intrinsic noise can often prevent the dampening of p53 pulses even after mutations, and 3) coupling of p53 pulses in neighboring cells via cytochrome-c significantly reduces the chance of failure in sustaining p53 pulses in the presence of heterogeneity among cells. Finally, in light of these results, we propose testable experiments that can reveal important mechanisms underlying p53 dynamics.
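    Mechanism 2, intrinsic noise preventing the dampening of pulses, is generic for damped oscillatory systems. The sketch below uses an abstract damped linear oscillator rather than the paper's p53 network: simulated deterministically the pulses decay away, while the same system with noise keeps pulsing indefinitely:

```python
import numpy as np

def damped_oscillator(noise, dt=0.01, steps=20000, seed=3):
    """Semi-implicit Euler for x'' = -2*zeta*x' - x + noise*xi(t).
    With noise = 0 the oscillation damps out; with noise > 0 the
    fluctuations continually re-excite the resonance."""
    rng = np.random.default_rng(seed)
    zeta = 0.05                              # weak damping
    x, v = 1.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        a = -2.0 * zeta * v - x + noise * rng.normal() / np.sqrt(dt)
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

late_det = damped_oscillator(0.0)[-5000:]    # pulses have died out
late_noisy = damped_oscillator(0.2)[-5000:]  # pulses are sustained
```

    This is the same qualitative effect the authors find stochastically in the full p53 model: the deterministic system damps after mutations, but intrinsic noise can keep the pulsing alive.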

  8. Location of coating defects and assessment of level of cathodic protection on underground pipelines using AC impedance, deterministic and non-deterministic models

    NASA Astrophysics Data System (ADS)

    Castaneda-Lopez, Homero

    A methodology for detecting and locating defects or discontinuities on the outside covering of coated metal underground pipelines subjected to cathodic protection has been addressed. On the basis of wide range AC impedance signals for various frequencies applied to a steel-coated pipeline system and by measuring its corresponding transfer function under several laboratory simulation scenarios, a physical laboratory setup of an underground cathodic-protected, coated pipeline was built. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline covering, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the necessity of further experimental work. However, due to the chaotic nature of the transfer function response of this system under several conditions, it is believed that non-deterministic models based on pattern recognition algorithms are suitable for field condition analysis. A non-deterministic approach was used for experimental analysis by applying an artificial neural network (ANN) algorithm based on classification analysis capable of studying the pipeline system and differentiating the variables that can change impedance conditions. These variables include level of cathodic protection, location of discontinuities (holidays), and severity of corrosion. This work demonstrated a proof-of-concept for a well-known technique and a novel algorithm capable of classifying impedance data for experimental results to predict the exact location of the active holidays and defects on the buried pipelines. 
Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.

  9. Simple deterministic models and applications. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Yang, Hyun Mo

    2015-12-01

    Currently, discrete modelling is widely accepted, owing to access to computers with huge storage capacity and high-performance processors and to the easy implementation of algorithms, allowing increasingly sophisticated models to be developed and simulated. Wang et al. [7] present a review of dynamics on complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. Through an extensive review of human behavior in response to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed population networks, where spatial structure can be neglected, and other networks accounting for heterogeneity in spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due to 'social contagion', quarantine, non-pharmaceutical interventions and vaccination, adaptive behavior can occur in the human population, which can readily be taken into account in dynamics formulated on networked populations.

  10. NasoNet, modeling the spread of nasopharyngeal cancer with networks of probabilistic events in discrete time.

    PubMed

    Galán, S F; Aguado, F; Díez, F J; Mira, J

    2002-07-01

    The spread of cancer is a non-deterministic dynamic process. As a consequence, the design of an assistant system for the diagnosis and prognosis of the extent of a cancer should be based on a representation method that deals with both uncertainty and time. The ultimate goal is to know the stage of development of a cancer in a patient before selecting the appropriate treatment. A network of probabilistic events in discrete time (NPEDT) is a type of Bayesian network for temporal reasoning that models the causal mechanisms associated with the time evolution of a process. This paper describes NasoNet, a system that applies NPEDTs to the diagnosis and prognosis of nasopharyngeal cancer. We have made use of temporal noisy gates to model the dynamic causal interactions that take place in the domain. The methodology we describe is general enough to be applied to any other type of cancer.

  11. Optimizing Chemical Reactions with Deep Reinforcement Learning

    PubMed Central

    2017-01-01

    Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art black-box optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement in regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability. PMID:29296675
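    The benefit of a stochastic over a deterministic policy can be reproduced on a toy bandit problem. The softmax action sampling below stands in for "drawing the reaction conditions from certain probability distributions"; the arms, reward distributions, and temperature are invented for illustration and are unrelated to the paper's chemistry:

```python
import numpy as np

def run_bandit(stochastic, steps=2000, seed=0):
    """Toy 3-armed Gaussian bandit. The deterministic policy acts
    greedily on running reward estimates and can lock onto a suboptimal
    arm; softmax sampling keeps exploring and loses less reward."""
    rng = np.random.default_rng(seed)
    true_means = np.array([0.2, 0.5, 0.8])
    est, counts, regret = np.zeros(3), np.zeros(3), 0.0
    for _ in range(steps):
        if stochastic:
            p = np.exp(est / 0.1)            # softmax, temperature 0.1
            a = rng.choice(3, p=p / p.sum())
        else:
            a = int(np.argmax(est))          # deterministic greedy
        reward = rng.normal(true_means[a], 0.3)
        counts[a] += 1
        est[a] += (reward - est[a]) / counts[a]   # incremental mean
        regret += true_means.max() - true_means[a]
    return regret

greedy_regret = np.mean([run_bandit(False, seed=s) for s in range(10)])
softmax_regret = np.mean([run_bandit(True, seed=s) for s in range(10)])
```

    Averaged over seeds, the stochastic policy accumulates markedly less regret, the same qualitative effect the abstract reports for its exploration strategy.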

  12. Hidden order in crackling noise during peeling of an adhesive tape.

    PubMed

    Kumar, Jagadish; Ciccotti, M; Ananthakrishna, G

    2008-04-01

    We address the longstanding problem of recovering dynamical information from noisy acoustic emission signals arising from the peeling of an adhesive tape subject to a constant traction velocity. Using the phase-space reconstruction procedure, we demonstrate deterministic chaotic dynamics by establishing the existence of a correlation dimension as well as a positive Lyapunov exponent in a mid-range of traction velocities. The results are explained on the basis of a model that also emphasizes the deterministic origin of acoustic emission by clarifying its connection to stick-slip dynamics.
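
    The phase-space reconstruction procedure used here starts from a time-delay (Takens) embedding of the scalar acoustic signal. A minimal NumPy sketch, with a sinusoid standing in for the measured emission series (the embedding dimension and lag are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (Takens) embedding of a scalar series x:
    row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)                      # stand-in for the acoustic emission signal
emb = delay_embed(x, dim=3, tau=5)
print(emb.shape)  # (1990, 3)
```

    Correlation dimension and Lyapunov exponent estimates are then computed from distances between rows of such an embedding.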

  13. On the design of innovative heterogeneous tests using a shape optimization approach

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    The development of full-field measurement methods has enabled a new trend in mechanical testing. By providing the inhomogeneous strain field of a test, these techniques are widely used in sheet-metal identification strategies through heterogeneous mechanical tests. This work aims at a heterogeneous mechanical test with an innovative tool/specimen shape capable of producing rich heterogeneous strain paths that provide extensive information on material behavior. The specimen is found using a shape optimization process in which a dedicated indicator evaluating the richness of the strain information is used. The methodology and results presented here are extended to independence from the specimen geometry and from the geometry parametrization through the use of the Ritz method for boundary value problems. Different curve models, such as Splines, B-Splines and NURBS, are used, and C1 continuity throughout the specimen is guaranteed. Moreover, various optimization methods, both deterministic and stochastic, are used in order to find the method, or a combination of methods, able to effectively minimize the cost function.

  14. Mathematical models of behavior of individual animals.

    PubMed

    Tsibulsky, Vladimir L; Norman, Andrew B

    2007-01-01

    This review is focused on mathematical modeling of the behavior of a whole organism, with special emphasis on models that take a clearly scientific approach and help to understand the mechanisms underlying behavior. The aim is to provide an overview of older and contemporary mathematical models without complex mathematical details. Only deterministic and stochastic models, but not statistical ones, are reviewed. All mathematical models of behavior can be divided into two main classes. First, models based on the principle of teleological determinism assume that subjects choose the behavior that will lead them to a better payoff in the future. Examples are game theories and operant behavior models, both of which are based on the matching law. The second class of models is based on the principle of causal determinism, which assumes that subjects do not choose from a set of possibilities but rather are compelled to perform a predetermined behavior in response to specific stimuli. Examples are perception and discrimination models, drug effects models and individual-based population models. A brief overview of the utility of each mathematical model is provided in each section.
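
    The matching law mentioned above states that relative response rates match relative reinforcement rates. A one-function sketch (the reinforcement rates are made-up numbers):

```python
def matching_share(r1, r2):
    """Herrnstein's matching law: the fraction of responses allocated to
    option 1 equals its share of reinforcement, B1/(B1+B2) = R1/(R1+R2)."""
    return r1 / (r1 + r2)

# An option reinforced three times as often draws three quarters of responses.
print(matching_share(30, 10))  # 0.75
```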

  15. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  16. Observables, measurements and phase operators from a Bohmian perspective

    NASA Technical Reports Server (NTRS)

    Daumer, Martin; Goldstein, Sheldon

    1993-01-01

    Bohmian mechanics is a deterministic theory of point particles in motion. While avoiding all the paradoxes of nonrelativistic quantum mechanics, it yields the quantum formalism itself--especially the role of self-adjoint operators--as a macroscopic measurement formalism. As an 'application' it is shown that much of the confusion connected with the phase operator for the electromagnetic field arises from a misunderstanding of the role of operators in quantum theory.

  17. Continuum models of cohesive stochastic swarms: The effect of motility on aggregation patterns

    NASA Astrophysics Data System (ADS)

    Hughes, Barry D.; Fellner, Klemens

    2013-10-01

    Mathematical models of swarms of moving agents with non-local interactions have many applications and have been the subject of considerable recent interest. For modest numbers of agents, cellular automata or related algorithms can be used to study such systems, but in the present work, instead of considering discrete agents, we discuss a class of one-dimensional continuum models, in which the agents possess a density ρ(x,t) at location x at time t. The agents are subject to a stochastic motility mechanism and to a global cohesive inter-agent force. The motility mechanisms covered include classical diffusion, nonlinear diffusion (which may be used to model, in a phenomenological way, volume exclusion or other short-range local interactions), and a family of linear redistribution operators related to fractional diffusion equations. A variety of exact analytic results are discussed, including equilibrium solutions and criteria for unimodality of equilibrium distributions, full time-dependent solutions, and transitions between asymptotic collapse and asymptotic escape. We address the behaviour of the system for diffusive motility in the low-diffusivity limit for both smooth and singular interaction potentials and show how this elucidates puzzling behaviour in fully deterministic non-local particle interaction models. We conclude with speculative remarks about extensions and applications of the models.

  18. Ten reasons why a thermalized system cannot be described by a many-particle wave function

    NASA Astrophysics Data System (ADS)

    Drossel, Barbara

    2017-05-01

    It is widely believed that the underlying reality behind statistical mechanics is a deterministic and unitary time evolution of a many-particle wave function, even though this is in conflict with the irreversible, stochastic nature of statistical mechanics. The usual attempts to resolve this conflict, for instance by appealing to decoherence or eigenstate thermalization, are riddled with problems. This paper considers the theoretical physics of thermalized systems as it is done in practice and shows that all approaches to thermalized systems presuppose, in some form, limits to linear superposition and deterministic time evolution. These considerations include, among others, the classical limit, extensivity, the concepts of entropy and equilibrium, and symmetry breaking in phase transitions and quantum measurement. As a conclusion, the paper suggests that the irreversibility and stochasticity of statistical mechanics should be taken as a real property of nature. It follows that a gas of a macroscopic number N of atoms in thermal equilibrium is best represented by a collection of N wave packets of a size of the order of the thermal de Broglie wavelength, which behave quantum mechanically below this scale but classically sufficiently far beyond it. In particular, these wave packets must localize again after scattering events, which requires stochasticity and indicates a connection to the measurement process.
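
    The thermal de Broglie wavelength that sets the wave-packet scale in this picture is a one-line computation, lambda = h / sqrt(2 pi m kB T). A sketch for argon at room temperature (the choice of gas and temperature is an illustration, not from the paper):

```python
import math

h = 6.62607015e-34                    # Planck constant, J s
kB = 1.380649e-23                     # Boltzmann constant, J/K
m_ar = 39.948 * 1.66053906660e-27     # argon atomic mass, kg

def thermal_de_broglie(m, T):
    """Thermal de Broglie wavelength h / sqrt(2 pi m kB T), in metres."""
    return h / math.sqrt(2 * math.pi * m * kB * T)

lam = thermal_de_broglie(m_ar, 300.0)
print(lam)  # ~1.6e-11 m, far below typical interatomic distances in a gas
```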

  19. Selective control of multiple ferroelectric switching pathways using a trailing flexoelectric field

    NASA Astrophysics Data System (ADS)

    Park, Sung Min; Wang, Bo; Das, Saikat; Chae, Seung Chul; Chung, Jin-Seok; Yoon, Jong-Gul; Chen, Long-Qing; Yang, Sang Mo; Noh, Tae Won

    2018-05-01

    Flexoelectricity is an electromechanical coupling between electrical polarization and a strain gradient [1] that enables mechanical manipulation of polarization without applying an electrical bias [2,3]. Recently, flexoelectricity was directly demonstrated by mechanically switching the out-of-plane polarization of a uniaxial system with a scanning probe microscope (SPM) tip [3,4]. However, the successful application of flexoelectricity in low-symmetry multiaxial ferroelectrics, and therefore the active manipulation of multiple domains via flexoelectricity, has not yet been achieved. Here, we demonstrate that symmetry-breaking flexoelectricity offers a powerful route for the selective control of multiple domain switching pathways in multiaxial ferroelectric materials. Specifically, we use a trailing flexoelectric field created by the motion of a mechanically loaded SPM tip. By controlling the SPM scan direction, we can deterministically select either stable 71° ferroelastic switching or 180° ferroelectric switching in a multiferroic magnetoelectric BiFeO3 thin film. Phase-field simulations reveal that the amplified in-plane trailing flexoelectric field is essential for this domain engineering. Moreover, we show that mechanically switched domains have good retention properties. This work opens a new avenue for the deterministic selection of nanoscale ferroelectric domains in low-symmetry materials for non-volatile magnetoelectric devices and multilevel data storage.

  20. Finally making sense of the double-slit experiment.

    PubMed

    Aharonov, Yakir; Cohen, Eliahu; Colombo, Fabrizio; Landsberger, Tomer; Sabadini, Irene; Struppa, Daniele C; Tollaksen, Jeff

    2017-06-20

    Feynman stated that the double-slit experiment "…has in it the heart of quantum mechanics. In reality, it contains the only mystery" and that "nobody can give you a deeper explanation of this phenomenon than I have given; that is, a description of it" [Feynman R, Leighton R, Sands M (1965) The Feynman Lectures on Physics]. We rise to the challenge with an alternative to the wave function-centered interpretations: instead of a quantum wave passing through both slits, we have a localized particle with nonlocal interactions with the other slit. Key to this explanation is dynamical nonlocality, which naturally appears in the Heisenberg picture as nonlocal equations of motion. This insight led us to develop an approach to quantum mechanics that relies on pre- and postselection, weak measurements, and deterministic and modular variables. We consider those properties of a single particle that are deterministic to be primal. The Heisenberg picture allows us to specify the most complete enumeration of such deterministic properties, in contrast to the Schrödinger wave function, which remains an ensemble property. We exercise this approach by analyzing a version of the double-slit experiment augmented with postselection, showing that only it, and not the wave function approach, can be accommodated within a time-symmetric interpretation, in which interference appears even when the particle is localized. Although the Heisenberg and Schrödinger pictures are equivalent formulations, the framework presented here has led to insights, intuitions, and experiments that were missed from the old perspective.

  1. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled by ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such systems are known as stochastic hybrid systems or piecewise deterministic Markov processes (PDMPs). In the fast-switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ε⁻¹. That is, we show that for a constant C, the probability of leaving an O(a) neighborhood of the limit cycle before time T scales as T exp(-Ca/ε).
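
    A toy illustration of a piecewise deterministic Markov process (far simpler than Morris-Lecar, and only a sketch): a scalar variable relaxes toward a target set by a two-state Markov switch, and in the fast-switching limit the trajectory hugs the averaged deterministic flow:

```python
import numpy as np

def simulate_pdmp(eps=0.01, t_end=5.0, dt=1e-3, seed=0):
    """Toy PDMP: v drifts toward a target set by the discrete state
    s in {0, 1}, which flips at rate 1/eps (fast switching for small eps)."""
    rng = np.random.default_rng(seed)
    targets = (-1.0, 1.0)
    v, s = 0.0, 0
    n = int(t_end / dt)
    vs = np.empty(n)
    for i in range(n):
        v += dt * (targets[s] - v)     # deterministic flow between jumps
        if rng.random() < dt / eps:    # Markov jump of the discrete state
            s = 1 - s
        vs[i] = v
    return vs

trace = simulate_pdmp()
# in the fast-switching limit, v stays near the averaged target, 0
print(trace[-1000:].mean())
```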

  2. Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria

    NASA Astrophysics Data System (ADS)

    Joo, Jaewook; Eric, Harvill; Albert, Reka

    2007-03-01

    Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we showed in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and of other phage and host parameters, such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become prominently large when the phage are more pathological to the invading bacterial strain.

  3. Analysis of stochastic model for non-linear volcanic dynamics

    NASA Astrophysics Data System (ADS)

    Alexandrov, D.; Bashkirtseva, I.; Ryashko, L.

    2014-12-01

    Motivated by important geophysical applications, we consider a dynamic model of the magma-plug system previously derived by Iverson et al. (2006) under the influence of stochastic forcing. Due to the strong nonlinearity of the friction force for a solid plug along its margins, the initial deterministic system exhibits impulsive oscillations. Two types of dynamic behavior of the system under parametric stochastic forcing have been found: random trajectories are either scattered on both sides of the deterministic cycle or grouped on its internal side only. It is shown that, in the presence of noise, dispersions are highly inhomogeneous along cycles. The effects of noise-induced shifts, pressure stabilization and localization of random trajectories are revealed as the noise intensity increases. The plug velocity, pressure and displacement are also highly dependent on the noise intensity. These new stochastic phenomena are related to the nonlinear peculiarities of the deterministic phase portrait. It is demonstrated that the repetitive stick-slip motions of the magma-plug system under stochastic forcing can be connected with drumbeat earthquakes.

  4. Modelling the interaction between flooding events and economic growth

    NASA Astrophysics Data System (ADS)

    Grames, J.; Prskawetz, A.; Grass, D.; Blöschl, G.

    2015-06-01

    Socio-hydrology describes the interaction between the socio-economy and water. Recent models analyze the interplay of community risk-coping culture, flooding damage and economic growth (Di Baldassarre et al., 2013; Viglione et al., 2014). These models descriptively explain the feedbacks between socio-economic development and natural disasters such as floods. In contrast to these descriptive models, our approach develops an optimization model in which the intertemporal decision of an economic agent interacts with the hydrological system. In order to build this first economic growth model describing the interaction between the consumption and investment decisions of an economic agent and the occurrence of flooding events, we transform an existing descriptive stochastic model into an optimal deterministic model. The intermediate step is to formulate and simulate a descriptive deterministic model. We develop a periodic water function to approximate the former discrete stochastic time series of rainfall events. Due to the non-autonomous, exogenous periodic rainfall function, the long-term path of consumption and investment will be periodic.

  5. Deterministic Approach to the Kinetic Theory of Gases

    NASA Astrophysics Data System (ADS)

    Beck, József

    2010-02-01

    In the so-called Bernoulli model of the kinetic theory of gases, where (1) the particles are dimensionless points, (2) they are contained in a cube container, (3) no attractive or exterior forces act on them, (4) there are no collisions between the particles, and (5) collisions against the walls of the container obey the law of elastic reflection, we deduce from Newtonian mechanics two local probabilistic laws: a Poisson limit law and a central limit theorem. We also prove a global law of large numbers, justifying that "density" and "pressure" are constant. Finally, as a byproduct of our research, we prove the surprising super-uniformity of the typical billiard path in a square.
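
    The elastic reflections in assumption (5) can be simulated with the standard unfolding trick: let the particle fly freely in the plane, then fold its position back into the box by mirror reflections. A minimal sketch (the initial condition and velocity below are arbitrary illustrations):

```python
import numpy as np

def billiard(pos, vel, t, box=1.0):
    """Position after time t of a point particle in the square [0, box]^2
    with elastic (mirror) reflections, via the unfolding trick."""
    p = pos + vel * t                         # free flight in the plane
    p = np.mod(p, 2 * box)                    # unfold onto a period-2L cell
    return np.where(p > box, 2 * box - p, p)  # fold back into the box

pos = np.array([0.2, 0.5])
vel = np.array([1.0, np.sqrt(2.0)])  # irrational direction -> equidistribution
hit = billiard(pos, vel, 10.0)
print(hit)
```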

  6. Acoustic Emission from Breaking a Bamboo Chopstick

    NASA Astrophysics Data System (ADS)

    Tsai, Sun-Ting; Wang, Li-Min; Huang, Panpan; Yang, Zhengning; Chang, Chin-De; Hong, Tzay-Ming

    2016-01-01

    The acoustic emission from breaking a bamboo chopstick or a bundle of spaghetti is found to exhibit behavior similar to the famous seismic laws of Gutenberg and Richter, Omori, and Båth. By the use of a force-sensing detector, we establish a positive correlation between the statistics of sound intensity and the magnitude of a tremor. We also manage to derive these laws analytically without invoking the concept of a phase transition, self-organized criticality, or fractals. Our model is deterministic and relies on the existence of a structured cross section, either fibrous or layered. This success at explaining the power-law behavior supports the proposal that geometry is sometimes more important than mechanics.

  7. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration has been fundamental to all types of hydro-system modeling since its beginnings, as a way to approximate the parameters that mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. The minimization of the objective function by different candidate optimization methods indicates a failure of some gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient, show only partial convergence. Still others, such as Levenberg-Marquardt and LeastSquareRoot, yield parameter solutions outside the physical limits. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach produce the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
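
    The deterministic-vs-stochastic comparison can be prototyped with off-the-shelf SciPy routines. A sketch on a standard ill-conditioned test function (a Rosenbrock valley standing in for the calibration misfit; this does not reproduce the hydro-morphological model itself):

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(p):
    """Rosenbrock valley: narrow, curved, ill-conditioned, minimum at (1, 1)."""
    return 100.0 * (p[1] - p[0] ** 2) ** 2 + (1.0 - p[0]) ** 2

x0 = np.array([-1.2, 1.0])

# Deterministic, gradient-based local search
det = minimize(objective, x0, method="L-BFGS-B")

# Stochastic, population-based global search
sto = differential_evolution(objective, bounds=[(-2, 2), (-2, 2)], seed=1)

print(det.x, sto.x)  # both approach the optimum (1, 1)
```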

  8. MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.

    PubMed

    Lok, Judith J

    2017-04-01

    In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.

  9. A New View of Radiation-Induced Cancer: Integrating Short-and Long-Term Processes. Part I: Approach

    NASA Technical Reports Server (NTRS)

    Shuryak, Igor; Hahnfeldt, Philip; Hlatky, Lynn; Sachs, Rainer K.; Brenner, David J.

    2009-01-01

    Mathematical models of radiation carcinogenesis are important for understanding mechanisms and for interpreting or extrapolating risk. There are two classes of such models: (1) long-term formalisms that track premalignant cell numbers throughout an entire lifetime but treat initial radiation dose-response simplistically and (2) short-term formalisms that provide a detailed initial dose-response even for complicated radiation protocols, but address its modulation during the subsequent cancer latency period only indirectly. We argue that integrating short- and long-term models is needed. As an example of this novel approach, we integrate a stochastic short-term initiation/inactivation/repopulation model with a deterministic two-stage long-term model. Within this new formalism, the following assumptions are implemented: radiation initiates, promotes, or kills pre-malignant cells; a pre-malignant cell generates a clone, which, if it survives, quickly reaches a size limitation; the clone subsequently grows more slowly and can eventually generate a malignant cell; the carcinogenic potential of pre-malignant cells decreases with age.

  10. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    According to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulations under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
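
    The global-then-local training strategy can be caricatured on a one-parameter problem: a crude random population search (standing in for the improved GA) picks a starting point, which plain gradient descent then refines (the multimodal loss is a made-up stand-in, not an SRM or WNN model):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    """Made-up multimodal loss standing in for the WNN training error."""
    return np.sin(3.0 * w) + 0.1 * (w - 2.0) ** 2

# Stage 1: global stochastic search over the parameter range
pop = rng.uniform(-5.0, 5.0, 200)
w = pop[np.argmin(loss(pop))]          # best individual seeds stage 2

# Stage 2: deterministic local refinement (finite-difference gradient descent)
for _ in range(500):
    grad = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6
    w -= 0.01 * grad

print(w, loss(w))  # near a deep minimum of the loss
```

    The stochastic stage avoids shallow local minima that would trap a purely gradient-based start; the deterministic stage converges quickly once in the right basin.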

  11. Speciation in birds: genes, geography, and sexual selection.

    PubMed

    Edwards, Scott V; Kingan, Sarah B; Calkins, Jennifer D; Balakrishnan, Christopher N; Jennings, W Bryan; Swanson, Willie J; Sorenson, Michael D

    2005-05-03

    Molecular studies of speciation in birds over the last three decades have been dominated by a focus on the geography, ecology, and timing of speciation, a tradition traceable to Mayr's Systematics and the Origin of Species. However, in recent years, interest in the behavioral and molecular mechanisms of speciation in birds has increased, building in part on the older traditions and on observations from domesticated species. The result is that many of the same mechanisms proffered for model lineages such as Drosophila--mechanisms such as genetic incompatibilities, reinforcement, and sexual selection--are now being seriously entertained for birds, albeit with much lower resolution. The recent completion of a draft sequence of the chicken genome, and an abundance of single-nucleotide polymorphisms on the autosomes and sex chromosomes, will dramatically accelerate research on the molecular mechanisms of avian speciation over the next few years. The challenge for ornithologists is now to inform well-studied examples of speciation in nature with increased molecular resolution--to clone speciation genes, if they exist--and thereby evaluate the relative roles of extrinsic, intrinsic, deterministic, and stochastic causes of avian diversification.

  12. Amplification of intrinsic fluctuations by the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Fox, Ronald F.; Elston, T. C.

    1993-07-01

    Macroscopic systems (e.g., hydrodynamics, chemical reactions, electrical circuits, etc.) manifest intrinsic fluctuations of molecular and thermal origin. When the macroscopic dynamics is deterministically chaotic, the intrinsic fluctuations may become amplified by several orders of magnitude. Numerical studies of this phenomenon are presented in detail for the Lorenz model. Amplification to macroscopic scales is exhibited, and quantitative methods (binning and a difference-norm) are presented for measuring macroscopically subliminal amplification effects. In order to test the quality of the numerical results, noise-induced chaos is studied around a deterministically nonchaotic state, where the scaling law relating the Lyapunov exponent to noise strength, obtained for maps, is confirmed for the Lorenz model, a system of ordinary differential equations.
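
    A minimal numerical illustration of the amplification effect (a sketch, not the paper's method): integrate the Lorenz equations with an Euler-Maruyama scheme, once without and once with weak additive noise, and watch the gap between the trajectories grow to macroscopic size (the classic parameter values are used; the noise strength is an arbitrary small number):

```python
import numpy as np

def lorenz_em(x0, noise=0.0, dt=1e-3, steps=20000, seed=0,
              sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-Maruyama integration of the Lorenz equations with
    additive noise of strength `noise` on each component."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, 3))
    for i in range(steps):
        drift = np.array([sigma * (x[1] - x[0]),
                          x[0] * (rho - x[2]) - x[1],
                          x[0] * x[1] - beta * x[2]])
        x = x + dt * drift + noise * np.sqrt(dt) * rng.standard_normal(3)
        traj[i] = x
    return traj

clean = lorenz_em([1.0, 1.0, 1.0])
noisy = lorenz_em([1.0, 1.0, 1.0], noise=1e-4)
gap = np.abs(clean - noisy).max(axis=1)
# tiny intrinsic noise is exponentially amplified by the chaotic dynamics
print(gap[0], gap[-1000:].max())
```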

  13. Uniqueness of Nash equilibrium in vaccination games.

    PubMed

    Bai, Fan

    2016-12-01

    One crucial condition for the uniqueness of the Nash equilibrium set in vaccination games is that the attack ratio monotonically decreases as the vaccine coverage level increases. We consider several deterministic vaccination models in homogeneously mixing and in heterogeneously mixing populations. Based on the final size relations obtained from the deterministic epidemic models, we prove that the attack ratios can be expressed in terms of the vaccine coverage levels, and also that the attack ratios are decreasing functions of the vaccine coverage levels. Some thresholds are presented, which depend on the vaccine efficacy. It is proved that for vaccination games in a homogeneously mixing population, there is a unique Nash equilibrium for each game.
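
    The monotonicity result can be illustrated with the simplest homogeneous final-size relation and all-or-nothing vaccination (a sketch only; the R0 and efficacy values are arbitrary illustrations, not taken from the paper):

```python
import math

def attack_ratio(r0, coverage, efficacy=0.8):
    """Attack ratio in the whole population from the classical final-size
    relation z = s0 * (1 - exp(-r0 * z)), where the initially susceptible
    fraction is s0 = 1 - coverage * efficacy (all-or-nothing vaccine)."""
    s0 = 1.0 - coverage * efficacy
    z = s0                                  # fixed-point iteration start
    for _ in range(200):
        z = s0 * (1.0 - math.exp(-r0 * z))
    return z

ratios = [attack_ratio(2.5, c) for c in (0.0, 0.2, 0.4, 0.6)]
print(ratios)  # decreases monotonically with coverage
```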

  14. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative, and not necessarily deterministic, abstractions of some of the components, and refines those abstractions using counterexamples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than a previous learning-based implementation.

  15. An in-situ stimulation experiment in crystalline rock - assessment of induced seismicity levels during stimulation and related hazard for nearby infrastructure

    NASA Astrophysics Data System (ADS)

    Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico

    2016-04-01

    A decameter-scale in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology that presents some analogies with the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments, spanning from hydraulic fracturing to controlled fault-slip experiments, in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders that levels of induced seismicity should be low, given the small total injection volumes of less than 1 m3, a detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for estimating induced seismic hazard in an experimental situation that is atypical for injection-induced seismicity in terms of injection volumes, injection depths and proximity to affected objects. Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced earthquake magnitudes. The deterministic methods are based on McGarr's upper limit for the maximum induced seismic moment. The probabilistic methods rely on estimates of Shapiro's seismogenic index and on seismicity rates from past stimulation experiments, scaled to the injection volumes of interest. Using rate-and-state frictional modelling coupled to a hydro-mechanical fracture flow model, we demonstrate that large uncontrolled rupture events are unlikely to occur and that the deterministic upper limits may be sufficiently conservative. The proposed workflow can be applied to similar injection experiments for which the hazard to nearby infrastructure may limit the experimental design.

  16. INTEGRATED PLANNING MODEL - EPA APPLICATIONS

    EPA Science Inventory

    The Integrated Planning Model (IPM) is a multi-regional, dynamic, deterministic linear programming (LP) model of the electric power sector in the continental lower 48 states and the District of Columbia. It provides forecasts up to year 2050 of least-cost capacity expansion, elec...

  17. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  18. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, accounting for these sources of uncertainty using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though the FCM group template maps found are almost topographically identical to those from KM, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined compared to real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty.
Probabilistic neural network-driven microstate assignment has a number of advantages, discussed above, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis are likely to better model and reveal the details and variability hidden in current deterministic and binarized microstate assignments and analyses.
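
    As a rough illustration of the clustering contrast described above, here is a minimal Fuzzy C-means implementation next to its hard, K-means-style labeling (the argmax of the soft memberships). The synthetic two-blob data and all parameter choices are assumptions for demonstration, not the paper's EEG pipeline:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal FCM: returns soft memberships (n_samples x n_clusters) and
    cluster centers. m > 1 controls fuzziness; m -> 1 approaches hard K-means."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    centers = None
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers

# Two well-separated synthetic clusters (stand-ins for microstate topographies)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (30, 2)), rng.normal(5.0, 0.2, (30, 2))])
u, centers = fuzzy_c_means(X, 2)
labels = u.argmax(axis=1)   # deterministic (hard) assignment discards the uncertainty in u
```

    The soft membership matrix `u` is exactly what a deterministic KM assignment throws away: two samples can share the same hard label while one is near-certain and the other is ambiguous.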

  19. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, accounting for these sources of uncertainty using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though the FCM group template maps found are almost topographically identical to those from KM, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined compared to real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty.
Probabilistic neural network-driven microstate assignment has a number of advantages, discussed above, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis are likely to better model and reveal the details and variability hidden in current deterministic and binarized microstate assignments and analyses. PMID:29163110

  20. Patterns of taxonomic, phylogenetic diversity during a long-term succession of forest on the Loess Plateau, China: insights into assembly process

    PubMed Central

    Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang

    2016-01-01

    Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China, and taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage marked a transition from phylogenetic overdispersion to clustering for both the NRI (Net Relatedness Index) and betaNRI. The betaNTI (beta Nearest Taxon Index) values for early stages were on average phylogenetically random, but in the betaNRI analyses these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes each play a role in different aspects of community phylogenetic structure in the early succession stages, and that community composition in the late succession stage is governed by a deterministic process. In conclusion, evolutionary history has left long-lasting imprints on the present-day composition of communities arrayed along the succession gradient. PMID:27272407
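
    The null-model logic described in this abstract — comparing an observed community's mean pairwise phylogenetic distance (MPD) against randomized draws from the species pool — can be sketched as follows. The toy two-clade distance matrix is a hypothetical example, not the Loess Plateau data:

```python
import random
import statistics

def mean_pairwise_distance(community, dist):
    """MPD: average phylogenetic distance over all unordered species pairs."""
    pairs = [(a, b) for i, a in enumerate(community) for b in community[i + 1:]]
    return sum(dist[a][b] for a, b in pairs) / len(pairs)

def net_relatedness_index(community, species_pool, dist, n_null=999, seed=42):
    """NRI = -(MPD_obs - mean(MPD_null)) / sd(MPD_null); positive values
    indicate phylogenetic clustering, negative values overdispersion."""
    rng = random.Random(seed)
    mpd_obs = mean_pairwise_distance(community, dist)
    nulls = [mean_pairwise_distance(rng.sample(species_pool, len(community)), dist)
             for _ in range(n_null)]
    return -(mpd_obs - statistics.mean(nulls)) / statistics.pstdev(nulls)

# Hypothetical pool of 8 species in two clades: distance 1 within a clade, 10 between
species = list(range(8))
dist = {a: {b: (1 if (a < 4) == (b < 4) else 10) for b in species} for a in species}

nri_clustered = net_relatedness_index([0, 1, 2, 3], species, dist)   # one clade only
nri_dispersed = net_relatedness_index([0, 1, 4, 5], species, dist)   # spans both clades
```

    A community drawn from a single clade yields a strongly positive NRI (clustering, consistent with deterministic habitat filtering), while a community spanning both clades yields a negative NRI (overdispersion).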

  1. Patterns of taxonomic, phylogenetic diversity during a long-term succession of forest on the Loess Plateau, China: insights into assembly process.

    PubMed

    Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang

    2016-06-08

    Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China, and taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage marked a transition from phylogenetic overdispersion to clustering for both the NRI (Net Relatedness Index) and betaNRI. The betaNTI (beta Nearest Taxon Index) values for early stages were on average phylogenetically random, but in the betaNRI analyses these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes each play a role in different aspects of community phylogenetic structure in the early succession stages, and that community composition in the late succession stage is governed by a deterministic process. In conclusion, evolutionary history has left long-lasting imprints on the present-day composition of communities arrayed along the succession gradient.

  2. The forecasting research of early warning systems for atmospheric pollutants: A case in Yangtze River Delta region

    NASA Astrophysics Data System (ADS)

    Song, Yiliao; Qin, Shanshan; Qu, Jiansheng; Liu, Feng

    2015-10-01

    The issue of air quality regarding PM pollution levels in China is a focus of public attention. To address this issue, a series of studies is in progress, including PM monitoring programs, PM source apportionment, and the enactment of new ambient air quality index standards. However, related research on computer modeling for estimating future PM trends is rare, despite its significance for forecasting and early warning systems. Accordingly, a study of deterministic and interval forecasts of PM is performed. In this study, data on hourly and 12 h-averaged air pollutants are applied to forecast PM concentrations within the Yangtze River Delta (YRD) region of China. The characteristics of PM emissions are first examined and analyzed using different distribution functions. To improve the distribution fitting that is crucial for estimating PM levels, an artificial intelligence algorithm is incorporated to select the optimal parameters. Following that step, an ANF model is used to conduct deterministic forecasts of PM. With the identified distributions and deterministic forecasts, different levels of PM intervals are estimated. The results indicate that the lognormal or gamma distributions are highly representative of the recorded PM data, with a goodness-of-fit R2 of approximately 0.998. Furthermore, the results of the evaluation metrics (MSE, MAPE and CP, AW) also show high accuracy within the deterministic and interval forecasts of PM, indicating that this method enables the informative and effective quantification of future PM trends.
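
    The distribution-fitting step behind such interval forecasts can be sketched minimally, assuming a lognormal fit by the moments of the log-concentrations (the paper's actual parameter selection uses an artificial intelligence algorithm). The synthetic concentrations below are illustrative, not YRD measurements:

```python
import math
import random
import statistics

def lognormal_interval(samples, z=1.645):
    """Fit a lognormal by the mean and std of log-concentrations and return
    the central ~90% interval (z = 1.645 for the 5th/95th percentiles)."""
    logs = [math.log(x) for x in samples]
    mu, sigma = statistics.fmean(logs), statistics.pstdev(logs)
    return math.exp(mu - z * sigma), math.exp(mu + z * sigma)

# Synthetic hourly PM concentrations drawn from a lognormal distribution
rng = random.Random(7)
pm = [rng.lognormvariate(4.0, 0.5) for _ in range(5000)]

lo, hi = lognormal_interval(pm)
coverage = sum(lo <= x <= hi for x in pm) / len(pm)   # should be near 0.90
```

    Pairing such an interval with a separate deterministic point forecast gives exactly the two outputs (deterministic and interval forecasts) the abstract describes.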

  3. A physically based model of global freshwater surface temperature

    NASA Astrophysics Data System (ADS)

    Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism, and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range from physically based deterministic models to statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and by sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis, with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well.
Results are poorest for the Arctic rivers because the timing of ice breakup is predicted too late in the year, as a mechanical ice-breakup mechanism is not included. Moreover, surface water temperatures for tropical rivers were overestimated, most likely due to an overestimation of rainfall temperature and incoming shortwave radiation. The spatiotemporal variation of water temperature reveals large temperature differences between water and atmosphere for the higher latitudes, while considerable lateral transport of heat can be observed for rivers crossing hydroclimatic zones, such as the Nile, the Mississippi, and the large rivers flowing to the Arctic. Overall, our model results show promise for future projection of global surface freshwater temperature under global change.
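
    The surface energy balance at the core of such a model can be sketched for a single well-mixed water column; the flux values and the one-cell formulation below are illustrative assumptions and ignore advection and ice, both of which the full PCR-GLOBWB-based model includes:

```python
RHO_W = 1000.0   # water density, kg m^-3
CP_W = 4186.0    # specific heat of water, J kg^-1 K^-1

def step_water_temperature(t_water, depth_m, sw_net, lw_net, sensible, latent, dt=3600.0):
    """Advance a well-mixed water column by one time step dt (s) using a
    simple surface energy balance; all fluxes in W m^-2, positive into the water."""
    net_flux = sw_net + lw_net + sensible + latent
    return t_water + net_flux * dt / (RHO_W * CP_W * depth_m)

# A 2 m deep reach gaining 150 W m^-2 net over one hour warms by ~0.065 K
t_next = step_water_temperature(15.0, 2.0, sw_net=250.0, lw_net=-80.0,
                                sensible=10.0, latent=-30.0)
```

    The large heat capacity in the denominator is what makes deep rivers lag the atmosphere, producing the water-atmosphere temperature differences the abstract notes at high latitudes.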

  4. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USDA-ARS?s Scientific Manuscript database

    The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...

  5. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy.
To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. 
A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. 
In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
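
    The lecture's toy-model argument — deterministic forecasts win at short horizons, while the naïve climatological mean wins at long horizons — can be reproduced with any chaotic map. The logistic map below is a stand-in assumption, not the lecture's actual vegetation-water balance model:

```python
import random
import statistics

def logistic(x, n):
    """Iterate the fully chaotic logistic map x -> 4x(1-x) for n steps."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

def forecast_mse(horizon, n_trials=2000, ic_error=1e-6, seed=3):
    """MSE of (a) a deterministic forecast started from a slightly wrong
    initial condition vs (b) the climatological mean 0.5, at a given horizon."""
    rng = random.Random(seed)
    det_err, clim_err = [], []
    for _ in range(n_trials):
        x0 = rng.uniform(0.05, 0.95)
        truth = logistic(x0, horizon)
        det_err.append((logistic(x0 + ic_error, horizon) - truth) ** 2)
        clim_err.append((0.5 - truth) ** 2)
    return statistics.fmean(det_err), statistics.fmean(clim_err)

det_short, clim_short = forecast_mse(horizon=3)    # dynamics dominates
det_long, clim_long = forecast_mse(horizon=50)     # tiny IC error has exploded
```

    At short horizons the deterministic error is negligible; at long horizons it saturates near twice the climatological variance, so the statistics-only forecast becomes more skilful, exactly the coexistence of predictability and unpredictability the lecture describes.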

  6. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored: externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at set time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models, such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, the active sites representation, and the deterministic model. Parameter sets derived from freezing data of single component mineral dust samples are evaluated for prediction of cooling rate dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
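
    The stochastic versus deterministic (singular) descriptions contrasted above have standard frozen-fraction forms that make the difference concrete; the rate coefficient and active-site density values below are illustrative assumptions, not fitted dust parameters:

```python
import math

def frozen_fraction_stochastic(j_het, area_cm2, t_s):
    """Time-dependent (stochastic) description: freezing is a Poisson process
    governed by the heterogeneous nucleation rate coefficient J_het
    (cm^-2 s^-1): f(t) = 1 - exp(-J_het * A * t)."""
    return 1.0 - math.exp(-j_het * area_cm2 * t_s)

def frozen_fraction_deterministic(n_s, area_cm2):
    """Singular (deterministic) description: the frozen fraction depends only
    on the temperature-dependent active-site density n_s(T) (cm^-2), not on
    time: f(T) = 1 - exp(-n_s(T) * A)."""
    return 1.0 - math.exp(-n_s * area_cm2)

# Isothermal hold: the stochastic fraction keeps growing, the singular one does not
f_10s = frozen_fraction_stochastic(j_het=1e-3, area_cm2=1.0, t_s=10.0)
f_100s = frozen_fraction_stochastic(j_het=1e-3, area_cm2=1.0, t_s=100.0)
f_singular = frozen_fraction_deterministic(n_s=0.05, area_cm2=1.0)
```

    This is why isothermal experiments discriminate between the two model families: any time dependence of the frozen fraction at constant temperature is evidence for the stochastic component.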

  7. Transient deterministic shallow landslide modeling: Requirements for susceptibility and hazard assessments in a GIS framework

    USGS Publications Warehouse

    Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.

    2008-01-01

    Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
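
    The per-cell infinite-slope calculation referenced above has a compact closed form; the strength and slope parameters below are generic illustrative values, not the measured Seattle-area properties:

```python
import math

def factor_of_safety(slope_deg, depth_m, pressure_head_m,
                     cohesion_kpa=4.0, friction_deg=33.0,
                     unit_weight=19.0, unit_weight_water=9.81):
    """Infinite-slope factor of safety with pore pressure u at the slip depth:
    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / (gamma*z*sin(beta)*cos(beta)).
    Units: kPa for stresses, kN m^-3 for unit weights."""
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    u = unit_weight_water * pressure_head_m                       # kPa
    normal = unit_weight * depth_m * math.cos(beta) ** 2
    shear = unit_weight * depth_m * math.sin(beta) * math.cos(beta)
    return (cohesion_kpa + (normal - u) * math.tan(phi)) / shear

# A transient rise in pressure head during a storm drives FS below 1
fs_dry = factor_of_safety(35.0, 1.5, pressure_head_m=0.0)
fs_wet = factor_of_safety(35.0, 1.5, pressure_head_m=1.0)
```

    In the transient scheme, the analytic infiltration solution supplies `pressure_head_m` as a function of rainfall intensity, duration, and depth for every grid cell and time step; the static analysis instead fixes it from an assumed water-table height.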

  8. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; thus, these two are observed as stochastic quantities. In contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. At the points of t and ΔT, a large number of crystals have accumulated; consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanism at the molecular level; it is rather a consequence of differences in the experimental definition of t and ΔT.
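
    The abstract's central contrast can be reproduced with a minimal Monte Carlo sketch, assuming a constant volumetric nucleation rate J: in a small sample the first-crystal time is exponentially distributed (relative scatter of order one), while the time for N/V to reach a detector threshold in a large sample is the waiting time for the k-th event with k large, which is nearly deterministic:

```python
import random
import statistics

def induction_times_small(j_rate, volume, n_samples, rng):
    """Small samples: waiting time to the FIRST nucleus is exponential with
    rate J*V, so the measured induction time scatters widely (CV ~ 1)."""
    return [rng.expovariate(j_rate * volume) for _ in range(n_samples)]

def detection_times_large(j_rate, volume, density_threshold, n_samples, rng):
    """Large samples: detection requires N/V to reach (N/V)det, i.e. the k-th
    nucleation event with k = threshold*V; a Gamma(k)-distributed waiting time
    whose relative scatter shrinks as 1/sqrt(k) (law of large numbers)."""
    k = round(density_threshold * volume)
    rate = j_rate * volume
    return [sum(rng.expovariate(rate) for _ in range(k)) for _ in range(n_samples)]

rng = random.Random(0)
cv = lambda xs: statistics.pstdev(xs) / statistics.fmean(xs)  # coefficient of variation
cv_small = cv(induction_times_small(j_rate=1.0, volume=1.0, n_samples=400, rng=rng))
cv_large = cv(detection_times_large(1.0, 1000.0, density_threshold=1.0,
                                    n_samples=50, rng=rng))
```

    The nucleation statistics are identical in both functions; only the detection definition changes, which is exactly the paper's point.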

  9. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments were carried out on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.

  10. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, its robust counterpart is presented using recent extensions in robust optimization theory. The decision variables are modeled, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates a worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model results in remarkable cost savings, illustrating that such uncertainties should be accounted for in advance in planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
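
    The deterministic-versus-robust contrast described above can be sketched with a toy scenario table; the suppliers, scenarios, and costs below are hypothetical stand-ins for the paper's mixed integer program, not its case-study data:

```python
# Hypothetical total costs per supplier under each scenario, reflecting
# demand and exchange-rate uncertainty.
costs = {
    "supplier_A": {"nominal": 100, "weak_currency": 180, "high_demand": 160},
    "supplier_B": {"nominal": 115, "weak_currency": 125, "high_demand": 130},
}

# Deterministic model: optimize the single nominal scenario only.
deterministic_choice = min(costs, key=lambda s: costs[s]["nominal"])

# Robust (worst-case) model: minimize the maximum cost over all scenarios.
robust_choice = min(costs, key=lambda s: max(costs[s].values()))

worst_case = lambda s: max(costs[s].values())
```

    The deterministic model picks the cheap nominal supplier and is exposed to a large worst-case cost; the robust model accepts a slightly higher nominal cost in exchange for a much better guarantee, which mirrors why the two approaches select different suppliers in the paper's case study.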

  11. Robust Planning for Effects-Based Operations

    DTIC Science & Technology

    2006-06-01

    (Description consists of table-of-contents fragments: 2.6 Robust Optimization Literature; 2.6.1 Protecting Against...; Deterministic EBO Model Formulation; 3.1.5 Deterministic EBO Model Example and Performance; 3.1.6 Greedy Algorithm; 4.1.9 Conclusions on Robust EBO Model Performance; 4.2 Greedy Algorithm versus EBO Models.)

  12. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...

  13. FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...

  14. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  15. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using, as the operator, the source wavelet that was generated in and transmitted through air. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) the accuracy of velocities calculated from common-midpoint data, and (4) the identification of any multiples. Comparing the deconvolved data with the non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media: increased accuracy of velocity models (improved by at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features, as confirmed by detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
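
    A minimal sketch of what deterministic deconvolution does: divide the trace spectrum by the spectrum of a known source wavelet, with a "water level" to stabilise frequencies where the wavelet has little energy. The wavelet and reflectivity here are synthetic stand-ins for the paper's measured air-wave operator, and the 1% water level is an assumed regularization parameter:

```python
import numpy as np

n = 256
t = np.arange(n)
# Toy source wavelet (Gaussian-windowed cosine, delayed by 20 samples)
wavelet = np.exp(-0.5 * ((t - 20) / 3.0) ** 2) * np.cos(0.8 * (t - 20))
# Toy reflectivity: one strong reflector, plus two closely spaced ones
reflectivity = np.zeros(n)
reflectivity[[60, 120, 125]] = [1.0, 0.6, -0.5]
trace = np.convolve(reflectivity, wavelet)[:n]        # "recorded" data

W = np.fft.rfft(wavelet)
D = np.fft.rfft(trace)
water = 0.01 * np.max(np.abs(W))                      # water-level parameter
Wsafe = np.where(np.abs(W) < water, water, np.abs(W)) * np.exp(1j * np.angle(W))
estimate = np.fft.irfft(D / Wsafe, n)                 # deconvolved section

# Deconvolution removes the wavelet delay and compresses each reflection,
# so the closely spaced reflectors at samples 120 and 125 separate cleanly.
print(int(np.argmax(estimate)))                       # strongest reflector, sample 60
```

    The water level caps the spectral division wherever the wavelet spectrum is weak, trading a little resolution for stability, which is the usual practical compromise in deterministic deconvolution.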

  16. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, multinomial resampling and systematic resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations; therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS data are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with systematic resampling successfully decreases the model estimation error by 23%.
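
    The distinction between a stochastic EnKF (which perturbs the observation for each member) and a deterministic square-root EnKF (which does not) can be sketched on a scalar toy state. This uses the scalar square-root factor of Whitaker and Hamill; the SQRA and EnSRF variants in the study differ in implementation detail, and all numbers here are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
Ne = 500                                  # ensemble size
truth = 2.0
R = 0.5 ** 2                              # observation error variance
ens = rng.normal(0.0, 1.0, Ne)            # prior ensemble, mean 0, var ~1
y = truth + rng.normal(0.0, np.sqrt(R))   # one noisy observation

P = np.var(ens, ddof=1)
K = P / (P + R)                           # Kalman gain (H = identity, scalar)

# Stochastic EnKF: each member assimilates a perturbed copy of y.
y_pert = y + rng.normal(0.0, np.sqrt(R), Ne)
ens_stoch = ens + K * (y_pert - ens)

# Deterministic (square-root) EnKF: update mean and deviations separately,
# so the observation is never perturbed.
mean = ens.mean()
mean_a = mean + K * (y - mean)
alpha = 1.0 / (1.0 + np.sqrt(R / (P + R)))   # scalar square-root factor
ens_det = mean_a + (1 - alpha * K) * (ens - mean)

# Both analyses shrink the spread toward the theoretical value (1-K)*P;
# the square-root update matches it exactly, without sampling noise.
print(round(np.var(ens_det, ddof=1), 2), round((1 - K) * P, 2))
```

    The deterministic update reproduces the Kalman analysis variance exactly by construction, which is precisely why the study tests it as a way to avoid perturbing the GRACE observations.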

  18. Relation Between the Cell Volume and the Cell Cycle Dynamics in Mammalian cell

    NASA Astrophysics Data System (ADS)

    Magno, A. C. G.; Oliveira, I. L.; Hauck, J. V. S.

    2016-08-01

    The main goal of this work is to add and analyze an equation representing cell volume in the dynamical model of the mammalian cell cycle proposed by Gérard and Goldbeter (2011) [1]. Cell division occurs when the cyclin B/Cdk1 complex is totally degraded (Tyson and Novak, 2011) [2] and reaches a minimum value. At this point, the cell divides into two newborn daughter cells, each containing half of the cytoplasmic content of the mother cell. The equations of our base model are valid only if the cell volume in which the reactions occur is constant. If the cell volume is not constant, that is, if the rate of change of the volume with respect to time is explicitly taken into account in the mathematical model, then the equations of the original model are no longer valid. Therefore, all equations were modified, starting from the mass conservation principle, to consider a volume that changes with time. Through this approach, the cell volume affects all model variables. Two different dynamic simulation methods were carried out: deterministic and stochastic. In the stochastic simulation, the volume affects all model parameters with molar units, whereas in the deterministic one it is incorporated into the differential equations. In the deterministic simulation, the biochemical species may be expressed in concentration units, while in the stochastic simulation these species must be converted to numbers of molecules, which are directly proportional to the cell volume. To understand the influence of the new equation, a stability analysis was performed, elucidating how the growth factor impacts the stability of the model's limit cycles. In conclusion, a more precise model than the base model was created for the cell cycle, as it now takes the variation of cell volume into consideration.
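
    The concentration-to-molecule-count conversion the abstract refers to is just Avogadro's number times the volume. A minimal sketch, with an assumed cell volume and concentration for illustration:

```python
# Deterministic models track concentrations; stochastic (e.g. Gillespie-type)
# simulations track molecule counts, which scale with the cell volume.
N_AVOGADRO = 6.02214076e23          # 1/mol

def conc_to_count(conc_molar, volume_litres):
    """Concentration (mol/L) -> number of molecules in the given volume."""
    return int(round(conc_molar * volume_litres * N_AVOGADRO))

def count_to_conc(n_molecules, volume_litres):
    """Number of molecules -> concentration (mol/L)."""
    return n_molecules / (volume_litres * N_AVOGADRO)

V = 2.0e-12            # ~2 pL, an assumed mammalian cell volume
c = 1.0e-9             # 1 nM of some cyclin complex (assumed)
n = conc_to_count(c, V)
print(n)               # -> 1204 molecules

# After division each daughter gets half the volume and, on average, half
# the molecules, so concentrations are (on average) unchanged.
```

    This is also why a time-varying volume touches every parameter with molar units in the stochastic simulation: the count corresponding to a given concentration changes as V(t) changes.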

  19. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
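
    The method the abstract describes (drawing model inputs by bootstrapping trial data rather than from a theoretical distribution) can be sketched as follows. The eradication rates and costs are invented for illustration, not the paper's data:

```python
import numpy as np

# Probabilistic sensitivity analysis with bootstrapped inputs (toy numbers).
rng = np.random.default_rng(42)
outcomes_a = np.array([1] * 80 + [0] * 20)   # treatment A: 80/100 eradicated
outcomes_b = np.array([1] * 70 + [0] * 30)   # treatment B: 70/100 eradicated
cost_a, cost_b = 120.0, 80.0                 # assumed per-patient drug costs

def icer(p_a, p_b):
    """Incremental cost per additional eradication, A versus B."""
    return (cost_a - cost_b) / (p_a - p_b)

icers = []
for _ in range(5000):
    # Bootstrap the trial outcomes to get a plausible pair of eradication rates
    p_a = rng.choice(outcomes_a, outcomes_a.size, replace=True).mean()
    p_b = rng.choice(outcomes_b, outcomes_b.size, replace=True).mean()
    if p_a > p_b:                            # keep draws where A is more effective
        icers.append(icer(p_a, p_b))

icers = np.array(icers)
# The spread of bootstrap ICERs is the probabilistic analogue of a one-way
# deterministic sensitivity analysis on the eradication rates.
print(round(float(np.median(icers)), 1))
```

    The bootstrap sidesteps choosing a parametric distribution for the eradication rates, which is the difficulty with theoretical distributions the authors mention.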

  20. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
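
    The central idea, that the deterministic limit of many parcels with Poisson-process entrainment has a closed form, can be checked on a toy model. This is not the SPM itself: the entrainment rate, dilution factor, and the assumption that each event multiplies purity by a fixed factor are all simplifications made here for illustration:

```python
import numpy as np

# Toy check: parcels experience Poisson-distributed entrainment events with
# height; each event dilutes a parcel's "purity" by a fixed factor.
rng = np.random.default_rng(7)
n_parcels = 200_000
lam = 1.0          # assumed entrainment events per km of ascent
z = 2.0            # height, km
dilution = 0.7     # assumed purity retained per event

events = rng.poisson(lam * z, n_parcels)       # event count per parcel
purity = dilution ** events
mc_mean = purity.mean()

# Deterministic limit of infinitely many parcels:
#   E[d^K] with K ~ Poisson(lam*z)  =  exp(-lam*z*(1-d))
analytic = np.exp(-lam * z * (1.0 - dilution))
print(round(float(mc_mean), 3), round(float(analytic), 3))  # close agreement
```

    Grouping parcels by purity, as the SPM does, amounts to tracking the distribution of `events` rather than individual parcels, which is what collapses the flux calculation into matrix multiplications.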

  1. Resilience of riverbed vegetation to uprooting by flow

    NASA Astrophysics Data System (ADS)

    Perona, P.; Crouzy, B.

    2018-03-01

    Riverine ecosystem biodiversity is largely maintained by ecogeomorphic processes including vegetation renewal via uprooting and recovery times to flow disturbances. Plant roots thus heavily contribute to engineering resilience to perturbation of such ecosystems. We show that vegetation uprooting by flow occurs as a fatigue-like mechanism, which statistically requires a given exposure time to imposed riverbed flow erosion rates before the plant collapses. We formulate a physically based stochastic model for the actual plant rooting depth and the time-to-uprooting, which allows us to define plant resilience to uprooting for generic time-dependent flow erosion dynamics. This theory shows that plant resilience to uprooting depends on the time-to-uprooting and that root mechanical anchoring acts as a process memory stored within the plant-soil system. The model is validated against measured data of time-to-uprooting of Avena sativa seedlings with various root lengths under different flow conditions. This allows for assessing the natural variance of the uprooting-by-flow process and for computing the prediction entropy, which quantifies the relative importance of the deterministic and the random components affecting the process.

  2. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  3. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
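
    The ingredients of such a Genetic Algorithm (permutation encoding, crossover, mutation, selection, and a decoding procedure that turns a chromosome into a schedule) can be sketched on a much simpler problem. This is a stand-in, not the paper's job-shop encoding: a minimal permutation GA minimising total tardiness on one machine, with an invented five-job instance:

```python
import random

random.seed(3)
proc = [4, 2, 6, 3, 5]          # processing times of jobs 0..4 (assumed)
due = [5, 4, 20, 8, 12]         # due dates (assumed)

def tardiness(order):
    """Decode a job permutation into its total tardiness."""
    t, total = 0, 0
    for j in order:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def ox_crossover(a, b):
    """Order crossover: copy a slice from parent a, fill the rest from b."""
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [j for j in b if j not in child]
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

pop = [random.sample(range(5), 5) for _ in range(30)]
for _ in range(100):
    pop.sort(key=tardiness)
    parents = pop[:10]                        # elitist truncation selection
    children = [ox_crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for c in children:                        # swap mutation
        if random.random() < 0.2:
            i, k = random.sample(range(5), 2)
            c[i], c[k] = c[k], c[i]
    pop = parents + children

best = min(pop, key=tardiness)
print(best, tardiness(best))
```

    The paper's contribution lies largely in the decoding procedure for job-shop chromosomes; here decoding is trivial (a permutation is the schedule), which is exactly what a real job-shop GA cannot assume.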

  4. A stochastic multicellular model identifies biological watermarks from disorders in self-organized patterns of phyllotaxis

    PubMed Central

    Refahi, Yassin; Brunoud, Géraldine; Farcot, Etienne; Jean-Marie, Alain; Pulkkinen, Minna; Vernoux, Teva; Godin, Christophe

    2016-01-01

    Exploration of developmental mechanisms classically relies on analysis of pattern regularities. Whether disorders induced by biological noise may carry information on building principles of developmental systems is an important debated question. Here, we addressed this question theoretically using phyllotaxis, the geometric arrangement of plant aerial organs, as a model system. Phyllotaxis arises from reiterative organogenesis driven by lateral inhibitions at the shoot apex. Motivated by recurrent observations of disorders in phyllotaxis patterns, we revisited in depth the classical deterministic view of phyllotaxis. We developed a stochastic model of primordia initiation at the shoot apex, integrating locality and stochasticity in the patterning system. This stochastic model recapitulates phyllotactic patterns, both regular and irregular, and makes quantitative predictions on the nature of disorders arising from noise. We further show that disorders in phyllotaxis instruct us on the parameters governing phyllotaxis dynamics, demonstrating that disorders can reveal biological watermarks of developmental systems. DOI: http://dx.doi.org/10.7554/eLife.14093.001 PMID:27380805

  5. Designer's unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

    A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and a deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  6. Langevin approach to a chemical wave front: Selection of the propagation velocity in the presence of internal noise

    NASA Astrophysics Data System (ADS)

    Lemarchand, A.; Lesne, A.; Mareschal, M.

    1995-05-01

    The reaction-diffusion equation associated with the Fisher chemical model A+B-->2A admits wave-front solutions by replacing an unstable stationary state with a stable one. The deterministic analysis concludes that their propagation velocity is not prescribed by the dynamics. For a large class of initial conditions the velocity which is spontaneously selected is equal to the minimum allowed velocity vmin, as predicted by the marginal stability criterion. In order to test the relevance of this deterministic description we investigate the macroscopic consequences, on the velocity and the width of the front, of the intrinsic stochasticity due to the underlying microscopic dynamics. We solve numerically the Langevin equations, deduced analytically from the master equation within a system size expansion procedure. We show that the mean profile associated with the stochastic solution propagates faster than the deterministic solution at a velocity up to 25% greater than vmin.
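
    The deterministic baseline against which the Langevin results are compared, the marginal-stability speed v_min = 2*sqrt(D*r) of the Fisher front, can be verified numerically. The grid parameters and initial condition below are our own assumptions for a quick finite-difference check, not the paper's setup:

```python
import numpy as np

# Integrate the Fisher equation u_t = D*u_xx + r*u*(1-u) from a localized
# initial condition and measure the front speed; marginal stability predicts
# v_min = 2*sqrt(D*r) = 2 for D = r = 1.
D, r = 1.0, 1.0
dx, dt = 0.5, 0.05                     # dt < dx^2/(2D), so the scheme is stable
x = np.arange(0, 300, dx)
u = np.where(x < 10, 1.0, 0.0)         # step-like initial condition

def front_position(u):
    # First grid point where u drops below 0.5 (u is monotone decreasing)
    return x[np.searchsorted(-u, -0.5)]

positions, times = [], []
t = 0.0
for _ in range(int(100 / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0             # pin the boundaries (u=1 left, u=0 right)
    u = u + dt * (D * lap + r * u * (1 - u))
    t += dt
    if t > 40:                         # discard the initial transient
        times.append(t)
        positions.append(front_position(u))

speed = np.polyfit(times, positions, 1)[0]
print(round(float(speed), 2))          # close to v_min = 2
```

    The fitted speed sits slightly below 2 because the convergence to v_min is logarithmically slow, which is also why the noise-induced shift reported in the abstract has to be measured carefully against this deterministic value.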

  7. Optimal design for robust control of uncertain flexible joint manipulators: a fuzzy dynamical system approach

    NASA Astrophysics Data System (ADS)

    Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang

    2018-04-01

    A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator modelling and control scheme are deterministic and not IF-THEN heuristic rules-based. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained from solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.

  8. ASSESSMENT OF TWO PHYSICALLY BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING SEDIMENT MOVEMENT OVER SMALL WATERSHEDS

    EPA Science Inventory


    Abstract: Two physically based and deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...

  9. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah "Fred"; Bekey, George A.

    1988-01-01

    Advances in the theory of modeling errors are reported in a recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. The paper extends theoretical work described in NPO-16661 and NPO-16785 and presents a concrete way of accounting for the difference in structure between a mathematical model and the physical process or system it represents.

  10. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  11. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (< 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures of Mw > 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation-by-parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, which is potentially important for seismic hazard estimation.

  12. Understanding Rasch Measurement: Rasch Models Overview.

    ERIC Educational Resources Information Center

    Wright, Benjamin D.; Mok, Magdalena

    2000-01-01

    Presents an overview of Rasch measurement models that begins with a conceptualization of continuous experiences often captured as discrete observations. Discusses the mathematical properties of the Rasch family of models that allow the transformation of discrete deterministic counts into continuous probabilistic abstractions. Also discusses six of…

  13. Adequacy assessment of composite generation and transmission systems incorporating wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Gao, Yi

    The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. 
This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
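
    A standard way to generate correlated random numbers with uniform distributions and a specified correlation coefficient, of the kind the thesis proposes for correlated wind farms, is a Gaussian-copula (NORTA-style) construction. This sketch is one common approach under assumed values, not necessarily the thesis's exact method:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(11)
rho = 0.7                       # assumed target correlation between two wind sites
n = 50_000

# Draw correlated standard normals, then map each through the normal CDF
# to obtain correlated uniforms on (0, 1), one column per site.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
phi = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))
u = phi(z)

print(round(float(np.corrcoef(u.T)[0, 1]), 2))
```

    Note the mapping slightly shrinks the correlation: the uniforms' Pearson correlation is (6/pi)*arcsin(rho/2), about 0.68 here rather than 0.70, so in practice the input Gaussian correlation is pre-adjusted to hit the specified target exactly.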

  14. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    USGS Publications Warehouse

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

    An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
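
    The trimmed-mean deterministic ensemble method is simple to sketch: drop the most extreme member predictions before averaging, so a single outlying model cannot dominate the combined scenario prediction. The discharge changes below are invented for illustration:

```python
import numpy as np

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding a fraction of members at each extreme."""
    v = np.sort(np.asarray(values, dtype=float))
    k = int(len(v) * trim_fraction)        # members dropped at each end
    return v[k:len(v) - k].mean() if k else v.mean()

# Hypothetical predicted changes in mean annual discharge (%) from 10 models:
ensemble = [4.1, 4.8, 5.0, 5.2, 5.5, 5.9, 6.0, 6.3, 6.8, 14.2]  # one outlier

print(round(float(np.mean(ensemble)), 2),          # plain mean, pulled up by 14.2
      round(float(trimmed_mean(ensemble)), 2))     # trimmed mean, more robust
```

    The plain mean (6.38) is dragged upward by the single outlying member, while the trimmed mean (5.69) stays near the consensus, which is the robustness property the LUCHEM analysis exploits.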

  15. A new approach to fracture modelling in reservoirs using deterministic, genetic and statistical models of fracture growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawnsley, K.; Swaby, P.

    1996-08-01

    It is increasingly acknowledged that in order to understand and forecast the behavior of fracture-influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article presents a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool that allows the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and that can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison to assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.

  16. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead-times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the needs for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
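
    One way to picture the error-model stage is as dressing the deterministic simulation with limb-dependent residuals. The sketch below is not the paper's calibrated 4-stage mixture-Gaussian model; it uses a simple log-space Gaussian whose spread differs between rising and falling limbs, and every number is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def ensemble_from_residuals(sim, sigma_rise=0.15, sigma_fall=0.08, n_ens=100):
    """Illustrative sketch (not the calibrated model): add log-space Gaussian
    residuals whose spread depends on whether the hydrograph is rising or falling."""
    rising = np.gradient(sim) > 0
    sigma = np.where(rising, sigma_rise, sigma_fall)
    eps = rng.normal(0.0, sigma, size=(n_ens, len(sim)))
    return sim * np.exp(eps)  # multiplicative (log-normal) error keeps flows positive

t = np.arange(50)
sim = 20 + 80 * np.exp(-0.5 * ((t - 20) / 5) ** 2)  # synthetic flood hydrograph
ens = ensemble_from_residuals(sim)
print(ens.shape)  # (100, 50)
```

    A real implementation would instead fit the transformation and mixture parameters to forecast-observation residuals, as the abstract describes.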

  17. A stochastic flow-capturing model to optimize the location of fast-charging stations with uncertain electric vehicle flows

    DOE PAGES

    Wu, Fei; Sioshansi, Ramteen

    2017-05-04

    Here, we develop a model to optimize the location of public fast charging stations for electric vehicles (EVs). A difficulty in planning the placement of charging stations is uncertainty in where EV charging demands appear. For this reason, we use a stochastic flow-capturing location model (SFCLM). A sample-average approximation method and an averaged two-replication procedure are used to solve the problem and estimate the solution quality. We demonstrate the use of the SFCLM using a Central-Ohio based case study. We find that most of the stations built are concentrated around the urban core of the region. As the number of stations built increases, some appear on the outskirts of the region to provide an extended charging network. We find that the sets of optimal charging station locations as a function of the number of stations built are approximately nested. We demonstrate the benefits of the charging-station network in terms of how many EVs are able to complete their daily trips by charging midday—six public charging stations allow at least 60% of EVs that would otherwise not be able to complete their daily tours without the stations to do so. We finally compare the SFCLM to a deterministic model, in which EV flows are set equal to their expected values. We show that if a limited number of charging stations are to be built, the SFCLM outperforms the deterministic model. As the number of stations to be built increases, the SFCLM and deterministic model select very similar station locations.
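
    The flow-capturing core of the SFCLM can be illustrated in miniature: a path is "captured" if any of its nodes hosts a station, and the sample-average approximation maximises captured flow averaged over demand scenarios. The network, flows and station budget below are toy assumptions, solved by brute force rather than by the paper's decomposition:

```python
import itertools

import numpy as np

rng = np.random.default_rng(5)

# Hypothetical miniature network: each path is the set of candidate nodes it passes.
paths = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]
n_nodes, n_scen, k = 4, 50, 2
flows = rng.poisson(10, size=(n_scen, len(paths)))  # uncertain EV flows per scenario

# Sample-average approximation: pick the k-station set maximising captured flow,
# summed over all sampled scenarios.
best = max(
    itertools.combinations(range(n_nodes), k),
    key=lambda S: flows[:, [bool(set(S) & p) for p in paths]].sum(),
)
print(sorted(best))
```

    In this toy instance nodes 1 and 3 touch every path, so the SAA solution captures all sampled flow regardless of the scenario draws.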

  18. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives a consistent, reliable and clear-cut answer to this question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the Raba River, a mountain river in southern Poland.
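
    A standard tool for this kind of question is delay embedding plus the Grassberger-Procaccia correlation sum: a low, non-integer scaling exponent indicates low-dimensional determinism, whereas white noise would fill the embedding space. The sketch below applies the idea to a synthetic chaotic series (the logistic map) as a stand-in, since the Raba discharge data are not reproduced here:

```python
import numpy as np

def embed(x, m, tau):
    """Takens time-delay embedding of a scalar series."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def correlation_sum(Y, r):
    """Grassberger-Procaccia C(r): fraction of point pairs closer than r."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    iu = np.triu_indices(len(Y), k=1)
    return (d[iu] < r).mean()

# Deterministic test series: the chaotic logistic map
x = np.empty(1000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.99 * x[i - 1] * (1 - x[i - 1])

Y = embed(x, m=3, tau=1)
rs = np.array([0.05, 0.1, 0.2])
C = np.array([correlation_sum(Y, r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(C), 1)[0]  # estimates the correlation dimension
print(round(slope, 2))
```

    For this one-dimensional map the slope comes out near 1, far below the embedding dimension of 3 that an uncorrelated random series would approach.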

  19. Continuum mesoscopic framework for multiple interacting species and processes on multiple site types and/or crystallographic planes.

    PubMed

    Chatterjee, Abhijit; Vlachos, Dionisios G

    2007-07-21

    While recently derived continuum mesoscopic equations successfully bridge the gap between microscopic and macroscopic physics, so far they have been derived only for simple lattice models. In this paper, general deterministic continuum mesoscopic equations are derived rigorously via nonequilibrium statistical mechanics to account for multiple interacting surface species and multiple processes on multiple site types and/or different crystallographic planes. Adsorption, desorption, reaction, and surface diffusion are modeled. It is demonstrated that contrary to conventional phenomenological continuum models, microscopic physics, such as the interaction potential, determines the final form of the mesoscopic equation. Models of single component diffusion and binary diffusion of interacting particles on single-type site lattice and of single component diffusion on complex microporous materials' lattices consisting of two types of sites are derived, as illustrations of the mesoscopic framework. Simplification of the diffusion mesoscopic model illustrates the relation to phenomenological models, such as the Fickian and Maxwell-Stefan transport models. It is demonstrated that the mesoscopic equations are in good agreement with lattice kinetic Monte Carlo simulations for several prototype examples studied.

  20. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to identify the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity was simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and, via fuzzy updating, information on parameter uncertainty.
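
    Of the two optimisers compared, the firefly algorithm is straightforward to sketch: dimmer fireflies move toward brighter (lower-objective) ones with distance-decaying attractiveness plus an annealed random walk. The version below is a generic minimal implementation applied to a toy two-parameter stiffness-recovery objective; the bounds, "measured" values and objective are hypothetical, not the blade model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(f, bounds, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly-algorithm sketch, searching in the unit cube."""
    lo, hi = np.asarray(bounds, float).T
    U = rng.random((n, len(lo)))                 # firefly positions in [0, 1]^d
    cost = lambda u: f(lo + u * (hi - lo))
    for _ in range(iters):
        F = np.array([cost(u) for u in U])
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                  # j is brighter: i moves toward j
                    beta = beta0 * np.exp(-gamma * np.sum((U[i] - U[j]) ** 2))
                    step = beta * (U[j] - U[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    U[i] = np.clip(U[i] + step, 0, 1)
        alpha *= 0.97                            # anneal the random walk
    F = np.array([cost(u) for u in U])
    return lo + U[np.argmin(F)] * (hi - lo), F.min()

# Toy "updating" objective: recover hypothetical stiffnesses (E, G in GPa)
# from stand-in "measured" modal quantities.
measured = np.sqrt(np.array([210.0, 80.0]))
obj = lambda p: float(np.sum((np.sqrt(p) - measured) ** 2))
best, val = firefly_minimize(obj, bounds=[(100, 300), (40, 120)])
print(best.round(1), round(val, 4))
```

    In the fuzzy setting, the same minimiser would be run repeatedly on interval objectives cut at each membership sub-level.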

  1. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    The Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously. Appropriate and efficient models must be developed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from poorly known constants, complexity and weak generalization. The basis of the deterministic approach for modelling the physico- and bio-chemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between the reaction rates and the species concentrations. The assumptions made in the deterministic models do not hold true for reactions involving chemical species of low concentration. The stochastic behaviour of the physicochemical processes can be modelled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, allowing the performance of the digester system to be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required to reach steady state is larger, since fewer reactions are chosen. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes. The accuracy of the results depends on the optimum selection of the τ value.
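
    The tau-leap idea is compact: instead of simulating every reaction event, fire each reaction a Poisson-distributed number of times per leap of length τ. The two-step chain, rate constants and molecule counts below are illustrative stand-ins, not the ADM1 reaction set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-step chain: glucose -> acids -> methane (hypothetical rates).
k1, k2 = 0.02, 0.01
x0 = np.array([1000, 0, 0])      # molecule counts [glucose, acids, methane]
V = np.array([[-1, 1, 0],        # reaction 1: glucose -> acids
              [0, -1, 1]])       # reaction 2: acids -> methane

def tau_leap(x, tau, steps):
    x = x.copy()
    for _ in range(steps):
        a = np.array([k1 * x[0], k2 * x[1]])  # propensities
        k = rng.poisson(a * tau)              # firings of each reaction in one leap
        k = np.minimum(k, [x[0], x[1]])       # cannot consume more than is available
        x = x + V.T @ k
    return x

final = tau_leap(x0, tau=0.5, steps=200)
print(final)
```

    Shrinking τ recovers event-by-event (SSA-like) accuracy at higher cost, which is the trade-off the abstract describes.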

  2. Deterministic nonclassicality for quantum-mechanical oscillators in thermal states

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Lachman, Lukáš; Slodička, Lukáš; Filip, Radim

    2016-07-01

    Quantum nonclassicality is the basic building stone for the vast majority of quantum information applications and methods of its generation are at the forefront of research. One of the obstacles any method needs to clear is the looming presence of decoherence and noise which act against the nonclassicality and often erase it completely. In this paper we show that nonclassical states of a quantum harmonic oscillator initially in thermal equilibrium states can be deterministically created by coupling it to a single two-level system. This can be achieved even in the absorption regime in which the two-level system is initially in the ground state. The method is resilient to noise and it may actually benefit from it, as witnessed by the systems with higher thermal energy producing more nonclassical states.

  3. Impact of deterministic and stochastic updates on network reciprocity in the prisoner's dilemma game

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun

    2014-08-01

    In 2 × 2 prisoner's dilemma games, network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium. This study introduced an intriguing framework for the strategy update rule that allows any combination of a purely deterministic method, imitation max (IM), and a purely probabilistic one, pairwise Fermi (Fermi-PW). A series of simulations covering the whole range from IM to Fermi-PW reveals that, as a general tendency, larger fractions of stochastic updating reduce network reciprocity, so long as the underlying lattice contains no noise in the degree distribution. However, a small amount of stochastic flavor added to an otherwise perfectly deterministic update rule was actually found to enhance network reciprocity. This occurs because a subtle stochastic effect in the update rule improves the evolutionary trail in games having more stag-hunt-type dilemmas, although the same stochastic effect degenerates evolutionary trails in games having more chicken-type dilemmas. We explain these effects by dividing evolutionary trails into the enduring and expanding periods defined by Shigaki et al. [Phys. Rev. E 86, 031141 (2012), 10.1103/PhysRevE.86.031141].
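
    The two ends of the update-rule range are easy to sketch. Imitation max deterministically copies the best-scoring neighbourhood strategy; pairwise Fermi adopts a random neighbour's strategy with a payoff-difference-dependent probability. Payoffs and neighbourhoods here are placeholders, not the paper's lattice setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def imitation_max(own_s, own_pi, nbr_s, nbr_pi):
    """Deterministic IM update: copy the strategy of the best-scoring agent
    in the neighbourhood (including oneself)."""
    s = np.append(nbr_s, own_s)
    pi = np.append(nbr_pi, own_pi)
    return s[np.argmax(pi)]

def fermi_pw(own_s, own_pi, nbr_s, nbr_pi, K=0.1):
    """Stochastic pairwise-Fermi update: adopt one neighbour's strategy with
    probability 1 / (1 + exp((pi_own - pi_nbr) / K))."""
    p = 1.0 / (1.0 + np.exp((own_pi - nbr_pi) / K))
    return nbr_s if rng.random() < p else own_s

print(imitation_max(0, 1.0, np.array([0, 1]), np.array([0.5, 2.0])))  # copies strategy 1
```

    Mixing the two rules, as the study does, amounts to applying Fermi-PW with some probability and IM otherwise at each update.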

  4. From Genes to Ecosystems in Microbiology: Modeling Approaches and the Importance of Individuality

    PubMed Central

    Kreft, Jan-Ulrich; Plugge, Caroline M.; Prats, Clara; Leveau, Johan H. J.; Zhang, Weiwen; Hellweger, Ferdi L.

    2017-01-01

    Models are important tools in microbial ecology. They can be used to advance understanding by helping to interpret observations and test hypotheses, and to predict the effects of ecosystem management actions or a different climate. Over the past decades, biological knowledge and ecosystem observations have advanced to the molecular and in particular gene level. However, microbial ecology models have changed less and a current challenge is to make them utilize the knowledge and observations at the genetic level. We review published models that explicitly consider genes and make predictions at the population or ecosystem level. The models can be grouped into three general approaches, i.e., metabolic flux, gene-centric and agent-based. We describe and contrast these approaches by applying them to a hypothetical ecosystem and discuss their strengths and weaknesses. An important distinguishing feature is how variation between individual cells (individuality) is handled. In microbial ecosystems, individual heterogeneity is generated by a number of mechanisms including stochastic interactions of molecules (e.g., gene expression), stochastic and deterministic cell division asymmetry, small-scale environmental heterogeneity, and differential transport in a heterogeneous environment. This heterogeneity can then be amplified and transferred to other cell properties by several mechanisms, including nutrient uptake, metabolism and growth, cell cycle asynchronicity and the effects of age and damage. For example, stochastic gene expression may lead to heterogeneity in nutrient uptake enzyme levels, which in turn results in heterogeneity in intracellular nutrient levels. Individuality can have important ecological consequences, including division of labor, bet hedging, aging and sub-optimality. 
Understanding the importance of individuality and the mechanism(s) underlying it for the specific microbial system and question investigated is essential for selecting the optimal modeling strategy. PMID:29230200

  5. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays.

    PubMed

    Kim, Sung-Cheol; Wunsch, Benjamin H; Hu, Huan; Smith, Joshua T; Austin, Robert H; Stolovitzky, Gustavo

    2017-06-27

    Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input.

  6. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

    High-frequency (10 Hz) deterministic ground motion simulations are challenged by our understanding of the small-scale structure of the earth's crust and the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0 and 0.2, vertical correlation lengths, az, of 15-150 m, and standard deviations, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations, combined it with leading seismic velocity models, and performed a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults. 
    We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events, and provide source spectra comparable with an omega-squared model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully-deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
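
    The stochastic-medium runs hinge on generating random fields with the fitted von Karman statistics. A minimal 1-D spectral-method sketch is below; the parameter values merely echo the quoted ranges and are not fitted to any data:

```python
import numpy as np

rng = np.random.default_rng(3)

def von_karman_1d(n, dz, az, nu, sigma):
    """1-D velocity-perturbation profile with a von Karman power spectrum,
    P(k) ~ az / (1 + (k*az)**2) ** (nu + 0.5), built by the spectral method."""
    k = np.fft.rfftfreq(n, d=dz) * 2 * np.pi
    P = az / (1.0 + (k * az) ** 2) ** (nu + 0.5)
    phase = np.exp(2j * np.pi * rng.random(len(k)))   # random phases
    field = np.fft.irfft(np.sqrt(P) * phase, n)
    return sigma * field / field.std()                # normalise to target std dev

# Illustrative values within the ranges quoted above: nu = 0.1, az = 50 m, sigma = 5%
dv = von_karman_1d(n=4096, dz=1.0, az=50.0, nu=0.1, sigma=0.05)
print(round(dv.std(), 3))  # 0.05 by construction
```

    The 3-D fields used in simulation are generated the same way with a 3-D spectrum and anisotropic correlation lengths.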

  7. Dynamical signatures of isometric force control as a function of age, expertise, and task constraints.

    PubMed

    Vieluf, Solveig; Sleimen-Malkoun, Rita; Voelcker-Rehage, Claudia; Jirsa, Viktor; Reuter, Eva-Maria; Godde, Ben; Temprado, Jean-Jacques; Huys, Raoul

    2017-07-01

    From the conceptual and methodological framework of the dynamical systems approach, force control results from complex interactions of various subsystems yielding observable behavioral fluctuations, which comprise both deterministic (predictable) and stochastic (noise-like) dynamical components. Here, we investigated these components contributing to the observed variability in force control in groups of participants differing in age and expertise level. To this aim, young (18-25 yr) as well as late middle-aged (55-65 yr) novices and experts (precision mechanics) performed a force maintenance and a force modulation task. Results showed that whereas the amplitude of force variability did not differ across groups in the maintenance tasks, in the modulation task it was higher for late middle-aged novices than for experts and higher for both these groups than for young participants. Within both tasks and for all groups, stochastic fluctuations were lowest where the deterministic influence was smallest. However, although all groups showed similar dynamics underlying force control in the maintenance task, a group effect was found for deterministic and stochastic fluctuations in the modulation task. The latter findings imply that both components were involved in the observed group differences in the variability of force fluctuations in the modulation task. These findings suggest that between groups the general characteristics of the dynamics do not differ in either task and that force control is more affected by age than by expertise. However, expertise seems to counteract some of the age effects. NEW & NOTEWORTHY Stochastic and deterministic dynamical components contribute to force production. Dynamical signatures differ between force maintenance and cyclic force modulation tasks but hardly between age and expertise groups. 
Differences in both stochastic and deterministic components are associated with group differences in behavioral variability, and observed behavioral variability is more strongly task dependent than person dependent. Copyright © 2017 the American Physiological Society.

  8. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    NASA Astrophysics Data System (ADS)

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-08-01

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is sufficiently thin and slow. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head, even when propagating in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of streamer branching.

  9. Evidence for determinism in species diversification and contingency in phenotypic evolution during adaptive radiation.

    PubMed

    Burbrink, Frank T; Chen, Xin; Myers, Edward A; Brandley, Matthew C; Pyron, R Alexander

    2012-12-07

    Adaptive radiation (AR) theory predicts that groups sharing the same source of ecological opportunity (EO) will experience deterministic species diversification and morphological evolution. Thus, deterministic ecological and morphological evolution should be correlated with deterministic patterns in the tempo and mode of speciation for groups in similar habitats and time periods. We test this hypothesis using well-sampled phylogenies of four squamate groups that colonized the New World (NW) in the Late Oligocene. We use both standard and coalescent models to assess species diversification, as well as likelihood models to examine morphological evolution. All squamate groups show similar early pulses of speciation, as well as diversity-dependent ecological limits on clade size at a continental scale. In contrast, processes of morphological evolution are not easily predictable and do not show similar pulses of early and rapid change. Patterns of morphological and species diversification thus appear uncoupled across these groups. This indicates that the processes that drive diversification and disparification are not mechanistically linked, even among similar groups of taxa experiencing the same sources of EO. It also suggests that processes of phenotypic diversification cannot be predicted solely from the existence of an AR or knowledge of the process of diversification.

  10. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
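
    The backward-induction logic can be miniaturised to a one-dimensional corridor where each stage chooses between a cheap, weather-exposed leg and a longer, safe one. All costs and the closure probability below are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy stochastic DP: at each of 5 stages, fly "direct" (cheap but the leg is
# blocked by weather with probability p, incurring a penalty) or "deviate".
stages, p = 5, 0.3
c_direct, c_deviate, c_weather = 1.0, 1.6, 4.0

V = np.zeros(stages + 1)  # value function; terminal cost is zero
policy = []
for t in reversed(range(stages)):
    exp_direct = (1 - p) * c_direct + p * c_weather + V[t + 1]
    exp_deviate = c_deviate + V[t + 1]
    V[t] = min(exp_direct, exp_deviate)
    policy.append("direct" if exp_direct <= exp_deviate else "deviate")
policy.reverse()
print(policy, round(V[0], 2))
```

    Here the expected direct cost per stage is 0.7·1.0 + 0.3·4.0 = 1.9, so the optimal policy deviates at every stage; a deterministic DP that ignored sub-threshold weather would instead pick the direct legs.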

  11. Evidence for determinism in species diversification and contingency in phenotypic evolution during adaptive radiation

    PubMed Central

    Burbrink, Frank T.; Chen, Xin; Myers, Edward A.; Brandley, Matthew C.; Pyron, R. Alexander

    2012-01-01

    Adaptive radiation (AR) theory predicts that groups sharing the same source of ecological opportunity (EO) will experience deterministic species diversification and morphological evolution. Thus, deterministic ecological and morphological evolution should be correlated with deterministic patterns in the tempo and mode of speciation for groups in similar habitats and time periods. We test this hypothesis using well-sampled phylogenies of four squamate groups that colonized the New World (NW) in the Late Oligocene. We use both standard and coalescent models to assess species diversification, as well as likelihood models to examine morphological evolution. All squamate groups show similar early pulses of speciation, as well as diversity-dependent ecological limits on clade size at a continental scale. In contrast, processes of morphological evolution are not easily predictable and do not show similar pulses of early and rapid change. Patterns of morphological and species diversification thus appear uncoupled across these groups. This indicates that the processes that drive diversification and disparification are not mechanistically linked, even among similar groups of taxa experiencing the same sources of EO. It also suggests that processes of phenotypic diversification cannot be predicted solely from the existence of an AR or knowledge of the process of diversification. PMID:23034709

  12. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important as the content of videos varies greatly, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test, where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Maximum likelihood is then applied for position smoothing, while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Due to its heavy processing requirements, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226
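
    The probabilistic matching step can be sketched as a Poisson log-likelihood over histogram bins: each candidate patch is scored against the template's bin rates and the maximum-likelihood candidate wins. The template rates and candidate histograms below are made-up stand-ins for real RGB/HSV patch histograms:

```python
import numpy as np

def poisson_loglik(hist, rates):
    """Log-likelihood of an observed histogram under per-bin Poisson rates
    (the log(h!) term is constant across candidates and dropped)."""
    rates = np.maximum(rates, 1e-9)
    return float(np.sum(hist * np.log(rates) - rates))

# Hypothetical template bin rates and three candidate patch histograms
template = np.array([40.0, 25.0, 10.0, 5.0])
candidates = np.array([[40, 25, 10, 5],     # matches the template
                       [10, 10, 30, 30],    # different colour distribution
                       [5, 10, 25, 40]])    # reversed distribution
best = int(np.argmax([poisson_loglik(h, template) for h in candidates]))
print(best)  # candidate 0 is the maximum-likelihood match
```

    In the full algorithm this score is computed per colour channel and combined with the position- and size-smoothing stages described above.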

  13. Matrix Population Model for Estimating Effects from Time-Varying Aquatic Exposures: Technical Documentation

    EPA Science Inventory

    The Office of Pesticide Programs models daily aquatic pesticide exposure values for 30 years in its risk assessments. However, only a fraction of that information is typically used in these assessments. The population model employed herein is a deterministic, density-dependent pe...

  14. Gulf of Mexico dissolved oxygen model (GoMDOM) research and quality assurance project plan

    EPA Science Inventory

    An integrated high resolution mathematical modeling framework is being developed that will link hydrodynamic, atmospheric, and water quality models for the northern Gulf of Mexico. This Research and Quality Assurance Project Plan primarily focuses on the deterministic Gulf of Me...

  15. A combinatorial model of malware diffusion via bluetooth connections.

    PubMed

    Merler, Stefano; Jurman, Giuseppe

    2013-01-01

    We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions.

  16. Chaotic Lagrangian models for turbulent relative dispersion.

    PubMed

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
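
    The FSLE diagnostic mentioned here is defined as λ(δ) = ln r / τ(δ), where τ(δ) is the time for a pair separation to grow from δ to rδ. The sketch below computes it from a single trajectory pair of the chaotic logistic map, a stand-in for the model's Lagrangian pairs (in practice one averages τ over many pairs):

```python
import numpy as np

def fsle(sep, deltas, r=np.sqrt(2)):
    """Finite-scale Lyapunov exponent from a pair-separation record:
    lambda(delta) = ln(r) / tau, with tau the time for delta to grow to r*delta."""
    lam = []
    for d in deltas:
        t0 = np.argmax(sep >= d)           # first crossing of delta
        later = np.where(sep >= r * d)[0]
        t1 = later[later > t0][0]          # next crossing of r*delta
        lam.append(np.log(r) / (t1 - t0))
    return np.array(lam)

# Pair of chaotic logistic-map trajectories, initially 1e-8 apart
n = 200
a = np.empty(n); b = np.empty(n)
a[0], b[0] = 0.3, 0.3 + 1e-8
for i in range(1, n):
    a[i] = 3.99 * a[i - 1] * (1 - a[i - 1])
    b[i] = 3.99 * b[i - 1] * (1 - b[i - 1])

lam = fsle(np.abs(a - b), deltas=[1e-7, 1e-6, 1e-5])
print(lam.round(3))
```

    A scale-independent plateau in λ(δ) signals exponential (chaotic) separation, while power-law decay at larger δ reveals the turbulent dispersion regimes the paper tests.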

  17. Chaotic Lagrangian models for turbulent relative dispersion

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.

  18. Evaluation of model-based seasonal streamflow and water allocation forecasts for the Elqui Valley, Chile

    NASA Astrophysics Data System (ADS)

    Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul

    2017-09-01

    In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. 
The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
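
    The ranked probability skill score (RPSS) quoted above has a standard definition; a minimal sketch with made-up tercile forecasts (not the Elqui Valley data):

```python
import numpy as np

def rps(probs, obs_cat):
    """Ranked probability score for one forecast: squared distance between
    the cumulative forecast distribution and the cumulative (step) observation."""
    cum_f = np.cumsum(probs)
    cum_o = np.cumsum(np.eye(len(probs))[obs_cat])
    return float(np.sum((cum_f - cum_o) ** 2))

def rpss(forecasts, climatology, observations):
    """RPSS = 1 - mean RPS(forecast) / mean RPS(climatology); positive values
    mean the forecast outperforms the climatological reference."""
    rf = np.mean([rps(f, o) for f, o in zip(forecasts, observations)])
    rc = np.mean([rps(climatology, o) for o in observations])
    return 1.0 - rf / rc

# Tercile (below/near/above normal) example with hypothetical forecasts that
# put 60% probability on the observed category
obs = [0, 1, 2]
fcsts = [[0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]]
skill = rpss(fcsts, climatology=[1/3, 1/3, 1/3], observations=obs)
```

    A perfect deterministic forecast gives RPS = 0; an RPSS of 0.31, as reported for the 1 September lead, means a 31% reduction in RPS relative to climatology.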

  19. Ontology-Based Peer Exchange Network (OPEN)

    ERIC Educational Resources Information Center

    Dong, Hui

    2010-01-01

    In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically solve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…

  20. Extraction of process zones and low-dimensional attractive subspaces in stochastic fracture mechanics

    PubMed Central

    Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423

  1. Monopoly models with time-varying demand function

    NASA Astrophysics Data System (ADS)

    Cavalli, Fausto; Naimzada, Ahmad

    2018-05-01

    We study a family of monopoly models for markets characterized by time-varying demand functions, in which a boundedly rational agent chooses output levels on the basis of a gradient adjustment mechanism. After presenting the model for a generic framework, we analytically study the case of cyclically alternating demand functions. We show that both the perturbation size and the agent's reactivity to profitability variation signals can play counterintuitive roles in the resulting period-2 cycles and their stability. In particular, increasing the perturbation size can have both a destabilizing and a stabilizing effect on the resulting dynamics. Moreover, in contrast with the case of time-constant demand functions, the agent's reactivity is not merely destabilizing, but can improve stability, too. This means that a less cautious behavior can provide better performance, with respect both to stability and to achieved profits. We show that, even if the decision mechanism is very simple and not always able to provide the optimal production decisions, the achieved profits are very close to the optimal ones. Finally, we show that, in agreement with the existing empirical literature, the price series obtained by simulating the proposed model exhibit a significant deviation from normality and large volatility, in particular when the underlying deterministic dynamics become unstable and complex.
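
    The gradient adjustment mechanism can be sketched as follows; the linear inverse demand p = a_t − b·q alternating between two intercepts, the linear cost c·q, and all parameter values are illustrative assumptions, not the paper's specification:

```python
def simulate(q0, k, a_even, a_odd, b=1.0, c=0.5, steps=200):
    """Gradient adjustment: q_{t+1} = q_t + k * q_t * dpi/dq, where profit is
    pi = (a_t - b*q)*q - c*q and the demand intercept a_t alternates each period."""
    q, path = q0, []
    for t in range(steps):
        a = a_even if t % 2 == 0 else a_odd
        marginal_profit = a - 2.0 * b * q - c   # dpi/dq for linear demand and cost
        q = q + k * q * marginal_profit         # reactivity k scales the adjustment
        path.append(q)
    return path

qs = simulate(q0=1.0, k=0.3, a_even=4.0, a_odd=3.0)
```

    With these values the output settles onto a period-2 cycle tracking the alternating demand; pushing the reactivity k much higher eventually destabilizes the cycle, which is the regime the abstract's stability discussion concerns.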

  2. Pushing the Frontier of Data-Oriented Geodynamic Modeling: from Qualitative to Quantitative to Predictive

    NASA Astrophysics Data System (ADS)

    Liu, L.; Hu, J.; Zhou, Q.

    2016-12-01

    The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer sufficient. Recent efforts focus on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power of different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we find are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. 
Furthermore, important insights on the mantle density and viscosity structures also emerge from these models.

  3. Vaginal drug distribution modeling.

    PubMed

    Katz, David F; Yuan, Andrew; Gao, Yajing

    2015-09-15

    This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Deterministic folding: The role of entropic forces and steric specificities

    NASA Astrophysics Data System (ADS)

    da Silva, Roosevelt A.; da Silva, M. A. A.; Caliri, A.

    2001-03-01

    The inverse folding problem of proteinlike macromolecules is studied using a lattice Monte Carlo (MC) model in which steric specificities (nearest-neighbor constraints) are included and the hydrophobic effect is treated explicitly by considering interactions between the chain and solvent molecules. Chemical attributes and steric peculiarities of the residues are encoded in a 10-letter alphabet, and a corresponding "syntax" is provided in order to write suitable sequences for the specified target structures; twenty-four target configurations, chosen to cover all possible values of the average contact order χ (0.2381⩽χ⩽0.4947 for this system), were encoded and analyzed. The results, obtained by MC simulations, are strongly influenced by geometrical properties of the native configuration, namely χ and the relative number φ of crankshaft-type structures: for χ<0.35 the folding is deterministic, that is, the syntax is able to encode successful sequences: the system presents larger encodability, minimum sequence-target degeneracies, and a smaller characteristic folding time τf. For χ⩾0.35 these results are no longer reproduced: the folding success is severely reduced, showing strong correlation with φ. Additionally, the existence of distinct characteristic folding times suggests that different mechanisms act simultaneously in the folding process. The results (all obtained from the same single model, under the same "physiological conditions") resemble some general features of the folding problem, supporting the premise that steric specificities, in association with entropic forces (the hydrophobic effect), are basic ingredients in the protein folding process.

  5. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of each uncertain variable's mean and coefficient of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
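
    Rosenblueth's two-point estimate, as described, evaluates the model at the 2^n corners μ_i ± σ_i and averages the results with equal weights; a minimal sketch (the test function and numbers are illustrative, not the paper's groundwater model):

```python
import itertools
import numpy as np

def two_point_estimate(g, means, cvs):
    """Approximate the mean and variance of g(X) from only each input's mean
    and coefficient of variation: evaluate g at all 2^n points mu_i +/- sigma_i
    with equal weights (uncorrelated, symmetrically distributed inputs assumed)."""
    means = np.asarray(means, dtype=float)
    sigmas = means * np.asarray(cvs, dtype=float)
    vals = np.array([g(means + np.array(signs) * sigmas)
                     for signs in itertools.product((-1.0, 1.0), repeat=len(means))])
    return vals.mean(), vals.var()

# Hypothetical mildly nonlinear response, e.g. a head drop h = Q / (K * A),
# with a 5% coefficient of variation on the first parameter only
g = lambda x: 100.0 / (x[0] * x[1])
m, v = two_point_estimate(g, means=[2.0, 10.0], cvs=[0.05, 0.0])
```

    The 2^n evaluations explain the abstract's caveat about fewer than eight uncertain variables: beyond that, the corner count overtakes a modest Monte Carlo sample.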

  6. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of each uncertain variable's mean and coefficient of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  7. Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcini, Alessandro (Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, via Sansone 1, 50019 Sesto Fiorentino); Barland, Stephane

    2006-12-15

    The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic values of the parameters, are investigated below the solitary laser threshold where low-frequency fluctuations (LFFs) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor α, reproduces the LFFs only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSELs can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high α values.

  8. Stochastic modelling of slow-progressing tumors: Analysis and applications to the cell interplay and control of low grade gliomas

    NASA Astrophysics Data System (ADS)

    Rodríguez, Clara Rojas; Fernández Calvo, Gabriel; Ramis-Conde, Ignacio; Belmonte-Beitia, Juan

    2017-08-01

    Tumor-normal cell interplay defines the course of a neoplastic malignancy. The outcome of this dual relation is the ultimate prevailing of one of the cells and the death or retreat of the other. In this paper we study the mathematical principles that underlay one important scenario: that of slow-progressing cancers. For this, we develop, within a stochastic framework, a mathematical model to account for tumor-normal cell interaction in such a clinically relevant situation and derive a number of deterministic approximations from the stochastic model. We consider in detail the existence and uniqueness of the solutions of the deterministic model and study the stability analysis. We then focus our model to the specific case of low grade gliomas, where we introduce an optimal control problem for different objective functionals under the administration of chemotherapy. We derive the conditions for which singular and bang-bang control exist and calculate the optimal control and states.

  9. FACTORS INFLUENCING TOTAL DIETARY EXPOSURE OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify critical input parameters to assess dietary intake of young children. The model was used as a framework for understanding important factors in data collection and analysis. Factors incorporated included transfer efficiencies of pest...

  10. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  11. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is an increasing request for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e., a minimum breakage stress, which removes statistical uncertainty by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualification of delicate structures made from brittle materials, enable including fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. 
The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and no longer subject to statistical uncertainty.
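
    The deterministic element here, the threshold (location) parameter of the three-parameter Weibull distribution, can be made concrete in a short sketch; the parameter values below are illustrative, not measured ZERODUR data:

```python
import math

def weibull3_failure_prob(stress, m, scale, threshold):
    """Cumulative failure probability of the three-parameter Weibull
    distribution: F(s) = 1 - exp(-((s - threshold)/scale)^m) for s > threshold,
    and exactly 0 below it -- the threshold acts as a deterministic minimum
    breakage stress, unlike the two-parameter form where F > 0 for any s > 0."""
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** m))

# Any stress at or below the threshold has zero predicted failure probability;
# above it, failure probability rises with the usual Weibull shape.
p_safe = weibull3_failure_prob(45.0, m=5.0, scale=30.0, threshold=50.0)
p_high = weibull3_failure_prob(80.0, m=5.0, scale=30.0, threshold=50.0)
```

    This is why the threshold supports a deterministic design method: stresses kept below it need no statistical extrapolation at all.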

  12. Observation and Kinematic Description of Long Actin Tracks Induced by Spherical Beads

    PubMed Central

    Kang, Hyeran; Perlmutter, David S.; Shenoy, Vivek B.; Tang, Jay X.

    2010-01-01

    We report an in vitro study comparing the growth of long actin tails induced by spherical beads coated with the verprolin central acidic domain of the polymerization enzyme N-WASP to that induced by Listeria monocytogenes in similar cellular extracts. The tracks behind the beads show characteristic differences in shape and curvature from those left by the bacteria, which have an elongated shape and a similar polymerization-inducing enzyme distributed only on the rear surface of the cell. The experimental tracks are simulated using a generalized kinematic model, which incorporates three modes of bead rotation with respect to the tail. The results show that the trajectories of spherical beads are mechanically deterministic rather than random, as suggested by stochastic models. Assessment of the bead rotation and its mechanistic basis offers insights into the biological function of actin-based motility. PMID:21044576

  13. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traces out an inverted-S-shaped graph. The center of the graph corresponds to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity would be required for a near-zero rate of failure, corresponding to a reliability of unity. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (the FPI module of the NESSUS software) was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.

  14. The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation

    ERIC Educational Resources Information Center

    Brown, Scott D.; Heathcote, Andrew

    2008-01-01

    We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
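
    The race between deterministic accumulators can be sketched directly; the start-point range A, threshold b, drift means, and Gaussian across-trial drift variability follow the usual LBA setup, but all numeric values are illustrative, not fitted:

```python
import numpy as np

def lba_trial(drift_means, b, A, rng, drift_sd=0.5):
    """One LBA trial: each accumulator draws a start point uniform on [0, A]
    and a drift rate (varying across trials), then rises linearly and
    deterministically; the first to reach threshold b gives choice and RT."""
    start = rng.uniform(0.0, A, size=len(drift_means))
    drift = rng.normal(drift_means, drift_sd)
    drift = np.where(drift > 0, drift, np.nan)   # negative drifts never finish
    times = (b - start) / drift                  # deterministic linear rise time
    return int(np.nanargmin(times)), float(np.nanmin(times))

rng = np.random.default_rng(1)
trials = [lba_trial([3.0, 1.0], b=1.0, A=0.5, rng=rng) for _ in range(2000)]
accuracy = sum(choice == 0 for choice, _ in trials) / len(trials)
# Accumulator 0 has the higher mean drift, so it should win most races
```

    All within-trial variability comes from the start points and trial-to-trial drift rates; the accumulation itself is linear and noiseless, which is what makes the model analytically tractable.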

  15. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Treesearch

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  16. Application of Game Theory to Improve the Defense of the Smart Grid

    DTIC Science & Technology

    2012-03-01

    …In this environment, developers assumed deterministic communications mediums rather than the "best effort" models provided in most modern… models or computational models to validate the SPSs design. Finally, the study reveals concerns about the performance of load rejection schemes

  17. The experience of linking Victorian emergency medical service trauma data

    PubMed Central

    Boyle, Malcolm J

    2008-01-01

    Background The linking of a large Emergency Medical Service (EMS) dataset with the Victorian Department of Human Services (DHS) hospital datasets and Victorian State Trauma Outcome Registry and Monitoring (VSTORM) dataset to determine patient outcomes has not previously been undertaken in Victoria. The objective of this study was to identify the linkage rate of a large EMS trauma dataset with the Department of Human Services hospital datasets and the VSTORM dataset. Methods The linking of an EMS trauma dataset to the hospital datasets utilised deterministic and probabilistic matching. The linking of three EMS trauma datasets to the VSTORM dataset utilised deterministic, probabilistic and manual matching. Results 66.7% of patients from the EMS dataset were located in the VEMD. Of the patients defined in the VEMD as being admitted to hospital, 96% were located in the VAED. 3.7% of patients located in the VAED could not be found in the VEMD because some hospitals do not report to the VEMD. For the EMS datasets, there was a 146% increase in successful links with the trauma profile dataset, a 221% increase in successful links with the mechanism-of-injury-only dataset, and a 46% increase with the sudden deterioration dataset, to VSTORM when using manual rather than deterministic matching. Conclusion This study has demonstrated that EMS data can be successfully linked to other health-related datasets using deterministic and probabilistic matching with varying levels of success. The quality of EMS data needs to be improved to ensure better linkage success rates with other health-related datasets. PMID:19014622
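
    The deterministic vs probabilistic matching contrast can be sketched generically; the field names, weights, and threshold below are hypothetical, not the actual EMS/DHS/VSTORM schema:

```python
# Two hypothetical records for the same patient, with a one-day discrepancy
# in the recorded incident date (a common data-quality problem)
ems = {"dob": "1970-01-02", "sex": "M", "incident_date": "2006-03-04"}
hosp = {"dob": "1970-01-02", "sex": "M", "incident_date": "2006-03-05"}

KEYS = ("dob", "sex", "incident_date")

def deterministic_match(a, b):
    """Deterministic linkage: link only on exact agreement of every key."""
    return all(a[k] == b[k] for k in KEYS)

def probabilistic_score(a, b, weights={"dob": 5.0, "sex": 1.0, "incident_date": 3.0}):
    """Toy Fellegi-Sunter-style linkage: sum per-field agreement weights and
    link when the total clears a threshold, so a record with one discrepant
    field can still be linked."""
    return sum(w for k, w in weights.items() if a[k] == b[k])

linked_det = deterministic_match(ems, hosp)          # fails: dates differ
linked_prob = probabilistic_score(ems, hosp) >= 5.0  # links: dob + sex agree
```

    This mirrors the result above: deterministic matching alone misses near-matches that probabilistic (or manual) review recovers, at the cost of a false-link risk that exact matching avoids.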

  18. Scaling in the Donangelo-Sneppen model for evolution of money

    NASA Astrophysics Data System (ADS)

    Stauffer, Dietrich; P. Radomski, Jan

    2001-03-01

    The evolution of money from unsuccessful barter attempts, as modeled by Donangelo and Sneppen, is modified by a deterministic instead of a probabilistic selection of the most desired product as money. We check in particular the characteristic times of the model as a function of system size.

  19. A Combinatorial Model of Malware Diffusion via Bluetooth Connections

    PubMed Central

    Merler, Stefano; Jurman, Giuseppe

    2013-01-01

    We outline here the mathematical expression of a diffusion model for cellphones malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed form (more complex but efficiently computable) expression. PMID:23555677

  20. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
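
    The deterministic-AND core of the DINA model is compact enough to sketch; the Q-vector, attribute profiles, slip, and guess values below are illustrative:

```python
def dina_p_correct(q_row, alpha, slip, guess):
    """DINA item response probability: the deterministic AND gate eta equals 1
    iff the examinee masters every attribute the item requires (per the Q-matrix
    row); slip and guess are the 'noisy' part of the gate."""
    eta = all(a >= qk for a, qk in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

# Item requiring attributes 1 and 3 (q_row), slip = 0.1, guess = 0.2
p_master = dina_p_correct([1, 0, 1], alpha=[1, 1, 1], slip=0.1, guess=0.2)
p_nonmaster = dina_p_correct([1, 0, 1], alpha=[1, 1, 0], slip=0.1, guess=0.2)
```

    Gibbs sampling, as in the abstract, alternates draws of the slip/guess parameters, the attribute profiles α, and the latent class probabilities from their full conditionals built on this likelihood.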

  1. A General Cognitive Diagnosis Model for Expert-Defined Polytomous Attributes

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy

    2013-01-01

    Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, "and" gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts…

  2. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  3. A spatial stochastic programming model for timber and core area management under risk of stand-replacing fire

    Treesearch

    Dung Tuan Nguyen

    2012-01-01

    Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...

  4. Deterministic analysis of processes at corroding metal surfaces and the study of electrochemical noise in these systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latanision, R.M.

    1990-12-01

    Electrochemical corrosion is pervasive in virtually all engineering systems and industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood, and therefore the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first- and second-order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.

  5. Fumonisin B1 Toxicity in Grower-Finisher Pigs: A Comparative Analysis of Genetically Engineered Bt Corn and non-Bt Corn by Using Quantitative Dietary Exposure Assessment Modeling

    PubMed Central

    Delgado, James E.; Wolt, Jeffrey D.

    2011-01-01

    In this study, we investigate the long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for dietary comparative analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn, and distiller’s dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Results from both deterministic and semi-stochastic modeling demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored, but tended to slightly underpredict, the mean result of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios in which the grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298

  6. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
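    The calibration model described, multiplicative gain and phase errors plus an additive bias on each complex-valued datum, can be sketched for a single measurement. The numbers and function names below are illustrative assumptions, not the authors' implementation:

    ```python
    import cmath

    def apply_errors(d_true, gain, phase_rad, bias):
        """Forward model of the calibration errors in the abstract:
        multiplicative (gain and phase) plus additive (bias)."""
        return gain * cmath.exp(1j * phase_rad) * d_true + bias

    def correct(d_obs, gain, phase_rad, bias):
        """Invert the calibration model once its parameters are estimated
        (e.g. from multielevation data)."""
        return (d_obs - bias) / (gain * cmath.exp(1j * phase_rad))

    # Hypothetical FDEM datum in ppm (in-phase, quadrature):
    d_true = complex(120.0, -35.0)
    d_obs = apply_errors(d_true, gain=1.07, phase_rad=0.02, bias=complex(8.0, -3.0))
    d_rec = correct(d_obs, gain=1.07, phase_rad=0.02, bias=complex(8.0, -3.0))
    ```

    The multielevation scheme's job is precisely to estimate `gain`, `phase_rad`, and `bias` jointly with the resistivity model; here they are simply assumed known to show the error structure.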

  7. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic based on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.

  8. Probabilistic safety assessment of the design of tall buildings under extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes experience from deterministic and probabilistic analyses of the safety of tall building structures. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. Uncertainties in the model and in the resistance of the structures are treated using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. An example probabilistic safety analysis of a tall building demonstrates the effectiveness of probability-based structural design using the finite element method.

  9. Probabilistic safety assessment of the design of tall buildings under extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes experience from deterministic and probabilistic analyses of the safety of tall building structures. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. Uncertainties in the model and in the resistance of the structures are treated using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. An example probabilistic safety analysis of a tall building demonstrates the effectiveness of probability-based structural design using the finite element method.

  10. How synapses can enhance sensibility of a neural network

    NASA Astrophysics Data System (ADS)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations governed by Hebbian learning rules. The learning rules are related to neuroplasticity, which describes changes to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensitivity of the neural network, a manifestation that can become even more predominant if the learning rules are applied to the chemical synapses.

  11. Plastoquinol diffusion in linear photosynthetic electron transport

    PubMed Central

    Mitchell, Rowan; Spillmann, Andreas; Haehnel, Wolfgang

    1990-01-01

    The diffusion of plastoquinol and its binding to the cytochrome bf complex, which occurs during linear photosynthetic electron transport and is analogous to reaction sequences found in most energy-converting membranes, has been studied in intact thylakoid membranes. The flash-induced electron transfer between the laterally separated photosystem II and photosystem I was measured by following the sigmoidal reduction kinetics of P-700⁺ after previous oxidation of the intersystem electron carriers. The amount of flash-induced plastoquinol produced at photosystem II was (a) reduced by inhibition with dichlorophenyl-dimethylurea and (b) increased by giving a second saturating flash. These signals were simulated by a new model which combines a deterministic simulation of reaction kinetics with a Monte Carlo approach to the diffusion of plastoquinol, taking into account the known structural features of the thylakoid membrane. The plastoquinol molecules were assumed to be oxidized by either a diffusion-limited or a nondiffusion-limited step in a collisional mechanism or after binding to the cytochrome bf complex. The model was able to account for the experimental observations with a nondiffusion-limited collisional mechanism or with a binding mechanism, giving minimum values for the diffusion coefficient of plastoquinol of 2 × 10⁻⁸ cm² s⁻¹ and 3 × 10⁻⁷ cm² s⁻¹, respectively. PMID:19431770

  12. Statistical mechanics of influence maximization with thermal noise

    NASA Astrophysics Data System (ADS)

    Lynn, Christopher W.; Lee, Daniel D.

    2017-03-01

    The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.

  13. Understanding and Predicting Geomagnetic Dipole Reversals Via Low Dimensional Models and Data Assimilation

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Fournier, A.; Hulot, G.

    2014-12-01

    We investigate the geophysical relevance of low-dimensional models of the geomagnetic dipole field by comparing these models to the signed relative paleomagnetic intensity over the past 2 Myr. The comparison is done via Bayesian statistics, implemented numerically by Monte Carlo (MC) sampling. We consider several MC schemes, as well as two data sets, to show the robustness of our approach with respect to its numerical implementation and to the details of how the data are collected. The data we consider are the Sint-2000 [1] and PADM2M [2] data sets. We consider three stochastic differential equation (SDE) models and one deterministic model. Experiments with synthetic data show that it is feasible for a low-dimensional model to learn the geophysical state from data of only the dipole field, and reveal the limitations of the low-dimensional models. For example, the G12 model [3] (a deterministic model that generates dipole reversals by crisis-induced intermittency) can only match either one of the two important time scales we find in the data. The MC sampling approach also allows us to use the models to make predictions of the dipole field. We assess how reliably dipole reversals can be predicted with our approach by hind-casting five reversals documented over the past 2 Myr. We find that, despite its limitations, G12 can be used to predict reversals reliably, however only with short lead times and over short horizons. The scalar SDE models, on the other hand, are not useful for prediction of dipole reversals. References: [1] Valet, J.P., Maynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [2] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089. [3] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137.

  14. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation, as well as seismic-to-well tie procedures, is at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach that determines the optimum parameters of the deterministic and statistical estimations and then selects the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace, provided the time-depth relationship is accurate. Tests with numerical data yield qualitative conclusions that should be useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field-data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.
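    The selection criterion described, picking the candidate wavelet that maximizes the correlation coefficient between the recorded and synthetic traces, can be sketched with a toy convolutional model. The reflectivity series and both wavelets below are hypothetical:

    ```python
    def convolve(r, w):
        """Convolutional model: synthetic trace = reflectivity * wavelet."""
        out = [0.0] * (len(r) + len(w) - 1)
        for i, ri in enumerate(r):
            for j, wj in enumerate(w):
                out[i + j] += ri * wj
        return out

    def pearson(a, b):
        """Correlation coefficient between two traces of equal length."""
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)

    # Hypothetical reflectivity and two candidate wavelets:
    refl = [0.0, 1.0, 0.0, -0.5, 0.0, 0.3, 0.0]
    true_wavelet = [0.2, 1.0, 0.2]
    wrong_wavelet = [1.0, -1.0, 0.5]
    recorded = convolve(refl, true_wavelet)
    scores = {name: pearson(recorded, convolve(refl, w))
              for name, w in [("true", true_wavelet), ("wrong", wrong_wavelet)]}
    ```

    In a real seismic-to-well tie the reflectivity comes from the well logs and the comparison only makes sense once the time-depth relationship is accurate, as the abstract notes.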

  15. Evolutionary dynamics of phenotype-structured populations: from individual-level mechanisms to population-level consequences

    NASA Astrophysics Data System (ADS)

    Chisholm, Rebecca H.; Lorenzi, Tommaso; Desvillettes, Laurent; Hughes, Barry D.

    2016-08-01

    Epigenetic mechanisms are increasingly recognised as integral to the adaptation of species that face environmental changes. In particular, empirical work has provided important insights into the contribution of epigenetic mechanisms to the persistence of clonal species, from which a number of verbal explanations have emerged that are suited to logical testing by proof-of-concept mathematical models. Here, we present a stochastic agent-based model and a related deterministic integrodifferential equation model for the evolution of a phenotype-structured population composed of asexually-reproducing and competing organisms which are exposed to novel environmental conditions. This setting has relevance to the study of biological systems where colonising asexual populations must survive and rapidly adapt to hostile environments, like pathogenesis, invasion and tumour metastasis. We explore how evolution might proceed when epigenetic variation in gene expression can change the reproductive capacity of individuals within the population in the new environment. Simulations and analyses of our models clarify the conditions under which certain evolutionary paths are possible and illustrate that while epigenetic mechanisms may facilitate adaptation in asexual species faced with environmental change, they can also lead to a type of "epigenetic load" and contribute to extinction. Moreover, our results offer a formal basis for the claim that constant environments favour individuals with low rates of stochastic phenotypic variation. Finally, our model provides a "proof of concept" of the verbal hypothesis that phenotypic stability is a key driver in rescuing the adaptive potential of an asexual lineage and supports the notion that intense selection pressure can, to an extent, offset the deleterious effects of high phenotypic instability and biased epimutations, and steer an asexual population back from the brink of an evolutionary dead end.

  16. Stream-power model of meander cutoff in gravel beds

    NASA Astrophysics Data System (ADS)

    Pannone, M.; De Vincenzo, A.

    2016-12-01

    In the present study we propose a one-dimensional model for the prediction of the large-time evolution of river meanders (pre-cutoff conditions) characterized by gravel bed and negligible suspended load. Due to its relatively simple structure, it may be a fast and easy tool to forecast the time evolution of a bend when the symptoms of incipient instability suggest quantifying the time left for exploiting the river as a naturalistic or commercial resource and, if needed, planning site management and restoration in good time. Most of the previous research on meandering rivers focused on linearized theories that apply to very small bend amplitudes and very large radii of curvature. The dynamics of meander growth and cutoff were typically addressed by case-sensitive numerical simulations or by descriptive methods aimed at deriving purely empirical laws relating the hydraulics to some geomorphological parameters. The present approach combines the immediacy of a general analytical model with the accuracy of a fluid-mechanical background. The model focuses on energetic principles and interprets the mechanism of meander cutoff as the achievement of limit conditions in terms of river stream power. The corresponding analytical solution, which consists in a 1-D deterministic integro-differential equation governing the meander pre-cutoff phase, accounts for the influence of the morphological and sedimentological parameters by the downstream migration rate and the radius of the meander osculating circle. The results, expressed in terms of unstable meander lifetime, are in good agreement with the data obtained from a number of field surveys documented in literature. Additionally, the proposed fluid-mechanical model allows identifying the physical mechanisms responsible for some peculiarities of large-time meander evolution like the decreasing skewness and asymmetry.

  17. Probabilistic Fracture Mechanics of Reactor Pressure Vessels with Populations of Flaws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Backman, Marie; Williams, Paul

    This report documents recent progress in developing a tool that uses the Grizzly and RAVEN codes to perform probabilistic fracture mechanics analyses of reactor pressure vessels in light water reactor nuclear power plants. The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. Because of the central role of the reactor pressure vessel (RPV) in a nuclear power plant, particular emphasis is being placed on developing capabilities to model fracture in embrittled RPVs to aid in the process surrounding decision making relating to life extension of existing plants. A typical RPV contains a large population of pre-existing flaws introduced during the manufacturing process. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation at one or more of these flaws during a transient event. This report documents development and initial testing of a capability to perform probabilistic fracture mechanics of large populations of flaws in RPVs using reduced order models to compute fracture parameters. The work documented here builds on prior efforts to perform probabilistic analyses of a single flaw with uncertain parameters, as well as earlier work to develop deterministic capabilities to model the thermo-mechanical response of the RPV under transient events, and compute fracture mechanics parameters at locations of pre-defined flaws. The capabilities developed as part of this work provide a foundation for future work, which will develop a platform that provides the flexibility needed to consider scenarios that cannot be addressed with the tools used in current practice.

  18. A deterministic model of nettle caterpillar life cycle

    NASA Astrophysics Data System (ADS)

    Syukriyah, Y.; Nuraini, N.; Handayani, D.

    2018-03-01

    Palm oil is a leading product of the plantation sector in Indonesia, and its productivity has strong potential to increase every year. However, actual palm oil productivity is lower than its potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. Pest outbreaks can be caused by various factors, so measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests of oil palm. Nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations are constructed to illustrate how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
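    A deterministic pest-predator model of this general kind can be sketched as a pair of ODEs integrated by forward Euler. The equations and parameter values below are a generic predator-prey illustration with logistic prey growth, not the paper's stage-structured life-cycle model:

    ```python
    def simulate(c0, p0, r=0.4, K=100.0, a=0.02, e=0.3, m=0.25,
                 dt=0.01, steps=20_000):
        """Forward-Euler integration of a generic pest-predator system:
        logistic caterpillar growth minus predation (dc/dt), predator
        sustained by predation and subject to natural mortality (dp/dt)."""
        c, p = c0, p0
        for _ in range(steps):
            dc = r * c * (1.0 - c / K) - a * c * p
            dp = e * a * c * p - m * p
            c = max(c + dt * dc, 0.0)
            p = max(p + dt * dp, 0.0)
        return c, p

    # Hypothetical initial caterpillar and predator densities:
    c_end, p_end = simulate(c0=50.0, p0=2.0)
    ```

    With these illustrative parameters the system spirals toward a coexistence equilibrium, the kind of equilibrium point whose stability the paper analyzes.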

  19. Pest persistence and eradication conditions in a deterministic model for sterile insect release.

    PubMed

    Gordillo, Luis F

    2015-01-01

    The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespan and multiple matings. The differential equations separately account for the effects of mating failure due to sterile male release and the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population.
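    The core Knipling-style mechanism, that only a fraction n/(n+s) of matings is fertile when s sterile males are present, can be sketched as follows. The logistic density-dependence term and all parameter values are illustrative assumptions, not the paper's equations:

    ```python
    def knipling_step(n, s, r=1.2, mu=0.5, K=200.0, dt=0.01):
        """One Euler step of a Knipling-style model: per-capita births are
        scaled by the fertile-mating fraction n/(n+s) and by a logistic
        density-dependence factor; mu is the natural death rate."""
        if n <= 0.0:
            return 0.0
        growth = r * n * (n / (n + s)) * (1.0 - n / K)
        return max(n + dt * (growth - mu * n), 0.0)

    def final_population(n0, s, steps=50_000):
        n = n0
        for _ in range(steps):
            n = knipling_step(n, s)
        return n

    # Illustrative release levels:
    low_release = final_population(100.0, s=10.0)    # pest persists
    high_release = final_population(100.0, s=500.0)  # pest collapses
    ```

    The sketch reproduces the qualitative threshold behaviour the abstract describes: a small sterile release leaves a positive equilibrium, while a large one drives the fertile-mating fraction, and hence the population, to zero.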

  20. Converting differential-equation models of biological systems to membrane computing.

    PubMed

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. That has the advantage of applying accurately to small systems, and to expressing rates of change that are determined locally, by region, but not necessary globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
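    The conversion idea, replacing a continuous ODE with discrete rewrite rules that fire at given rates, is in the spirit of a stochastic simulation algorithm. A minimal sketch for a birth-death system (not the paper's TGF-β case study) shows a non-deterministic discrete counterpart of the ODE dx/dt = k_prod − k_deg·x:

    ```python
    import random

    def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=1):
        """Discrete stochastic counterpart of dx/dt = k_prod - k_deg*x:
        each rewrite rule (produce one molecule, degrade one molecule)
        fires at a propensity-weighted exponential random time."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        while True:
            a1, a2 = k_prod, k_deg * x       # rule propensities
            a0 = a1 + a2
            t += rng.expovariate(a0)
            if t > t_end:
                return x
            if rng.random() * a0 < a1:
                x += 1                        # rule: produce one molecule
            else:
                x -= 1                        # rule: degrade one molecule

    # The ODE steady state is k_prod / k_deg = 50; the discrete runs
    # fluctuate around it.
    samples = [gillespie_birth_death(100.0, 2.0, 0, 10.0, seed=s)
               for s in range(200)]
    mean_x = sum(samples) / len(samples)
    ```

    Membrane computing adds to this picture a nesting of regions with locally determined rates, which is what the paper argues makes the discrete model more realistic than the global ODE.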

  1. Coherent Two-Mode Dynamics of a Nanowire Force Sensor

    NASA Astrophysics Data System (ADS)

    Braakman, Floris R.; Rossi, Nicola; Tütüncüoglu, Gözde; Morral, Anna Fontcuberta i.; Poggio, Martino

    2018-05-01

    Classically coherent dynamics analogous to those of quantum two-level systems are studied in the setting of force sensing. We demonstrate quantitative control over the coupling between two orthogonal mechanical modes of a nanowire cantilever through measurement of avoided crossings as we deterministically position the nanowire inside an electric field. Furthermore, we demonstrate Rabi oscillations between the two mechanical modes in the strong-coupling regime. These results give prospects of implementing coherent two-mode control techniques for force-sensing signal enhancement.

  2. Enhanced Estimation of Terrestrial Loadings for TMDLs: Normalization Approach

    USDA-ARS?s Scientific Manuscript database

    TMDL implementation plans to remediate pathogen-impaired streams are usually based on deterministic terrestrial fate and transport (DTFT) models. A novel protocol is proposed that can effectively, efficiently, and explicitly capture the predictive uncertainty of DTFT models used to establish terres...

  3. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
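    A truncated Karhunen-Loève expansion of a one-dimensional random field, the representation used above for the spatially correlated parameters, can be sketched as follows. The exponential covariance and all parameter values are illustrative assumptions, not the paper's beam data:

    ```python
    import numpy as np

    def kl_sample(n=200, length=1.0, corr_len=0.2, sigma=0.1,
                  n_terms=10, seed=0):
        """Truncated Karhunen-Loeve sample of a 1-D zero-mean random field
        with exponential covariance, discretized on n points: the field is
        a sum over the leading eigenpairs of the covariance operator,
        weighted by independent standard normal variables xi."""
        x = np.linspace(0.0, length, n)
        cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        w = length / n                              # quadrature weight
        eigval, eigvec = np.linalg.eigh(cov * w)    # Nystrom discretization
        idx = np.argsort(eigval)[::-1][:n_terms]    # keep largest modes
        lam, phi = eigval[idx], eigvec[:, idx] / np.sqrt(w)
        xi = np.random.default_rng(seed).standard_normal(n_terms)
        return phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)

    field = kl_sample()   # e.g. a perturbation of bending rigidity along a beam
    ```

    In the updating procedure it is the discretized coefficients multiplying each KL mode, not the pointwise field values, that become the parameters estimated by the sensitivity method.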

  4. Correlated disorder in the Kuramoto model: Effects on phase coherence, finite-size scaling, and dynamic fluctuations.

    PubMed

    Hong, Hyunsuk; O'Keeffe, Kevin P; Strogatz, Steven H

    2016-10-01

    We consider a mean-field model of coupled phase oscillators with quenched disorder in the natural frequencies and coupling strengths. A fraction p of oscillators are positively coupled, attracting all others, while the remaining fraction 1-p are negatively coupled, repelling all others. The frequencies and couplings are deterministically chosen in a manner which correlates them, thereby correlating the two types of disorder in the model. We first explore the effect of this correlation on the system's phase coherence. We find that there is a critical width γ_c in the frequency distribution below which the system spontaneously synchronizes. Moreover, this γ_c is independent of p. Hence, our model and the traditional Kuramoto model (recovered when p = 1) have the same critical width γ_c. We next explore the critical behavior of the system by examining the finite-size scaling and the dynamic fluctuation of the traditional order parameter. We find that the model belongs to the same universality class as the Kuramoto model with deterministically (not randomly) chosen natural frequencies for the case of p < 1.
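    The mixed attractive/repulsive mean-field dynamics can be sketched directly. Note one deliberate simplification: the sketch draws frequencies and couplings independently at random, whereas the paper chooses them deterministically and correlated; all parameter values are illustrative:

    ```python
    import cmath
    import math
    import random

    def kuramoto_r(n=500, k=4.0, gamma=0.5, p=0.8, dt=0.05, steps=2000,
                   seed=3):
        """Euler simulation of a mean-field Kuramoto model in which a
        fraction p of oscillators couples with +k (conformists) and the
        rest with -k (contrarians); returns the final phase coherence
        r = |<exp(i*theta)>|."""
        rng = random.Random(seed)
        theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
        omega = [rng.uniform(-gamma, gamma) for _ in range(n)]
        kk = [k if i < int(p * n) else -k for i in range(n)]
        for _ in range(steps):
            z = sum(cmath.exp(1j * t) for t in theta) / n
            r, psi = abs(z), cmath.phase(z)
            theta = [t + dt * (w + ki * r * math.sin(psi - t))
                     for t, w, ki in zip(theta, omega, kk)]
        return abs(sum(cmath.exp(1j * t) for t in theta) / n)

    r_final = kuramoto_r()   # partial coherence: contrarians sit anti-phase
    ```

    With the frequency width gamma below the critical value, the conformist majority locks to the mean field while the contrarians settle roughly opposite it, leaving a net nonzero coherence.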

  5. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
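    The core SROM idea, replacing the random input by a few weighted samples so that the stochastic objective becomes a sum of independent deterministic model calls, can be sketched as follows. The toy objective and the sample/probability set are hypothetical, not the paper's topology optimization model:

    ```python
    def deterministic_model(design, load):
        """Stand-in for an expensive physics-based model evaluation:
        a compliance-like cost that decreases with the design variable."""
        return load**2 / design

    # A stochastic reduced order model of the random load: a handful of
    # (sample, probability) pairs standing in for the full distribution.
    srom = [(0.8, 0.25), (1.0, 0.50), (1.2, 0.25)]

    def expected_cost(design):
        """Expected objective as a weighted sum of independent
        deterministic model calls -- the deterministic reformulation."""
        return sum(prob * deterministic_model(design, sample)
                   for sample, prob in srom)

    cost = expected_cost(design=2.0)   # 0.51 for this toy setup
    ```

    An optimizer can now treat `expected_cost` as an ordinary deterministic objective, which is exactly what lets existing optimization tools be reused, at a fraction of the cost of Monte Carlo averaging.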

  6. A nonlinear dynamic age-structured model of e-commerce in spain: Stability analysis of the equilibrium by delay and stochastic perturbations

    NASA Astrophysics Data System (ADS)

    Burgos, C.; Cortés, J.-C.; Shaikhet, L.; Villanueva, R.-J.

    2018-11-01

    First, we propose a deterministic age-structured epidemiological model to study the diffusion of e-commerce in Spain. Afterwards, we determine the parameters (death, birth and growth rates) of the underlying demographic model, as well as the parameters (rates of transmission of e-commerce use) of the proposed epidemiological model, that best fit real data retrieved from the Spanish National Statistical Institute. Motivated by two facts, namely that the dynamics of adopting a new technology such as e-commerce are driven mainly by feedback from interactions with our peers (family, friends, mates, mass media, etc.), and hence involve a certain delay, and that sampled real data carry inherent uncertainty given the social complexity of the phenomenon under analysis, we introduce aftereffect and stochastic perturbations into the initial deterministic model. This leads to a delayed stochastic model for e-commerce. We then investigate sufficient conditions that guarantee the stability in probability of the equilibrium point of the delayed stochastic e-commerce model. Our theoretical findings are numerically illustrated using real data.

  7. Early signatures of regime shifts in gene expression dynamics

    NASA Astrophysics Data System (ADS)

    Pal, Mainak; Pal, Amit Kumar; Ghosh, Sayantari; Bose, Indrani

    2013-06-01

    Recently, a large number of studies have been carried out on the early signatures of sudden regime shifts in systems as diverse as ecosystems, financial markets, population biology and complex diseases. The signatures of regime shifts in gene expression dynamics are less systematically investigated. In this paper, we consider sudden regime shifts in the gene expression dynamics described by a fold-bifurcation model involving bistability and hysteresis. We consider two alternative models, models 1 and 2, of competence development in the bacterial population B. subtilis and determine some early signatures of the regime shifts between competence and noncompetence. We use both deterministic and stochastic formalisms for the purpose of our study. The early signatures studied include the critical slowing down as a transition point is approached, rising variance and the lag-1 autocorrelation function, skewness and a ratio of two mean first passage times. Some of the signatures could provide the experimental basis for distinguishing between bistability and excitability as the correct mechanism for the development of competence.
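    Two of the early-warning statistics named above, rising variance and rising lag-1 autocorrelation, can be sketched on an AR(1) surrogate in which critical slowing down corresponds to the autoregressive coefficient approaching 1. The surrogate and its parameters are illustrative, not the paper's gene expression models:

    ```python
    import random

    def lag1_autocorr(x):
        """Lag-1 autocorrelation, one of the early-warning statistics."""
        n = len(x)
        mean = sum(x) / n
        num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
        den = sum((v - mean) ** 2 for v in x)
        return num / den

    def ar1(phi, n=20_000, seed=7):
        """AR(1) surrogate for fluctuations about a stable state: near a
        fold bifurcation recovery slows down, i.e. phi -> 1."""
        rng = random.Random(seed)
        x, out = 0.0, []
        for _ in range(n):
            x = phi * x + rng.gauss(0.0, 1.0)
            out.append(x)
        return out

    far = ar1(0.2)    # far from the regime shift
    near = ar1(0.95)  # close to the regime shift

    ac_far, ac_near = lag1_autocorr(far), lag1_autocorr(near)
    var = lambda x: sum((v - sum(x) / len(x)) ** 2 for v in x) / len(x)
    ```

    Both signatures rise together as phi approaches 1, which is why they are used jointly as indicators that a fold transition is being approached.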

  8. Some applications of mathematics in golf.

    PubMed

    Otto, S R

    2017-08-01

    At its core, like many other sports, golf is a game of integers. The minimization of the number of strokes played is generally what determines the winner, whether each of these are associated with the shortest of putts or the longest of drives. The outcomes of these shots are influenced by very slight changes, but hopefully in a deterministic sense. Understanding the mechanics of golf necessitates the development of models and this is coupled more often than not to the use of statistics. In essence, the individual aspects of the sport can be modelled adequately via fairly simplistic models, but the presence of a human at one end of the kinematic chain has a significant impact on the variability of the entire process. In this paper, we will review some of the ways that mathematics has been used to develop the understanding of the physical processes involved in the sport, including some of the analysis which is exploited within the Equipment Rules. We will also discuss some of the future challenges.

  9. Sources of shaking and flooding during the Tohoku-Oki earthquake: a mixture of rupture styles

    USGS Publications Warehouse

    Wei, Shengji; Graves, Robert; Helmberger, Don; Avouac, Jean-Philippe; Jiang, Junle

    2012-01-01

    Modeling strong ground motions from great subduction zone earthquakes is one of the great challenges of computational seismology. To separate the rupture characteristics from complexities caused by 3D sub-surface geology requires an extraordinary data set such as provided by the recent Mw9.0 Tohoku-Oki earthquake. Here we combine deterministic inversion and dynamically guided forward simulation methods to model over one thousand high-rate GPS and strong motion observations from 0 to 0.25 Hz across the entire Honshu Island. Our results display distinct styles of rupture with a deeper generic interplate event (~Mw8.5) transitioning to a shallow tsunamigenic earthquake (~Mw9.0) at about 25 km depth in a process driven by a strong dynamic weakening mechanism, possibly thermal pressurization. This source model predicts many important features of the broad set of seismic, geodetic and seafloor observations providing a major advance in our understanding of such great natural hazards.

  10. Computational modeling of the nonlinear stochastic dynamics of horizontal drillstrings

    NASA Astrophysics Data System (ADS)

    Cunha, Americo; Soize, Christian; Sampaio, Rubens

    2015-11-01

    This work analyzes the nonlinear stochastic dynamics of drillstrings in horizontal configuration. For this purpose, it considers a beam theory, with effects of rotatory inertia and shear deformation, which is capable of reproducing the large displacements that the beam undergoes. The friction and shock effects due to transversal beam/borehole-wall impacts, as well as the force and torque induced by bit-rock interaction, are also considered in the model. Uncertainties of the bit-rock interaction model are taken into account using a parametric probabilistic approach. Numerical simulations have shown that the mechanical system of interest has a very rich nonlinear stochastic dynamics, which generates phenomena such as bit-bounce, stick-slip, and transverse impacts. A study aiming to maximize the efficiency of the drilling process, by varying the drillstring's velocities of translation and rotation, is presented. The work also presents the definition and solution of two optimization problems, one deterministic and one robust, where the objective is to maximize the drillstring's rate of penetration into the soil while respecting its structural limits.

  11. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM, which makes it possible to depict complex processes with complex decisions. This combination offers a significant advantage for modeling perioperative processes.

  12. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  13. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays

    PubMed Central

    Kim, Sung-Cheol; Wunsch, Benjamin H.; Hu, Huan; Smith, Joshua T.; Stolovitzky, Gustavo

    2017-01-01

    Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input. PMID:28607075

  14. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
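The discrete-time side of such an equivalence rests on a known mean-square stability criterion for Markov jump linear systems: the system x[k+1] = A[theta_k] x[k] is mean-square stable iff a second-moment matrix built from the mode dynamics and the transition probabilities has spectral radius below one. The sketch below illustrates that test with illustrative scalar modes (a nominal controller and a rare "upset" mode), not the paper's sampled-data plant:

```python
import numpy as np

def ms_stable(A_list, P):
    """Mean-square stability test for x[k+1] = A[theta_k] x[k], where theta_k
    is a Markov chain with transition matrix P. The criterion: the spectral
    radius of (P^T kron I) @ blockdiag(A_i kron A_i) must be below 1."""
    n = A_list[0].shape[0]
    N = len(A_list)
    m = n * n
    D = np.zeros((N * m, N * m))
    for i, A in enumerate(A_list):
        D[i * m:(i + 1) * m, i * m:(i + 1) * m] = np.kron(A, A)
    Lam = np.kron(P.T, np.eye(m)) @ D
    return bool(np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0)

# Two illustrative scalar modes: stable nominal dynamics, unstable upset mode.
A1 = np.array([[0.5]])
A2 = np.array([[1.2]])
P = np.array([[0.95, 0.05],   # mostly nominal, occasional upsets
              [0.90, 0.10]])
stable = ms_stable([A1, A2], P)
```

With rare, quickly recovered upsets the mixed system remains mean-square stable, while a chain stuck in the unstable mode is not; this is the kind of dichotomy the stochastic-stability equivalence carries over to the sampled-data system.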

  15. Forecasting transitions in systems with high-dimensional stochastic complex dynamics: a linear stability analysis of the tangled nature model.

    PubMed

    Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft

    2014-12-31

    We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
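The core numerical step described above (eigen-decomposing the mean-field stability matrix and projecting the instantaneous configuration onto the unstable directions) can be sketched generically. The toy Jacobian below is an illustrative stand-in, not the tangled nature model's stability matrix, and the squared-projection "overlap" is a simplification that is exact only for orthogonal eigenvectors:

```python
import numpy as np

def unstable_overlap(J, x):
    """Find eigen-directions of the mean-field Jacobian J whose eigenvalues
    have positive real part, and measure the overlap of the normalized
    configuration vector x with them (sum of squared projections)."""
    vals, vecs = np.linalg.eig(J)
    unstable = vecs[:, vals.real > 0]
    if unstable.size == 0:
        return 0.0
    xn = x / np.linalg.norm(x)
    return float(np.sum(np.abs(unstable.conj().T @ xn) ** 2))

# Toy Jacobian with one unstable direction (eigenvalues 0.5 and -1).
J = np.diag([0.5, -1.0])
aligned = unstable_overlap(J, np.array([1.0, 0.0]))     # along the unstable mode
orthogonal = unstable_overlap(J, np.array([0.0, 1.0]))  # no unstable component
```

A rising overlap of the stochastic configuration with the unstable subspace is what serves as the early warning of an impending transition.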

  16. Insect-inspired wing actuation structures based on ring-type resonators

    NASA Astrophysics Data System (ADS)

    Bolsman, Caspar T.; Goosen, Johannes F. L.; van Keulen, Fred

    2008-03-01

    In this paper, we illustrate and study the opportunities of resonant ring type structures as wing actuation mechanisms for a flapping wing Micro Air Vehicle (MAV). Various design alternatives are presented and studied based on computational and physical models. Insects provide an excellent source of inspiration for the development of the wing actuation mechanisms for flapping wing MAVs. The insect thorax is a structure which in essence provides a mechanism to couple the wing muscles to the wings while offering weight reduction through application of resonance, using tailored elasticity. The resonant properties of the thorax are a very effective way of reducing the power expenditure of wing movement. The wing movement itself is fairly complex and is guided by a set of control muscles and thoracic structures which are present in proximity of the wing root. The development of flapping wing MAVs requires a move away from classical structures and actuators. The use of gears and rotational electric motors is hard to justify at the small scale. Resonant structures provide a large design freedom whilst also providing various options for actuation. The move away from deterministic mechanisms offers possibilities for mass reduction.

  17. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  18. Vacuum effects over the closing of enterocutaneous fistulae: a mathematical modeling approach.

    PubMed

    Cattoni, D I; Chara, O

    2008-01-01

    Enterocutaneous fistulae are pathological communications between the intestinal lumen and the abdominal skin. Under surgery the mortality of this pathology is very high, and therefore a vacuum-applying system has previously been employed in an attempt to close these fistulae. The objective of this article is to understand, through deterministic mathematical modelling, how these treatments might work. Four models are proposed here, based on several assumptions: conservation of the flow in the fistula, a Reynolds number low enough to justify laminar flow, the use of the Poiseuille law to model the movement of the fistulous liquid, and phenomenological equations for the compressibility of the fistula tissue and the intermediate chamber. Interestingly, all four models show fistula-closing behaviour within the experimental time (t < 60 sec). To compare the models, both simulations and pressure measurements, carried out on the vacuum system connected to the patients, are performed. Time courses of pressure are then simulated (from each model) and fitted to the experimental data. The model which best describes the actual measurements shows exponential pumping flux kinetics. Applying this model, a numerical relationship between the fistula compressibility and the closure time is presented. The models developed here should contribute to clarifying the treatment mechanism and, eventually, to improving fistula treatment.
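The two fluid-mechanical assumptions named above (Poiseuille flow, low Reynolds number) amount to two standard formulas. The sketch below evaluates them with purely illustrative geometry and fluid properties, which are hypothetical numbers, not values from the paper:

```python
import math

def poiseuille_flow(radius, length, delta_p, viscosity):
    """Volumetric flow rate for laminar pipe flow (Poiseuille law):
    Q = pi * r^4 * dP / (8 * mu * L)."""
    return math.pi * radius**4 * delta_p / (8.0 * viscosity * length)

def reynolds(density, velocity, diameter, viscosity):
    """Reynolds number Re = rho * v * d / mu; Re well below ~2000
    justifies the laminar-flow assumption."""
    return density * velocity * diameter / viscosity

# Illustrative numbers: 2 mm radius, 5 cm long fistula, 100 Pa driving
# pressure, fluid ten times as viscous as water.
r, L, dP, mu = 2e-3, 5e-2, 100.0, 1e-2
Q = poiseuille_flow(r, L, dP, mu)        # m^3/s
v = Q / (math.pi * r**2)                 # mean velocity, m/s
Re = reynolds(1000.0, v, 2 * r, mu)
```

With these assumed values the Reynolds number comes out in the tens, comfortably within the laminar regime that the modelling assumption requires.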

  19. Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Yezhe

    This work is about simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. The primordial, metal-poor interstellar medium (ISM) is of interest here: the so-called “Population-II” stars could have formed in this environment during the “Epoch of Reionization” in the early universe. We build a gas-phase model using the RE scheme to describe ionization-powered interstellar chemistry and demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain-surface reactions play an important role in astrochemistry, but the lack of an accurate yet efficient simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in interstellar clouds. Specifically, we use a large chemical gas-grain model, with stochastic moment equations treating the surface chemistry and deterministic rate equations treating the gas-phase chemistry, to simulate astrochemical systems such as the ISM of the Milky Way, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations and discuss how moment equations improve our theoretical modeling and how the abundances of the assorted species change with varied metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observations than the pure RE method.

  20. Determining Methane Budgets with Eddy Covariance Data ascertained in a heterogeneous Footprint

    NASA Astrophysics Data System (ADS)

    Rößger, N.; Wille, C.; Kutzbach, L.

    2016-12-01

    Amplified climate change in the Arctic may cause methane emissions to increase considerably due to more suitable production conditions. With a focus on methane, we studied the carbon turnover on the modern flood plain of Samoylov Island situated in the Lena River Delta (72°22'N, 126°28'E) using eddy covariance data. In contrast to the ice-wedge polygonal tundra on the delta's river terraces, the flood plains have to date received little attention. During the warm seasons of 2014 and 2015, the mean methane flux amounted to 0.012 μmol m-2 s-1. This average is the result of a large variability in methane fluxes, which is attributed to the complexity of the footprint, where methane sources are unevenly distributed. This variability is explained using three modelling approaches: a deterministic model using exponential relationships for flux drivers, a multilinear model created through stepwise regression, and a neural network relying on machine learning techniques. A substantial boost in model performance was achieved through inputting footprint information in the form of the contributions of vegetation classes; this indicates that the vegetation serves as an integrated proxy for potential methane flux drivers. The neural network performed best; however, a robust validation revealed that the deterministic model best captured ecosystem-intrinsic features. Furthermore, the deterministic model allowed a downscaling of the net flux by allocating fractions to three vegetation classes, which in turn form the basis for upscaling methane fluxes to obtain the budget for the entire flood plain. Arctic methane emissions occur in a spatio-temporally complex pattern, and employing fine-scale information is crucial to understanding the flux dynamics.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, the effects of both material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing-quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel-based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and a decreased number of product rejects by application of the robust optimization approach.

  2. ASP-based method for the enumeration of attractors in non-deterministic synchronous and asynchronous multi-valued networks.

    PubMed

    Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan

    2017-01-01

    This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general and well-suited formalism to study complex interactions between different components (genes, proteins, ...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors of the model in order to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics; in this paper, we present the two most widespread non-deterministic semantics, the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network in order to find the attractors, without the need to construct the whole state-transition graph. We performed extensive computational experiments, which show good performance and match the theoretical results expected from the literature. The originality of our approach lies in the exhaustive enumeration, thanks to the use of ASP, of all possible (sets of) states verifying the properties of an attractor. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all existing attractors up to a given size (20 states) in a large model (100 components); this size is limited only by memory and computation time.
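To make the notion of an attractor concrete, the sketch below brute-forces the attractors of a tiny Boolean network under the deterministic synchronous updating mode, by following each state until it revisits one. This only illustrates what is being enumerated; the ASP method above handles non-deterministic semantics and large models without this exhaustive state walk. The two-gene "toggle" update rule is a hypothetical example:

```python
from itertools import product

def synchronous_attractors(update, n):
    """Enumerate attractors of an n-component Boolean network under
    deterministic synchronous updates: from each state, iterate the update
    until a state repeats, and record the cycle that was reached."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        s = state
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        # The cycle consists of states visited at or after s's first visit.
        cycle = [t for t, i in seen.items() if i >= seen[s]]
        attractors.add(frozenset(cycle))
    return attractors

# Hypothetical 2-gene toggle: each gene copies the negation of the other.
update = lambda s: (1 - s[1], 1 - s[0])
atts = synchronous_attractors(update, 2)
```

This toy network has two steady states (the singleton attractors) and one 2-cycle, matching the singleton/non-singleton distinction drawn in the abstract.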

  3. Contextuality, Nonlocality and Counterfactual Arguments

    NASA Astrophysics Data System (ADS)

    Ghirardi, Gian Carlo; Wienand, Karl

    2009-07-01

    In this paper, following an elementary line of thought which somewhat differs from the usual one, we prove once more that any deterministic theory predictively equivalent to quantum mechanics unavoidably exhibits a contextual character. The purpose of adopting this perspective is that of paving the way for a critical analysis of the use of counterfactual arguments when dealing with nonlocal physical processes.

  4. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.

    PubMed

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of the bone marrow (which hosts HSCs) that is consistent with several qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and demonstrates both that a single HSC can potentially populate the entire bone marrow and that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of the bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the proposed bone marrow model, and we include the parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website.

  5. Nascent RNA kinetics: Transient and steady state behavior of models of transcription

    NASA Astrophysics Data System (ADS)

    Choubey, Sandeep

    2018-02-01

    Regulation of transcription is a vital process in cells, but mechanistic details of this regulation still remain elusive. The dominant approach to unravel the dynamics of transcriptional regulation is to first develop mathematical models of transcription and then experimentally test the predictions these models make for the distribution of mRNA and protein molecules at the individual cell level. However, these measurements are affected by a multitude of downstream processes which make it difficult to interpret the measurements. Recent experimental advancements allow for counting the nascent mRNA number of a gene as a function of time at the single-cell level. These measurements closely reflect the dynamics of transcription. In this paper, we consider a general mechanism of transcription with stochastic initiation and deterministic elongation and probe its impact on the temporal behavior of nascent RNA levels. Using techniques from queueing theory, we derive exact analytical expressions for the mean and variance of the nascent RNA distribution as functions of time. We apply these analytical results to obtain the mean and variance of the nascent RNA distribution for specific models of transcription. These models of initiation exhibit qualitatively distinct transient behaviors for both the mean and variance which further allows us to discriminate between them. Stochastic simulations confirm these results. Overall the analytical results presented here provide the necessary tools to connect mechanisms of transcription initiation to single-cell measurements of nascent RNA.
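The simplest instance of stochastic initiation with deterministic elongation is the M/D/infinity queue: initiations arrive as a Poisson process at rate k_ini, and each polymerase carries nascent RNA for exactly the elongation time T_elong, so the nascent count at time t counts initiations in (t - T_elong, t] and its steady-state mean and variance both equal k_ini * T_elong. The sketch below checks this by simulation with illustrative parameter values (this is the constitutive special case, not the paper's general initiation models):

```python
import numpy as np

def nascent_rna(k_ini=2.0, T_elong=3.0, t_end=10.0, n_cells=20000, seed=0):
    """Simulate nascent RNA counts across cells for Poisson initiation at
    rate k_ini and deterministic elongation time T_elong: a polymerase
    initiating at time s contributes nascent RNA during [s, s + T_elong],
    so the count at t_end tallies initiations in (t_end - T_elong, t_end]."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_cells, dtype=int)
    for c in range(n_cells):
        n_events = rng.poisson(k_ini * t_end)               # initiations in [0, t_end]
        times = rng.uniform(0.0, t_end, n_events)           # uniform given the count
        counts[c] = np.sum(times > t_end - T_elong)         # still elongating at t_end
    return counts

counts = nascent_rna()
# At steady state (t_end > T_elong) the count is Poisson(k_ini * T_elong),
# so mean and variance should both be close to 2.0 * 3.0 = 6.
```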

  6. Effect of smoking parameters on the particle size distribution and predicted airway deposition of mainstream cigarette smoke.

    PubMed

    Kane, David B; Asgharian, Bahman; Price, Owen T; Rostami, Ali; Oldham, Michael J

    2010-02-01

    It is known that puffing conditions such as puff volume, duration, and frequency vary substantially among individual smokers. This study investigates how these parameters affect the particle size distribution and concentration of fresh mainstream cigarette smoke (MCS) and how these changes affect the predicted deposition of MCS particles in a model human respiratory tract. Measurements of the particle size distribution made with an electrical low pressure impactor for a variety of puffing conditions are presented. The average flow rate of the puff is found to be the major factor affecting the measured particle size distribution of the MCS. The results of these measurements were then used as input to a deterministic dosimetry model (MPPD) to estimate the changes in the respiratory tract deposition fraction of smoke particles. The MPPD dosimetry model was modified by incorporating mechanisms involved in respiratory tract deposition of MCS: hygroscopic growth, coagulation, evaporation of semivolatiles, and mixing of the smoke with inhaled dilution air. The addition of these mechanisms to MPPD resulted in reasonable agreement between predicted airway deposition and human smoke retention measurements. The modified MPPD model predicts a modest 10% drop in the total deposition efficiency in a model human respiratory tract as the puff flow rate is increased from 1050 to 3100 ml/min, for a 2-s puff.

  7. Stochastic population dynamic models as probability networks

    Treesearch

    M.E. Borsuk and D.C. Lee

    2009-01-01

    The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...

  8. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…

  9. An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.

    ERIC Educational Resources Information Center

    Jiang, Hai; And Others

    L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…

  10. Arctic Sea Ice: Trends, Stability and Variability

    NASA Astrophysics Data System (ADS)

    Moon, Woosok

    A stochastic Arctic sea-ice model is derived and analyzed in detail to interpret the recent decay and associated variability of Arctic sea-ice under changes in greenhouse gas forcing widely referred to as global warming. The approach begins from a deterministic model of the heat flux balance through the air/sea/ice system, which uses observed monthly-averaged heat fluxes to drive a time evolution of sea-ice thickness. This model reproduces the observed seasonal cycle of the ice cover, and it is to this that stochastic noise---representing high frequency variability---is introduced. The model takes the form of a single periodic non-autonomous stochastic ordinary differential equation. Following an introductory chapter, the two that follow focus principally on the properties of the deterministic model in order to identify the main factors governing the stability of the ice cover. In chapter 2 the underlying time-dependent solutions to the deterministic model are analyzed for their stability. It is found that the response time-scale of the system to perturbations is dominated by the destabilizing sea-ice albedo feedback, which is operative in the summer, and the stabilizing long wave radiative cooling of the ice surface, which is operative in the winter. This basic competition is found throughout the thesis to define the governing dynamics of the system. In particular, as greenhouse gas forcing increases, the sea-ice albedo feedback becomes more effective at destabilizing the system. Thus, any projections of the future state of Arctic sea-ice will depend sensitively on the treatment of the ice-albedo feedback. This in turn implies that the treatment of a fractional ice cover, as the ice areal extent changes rapidly, must be handled with the utmost care. In chapter 3, the idea of a two-season model, with just winter and summer, is revisited. By breaking the seasonal cycle up in this manner one can simplify the interpretation of the basic dynamics. 
Whereas in the fully time-dependent seasonal model one finds stable seasonal ice cover (vanishing in the summer but reappearing in the winter), in previous two-season models such a state could not be found. In this chapter the sufficient conditions are found for a stable seasonal ice cover, which reside in including a time variation in the shortwave radiance during summer. This provides a qualitative interpretation of the continuous and reversible shift from perennial to seasonally-varying states in the more complex deterministic model. In order to put the stochastic model into a realistic observational framework, in chapter 4, the analysis of daily satellite retrievals of ice albedo and ice extent is described. Both the basic statistics are examined and a new method, called multi-fractal temporally weighted detrended fluctuation analysis, is applied. Because the basic data are taken on daily time scales, the full fidelity of the retrieved data is accessed and we find time scales from days and weeks to seasonal and decadal. Importantly, the data show a white-noise structure on annual to biannual time scales and this provides the basis for using a Wiener process for the noise in the stochastic Arctic sea-ice model. In chapter 5 a generalized perturbation analysis of a non-autonomous stochastic differential equation is developed and then applied to interpreting the variability of Arctic sea-ice as greenhouse gas forcing increases. The resulting analytic expressions of the statistical moments provide insight into the transient and memory-delay effects associated with the basic competition in the system: the ice-albedo feedback and long wave radiative stabilization along with the asymmetry in the nonlinearity of the deterministic contributions to the model and the magnitude and structure of the stochastic noise. A systematic study of the impact of the noise structure, from additive to multiplicative, is undertaken in chapters 6 and 7. 
Finally, in chapter 8 the matter of incorporating a fractional ice cover into a deterministic model is addressed. It is found that a simple but crucial mistake is made in one of the most widely used model schemes, and this has a major impact given the important role of areal fraction in the ice-albedo feedback in such a model. The thesis is summarized in chapter 9.
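    The kind of periodic non-autonomous stochastic ODE described above can be sketched with a simple Euler-Maruyama integration. The drift term, noise amplitude, and parameter values below are illustrative stand-ins, not the thesis model:

```python
import math
import random

def euler_maruyama(f, sigma, h0, t0, dt, n_steps, seed=0):
    """Integrate dh = f(h, t) dt + sigma(h, t) dW with Euler-Maruyama."""
    rng = random.Random(seed)
    h, t = h0, t0
    path = [h]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        h = h + f(h, t) * dt + sigma(h, t) * dW
        h = max(h, 0.0)                      # ice thickness cannot be negative
        t += dt
        path.append(h)
    return path

# Hypothetical deterministic drift: periodic (seasonal) forcing minus
# thickness-dependent damping; NOT the thesis model, just its general shape.
def drift(h, t):
    seasonal = -1.5 * math.cos(2.0 * math.pi * t)  # winter growth, summer melt
    return seasonal - 0.2 * h

# Five model years at daily resolution, with additive noise of amplitude 0.1.
path = euler_maruyama(drift, lambda h, t: 0.1, h0=2.0, t0=0.0,
                      dt=1.0 / 365.0, n_steps=5 * 365)
print(len(path), min(path) >= 0.0)
```

    Replacing the constant noise amplitude with a thickness-dependent one turns the additive noise into the multiplicative case studied in chapters 6 and 7.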

  11. Jitter and phase noise of ADPLL due to PSN with deterministic frequency

    NASA Astrophysics Data System (ADS)

    Deng, Xiaoying; Yang, Jun; Wu, Jianhui

    2011-09-01

    In this article, jitter and phase noise of an all-digital phase-locked loop (ADPLL) due to power supply noise (PSN) with a deterministic frequency are analysed. The analysis leads to the conclusion that jitter and phase noise depend heavily on the noise frequency; compared with jitter, phase noise is much less affected by deterministic PSN. Our method is applied to a CMOS ADPLL designed and simulated in the SMIC 0.13 µm standard CMOS process. A comparison between the results obtained by our method and those obtained by simulation and measurement confirms the accuracy of the predictive model. When the digitally controlled oscillator was corrupted by PSN with a 100 mV peak-to-peak amplitude, the measured jitters were 33.9 ps at a noise frequency of fG = 192 MHz and 148.5 ps at fG = 40 MHz. In contrast, the measured phase noise was essentially identical in the two cases, apart from two impulses appearing at offsets of 192 and 40 MHz, respectively.
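    The inverse dependence of jitter on the deterministic noise frequency can be illustrated with a standard sinusoidal-FM argument: if the supply noise modulates the oscillator frequency sinusoidally at fm, the peak phase deviation is Δf/fm, so the peak-to-peak absolute jitter scales as 1/fm. The oscillator frequency f0 and frequency deviation Δf below are hypothetical, not the paper's values:

```python
import math

def pkpk_jitter(f0, delta_f, fm):
    """Peak-to-peak absolute jitter of an oscillator whose frequency is
    modulated sinusoidally: f(t) = f0 + delta_f * cos(2*pi*fm*t).
    The peak phase deviation is delta_f/fm rad, so the peak-to-peak
    timing jitter is delta_f / (pi * f0 * fm)."""
    return delta_f / (math.pi * f0 * fm)

# Hypothetical numbers: a 1 GHz DCO with a 5 MHz supply-induced deviation.
f0, delta_f = 1.0e9, 5.0e6
j_192 = pkpk_jitter(f0, delta_f, 192e6)
j_40 = pkpk_jitter(f0, delta_f, 40e6)
print(j_40 / j_192)   # ratio = 192/40 = 4.8: lower noise frequency, larger jitter
```

    The 4.8x ratio from this simple scaling is close to the measured 148.5/33.9 ≈ 4.4, consistent with the paper's conclusion that jitter depends heavily on the noise frequency.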

  12. JCCRER Project 2.3 -- Deterministic effects of occupational exposure to radiation. Phase 1: Feasibility study; Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okladnikova, N.; Pesternikova, V.; Sumina, M.

    1998-12-01

    Phase 1 of Project 2.3, a short-term collaborative Feasibility Study, was funded for 12 months starting on 1 February 1996. The overall aim of the study was to determine the practical feasibility of using the dosimetric and clinical data on the MAYAK worker population to study the deterministic effects of exposure to external gamma radiation and to internal alpha radiation from inhaled plutonium. Phase 1 efforts were limited to the period of greatest worker exposure (1948--1954) and focused on collaboratively: assessing the comprehensiveness, availability, quality, and suitability of the Russian clinical and dosimetric data for the study of deterministic effects; creating an electronic data base containing complete clinical and dosimetric data on a small, representative sample of MAYAK workers; developing computer software for the testing of a currently used health risk model of hematopoietic effects; and familiarizing the US team with the Russian diagnostic criteria and techniques used in the identification of Chronic Radiation Sickness.

  13. How the growth rate of host cells affects cancer risk in a deterministic way

    NASA Astrophysics Data System (ADS)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are encountered significantly more often in some tissues than in others. In this paper, using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependence of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, as shaped by the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit cancer-induced mortality could therefore be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.
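    The class of deterministic tissue-level interaction model described above can be caricatured by three coupled growth/competition equations. The structure below (logistic growth plus pairwise competition between host, effector immune and tumor populations) is a generic sketch; the equations and every parameter value are illustrative, not the authors' model:

```python
def step(state, dt, p):
    """One forward-Euler step of a generic host (H) / effector immune (E) /
    tumor (T) competition model. Logistic growth plus pairwise competition;
    all parameter values are illustrative, not taken from the paper."""
    H, E, T = state
    dH = p["rho_h"] * H * (1 - H) - p["a_ht"] * H * T
    dE = p["rho_e"] * E * T / (T + p["k"]) - p["a_et"] * E * T - p["d_e"] * E
    dT = p["rho_t"] * T * (1 - T) - p["a_th"] * T * H - p["a_te"] * T * E
    return (H + dt * dH, E + dt * dE, T + dt * dT)

params = {"rho_h": 0.5, "a_ht": 1.0, "rho_e": 4.5, "k": 1.0,
          "a_et": 0.2, "d_e": 0.5, "rho_t": 1.0, "a_th": 1.5, "a_te": 2.5}

state = (1.0, 0.1, 0.01)      # healthy tissue with a small tumor seed
for _ in range(20000):        # integrate 20 time units at dt = 1e-3
    state = step(state, 1e-3, params)
print(tuple(round(x, 4) for x in state))
```

    With these illustrative parameters the healthy tissue suppresses the tumor seed; varying the host growth rate rho_h is the kind of parameter sweep from which the paper's deterministic correlation emerges.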

  14. Deterministic multi-step rotation of magnetic single-domain state in Nickel nanodisks using multiferroic magnetoelastic coupling

    NASA Astrophysics Data System (ADS)

    Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.

    2017-10-01

    We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating a SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy efficient magnetic manipulation techniques at the nanoscale.
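    The stepwise rotation mechanism can be caricatured by a single-spin energy argument: each electrode pair imposes a uniaxial strain-induced easy axis, and the magnetization relaxes into it; stepping the easy axis by 45° per voltage application rotates the single-domain state in 45° increments. This is a toy stand-in for the fully-coupled micromagnetic/elastodynamic FEA in the paper, with illustrative constants:

```python
import math

def relax(theta_m, theta_easy, steps=2000, lr=0.01):
    """Gradient descent on a uniaxial anisotropy energy E ~ sin^2(theta_m -
    theta_easy): the magnetization angle relaxes to the strain-induced easy
    axis. A single-spin caricature; step count and rate are illustrative."""
    for _ in range(steps):
        theta_m -= lr * math.sin(2.0 * (theta_m - theta_easy))
    return theta_m

# Advancing the easy axis by 45 degrees per electrode pair rotates the
# single-domain state in 45-degree increments, as in the experiment.
theta = 0.0
for k in range(1, 5):
    theta = relax(theta, math.radians(45.0 * k))
print(round(math.degrees(theta), 2))   # four 45-degree steps: 180.0
```

    Because each 45° step keeps the magnetization inside the new easy axis's basin of attraction (the barrier sits at 90°), the rotation is deterministic rather than probabilistic.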

  15. Assimilation of lightning data by nudging tropospheric water vapor and applications to numerical forecasts of convective events

    NASA Astrophysics Data System (ADS)

    Dixon, Kenneth

    A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
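    The nudging step described above can be sketched as a relaxation of the water vapor mixing ratio toward saturation near flash locations. The 10 km radius follows the text; the relaxation form qv += (qv_sat - qv)·dt/tau, the time scale tau, and the grid layout are illustrative assumptions, not the scheme's actual formulation:

```python
import math

def nudge_qv(grid, flashes, qv_sat, radius_km=10.0, dt=60.0, tau=600.0):
    """Relax the water-vapor mixing ratio toward saturation at grid points
    within radius_km of any lightning flash.

    grid: dict mapping (x_km, y_km) -> qv in kg/kg; flashes: list of
    (x_km, y_km) flash locations. Sub-saturated points near a flash are
    nudged; points far from all flashes are left untouched.
    """
    out = dict(grid)
    for (x, y), qv in grid.items():
        near = any(math.hypot(x - fx, y - fy) <= radius_km
                   for fx, fy in flashes)
        if near and qv < qv_sat:
            out[(x, y)] = qv + (qv_sat - qv) * dt / tau
    return out

grid = {(0.0, 0.0): 0.008, (5.0, 0.0): 0.009, (50.0, 0.0): 0.008}
nudged = nudge_qv(grid, flashes=[(2.0, 0.0)], qv_sat=0.012)
print(nudged[(0.0, 0.0)] > 0.008, nudged[(50.0, 0.0)] == 0.008)
```

    Applied every model time step over the first 3 forecast hours, this kind of moistening promotes convective initiation where lightning was observed.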

  16. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code, for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation, allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, thereby enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing, which could be utilized in the final design as verification of the design optimized by the deterministic method.
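    The coupling of ray tracing to transport can be illustrated at its very simplest: a ray trace through the FEM geometry returns the material segments crossed, their areal densities are accumulated, and a transport kernel is applied. The exponential attenuation used below is only the lowest-order stand-in for what a deterministic code like HZETRN actually evaluates, and the attenuation coefficient and segment data are hypothetical:

```python
import math

def ray_transmission(segments, mu=0.05):
    """Simplest straight-ahead attenuation along one ray through a shield:
    accumulate areal density (g/cm^2) over the segments a ray trace returns,
    then apply exp(-mu * areal_density). `mu` (cm^2/g) and the segments are
    illustrative; a real deterministic code solves the full Boltzmann
    transport equation along each ray instead.

    segments: list of (path_length_cm, density_g_per_cm3).
    """
    areal = sum(length * rho for length, rho in segments)
    return math.exp(-mu * areal)

# A hypothetical ray crossing a 1 cm aluminum wall and 10 cm of equipment.
transmission = ray_transmission([(1.0, 2.7), (10.0, 0.3)])
print(round(transmission, 4))
```

    With one such evaluation per ray, mapping a field over thousands of target points and ray directions stays in the tens-of-minutes regime the abstract describes, whereas Monte Carlo would need many histories per ray.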

  17. Lévy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Boyer, D.; Miramontes, O.; Larralde, H.

    2009-10-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k^-β, in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
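    The walker's step rule can be sketched directly. The cost function below, distance divided by weight, is one simple choice of the cost/gain trade-off the abstract describes (the paper explores several rules), and the site generation is illustrative:

```python
import math
import random

def deterministic_walk(sites, start, n_steps):
    """Deterministic walker over weighted sites: at each step, move to the
    unvisited site minimizing cost = distance / weight. One simple choice
    of the cost (distance) vs gain (weight) rule; not the paper's only rule.

    sites: list of (x, y, weight) tuples.
    """
    pos = start
    unvisited = set(range(len(sites)))
    path = [start]
    for _ in range(min(n_steps, len(sites))):
        best = min(unvisited,
                   key=lambda i: math.hypot(sites[i][0] - pos[0],
                                            sites[i][1] - pos[1]) / sites[i][2])
        x, y, _ = sites[best]
        pos = (x, y)
        path.append(pos)
        unvisited.remove(best)
    return path

# Sites with power-law-distributed weights p(k) ~ k^-beta (illustrative):
# paretovariate(alpha) samples p(x) ~ x^-(alpha+1), so alpha = beta - 1.
rng = random.Random(1)
beta = 2.0
sites = [(rng.uniform(0, 100), rng.uniform(0, 100),
          rng.paretovariate(beta - 1.0)) for _ in range(200)]
path = deterministic_walk(sites, start=(50.0, 50.0), n_steps=50)
print(len(path))   # 51 positions: the start plus 50 visited sites
```

    Occasional distant high-weight sites beat nearby low-weight ones under this rule, which is what produces the long, Lévy-like relocations in disordered media.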

  18. Diagnostic Models as Partially Ordered Sets

    ERIC Educational Resources Information Center

    Tatsuoka, Curtis

    2009-01-01

    In this commentary, the author addresses what is referred to as the deterministic input, noisy "and" gate (DINA) model. The author mentions concerns with how this model has been formulated and presented. In particular, the author points out that there is a lack of recognition of the confounding of profiles that generally arises and then discusses…
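    The standard DINA model referred to above assigns each examinee a binary response probability from slip and guess parameters: eta = 1 if the examinee masters every attribute the item requires, and P(correct) = (1 - slip)^eta * guess^(1 - eta). A minimal sketch, including the kind of profile confounding the commentary discusses:

```python
def dina_prob(alpha, q_row, slip, guess):
    """Probability of a correct response under the standard DINA model.
    alpha: examinee's binary attribute-mastery profile; q_row: the item's
    binary attribute-requirement vector (one Q-matrix row). eta = 1 iff
    every required attribute is mastered."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

# An item requiring only attribute 1 cannot distinguish profiles (1, 0)
# and (1, 1): both yield the same response probability. This is the kind
# of confounding of profiles the commentary points out.
q_row = (1, 0)
print(dina_prob((1, 0), q_row, slip=0.1, guess=0.2),
      dina_prob((1, 1), q_row, slip=0.1, guess=0.2))   # 0.9 0.9
```

    Viewing which profiles each item set can and cannot separate is what motivates the partially-ordered-set treatment in the title.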

  19. PLANNING MODELS FOR URBAN WATER SUPPLY EXPANSION. VOLUME 1. PLANNING FOR THE EXPANSION OF REGIONAL WATER SUPPLY SYSTEMS

    EPA Science Inventory

    A three-volume report was developed relative to the modelling of investment strategies for regional water supply planning. Volume 1 is the study of capacity expansion over time. Models to aid decision making for the deterministic case are presented, and a planning process under u...

  20. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identifying model components that can be omitted without affecting the fit to the parameterising data.
Our analysis revealed that the model parameters could be constrained to a standard deviation of, on average, 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity that replicates the measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identifying redundant components of large biophysical models, and for increasing their predictive capacity. PMID:24886522
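    The metamodelling idea, statistically approximating the input-output relationship of a deterministic simulator so that it can be searched cheaply, can be sketched in one dimension with a quadratic least-squares surrogate. The "simulator" below is a toy function standing in for an expensive deterministic model; the paper's metamodels are multivariate:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the 3x3 normal
    equations, solved by Gauss-Jordan elimination. A minimal one-parameter
    stand-in for the multivariate metamodel built over a simulator."""
    S = [sum(x**k for x in xs) for k in range(5)]          # power sums
    T = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(3):
            if j != i:
                factor = A[j][i]
                A[j] = [vj - factor * vi for vi, vj in zip(A[i], A[j])]
    return A[0][3], A[1][3], A[2][3]

# "Simulator": an expensive deterministic model, here just x -> 2 + 3x - x^2,
# sampled over an experimental design of seven input values.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2 + 3 * x - x**2 for x in xs]
a, b, c = fit_quadratic(xs, ys)
print(round(a, 6), round(b, 6), round(c, 6))
```

    Once fitted, the surrogate can be evaluated thousands of times to look up parameter regions whose predicted outputs lie close to the measured data, which is the "zooming" step of the pipeline.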

Top