Sample records for continuous deterministic models

  1. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality, driven by the slowest car.

  2. Deterministic and stochastic CTMC models of Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women infected with Zika virus are at risk of having a fetus or infant with a congenital defect such as microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is constructed from the deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
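
    The extinction-versus-outbreak estimate from a CTMC model of this kind can be reproduced in miniature with a Gillespie simulation. The sketch below uses an illustrative host-vector chain; all rates, population sizes, and the outbreak threshold are assumptions, not the paper's calibration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical parameters (not the paper's calibration).
    Nh, Nv = 1000, 4000            # host and vector population sizes
    beta_hv, beta_vh = 0.30, 0.25  # vector->host and host->vector transmission
    gamma, mu = 0.10, 0.08         # host recovery and vector mortality rates

    def run_ctmc(Ih0=1, Iv0=0, t_max=500.0, outbreak_size=50):
        """One Gillespie realisation; True if the infection goes extinct."""
        t, Ih, Iv = 0.0, Ih0, Iv0
        while t < t_max:
            rates = np.array([
                beta_hv * (Nh - Ih) / Nh * Iv,  # a vector infects a host
                beta_vh * (Nv - Iv) / Nh * Ih,  # a host infects a vector
                gamma * Ih,                     # host recovery
                mu * Iv,                        # infected-vector death
            ])
            total = rates.sum()
            if total == 0.0:
                return True
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=rates / total)
            if event == 0:
                Ih += 1
            elif event == 1:
                Iv += 1
            elif event == 2:
                Ih -= 1
            else:
                Iv -= 1
            if Ih + Iv == 0:
                return True                     # extinction
            if Ih + Iv >= outbreak_size:
                return False                    # call it a major outbreak
        return False

    p_ext = np.mean([run_ctmc() for _ in range(2000)])
    print(f"estimated extinction probability: {p_ext:.3f}")
    ```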

  3. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
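
    The hysteresis switching described above can be illustrated with a small classification routine. The following sketch is one assumption-laden reading of the scheme: the expected-waiting-time criterion 1/a_j and the two thresholds are ours, not the published MoBioS rules:

    ```python
    def classify_reactions(propensities, previous, tau_fast, tau_slow):
        """Partition reactions by expected waiting time 1/a_j with hysteresis.

        `previous` remembers each reaction's last class, so a reaction in the
        moderate band keeps its old label until it crosses the far threshold
        (tau_fast for stochastic->deterministic, tau_slow for the reverse).
        """
        new_state = {}
        for j, a in propensities.items():
            tau = float('inf') if a == 0 else 1.0 / a
            if tau <= tau_fast:
                new_state[j] = 'deterministic'   # fast: rate equations
            elif tau >= tau_slow:
                new_state[j] = 'stochastic'      # slow: Gillespie
            else:
                # moderate regime: keep the previous classification
                new_state[j] = previous.get(j, 'stochastic')
        return new_state

    state = {}
    state = classify_reactions({'R1': 500.0, 'R2': 0.2, 'R3': 5.0},
                               state, tau_fast=0.01, tau_slow=1.0)
    print(state)   # R1 deterministic, R2 stochastic, R3 keeps its old label
    ```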

  4. Understanding Rasch Measurement: Rasch Models Overview.

    ERIC Educational Resources Information Center

    Wright, Benjamin D.; Mok, Magdalena

    2000-01-01

    Presents an overview of Rasch measurement models that begins with a conceptualization of continuous experiences often captured as discrete observations. Discusses the mathematical properties of the Rasch family of models that allow the transformation of discrete deterministic counts into continuous probabilistic abstractions. Also discusses six of…

  5. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. Bayes' theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
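
    The normal-linear BPF mentioned above is a Gaussian conjugate update: given a climatological prior on the predictand and a linear-Gaussian likelihood for the forecast, the posterior is again Gaussian. A minimal sketch, with invented regression coefficients standing in for the historical forecast/observation statistics:

    ```python
    import numpy as np

    def normal_linear_bpf(x, prior_mean, prior_var, a, b, sigma2):
        """Posterior N(mu, var) of the predictand W given a deterministic
        forecast x, under the likelihood X | W=w ~ N(a + b*w, sigma2).
        Standard Gaussian conjugate update; a, b, sigma2 would in practice
        be estimated from the historical forecast/observation archive.
        """
        precision = 1.0 / prior_var + b * b / sigma2
        var = 1.0 / precision
        mu = var * (prior_mean / prior_var + b * (x - a) / sigma2)
        return mu, var

    # Toy numbers (assumed): climatological prior plus forecast regression.
    mu, var = normal_linear_bpf(x=3.0, prior_mean=0.0, prior_var=16.0,
                                a=0.5, b=0.9, sigma2=4.0)
    print(f"posterior mean {mu:.2f}, posterior std {np.sqrt(var):.2f}")
    ```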

  6. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known to involve small numbers of molecules. It is thus appropriate for the system to be modeled by stochastic equations and analyzed by the methodologies of stochastic simulation. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also by Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, in order to compare them with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodology.
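
    For reference, Gillespie's direct method draws an exponential waiting time from the total propensity and then picks the firing reaction in proportion to its propensity. A generic sketch; the toy transcription/decay network and its rates are illustrative, not the interlocked feedback model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def gillespie_direct(x0, stoich, propensity, t_max):
        """Gillespie's direct method for a generic reaction network.

        x0:         initial copy numbers, shape (n_species,)
        stoich:     state-change vectors, shape (n_reactions, n_species)
        propensity: function state -> array of reaction propensities
        """
        t, x = 0.0, np.array(x0, dtype=int)
        times, states = [t], [x.copy()]
        while t < t_max:
            a = propensity(x)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)       # time to next reaction
            j = rng.choice(len(a), p=a / a0)     # which reaction fires
            x += stoich[j]
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

    # Toy gene-expression example (assumed rates): transcription, mRNA decay.
    stoich = np.array([[1], [-1]])
    prop = lambda x: np.array([2.0, 0.1 * x[0]])
    ts, xs = gillespie_direct([0], stoich, prop, t_max=100.0)
    print(f"final mRNA count: {xs[-1, 0]}")
    ```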

  7. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of the fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume was unrealistically large and/or the kinetics of the calcium binding were sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597

  8. A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.

    PubMed

    Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S

    2017-09-01

    We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness on tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
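
    The branching-process calculation referred to above finds the extinction probability as the minimal fixed point of the multitype offspring probability generating function. A sketch with an invented two-type (tick/deer) Poisson offspring law, purely to show the iteration:

    ```python
    import numpy as np

    def extinction_probability(pgf, n_types, tol=1e-12, max_iter=10_000):
        """Smallest fixed point q = G(q) of the multitype offspring pgf,
        iterated from q = 0; q[i] is the extinction probability starting
        from one type-i infected individual.
        """
        q = np.zeros(n_types)
        for _ in range(max_iter):
            q_new = pgf(q)
            if np.max(np.abs(q_new - q)) < tol:
                return q_new
            q = q_new
        return q

    # Toy two-type pgf (assumed, not the paper's): Poisson offspring with
    # the given mean matrix (row i = mean offspring of a type-i individual).
    def pgf(s):
        m = np.array([[0.4, 0.8],    # tick -> (ticks, deer)
                      [1.1, 0.3]])   # deer -> (ticks, deer)
        # Independent Poisson offspring: G_i(s) = exp(sum_j m_ij (s_j - 1))
        return np.exp(m @ (s - 1.0))

    q = extinction_probability(pgf, 2)
    print(f"extinction prob from one tick: {q[0]:.3f}, one deer: {q[1]:.3f}")
    ```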

  9. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro

  10. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies the load is taken as constant. But load varies continually, with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte-Carlo Simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
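
    The Monte-Carlo layer can be written as a thin wrapper around any deterministic radial load-flow solver. In the sketch below, `solve_radial_load_flow` is a hypothetical stand-in (a crude linearised proxy so the example runs end to end), and the bus data are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def solve_radial_load_flow(p_load, q_load):
        """Placeholder for a deterministic backward/forward-sweep solver;
        this toy proxy only mimics the voltage drop and loss trends."""
        v_min = 1.0 - 0.04 * p_load.sum() - 0.02 * q_load.sum()
        losses = 0.05 * (p_load.sum() ** 2 + q_load.sum() ** 2)
        return v_min, losses

    # Mean loads (p.u.) at three buses, with an assumed 10% standard deviation.
    p_mean = np.array([0.4, 0.6, 0.5])
    q_mean = 0.4 * p_mean
    n_mc = 5000
    results = np.array([
        solve_radial_load_flow(rng.normal(p_mean, 0.1 * p_mean),
                               rng.normal(q_mean, 0.1 * q_mean))
        for _ in range(n_mc)
    ])
    v, losses = results[:, 0], results[:, 1]
    print(f"min voltage: mean {v.mean():.4f} p.u., std {v.std():.4f}")
    print(f"losses: mean {losses.mean():.4f}, 95th pct {np.percentile(losses, 95):.4f}")
    ```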

  11. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics, and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between the quantum and semiclassical protocols. Lastly, we show, using simulations incorporating experimental imperfections, that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  12. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    NASA Astrophysics Data System (ADS)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model, a continuous-time Markov chain (CTMC), is used to predict future states by means of random variables. From the models that were built, the threshold value for the deterministic model and the stochastic model is obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared across several initial conditions.
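
    The deterministic side of such a model is a set of ODEs that can be integrated directly. A sketch of an SIR-type system with an environmental free-virus compartment, using hypothetical parameter values rather than the paper's MERS calibration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters: direct transmission, indirect (free-virus)
    # transmission, recovery, virus shedding and virus decay.
    beta_d, beta_i, gamma, xi, delta, N = 0.20, 0.05, 0.10, 0.02, 0.30, 10_000

    def rhs(t, y):
        S, I, R, W = y
        infection = beta_d * S * I / N + beta_i * S * W / N
        return [-infection,
                infection - gamma * I,
                gamma * I,
                xi * I - delta * W]       # free virus shed by I, decays

    sol = solve_ivp(rhs, (0, 365), [N - 1, 1, 0, 0])
    S, I, R, W = sol.y
    print(f"peak infected: {I.max():.0f} on day {sol.t[I.argmax()]:.0f}")
    print(f"final size (ever infected): {R[-1]:.0f}")
    ```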

  13. MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.

    PubMed

    Lok, Judith J

    2017-04-01

    In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.

  14. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease condition from months to years, meaning that the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
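
    An SDE model of this kind is typically simulated with the Euler-Maruyama scheme. A sketch using illustrative logistic tumor growth with a chemotherapy kill term and multiplicative noise; every parameter value here is invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Logistic growth rate r, carrying capacity K, chemotherapy kill rate c,
    # and multiplicative noise intensity sigma (all illustrative).
    r, K, c, sigma = 0.30, 1e6, 0.20, 0.15
    dt, n_steps, n_paths = 0.01, 30_000, 500   # simulate to t = 300

    x = np.full(n_paths, 1e4)                  # initial tumor burden
    for _ in range(n_steps):
        drift = r * x * (1.0 - x / K) - c * x
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = np.maximum(x + drift * dt + sigma * x * dW, 0.0)

    print(f"stochastic mean {x.mean():.3g}, std {x.std():.3g}")
    # Deterministic equilibrium of the same drift, for comparison: K(1 - c/r)
    print(f"deterministic equilibrium: {K * (1 - c / r):.3g}")
    ```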

  15. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled through ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such systems are known as stochastic hybrid systems or piecewise deterministic Markov processes (PDMPs). In the fast switching limit, the dynamics converge to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ε^-1. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ε).
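
    A PDMP of the sort described can be simulated exactly whenever the flow between jumps is solvable in closed form: draw an exponential dwell time for the current discrete state, advance the continuous variable along the flow, then switch. A minimal two-state sketch, far simpler than Morris-Lecar, with illustrative rates and time constants:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # A membrane-like variable v relaxes toward a target that depends on a
    # two-state Markov switch (e.g. a channel being open or closed).
    alpha, beta = 2.0, 1.0         # closed->open and open->closed rates
    v_open, v_closed, tau = 1.0, -1.0, 0.5

    def simulate_pdmp(t_max=20.0):
        t, v, s = 0.0, v_closed, 0          # s = 0 closed, 1 open
        ts, vs = [t], [v]
        while t < t_max:
            rate = alpha if s == 0 else beta
            dwell = rng.exponential(1.0 / rate)
            target = v_open if s == 1 else v_closed
            # Between jumps the flow is linear, so v relaxes exponentially.
            v = target + (v - target) * np.exp(-dwell / tau)
            t += dwell
            s = 1 - s                       # the Markov jump
            ts.append(t)
            vs.append(v)
        return np.array(ts), np.array(vs)

    ts, vs = simulate_pdmp()
    print(f"{len(ts) - 1} switches; mean of v at switch times: {vs.mean():.2f}")
    ```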

  16. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  17. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  18. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  19. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case, where they were equal. The failed region patterns between the methods are similar; however, there are differences that arise from the stress reduction caused by element elimination, which allows probabilistic failed regions to continue to accrue after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  20. An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.

    ERIC Educational Resources Information Center

    Jiang, Hai; And Others

    L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…

  1. Hybrid quantum teleportation: A theoretical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  2. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    PubMed

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables; this generates counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.

  3. Chaotic sources of noise in machine acoustics

    NASA Astrophysics Data System (ADS)

    Moon, F. C., Prof.; Broschart, Dipl.-Ing. T.

    1994-05-01

    In this paper a model is posited for deterministic, random-like noise in machines with sliding rigid parts impacting linear continuous machine structures. Such problems occur in gear transmission systems. A mathematical model is proposed to explain the random-like structure-borne and air-borne noise from such systems when the input is a periodic deterministic excitation of the quasi-rigid impacting parts. An experimental study is presented which supports the model. A thin circular plate is impacted by a chaotically vibrating mass excited by a sinusoidal moving base. The results suggest that the plate vibrations might be predicted by replacing the chaotic vibrating mass with a probabilistic forcing function. Prechaotic vibrations of the impacting mass show classical period doubling phenomena.

  4. Fast-slow asymptotics for a Markov chain model of fast sodium current

    NASA Astrophysics Data System (ADS)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.

  5. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.

  6. Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions

    NASA Astrophysics Data System (ADS)

    Valentine, John S.

    2013-09-01

    By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.

  7. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and the deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting, applicable to daily river discharge forecast error data, from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again provided comparisons of the models' performance.
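
    Fitting a GARCH(1,1) model to a forecast-error series reduces to maximising a Gaussian likelihood over the recursion sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}. A self-contained sketch on synthetic data; dedicated packages exist for this, and the hand-rolled likelihood below is only for illustration:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def garch11_nll(params, e):
        """Negative Gaussian log-likelihood of a GARCH(1,1) error series."""
        omega, alpha, beta = params
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf                      # positivity / stationarity
        s2 = np.empty_like(e)
        s2[0] = e.var()                        # initialise at sample variance
        for t in range(1, len(e)):
            s2[t] = omega + alpha * e[t - 1] ** 2 + beta * s2[t - 1]
        return 0.5 * np.sum(np.log(2 * np.pi * s2) + e ** 2 / s2)

    # Synthetic "forecast error" series with volatility clustering.
    rng = np.random.default_rng(5)
    n, s2 = 2000, 1.0
    e = np.empty(n)
    for t in range(n):
        e[t] = rng.normal(scale=np.sqrt(s2))
        s2 = 0.1 + 0.15 * e[t] ** 2 + 0.80 * s2

    fit = minimize(garch11_nll, x0=[0.05, 0.1, 0.8], args=(e,),
                   method='Nelder-Mead')
    print("omega, alpha, beta =", np.round(fit.x, 3))
    ```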

  8. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the evolution of the fraction of solid, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is that it gives a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'

  9. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

    We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction f_jk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model, the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: A unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: All oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.

  10. Ensemble assimilation of ARGO temperature profile, sea surface temperature, and altimetric satellite data into an eddy permitting primitive equation model of the North Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.

    2015-07-01

    Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histogram, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations, in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
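
    The CRPS used above has a convenient sample form for an ensemble: CRPS = E|X - y| - 0.5 E|X - X'|, with the expectations taken over ensemble members X, X'. A sketch; the 60-member Gaussian ensemble is invented, merely echoing the ensemble size in the record:

    ```python
    import numpy as np

    def crps_ensemble(ens, obs):
        """Sample CRPS for one forecast: E|X - y| - 0.5 E|X - X'|."""
        ens = np.asarray(ens, dtype=float)
        term1 = np.mean(np.abs(ens - obs))
        term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
        return term1 - term2

    rng = np.random.default_rng(6)
    ens = rng.normal(loc=20.0, scale=1.5, size=60)   # toy 60-member forecast
    print(f"CRPS vs obs=21.0: {crps_ensemble(ens, 21.0):.3f}")
    print(f"CRPS vs obs=25.0: {crps_ensemble(ens, 25.0):.3f}")  # worse forecast
    ```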

  11. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays.

    PubMed

    Kim, Sung-Cheol; Wunsch, Benjamin H; Hu, Huan; Smith, Joshua T; Austin, Robert H; Stolovitzky, Gustavo

    2017-06-27

    Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input.

  12. Panel summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.

    1987-04-01

    The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.

  13. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part I Deterministic Models. Part II, Chapter 3.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Addressing the question of effective models to measure change and the change process, the author suggests that linear structural equation systems may be viewed as steady state outcomes of continuous-change models and have rich sociological grounding. Two interpretations of the…

  14. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that both strategies produced very similar solutions; however, the problems encountered with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
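
    SciPy's SLSQP method solves exactly this kind of nonlinear program. The sketch below maximises a productivity surrogate subject to a conversion constraint; the polynomial response surfaces and all coefficients are invented placeholders, not the paper's kinetic or statistical models:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Invented response surfaces in dilution rate D and recycle ratio R.
    def productivity(z):
        D, R = z
        return 2.0 * D + 1.5 * R - 1.8 * D ** 2 - 0.8 * R ** 2 + 0.5 * D * R

    def conversion(z):
        D, R = z
        return 0.98 - 0.6 * D + 0.25 * R

    res = minimize(
        lambda z: -productivity(z),            # maximise productivity
        x0=[0.3, 0.3],
        method='SLSQP',                        # sequential quadratic programming
        bounds=[(0.05, 1.0), (0.0, 0.9)],
        constraints=[{'type': 'ineq',          # require conversion >= 0.90
                      'fun': lambda z: conversion(z) - 0.90}],
    )
    D, R = res.x
    print(f"optimal D={D:.3f}, R={R:.3f}, "
          f"productivity={productivity(res.x):.3f}, "
          f"conversion={conversion(res.x):.3f}")
    ```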

  15. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays

    PubMed Central

    Kim, Sung-Cheol; Wunsch, Benjamin H.; Hu, Huan; Smith, Joshua T.; Stolovitzky, Gustavo

    2017-01-01

    Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input. PMID:28607075

  16. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.

  17. Periodicity and chaos from switched flow systems - Contrasting examples of discretely controlled continuous systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.

    1993-01-01

    We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.

  18. Converting differential-equation models of biological systems to membrane computing.

    PubMed

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. That has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  20. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes, as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
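
    The core Gaussian-process prediction step named above is a kriging computation: condition a joint Gaussian on the observed values via a Cholesky factorisation. A minimal 1-D sketch with a squared-exponential covariance and invented data (the applications in the record involve far larger, structured datasets):

    ```python
    import numpy as np

    def sq_exp_kernel(a, b, ell=1.0, var=1.0):
        """Squared-exponential covariance, a standard stationary choice."""
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / ell) ** 2)

    # Toy 1-D dataset; the nugget term represents measurement noise.
    rng = np.random.default_rng(7)
    x = np.sort(rng.uniform(0, 10, 25))
    y = np.sin(x) + 0.1 * rng.normal(size=x.size)
    x_star = np.linspace(0, 10, 5)

    K = sq_exp_kernel(x, x) + 0.1 ** 2 * np.eye(x.size)
    K_s = sq_exp_kernel(x_star, x)

    # GP posterior mean and variance (zero prior mean), via Cholesky.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = sq_exp_kernel(x_star, x_star).diagonal() - np.sum(v ** 2, axis=0)

    for xs, m, s2 in zip(x_star, mean, var):
        print(f"x*={xs:4.1f}  mean={m:+.3f}  std={np.sqrt(s2):.3f}")
    ```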

  1. Ensemble assimilation of ARGO temperature profile, sea surface temperature and Altimetric satellite data into an eddy permitting primitive equation model of the North Atlantic ocean

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2015-04-01

    Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. 60 ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histogram before the assimilation experiments. Incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with observations used in the assimilation experiments and independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analysed jointly. The consistency and complementarity between both validations are highlighted. High reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.

  2. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  3. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  4. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
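
    To make the non-intrusive idea concrete, the sketch below pushes Latin Hypercube samples of two uncertain inputs through an unchanged deterministic solver and summarizes the response statistically; a full implementation would fit a Polynomial Chaos surrogate to these input-output pairs. The toy simulate function and the parameter ranges are assumptions for illustration only.

      import numpy as np
      from scipy.stats import qmc

      def simulate(length, modulus):
          # Stand-in for one deterministic generalized-alpha run returning
          # a scalar response of interest (e.g. a peak deflection).
          return length ** 2 / modulus

      sampler = qmc.LatinHypercube(d=2, seed=1)
      unit = sampler.random(n=200)
      # Column 0: uniform geometric length; column 1: elastic modulus range.
      samples = qmc.scale(unit, l_bounds=[0.29, 180e9], u_bounds=[0.31, 220e9])

      responses = np.array([simulate(L, E) for L, E in samples])
      print(responses.mean(), responses.std())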

  5. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of closed-loop behavior: one is chaotic, the other eventually periodic. The first example exhibits chaotic behavior that can be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete-time systems where the controller can choose from a set of transition maps to implement.

  6. Stochastic and deterministic model of microbial heat inactivation.

    PubMed

    Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2010-03-01

    Microbial inactivation is described by a model based on the changing survival probabilities of individual cells or spores. It is presented in a stochastic and discrete form for small groups, and as a continuous deterministic model for larger populations. If the underlying mortality probability function remains constant throughout the treatment, the model generates first-order ("log-linear") inactivation kinetics. Otherwise, it produces survival patterns that include Weibullian ("power-law") with upward or downward concavity, tailing with a residual survival level, complete elimination, flat "shoulder" with linear or curvilinear continuation, and sigmoid curves. In both forms, the same algorithm or model equation applies to isothermal and dynamic heat treatments alike. Constructing the model does not require assuming a kinetic order or knowledge of the inactivation mechanism. The general features of its underlying mortality probability function can be deduced from the experimental survival curve's shape. Once identified, the function's coefficients, the survival parameters, can be estimated directly from the experimental survival ratios by regression. The model is testable in principle but matching the estimated mortality or inactivation probabilities with those of the actual cells or spores can be a technical challenge. The model is not intended to replace current models to calculate sterility. Its main value, apart from connecting the various inactivation patterns to underlying probabilities at the cellular level, might be in simulating the irregular survival patterns of small groups of cells and spores. In principle, it can also be used for nonthermal methods of microbial inactivation and their combination with heat.
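
    The two forms of the model can be contrasted with a short simulation. Assuming a Weibullian survival model log10 S(t) = -b t^n, the sketch below runs the continuous deterministic curve alongside a discrete stochastic version in which each member of a small group dies in each time step with the conditional probability implied by the same model; all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      b, n = 0.15, 1.5        # assumed Weibullian rate coefficient and shape
      t = np.arange(0, 21)    # time, arbitrary units

      # Continuous deterministic form: log10 S(t) = -b * t**n.
      log_surv_det = -b * t ** n

      # Stochastic discrete form: binomial deaths with the conditional
      # per-step mortality probability implied by the same model.
      N0, alive = 100, 100
      counts = [alive]
      for k in range(1, len(t)):
          s_prev = 10.0 ** (-b * t[k - 1] ** n)
          s_now = 10.0 ** (-b * t[k] ** n)
          p_die = 1.0 - s_now / s_prev
          alive -= rng.binomial(alive, p_die)
          counts.append(alive)

      print(log_surv_det[-1], np.log10(max(counts[-1], 1) / N0))

    For large N0 the two curves agree closely; for groups of a few cells the stochastic runs scatter widely around the deterministic curve, which is the regime the authors highlight.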

  7. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that the presence of light saturating fluids clearly affects the elastic response of the sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  8. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears, so the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
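
    The threshold role of ℛ0 is easy to see in a minimal single-group analogue. The sketch below integrates an SIR model with vaccination of newborns, for which the threshold quantity is R0 = β(1 - p)/(γ + μ); the single-group structure and all parameter values are simplifying assumptions, not the paper's multi-group MSIR model.

      import numpy as np
      from scipy.integrate import odeint

      beta, gamma, mu, p = 0.4, 0.1, 0.01, 0.3   # illustrative rates

      def rhs(y, t):
          S, I, R = y
          N = S + I + R
          dS = mu * N * (1 - p) - beta * S * I / N - mu * S
          dI = beta * S * I / N - (gamma + mu) * I
          dR = mu * N * p + gamma * I - mu * R
          return [dS, dI, dR]

      R0 = beta * (1 - p) / (gamma + mu)          # threshold quantity
      t = np.linspace(0.0, 400.0, 2001)
      S, I, R = odeint(rhs, [0.99, 0.01, 0.0], t).T
      print(f"R0 = {R0:.2f}, final infective fraction = {I[-1]:.4f}")

    With these numbers R0 is about 2.55 and the infective fraction settles at an endemic level; raising the vaccination rate p until R0 drops below 1 drives it to zero.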

  9. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  10. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation with ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion and Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
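
    A minimal Euler-Maruyama sketch for a scalar SDE dX = f(X) dt + g(X) dW is given below; the logistic drift and multiplicative noise merely stand in for the three-dimensional fermentation kinetics of the paper, and all coefficients are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      r, K, sigma = 0.5, 10.0, 0.2
      f = lambda x: r * x * (1.0 - x / K)   # drift: logistic growth
      g = lambda x: sigma * x               # diffusion: multiplicative noise

      dt, T = 0.01, 20.0
      n = int(T / dt)
      x = np.empty(n + 1)
      x[0] = 0.5
      for k in range(n):
          dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
          x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW

      print(x[-1])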

  11. Network-level reproduction number and extinction threshold for vector-borne diseases.

    PubMed

    Xue, Ling; Scoglio, Caterina

    2015-06-01

    The basic reproduction number of deterministic models is an essential quantity for predicting whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge for disease control, elimination, and mitigation of infectious diseases. Relationships between the basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models and the extinction thresholds of the corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks are in agreement with the analytical results without any assumptions, reinforcing that the relationships may always exist and posing the mathematical problem of proving their existence in general. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. These research findings may improve understanding of thresholds for disease persistence and thereby help control vector-borne diseases.
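
    The flavor of such a relationship can be sketched with the simplest case: for a linear birth-death (branching) approximation of the early epidemic, the extinction probability starting from i0 infectives is (1/R0)^i0. The simulation below checks this against the embedded jump chain of a CTMC; the rates are illustrative and the single-population setting is far simpler than the network models of the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      beta, gamma = 0.3, 0.1          # per-capita infection and recovery rates
      R0 = beta / gamma
      i0, cutoff, runs = 2, 200, 5000

      extinct = 0
      for _ in range(runs):
          i = i0
          while 0 < i < cutoff:
              # Next event is an infection w.p. beta/(beta+gamma), else recovery.
              i += 1 if rng.random() < beta / (beta + gamma) else -1
          extinct += (i == 0)

      print("simulated:", extinct / runs, "branching formula:", (1 / R0) ** i0)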

  12. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
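
    The two-point idea can be illustrated with Rosenblueth's point-estimate method: the deterministic model is evaluated at the mean plus or minus one standard deviation of each uncertain variable, and the 2^3 equally weighted results yield approximate response moments. The toy head function below stands in for the two-layer groundwater model, and the input statistics are invented for illustration.

      import itertools
      import numpy as np

      def head(K, S, q):
          # Stand-in for the deterministic two-layer water-table model.
          return 100.0 - q / (K * S)

      # Uncertain inputs: conductivity, storage coefficient, source-sink term.
      means = np.array([5.0, 0.10, 2.0])
      stds = np.array([1.0, 0.02, 0.5])

      # Evaluate at mean +/- std for every variable: 2**3 = 8 points,
      # equally weighted for uncorrelated, symmetric inputs.
      values = np.array([head(*(means + np.array(signs) * stds))
                         for signs in itertools.product((-1, 1), repeat=3)])
      mean_h = values.mean()
      cv = values.std() / mean_h   # analogue of the coefficient of variation
      print(mean_h, cv)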

  13. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  14. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are then only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.

  15. Location of coating defects and assessment of level of cathodic protection on underground pipelines using AC impedance, deterministic and non-deterministic models

    NASA Astrophysics Data System (ADS)

    Castaneda-Lopez, Homero

    A methodology for detecting and locating defects or discontinuities in the outside covering of coated metal underground pipelines subjected to cathodic protection is addressed. On the basis of wide-range AC impedance signals at various frequencies applied to a steel-coated pipeline system, and by measuring its corresponding transfer function under several laboratory simulation scenarios, a physical laboratory setup of an underground cathodically protected, coated pipeline was built. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline covering, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the necessity of further experimental work. However, due to the chaotic nature of the transfer function response of this system under several conditions, it is believed that non-deterministic models based on pattern recognition algorithms are suitable for field condition analysis. A non-deterministic approach was used for experimental analysis by applying an artificial neural network (ANN) algorithm based on classification analysis, capable of studying the pipeline system and differentiating the variables that can change impedance conditions. These variables include the level of cathodic protection, the location of discontinuities (holidays), and the severity of corrosion. This work demonstrated a proof of concept for a well-known technique and a novel algorithm capable of classifying impedance data from experimental results to predict the exact location of active holidays and defects on buried pipelines. Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.

  16. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions covering the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA, and the resulting non-elementary functions, has been justified in the deterministic case, it is not clear when the stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
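
    The kind of heuristic reduction being tested can be written down in a few lines: a Gillespie simulation of a birth-death process whose production propensity is a non-elementary Hill function, as would result from a QSSA on a fast repressor-binding step. The parameter values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      k_max, Kd, hill, gamma = 20.0, 15.0, 2.0, 0.1

      def propensities(x):
          birth = k_max * Kd ** hill / (Kd ** hill + x ** hill)  # Hill repression
          death = gamma * x
          return np.array([birth, death])

      t, x, T = 0.0, 0, 500.0
      samples = []
      while t < T:
          a = propensities(x)
          a0 = a.sum()
          t += rng.exponential(1.0 / a0)          # time to next reaction
          x += 1 if rng.random() < a[0] / a0 else -1
          samples.append(x)

      print(np.mean(samples))

    Comparing the stationary statistics of this reduced simulation with a simulation of the full elementary network, over a range of initial conditions, is the kind of check the authors' criterion makes cheap to perform with deterministic runs.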

  17. Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.

    PubMed

    Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai

    2006-09-08

    Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
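
    The essential modification to the stochastic simulation algorithm can be sketched as follows: transcription events selected now are queued and only change the state after a delay tau, and queued completions pre-empt the next tentatively scheduled reaction. The rates and delay below are illustrative, not the fitted hes1 values.

      import heapq
      import numpy as np

      rng = np.random.default_rng(6)
      k_tx, k_deg, tau = 1.0, 0.1, 20.0   # initiation rate, decay rate, delay

      t, m, T = 0.0, 0, 500.0
      pending = []                         # completion times of delayed events
      while t < T:
          a0 = k_tx + k_deg * m
          t_next = t + rng.exponential(1.0 / a0)
          if pending and pending[0] < t_next:
              t = heapq.heappop(pending)   # a delayed transcript completes
              m += 1
              continue
          t = t_next
          if rng.random() < k_tx / a0:
              heapq.heappush(pending, t + tau)  # initiate now, finish at t + tau
          else:
              m -= 1                       # degradation of one transcript

      print(m)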

  18. Dynamical Localization for Unitary Anderson Models

    NASA Astrophysics Data System (ADS)

    Hamza, Eman; Joye, Alain; Stolz, Günter

    2009-11-01

    This paper establishes dynamical localization properties of certain families of unitary random operators on the d-dimensional lattice in various regimes. These operators are generalizations of one-dimensional physical models of quantum transport and draw their name from the analogy with the discrete Anderson model of solid state physics. They consist of a product of a deterministic unitary operator and a random unitary operator. The deterministic operator has a band structure, is absolutely continuous and plays the role of the discrete Laplacian. The random operator is diagonal with elements given by i.i.d. random phases distributed according to some absolutely continuous measure and plays the role of the random potential. In dimension one, these operators belong to the family of CMV-matrices in the theory of orthogonal polynomials on the unit circle. We implement the method of Aizenman-Molchanov to prove exponential decay of the fractional moments of the Green function for the unitary Anderson model in the following three regimes: in any dimension, throughout the spectrum at large disorder and near the band edges at arbitrary disorder and, in dimension one, throughout the spectrum at arbitrary disorder. We also prove that exponential decay of fractional moments of the Green function implies dynamical localization, which in turn implies spectral localization. These results complete the analogy with the self-adjoint case, where dynamical localization is known to be true in the same three regimes.

  19. Fractional dynamics of globally slow transcription and its impact on deterministic genetic oscillation.

    PubMed

    Wei, Kun; Gao, Shilong; Zhong, Suchuan; Ma, Hong

    2012-01-01

    In dynamical systems theory, a system which can be described by differential equations is called a continuous dynamical system. In studies of genetic oscillation, most early-stage deterministic models are built on ordinary differential equations (ODE). Gene transcription, which is a vital part of genetic oscillation, is therefore presupposed to be a continuous dynamical system by default. However, recent studies have argued that discontinuous transcription might be more common than continuous transcription. In this paper, by appending the inserted silent interval lying between two neighboring transcriptional events to the end of the preceding event, we established that the running time for an intact transcriptional event increases and gene transcription thus shows slow dynamics. By globally replacing the original time increment for each state increment with a larger one, we introduced fractional differential equations (FDE) to describe such globally slow transcription. The impact of fractionalization on genetic oscillation was then studied in two early-stage models--the Goodwin oscillator and the Rössler oscillator. By constructing a "dual memory" oscillator--the fractional delay Goodwin oscillator--we suggested that the four general requirements for generating genetic oscillation should be revised to be negative feedback, sufficient nonlinearity, sufficient memory and proper balancing of timescales. The numerical study of the fractional Rössler oscillator implied that globally slow transcription tends to lower the chance of a coupled or more complex nonlinear genetic oscillatory system behaving chaotically.
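
    As a worked sketch of how an FDE version of an early-stage model can be integrated, the code below applies an explicit Grünwald-Letnikov scheme to a fractional-order Goodwin-type oscillator, D^α u = F(u) with 0 < α ≤ 1 (α = 1 recovers the classical ODE). The parameters, and the Caputo-style handling of the initial condition by working with the deviation u - u0, are illustrative assumptions.

      import numpy as np

      alpha, h, steps = 0.9, 0.05, 2000
      a, b, m = 4.0, 0.5, 9.0               # production, decay, Hill coefficient

      def F(u):
          x, y, z = u                       # mRNA, protein, repressor
          return np.array([a / (1.0 + z ** m) - b * x,
                           x - b * y,
                           y - b * z])

      # Binomial weights c_j = (-1)^j * C(alpha, j), built recursively.
      c = np.empty(steps + 1)
      c[0] = 1.0
      for j in range(1, steps + 1):
          c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)

      u0 = np.array([0.5, 0.5, 0.5])
      dev = np.zeros((steps + 1, 3))        # deviation u - u0
      for n in range(1, steps + 1):
          memory = (c[1:n + 1, None] * dev[n - 1::-1]).sum(axis=0)
          dev[n] = h ** alpha * F(dev[n - 1] + u0) - memory

      print(dev[-1] + u0)

    The full-history sum is what implements the "memory" the abstract refers to; lowering α strengthens the memory and slows the effective dynamics.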

  20. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  1. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multitemperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 at 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries, and between deterministic and Monte Carlo codes, straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). Then an expanded suite of tests was used for additional verification and, finally, the library was validated using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  2. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  3. Identifying variably saturated water-flow patterns in a steep hillslope under intermittent heavy rainfall

    USGS Publications Warehouse

    El-Kadi, A. I.; Torikai, J.D.

    2001-01-01

    The objective of this paper is to identify water-flow patterns in part of an active landslide, through the use of numerical simulations and data obtained during a field study. The approaches adopted include measuring rainfall events and pore-pressure responses in both saturated and unsaturated soils at the site. To account for soil variability, the Richards equation is solved within deterministic and stochastic frameworks. The deterministic simulations considered average water-retention data, adjusted retention data to account for stones or cobbles, retention functions for a heterogeneous pore structure, and continuous retention functions for preferential flow. The stochastic simulations applied the Monte Carlo approach which considers statistical distribution and autocorrelation of the saturated conductivity and its cross correlation with the retention function. Although none of the models is capable of accurately predicting field measurements, appreciable improvement in accuracy was attained using stochastic, preferential flow, and heterogeneous pore-structure models. For the current study, continuum-flow models provide reasonable accuracy for practical purposes, although they are expected to be less accurate than multi-domain preferential flow models.

  4. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.

  5. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  6. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions into the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs of the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.

  7. Oscillatory Regulation of Hes1: Discrete Stochastic Delay Modelling and Simulation

    PubMed Central

    Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai

    2006-01-01

    Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein. PMID:16965175

  8. Thermostatted kinetic equations as models for complex systems in physics and life sciences.

    PubMed

    Bianca, Carlo

    2012-12-01

    Statistical mechanics is a powerful method for understanding equilibrium thermodynamics. An equivalent theoretical framework for nonequilibrium systems has remained elusive. The thermodynamic forces driving the system away from equilibrium introduce energy that must be dissipated if nonequilibrium steady states are to be obtained. Historically, further terms were introduced, collectively called a thermostat, whose original application was to generate constant-temperature equilibrium ensembles. This review surveys kinetic models coupled with time-reversible deterministic thermostats for the modeling of large systems composed of both inert matter particles and living entities. The introduction of deterministic thermostats allows one to model the onset of the nonequilibrium stationary states that are typical of most real-world complex systems. The first part of the paper is focused on a general presentation of the main physical and mathematical definitions and tools: nonequilibrium phenomena, the Gauss least constraint principle and Gaussian thermostats. The second part provides a review of a variety of thermostatted mathematical models in physics and the life sciences, including the Kac, Boltzmann, Jager-Segel and thermostatted (continuous and discrete) kinetic models for active particles. Applications refer to semiconductor devices, nanosciences, biological phenomena, vehicular traffic, social and economic systems, and crowd and swarm dynamics. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.

  10. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic inputs, noisy "and" gate with hierarchy (DINA-H) model and the deterministic inputs, noisy "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…

  11. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g

  12. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  13. Roles of factorial noise in inducing bimodal gene expression

    NASA Astrophysics Data System (ADS)

    Liu, Peijiang; Yuan, Zhanjiang; Huang, Lifang; Zhou, Tianshou

    2015-06-01

    Some gene regulatory systems can exhibit bimodal distributions of mRNA or protein although the deterministic counterparts are monostable. This noise-induced bimodality is an interesting phenomenon and has important biological implications, but it is unclear how different sources of expression noise (each source creates so-called factorial noise that is defined as a component of the total noise) contribute separately to this stochastic bimodality. Here we consider a minimal model of gene regulation, which is monostable in the deterministic case. Although simple, this system contains factorial noise of two main kinds: promoter noise due to switching between gene states and transcriptional (or translational) noise due to synthesis and degradation of mRNA (or protein). To better trace the roles of factorial noise in inducing bimodality, we also analyze two limit models, continuous and adiabatic approximations, apart from the exact model. We show that in the case of slow gene switching, the continuous model where only promoter noise is considered can exhibit bimodality; in the case of fast switching, the adiabatic model where only transcriptional or translational noise is considered can also exhibit bimodality but the exact model cannot; and in other cases, both promoter noise and transcriptional or translational noise can cooperatively induce bimodality. Since slow gene switching and large protein copy numbers are characteristics of eukaryotic cells, whereas fast gene switching and small protein copy numbers are characteristics of prokaryotic cells, we infer that eukaryotic stochastic bimodality is induced mainly by promoter noise, whereas prokaryotic stochastic bimodality is induced primarily by transcriptional or translational noise.
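
    The slow-switching mechanism can be reproduced with a few lines of Gillespie simulation of the telegraph (two-state promoter) model; with switching much slower than mRNA degradation, the event-sampled copy-number histogram develops peaks near 0 and near k_tx/k_deg. All rates are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      k_on, k_off, k_tx, k_deg = 0.01, 0.01, 2.0, 0.1   # slow gene switching

      t, g, m, T = 0.0, 0, 0, 5.0e4
      samples = []
      while t < T:
          rates = np.array([k_on * (1 - g), k_off * g, k_tx * g, k_deg * m])
          a0 = rates.sum()
          t += rng.exponential(1.0 / a0)
          r = rng.random() * a0
          if r < rates[0]:
              g = 1                 # promoter switches on
          elif r < rates[:2].sum():
              g = 0                 # promoter switches off
          elif r < rates[:3].sum():
              m += 1                # transcription
          else:
              m -= 1                # degradation
          samples.append(m)         # crude event-based sampling

      hist, _ = np.histogram(samples, bins=np.arange(0, 40))
      print(hist.argmax(), np.mean(samples))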

  14. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  15. Density waves in granular flow

    NASA Astrophysics Data System (ADS)

    Herrmann, H. J.; Flekkøy, E.; Nagel, K.; Peng, G.; Ristow, G.

    Ample experimental evidence has shown the existence of spontaneous density waves in granular material flowing through pipes or hoppers. Using Molecular Dynamics Simulations we show that several types of waves exist and find that these density fluctuations follow a 1/f spectrum. We compare this behaviour to deterministic one-dimensional traffic models. If positions and velocities are continuous variables the model shows self-organized criticality driven by the slowest car. We also present Lattice Gas and Boltzmann Lattice Models which reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a nonlinear dependence on density which characterizes granular flow.

  16. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver a sufficient statistical background concerning low-frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of the apparatus on board an A300 Airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.

  17. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt their spatial resolution.

  18. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  19. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  20. Quasi-continuous stochastic simulation framework for flood modelling

    NASA Astrophysics Data System (ADS)

    Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas

    2017-04-01

    Typically, flood modelling in the context of everyday engineering practices is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the ignorance of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modeling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while simultaneously preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of the potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing for expressing the design variables in statistical terms and thus properly evaluating the flood risk.
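
    The core of steps (1)-(3) fits in a few lines. The sketch below drives the SCS-CN formula, Q = (P - 0.2 S)^2 / (P + 0.8 S) with S = 25400/CN - 254 (in mm), with synthetic daily rainfall, updating CN from the antecedent five-day rainfall; the gamma rainfall generator, CN values and AMC thresholds are illustrative assumptions rather than the CastaliaR/HyetosMinute components used in the paper.

      import numpy as np

      rng = np.random.default_rng(8)

      def curve_number(p5):
          # AMC-style adjustment from antecedent five-day rainfall (mm).
          if p5 < 13.0:
              return 55.0    # dry
          if p5 < 28.0:
              return 75.0    # average
          return 88.0         # wet

      days = 365
      rain = rng.gamma(shape=0.3, scale=12.0, size=days)   # synthetic mm/day
      runoff = np.zeros(days)
      for d in range(days):
          p5 = rain[max(0, d - 5):d].sum()                 # antecedent rainfall
          S = 25400.0 / curve_number(p5) - 254.0           # retention, mm
          P, Ia = rain[d], 0.2 * S                         # initial abstraction
          if P > Ia:
              runoff[d] = (P - Ia) ** 2 / (P + 0.8 * S)

      print(runoff.max(), runoff.sum())

    Repeating this over many synthetic years, and then applying the event-based procedure to the extracted extremes, is what turns a single design storm into a statistical description of flood risk.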

  1. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 2 PHARMACOKINETIC MODELING

    EPA Science Inventory

    The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...

  2. Using stochastic models to incorporate spatial and temporal variability [Exercise 14]

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  3. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
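
    The relationship between the stochastic model and its deterministic approximation for sample-path averages can be illustrated with a single-node birth-death caricature of a production network; the rates below are illustrative only, not taken from the paper:

```python
import numpy as np

# Single node of a production network as a birth-death process: units enter
# at rate lam and are processed (removed) at per-unit rate mu. All rates are
# illustrative; the paper treats a full network with disease disturbances.
lam, mu, x0, T = 5.0, 0.1, 0, 100.0

def gillespie_final_state(rng):
    t, x = 0.0, x0
    while t < T:
        a_birth, a_death = lam, mu * x
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)              # time to next event
        x += 1 if rng.random() < a_birth / a_total else -1
    return x

rng = np.random.default_rng(1)
finals = [gillespie_final_state(rng) for _ in range(500)]

# Deterministic approximation for the sample-path average:
# dx/dt = lam - mu * x, with equilibrium x* = lam / mu.
print("stochastic mean :", np.mean(finals))
print("ODE equilibrium :", lam / mu)
```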

  4. Disentangling the stochastic behavior of complex time series

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus

    2016-10-01

    Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events - or jumps - with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a central problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of stochastic dynamical jump-diffusion modelling, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this modelling can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. In particular, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
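
    A flavor of the data-driven inference can be given for the pure-diffusion case. The sketch below estimates drift and diffusion from conditional moments of the increments of a simulated Ornstein-Uhlenbeck series; the jump-diffusion separation in the paper additionally uses higher-order conditional moments, which are omitted here, and all parameter values are illustrative:

```python
import numpy as np

# Simulate an Ornstein-Uhlenbeck diffusion dx = -theta*x dt + sigma dW
# (no jumps, for simplicity); all parameters are illustrative.
rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 1_000_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# Kramers-Moyal conditional moments: D1(x) ~ <dx | x>/dt (drift) and
# D2(x) ~ <dx^2 | x>/(2 dt) (diffusion); jump separation would additionally
# use fourth- and sixth-order conditional moments, omitted here.
dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
which = np.digitize(x[:-1], bins)
for b in (5, 10, 15):                        # a few interior bins
    sel = which == b
    xc = 0.5 * (bins[b - 1] + bins[b])
    d1 = dx[sel].mean() / dt
    d2 = (dx[sel] ** 2).mean() / (2 * dt)
    print(f"x={xc:+.2f}  D1={d1:+.3f} (true {-theta * xc:+.3f})"
          f"  D2={d2:.3f} (true {sigma ** 2 / 2:.3f})")
```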

  5. Front propagation and effect of memory in stochastic desertification models with an absorbing state

    NASA Astrophysics Data System (ADS)

    Herman, Dor; Shnerb, Nadav M.

    2017-08-01

    Desertification in dryland ecosystems is considered to be a major environmental threat that may lead to devastating consequences. The concern increases when the system admits two alternative steady states and the transition is abrupt and irreversible (catastrophic shift). However, recent studies show that the inherent stochasticity of the birth-death process, when superimposed on the presence of an absorbing state, may lead to a continuous (second order) transition even if the deterministic dynamics supports a catastrophic transition. Following these works we present here a numerical study of a one-dimensional stochastic desertification model, where the deterministic predictions are confronted with the observed dynamics. Our results suggest that a stochastic spatial system allows for a propagating front only when its active phase invades the inactive (desert) one. In the extinction phase one observes transient front propagation followed by a global collapse. In the presence of a seed bank the vegetation state is shown to be more robust against demographic stochasticity, but the transition in that case still belongs to the directed percolation equivalence class.

  6. Multi-Strain Deterministic Chaos in Dengue Epidemiology, A Challenge for Computational Mathematics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Kooi, Bob W.; Stollenwerk, Nico

    2009-09-01

    Recently, we have analysed epidemiological models of competing strains of pathogens, and hence differences in transmission for primary versus secondary infection due to interaction of the strains with previously acquired immunities, known as antibody-dependent enhancement (ADE), as has been described for dengue fever. These models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied to gain further insight into such dynamical structures. In particular, Lyapunov spectra, which quantify the predictability horizon of the epidemiological system, are computationally very demanding. We show ways to speed up the computation of such Lyapunov spectra by a factor of more than ten by parallelizing previously used sequential C programs. Such fast computation of Lyapunov spectra will be especially useful in future investigations of seasonally forced versions of the present models, as they are needed for data analysis.

  7. Exploration of cellular reaction systems.

    PubMed

    Kirkilionis, Markus

    2010-01-01

    We discuss and review different ways to map cellular components and their temporal interactions with other such components to different non-spatially explicit mathematical models. The essential choices made in the literature are between discrete and continuous state spaces, between rule-based and event-based state updates, and between deterministic and stochastic series of such updates. The temporal modelling of cellular regulatory networks (dynamic network theory) is compared with static network approaches in two introductory sections on general network modelling. We then concentrate on deterministic rate-based dynamic regulatory networks and their derivation. In the derivation, we include methods from multiscale analysis and also look at structured large particles, here called macromolecular machines. It is clear that mass-action systems and their derivatives, i.e. networks based on enzyme kinetics, play the dominant role in the literature. The tools to analyse cellular reaction networks are without doubt most complete for mass-action systems. We devote a long section at the end to a comprehensive review of the related tools and mathematical methods. The emphasis is on showing how cellular reaction networks can be analysed with the help of different associated graphs and the dissection into modules, i.e. sub-networks.

  8. The effects of demand uncertainty on strategic gaming in the merit-order electricity pool market

    NASA Astrophysics Data System (ADS)

    Frem, Bassam

    In a merit-order electricity pool market, generating companies (Gencos) game with their offered incremental cost to meet the electricity demand and earn bigger market shares and higher profits. However, when the demand is treated as a random variable instead of as a known constant, these Genco gaming strategies become more complex. After a brief introduction to electricity markets and gaming, the effects of demand uncertainty on strategic gaming are studied in two parts: (1) demand modelled as a discrete random variable and (2) demand modelled as a continuous random variable. In the first part, we propose the discrete stochastic strategy (DSS) algorithm, which generates a strategic set of offers from the perspective of the Gencos' profits. The DSS offers were tested and compared to the deterministic Nash equilibrium (NE) offers based on the predicted demand. This comparison, based on the expected Genco profits, showed the DSS to be a better strategy in a probabilistic sense than the deterministic NE. In the second part, we present three gaming strategies: (1) Deterministic NE, (2) No-Risk and (3) Risk-Taking. The strategies were then tested and their profit performances compared using two assessment tools: (a) expected value and standard deviation and (b) inverse cumulative distribution. We concluded that despite yielding higher profit performance under the right conjectures, Risk-Taking strategies are very sensitive to incorrect conjectures about the competitors' gaming decisions. As such, despite its lower profit performance, the No-Risk strategy was deemed preferable.

  9. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Moreover, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since the deterministic connections are easier to detect than the stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
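
    As a minimal illustration of the underlying idea (not the paper's bounds), consider the standard mean-field SIS threshold on a network, given by the reciprocal of the adjacency spectral radius; since adding edges cannot decrease the spectral radius, the deterministic backbone alone already bounds the threshold. All sizes and probabilities below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Deterministic backbone: a ring of contacts that are always present.
A_det = np.zeros((n, n))
idx = np.arange(n)
A_det[idx, (idx + 1) % n] = 1.0
A_det[(idx + 1) % n, idx] = 1.0

# Nonuniform stochastic overlay: extra edges, each with its own probability.
probs = np.triu(rng.uniform(0.0, 0.02, (n, n)), 1)
extra = (rng.random((n, n)) < probs).astype(float)
extra = extra + extra.T
A = np.maximum(A_det, extra)

# Mean-field SIS threshold: effective infection rate tau_c = 1 / lambda_max(A).
lam_det = np.linalg.eigvalsh(A_det)[-1]
lam_full = np.linalg.eigvalsh(A)[-1]
# Adding edges cannot decrease the spectral radius, so the deterministic part
# alone yields an upper bound on the threshold of the full network.
print(f"upper bound from deterministic part: {1 / lam_det:.3f}")
print(f"threshold of one realized network  : {1 / lam_full:.3f}")
```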

  10. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time, measured for a large sample (say > 0.1 L), are reproducible and deterministic, while for a small sample (say < 1 mL) these values are irreproducible and stochastic. Such behaviors of the MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of the stochastic MSZW and induction time were derived. The average values of the stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. These different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of the MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.

  11. Atomic clocks and the continuous-time random-walk

    NASA Astrophysics Data System (ADS)

    Formichella, Valerio; Camparo, James; Tavella, Patrizia

    2017-11-01

    Atomic clocks play a fundamental role in many fields; most notably, they generate Coordinated Universal Time (UTC) and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process); and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for observed frequency jumps, is an alternative to the Wiener process for modelling random-walk frequency noise. This alternative model fits well the behavior of the rubidium clocks flying on GPS Block-IIR satellites. Further, starting from jump statistics, the model can be improved by considering a more general form of continuous-time random walk, and this could bring new insights into the physics of atomic clocks.
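
    The equivalence of the two functional forms is easy to check numerically. The sketch below compares the Allan variance of a Wiener-process frequency series with that of a compound-Poisson one; both should scale linearly with the averaging time tau. The noise amplitudes and jump rate are illustrative, not clock data:

```python
import numpy as np

def allan_var(y, m):
    """Non-overlapping Allan variance of a fractional-frequency series y
    at averaging factor m (averaging time tau = m * tau0)."""
    nseg = len(y) // m
    means = y[:nseg * m].reshape(nseg, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(7)
n = 200_000                                    # samples at tau0 = 1 s

# Wiener-process frequency: classic random-walk frequency noise.
wiener = np.cumsum(rng.normal(0.0, 1e-13, n))

# Compound-Poisson frequency: rare jumps with random amplitudes.
jump_mask = rng.random(n) < 1e-3               # ~Poisson jump times
cpp = np.cumsum(jump_mask * rng.normal(0.0, 3e-12, n))

for m in (10, 100, 1000):
    print(f"tau={m:5d}s  AVAR(Wiener)={allan_var(wiener, m):.3e}"
          f"  AVAR(compound Poisson)={allan_var(cpp, m):.3e}")
# Both columns grow linearly with tau: the tau^{+1} signature of
# random-walk frequency noise, indistinguishable by AVAR alone.
```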

  12. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts and originate from various sources, such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy, using two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region and comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction, and it often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
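
    To fix ideas, here is a minimal scalar Kalman-filter bias corrector of the kind used for deterministic forecasts; the ensemble method in the abstract additionally exploits second-order ensemble statistics, which this sketch does not. The process/observation variances and the synthetic data are illustrative:

```python
import numpy as np

def kalman_bias_correction(forecast, obs, q=0.01, r=1.0):
    """Sequentially estimate a slowly varying forecast bias with a scalar
    Kalman filter (random-walk state) and return corrected forecasts."""
    b, p = 0.0, 1.0                           # bias estimate and its variance
    corrected = np.empty_like(forecast)
    for t in range(len(forecast)):
        corrected[t] = forecast[t] - b        # correct before seeing obs[t]
        p += q                                # predict: bias evolves as a random walk
        k = p / (p + r)                       # Kalman gain
        b += k * ((obs[t] - forecast[t]) - b) # update with the observed error
        p *= (1.0 - k)
    return corrected

# Synthetic test: a forecast with a slowly drifting warm bias.
rng = np.random.default_rng(0)
truth = 15 + 8 * np.sin(np.arange(730) * 2 * np.pi / 365)
bias = 1.5 + 0.002 * np.arange(730)
fc = truth + bias + rng.normal(0, 1.0, 730)
obs = truth + rng.normal(0, 0.3, 730)

print("raw MAE      :", np.abs(fc - obs).mean())
print("corrected MAE:", np.abs(kalman_bias_correction(fc, obs) - obs).mean())
```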

  13. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.

  14. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  15. INCREASING HEAVY OIL RESERVES IN THE WILMINGTON OIL FIELD THROUGH ADVANCED RESERVOIR CHARACTERIZATION AND THERMAL PRODUCTION TECHNOLOGIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2000-02-18

    The project involves using advanced reservoir characterization and thermal production technologies to improve thermal recovery techniques and lower operating and capital costs in a slope and basin clastic (SBC) reservoir in the Wilmington field, Los Angeles Co., CA. Through March 1999, project work has been completed related to data preparation, basic reservoir engineering, development of a deterministic three-dimensional (3-D) geologic model, a 3-D deterministic reservoir simulation model and a rock-log model, well drilling and completions, and surface facilities. Work is continuing on the stochastic geologic model, on developing a 3-D stochastic thermal reservoir simulation model of the Fault Block IIA Tar (Tar II-A) Zone, and on operational work and research studies to prevent thermal-related formation compaction. Thermal-related formation compaction is a concern of the project team due to observed surface subsidence in the local area above the steamflood project. Last quarter, on January 12, the steamflood project lost its inexpensive steam source from the Harbor Cogeneration Plant as a result of the recent deregulation of electrical power rates in California. An operational plan was developed and implemented to mitigate the effects of these two situations. Seven water injection wells were placed in service in November and December 1998 on the flanks of the Phase 1 steamflood area to pressure up the reservoir and fill the existing steam chest. Intensive reservoir engineering and geomechanics studies are continuing to determine the best ways to shut down the steamflood operations in Fault Block II while minimizing any future surface subsidence. The new 3-D deterministic thermal reservoir simulation model is being used to provide sensitivity cases to optimize production, steam injection, future flank cold water injection, and reservoir temperature and pressure. According to the model, reservoir fill-up of the steam chest at the current injection rate of 28,000 BPD and gross and net oil production rates of 7,700 BPD and 750 BOPD (injection-to-production ratio of 4) will occur in October 1999. At that time, the reservoir should act more like a waterflood, and production and cold water injection can be operated at lower net injection rates to be determined. Modeling runs developed this quarter found that varying individual well injection rates to address added production and local pressure problems by sub-zone could reduce steam chest fill-up time by up to one month.

  16. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  17. A Hybrid Method of Moment Equations and Rate Equations to Modeling Gas-Grain Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Y.; Herbst, E.

    2011-05-01

    Grain surfaces play a crucial role in catalyzing many important chemical reactions in the interstellar medium (ISM). The deterministic rate equation (RE) method has often been used to simulate the surface chemistry, but this method becomes inaccurate when the number of reacting particles per grain is typically less than one, which can occur in the ISM. Under such conditions, stochastic approaches such as the master equation are adopted. However, these methods have mostly been constrained to small chemical networks due to the large amounts of processor time and computer power required. In this study, we present a hybrid method consisting of the moment equation approximation to the stochastic master equation approach and deterministic rate equations, to treat a gas-grain model of homogeneous cold cloud cores with time-independent physical conditions. In this model, we use the standard OSU gas-phase network (version OSU2006V3), which involves 458 gas-phase species and more than 4000 reactions, and treat it by deterministic rate equations. A medium-sized surface reaction network, which consists of 21 species and 19 reactions, accounts for the production of stable molecules such as H_2O, CO, CO_2, H_2CO, CH_3OH, NH_3 and CH_4. These surface reactions are treated by a hybrid method of moment equations (Barzel & Biham 2007) and rate equations: when the abundance of a surface species is lower than a specific threshold, say one per grain, we use the "stochastic" moment equations to simulate the evolution; when its abundance goes above this threshold, we use the rate equations. A continuity technique is utilized to secure a smooth transition between these two methods. We have run chemical simulations for times up to 10^8 yr at three temperatures: 10 K, 15 K, and 20 K. The results are compared with those generated from (1) a completely deterministic model that uses rate equations for both gas-phase and grain-surface chemistry, (2) the method of modified rate equations (Garrod 2008), which partially takes into account the stochastic effect for surface reactions, and (3) the master equation approach solved using a Monte Carlo technique. At 10 K and standard grain sizes, our model results agree well with the above three methods, while discrepancies appear at higher temperatures and smaller grain sizes.

  18. Multistability and hidden attractors in an impulsive Goodwin oscillator with time delay

    NASA Astrophysics Data System (ADS)

    Zhusubaliyev, Z. T.; Mosekilde, E.; Churilov, A. N.; Medvedev, A.

    2015-07-01

    The release of luteinizing hormone (LH) is driven by intermittent bursts of activity in the hypothalamic nerve centers of the brain. Luteinizing hormone in turn stimulates release of the male sex hormone testosterone (Te) and, via the circulating concentration of Te, the hypothalamic nerve centers are subject to a negative feedback regulation that is capable of modifying the intermittent bursts into more regular pulse trains. Bifurcation analysis of a hybrid model that attempts to integrate the intermittent bursting activity with a continuous hormone secretion has recently demonstrated a number of interesting nonlinear dynamic phenomena, including bistability and deterministic chaos. The present paper focuses on the additional complexity that arises when the time delay in the continuous part of the model exceeds the typical bursting interval of the feedback. Under these conditions, the hybrid model is capable of displaying quasiperiodicity and border collisions as well as multistability and hidden attractors.

  19. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of share prices. The main distinguishing feature of the financial Bohmian model is the possibility of taking market psychology into account by describing the expectations of traders through the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model: in particular, the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of the modern market ideology. Another objection is of a purely mathematical nature; it is related to the quadratic variation of price trajectories. One possible reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  20. Precipitation-runoff modeling system; user's manual

    USGS Publications Warehouse

    Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.

    1983-01-01

    The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water-balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity-analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model-system enhancement and hydrologic modeling research and development. (Author's abstract)

  1. Estimating the probability of an extinction or major outbreak for an environmentally transmitted infectious disease.

    PubMed

    Lahodny, G E; Gautam, R; Ivanek, R

    2015-01-01

    Indirect transmission through the environment, pathogen shedding by infectious hosts, replication of free-living pathogens within the environment, and environmental decontamination are suspected to play important roles in the spread and control of environmentally transmitted infectious diseases. To account for these factors, the classic Susceptible-Infectious-Recovered-Susceptible epidemic model is modified to include a compartment representing the amount of free-living pathogen within the environment. The model accounts for host demography, direct and indirect transmission, replication of free-living pathogens in the environment, and removal of free-living pathogens by natural death or environmental decontamination. Based on the assumptions of the deterministic model, a continuous-time Markov chain model is developed. An estimate for the probability of disease extinction or a major outbreak is obtained by approximating the Markov chain with a multitype branching process. Numerical simulations illustrate important differences between the deterministic and stochastic counterparts, relevant for outbreak prevention, that depend on indirect transmission, pathogen shedding by infectious hosts, replication of free-living pathogens, and environmental decontamination. The probability of a major outbreak is computed for salmonellosis in a herd of dairy cattle as well as cholera in a human population. An explicit expression for the probability of disease extinction or a major outbreak in terms of the model parameters is obtained for systems with no direct transmission or replication of free-living pathogens.
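
    In the simplest single-type case, the branching-process calculation reduces to finding the minimal fixed point of the offspring probability generating function. The sketch below does this for a plain birth-death (transmission/recovery) approximation with illustrative rates; the paper's model is multitype, with an environmental pathogen compartment, which this sketch does not include:

```python
import numpy as np

# Branching-process approximation near the disease-free equilibrium: each
# infectious individual transmits at rate beta and recovers at rate gamma,
# so the offspring pgf of the embedded chain is
#   f(s) = (gamma + beta * s**2) / (beta + gamma).
# Rates and the number of index cases are illustrative.
beta, gamma, I0 = 0.6, 0.25, 3

def pgf(s):
    return (gamma + beta * s ** 2) / (beta + gamma)

s = 0.0
for _ in range(1000):            # iterate to the minimal fixed point in [0, 1]
    s = pgf(s)

print("P(extinction) =", s ** I0)               # independent lineages multiply
print("check (gamma/beta)^I0 =", (gamma / beta) ** I0)
```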

  2. Tularosa Basin Play Fairway Analysis Data and Models

    DOE Data Explorer

    Nash, Greg

    2017-07-11

    This submission includes raster datasets for each layer of evidence used in the weights-of-evidence analysis, as well as for the deterministic play fairway analysis (PFA). Some of the raster datasets represent heat, permeability and groundwater data. Additionally, the final deterministic PFA model is provided along with a certainty model. All of these datasets are best used with an ArcGIS software package, specifically Spatial Data Modeler.

  3. Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation, as well as between a nonlinear sampled-data system and a linearized representation, holds even in a stochastic framework.

  4. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    PubMed

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when a time series is symbolized into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns, called forbidden patterns, that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular sampling: missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
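
    A minimal sketch of the forbidden-pattern count follows, here for a regularly sampled series from the chaotic logistic map versus white noise; the pattern order and series length are illustrative, and the paper's handling of missing data and timing jitter is not reproduced:

```python
import numpy as np
from math import factorial

def n_forbidden(x, d=4):
    """Count Bandt-Pompe ordinal patterns of order d that never occur in x."""
    windows = np.lib.stride_tricks.sliding_window_view(x, d)
    observed = {tuple(np.argsort(w)) for w in windows}
    return factorial(d) - len(observed)

# Deterministic series: the chaotic logistic map x -> 4 x (1 - x).
x = np.empty(5000)
x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

rng = np.random.default_rng(0)
noise = rng.random(5000)

print("forbidden patterns, logistic map:", n_forbidden(x))      # > 0
print("forbidden patterns, white noise :", n_forbidden(noise))  # ~ 0
```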

  5. Comparison of the effects of enteral feeding with continuous and intermittent parenteral nutrition on hepatic triglyceride secretion in human beings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isabel-Martinez, L.; Skinner, C.; Parkin, A.

    Plasma triglyceride turnover was measured during steady-state conditions in 22 postoperative patients. Nine had received nutritional support with an enteral regimen, seven had received an equivalent regimen as continuous parenteral nutrition, and six received the same parenteral regimen as a cyclical infusion. After 5 days of nutritional support, each patient received an intravenous bolus of tritiated glycerol. Plasma radiolabeled triglyceride content was measured during the subsequent 24 hours. The data were analyzed by means of a simple deterministic model of plasma triglyceride kinetics and compared with the results obtained by stochastic analysis. The rates of hepatic triglyceride secretion obtained by deterministic analysis were higher than those obtained by the stochastic approach. However, the mode of delivery of the nutritional regimen did not affect the rate of hepatic triglyceride secretion regardless of the method of analysis. The results suggest that neither complete nutritional bypass of the gastrointestinal tract nor interruption of parenteral nutrition in an attempt to mimic normal eating has any effect on hepatic triglyceride secretion. Any beneficial effect that enteral feeding or cyclical parenteral nutrition may have on liver dysfunction associated with standard parenteral nutrition appears to be unrelated to changes in hepatic triglyceride secretion.

  6. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and the subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity, instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and may differ by orders of magnitude. Sub-scale heterogeneity is introduced within every block; this heterogeneity can be modeled as bimodal or log-normal distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.

  7. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes derived from the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 <= 1, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 > 1, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.

  8. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    PubMed

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g., spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect the dynamics discretely, in such a way that they cannot be captured by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture the behavior at a molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer model and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.

  9. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.

  10. Stochastic dynamics and non-equilibrium thermodynamics of a bistable chemical system: the Schlögl model revisited.

    PubMed

    Vellela, Melissa; Qian, Hong

    2009-10-06

    Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balance are discussed, and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching, and of the full time-dependent solution, are calculated using Matlab. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and with Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to correctly represent the mesoscopically interesting steady-state behaviour of the system.
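
    For concreteness, the deterministic fixed points and the stochastic stationary distribution of the Schlögl model can be computed side by side. The sketch below uses the birth-death detailed-balance recursion for the stationary distribution; the rate constants are illustrative values in the bistable regime, not necessarily those of the paper:

```python
import numpy as np

# Schlogl model: A + 2X <-> 3X, B <-> X, with the concentrations of A and B
# held fixed (pumped). Rate constants are illustrative values chosen in the
# bistable regime, not necessarily those used in the paper.
k1, k2, k3, k4, a, b = 3.0, 0.6, 0.25, 2.95, 1.0, 2.0
V = 25                                            # system volume

# Deterministic fixed points: roots of -k2 x^3 + k1 a x^2 - k4 x + k3 b = 0.
roots = np.roots([-k2, k1 * a, -k4, k3 * b])
print("deterministic fixed points:", np.sort(roots.real))

def w_plus(n):   # birth propensity at copy number n
    return k1 * a * n * (n - 1) / V + k3 * b * V

def w_minus(n):  # death propensity at copy number n
    return k2 * n * (n - 1) * (n - 2) / V ** 2 + k4 * n

# Stationary distribution of the birth-death master equation via the
# detailed-balance recursion P(n+1)/P(n) = w_plus(n)/w_minus(n+1).
nmax = 8 * V
logP = np.zeros(nmax)
for n in range(nmax - 1):
    logP[n + 1] = logP[n] + np.log(w_plus(n)) - np.log(w_minus(n + 1))
P = np.exp(logP - logP.max())
P /= P.sum()

# Local maxima of P mark the stochastically metastable states; their relative
# weights encode the exchange of global stability discussed in the paper.
peaks = [n for n in range(1, nmax - 1) if P[n] > P[n - 1] and P[n] > P[n + 1]]
print("modes of stationary distribution (as concentrations):", [n / V for n in peaks])
```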

  11. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines which combinations of model parameters can, in principle, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework for finding the identifiable combinations of further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  12. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of a time-extended Petri net is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis, and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, in which transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions; deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state-space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete-time models.

  13. Research on Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Lobeck, William E.

    2002-01-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  14. Research on Intelligent Synthesis Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.; Loftin, R. Bowen

    2002-12-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  15. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population-biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely for influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and the coexistence of multiple attractors.

  16. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
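
    The basic SROM construction - a handful of samples with optimized probability weights standing in for a continuous random variable - can be sketched as follows. The target distribution, sample count and moment-matching objective are illustrative; published SROM objectives also penalize CDF and correlation mismatch, which this sketch omits:

```python
import numpy as np
from scipy.optimize import minimize

# Target random element: a lognormal shear modulus (illustrative choice).
rng = np.random.default_rng(2)
data = rng.lognormal(mean=0.0, sigma=0.4, size=50_000)

# SROM: m fixed sample locations (equally spaced quantiles here) with
# probability weights optimized to match the first three moments.
m = 8
x = np.quantile(data, (np.arange(m) + 0.5) / m)
target = [np.mean(data ** k) for k in (1, 2, 3)]

def moment_mismatch(w):
    return sum((np.dot(w, x ** k) - t) ** 2 for k, t in zip((1, 2, 3), target))

res = minimize(moment_mismatch, np.full(m, 1.0 / m),
               bounds=[(0.0, 1.0)] * m,
               constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
w = res.x
print("SROM samples:", np.round(x, 3))
print("SROM weights:", np.round(w, 3))
print("target mean vs SROM mean:", target[0], float(np.dot(w, x)))
```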

  17. The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)

    PubMed Central

    Smith, Philip L.; Ratcliff, Roger; McKoon, Gail

    2015-01-01

    Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314

  18. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation are becoming pervasive within the non-destructive evaluation (NDE) industry as convenient tools for designing and assessing inspection techniques. This raises a pressing need for quantitative techniques to demonstrate the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic, well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models alone. Additional ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum-amplitude and power-spectral-density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  19. Transfer of non-Gaussian quantum states of mechanical oscillator to light

    NASA Astrophysics Data System (ADS)

    Filip, Radim; Rakhubovsky, Andrey A.

    2015-11-01

    Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.

  20. The Solar System Large Planets influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity had stopped during a period of about 70 years, from 1645 to 1715. Later reconstructions of solar activity confirmed the grand minima of Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with reduced irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that, sooner or later, a new Maunder- or Dalton-type period will bring a colder climate on Earth. The cause of these minimum periods is not well understood. If the solar variability has a deterministic element, a new Maunder grand minimum can be estimated more reliably; a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause; if this deterministic source is known, better estimates of the next expected Maunder grand minimum period can be computed. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a Solar Barycenter orbit data series from 1000. The analysis method is a wavelet spectrum analysis, used to identify stationary periods, coincident periods and their phase relations. The results show that the TSI variability and the sunspot variability contain deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability reproduces the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071, and a model of the longer TSI data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.
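
    To make the analysis step concrete, here is a minimal Morlet wavelet power spectrum in Python, of the kind used to identify stationary periods in TSI or sunspot series; the synthetic series, the period grid and the morlet_power helper are illustrative assumptions, not the authors' pipeline.

      import numpy as np

      def morlet_power(x, dt, periods, w0=6.0):
          """Wavelet power of series x by direct convolution with Morlet wavelets."""
          x = x - np.mean(x)
          power = np.empty((len(periods), len(x)))
          for i, p in enumerate(periods):
              s = p * (w0 + np.sqrt(2 + w0**2)) / (4 * np.pi)  # scale for Fourier period p
              t = np.arange(-4 * s, 4 * s + dt, dt)
              wavelet = np.pi**-0.25 * np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s)**2)
              conv = np.convolve(x, np.conj(wavelet[::-1]), mode="same") * dt / np.sqrt(s)
              power[i] = np.abs(conv)**2
          return power

      # Toy usage: a synthetic "sunspot-like" series with 11- and 84-year oscillations.
      dt = 1.0
      years = np.arange(1700, 2016)
      x = np.sin(2 * np.pi * years / 11.0) + 0.5 * np.sin(2 * np.pi * years / 84.0)
      periods = np.array([8.0, 11.0, 22.0, 60.0, 84.0, 120.0])
      power = morlet_power(x, dt, periods)
      print({int(p): round(float(power[i].mean()), 3) for i, p in enumerate(periods)})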

  1. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  2. Global behavior analysis for stochastic system of 1,3-PD continuous fermentation

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Kliemann, Wolfgang; Li, Chunfa; Feng, Enmin; Xiu, Zhilong

    2017-12-01

    The global behavior of a stochastic system for continuous fermentation in glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae is analyzed in this paper. This bioprocess cannot avoid stochastic perturbations, caused by internal and external disturbances, which act on the growth rate. These negative factors can limit and degrade the achievable performance of controlled systems. Based on multiplicity phenomena, the equilibria and bifurcations of the deterministic system are analyzed. Then, a stochastic model is presented as a bounded Markov diffusion process. In order to analyze the global behavior, we compute the control sets for the associated control system. The probability distributions of the relative supports are also computed. The simulation results indicate how the disturbed biosystem tends to stationary behavior globally.
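
    As a minimal illustration of the kind of stochastic perturbation described (noise acting on the growth rate of a continuous fermentation), here is an Euler-Maruyama sketch with entirely toy coefficients; it is not the paper's Klebsiella pneumoniae model or its control-set computation.

      import numpy as np

      rng = np.random.default_rng(1)

      # Biomass X under dilution rate D, with growth rate mu perturbed by white noise.
      mu, D, sigma, dt, T = 0.3, 0.25, 0.05, 0.01, 200.0
      n = int(T / dt)
      X = np.empty(n)
      X[0] = 1.0
      for k in range(n - 1):
          drift = (mu - D) * X[k] * (1 - X[k] / 10.0)  # logistic-type deterministic part
          X[k + 1] = X[k] + drift * dt + sigma * X[k] * np.sqrt(dt) * rng.standard_normal()
          X[k + 1] = max(X[k + 1], 0.0)                # keep the state in a bounded region
      print("long-run mean biomass:", round(X[n // 2:].mean(), 3))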

  3. Impacts of Considering Climate Variability on Investment Decisions in Ethiopia

    NASA Astrophysics Data System (ADS)

    Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.

    2005-12-01

    In Ethiopia, climate extremes, inducing droughts or floods, are not unusual. Monitoring the effects of these extremes, and climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study involves adding climate variability to a deterministic, mean climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined, including a baseline simulation and three investment strategies: simulations of irrigation investment, roads investment, and a combination investment of both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented into the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, allowing decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.
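
    The core mechanics, turning a deterministic run into an ensemble by sampling historic climate-yield factors, can be sketched in a few lines; the one-equation "economy", the factor values and the 4% growth rate are toy assumptions standing in for the agro-economic model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Nine historic climate-yield factor sequences (toy values), each driving
      # one 12-year run; factors below 1 mimic drought years.
      n_hist, years = 9, 12
      yield_factors = 1.0 + 0.15 * rng.standard_normal((n_hist, years))

      def run_economy(factors, growth=0.04, gdp0=100.0):
          gdp = gdp0
          for f in factors:
              gdp *= (1 + growth) * f   # climate variability scales each year's output
          return gdp

      ensemble = np.array([run_economy(f) for f in yield_factors])
      deterministic = run_economy(np.ones(years))   # mean-climate run
      print("deterministic GDP:", round(deterministic, 1))
      print("ensemble mean / 5th percentile:", round(ensemble.mean(), 1), round(np.percentile(ensemble, 5), 1))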

  4. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, and extends them to the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
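
    The order-estimation idea at the heart of such subspace methods can be shown in miniature for a 1D system: the singular values of a Hankel matrix built from the impulse response reveal the system order (the Ho-Kalman observation behind N4SID-type algorithms). The 2nd-order example system below is an assumption for illustration; the full 2D CRSD machinery is of course much richer.

      import numpy as np

      def block_hankel(h, rows):
          """Stack delayed copies of a sequence into a Hankel matrix."""
          cols = len(h) - rows + 1
          return np.array([h[i:i + cols] for i in range(rows)])

      # Impulse response of an (unknown) 2nd-order system y[k] = 1.4 y[k-1] - 0.48 y[k-2] + u[k-1].
      h = np.zeros(60)
      h[1] = 1.0
      for k in range(2, 60):
          h[k] = 1.4 * h[k - 1] - 0.48 * h[k - 2]

      s = np.linalg.svd(block_hankel(h[1:], rows=10), compute_uv=False)
      print(np.round(s / s[0], 6))   # only two non-negligible values -> order 2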

  5. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered as unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
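
    In spirit, the deterministic optimization step resembles a regularized least-squares inversion of a forward flow model; a minimal sketch with scipy follows, where the linear forward operator G and the three-parameter log-transmissivity vector are hypothetical stand-ins for the coupled discrete-continuum simulator.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy forward model: heads at 3 observation wells respond (linearly, here)
      # to the log-transmissivities of two conduits and the matrix.
      G = np.array([[1.0, 0.3, 0.2],
                    [0.4, 1.1, 0.3],
                    [0.2, 0.5, 0.9]])
      true_logT = np.array([-3.0, -4.5, -6.0])
      observed = G @ true_logT + 0.01 * np.random.default_rng(3).standard_normal(3)

      def residuals(logT):
          return G @ logT - observed   # misfit between simulated and observed heads

      fit = least_squares(residuals, x0=np.array([-4.0, -4.0, -4.0]))
      print("inverted log-transmissivities:", np.round(fit.x, 2))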

  6. Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2016-04-01

    In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histogram, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
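
    For reference, the CRPS used for the probabilistic validation has a simple empirical form for an ensemble, CRPS = E|X - y| - 0.5 E|X - X'|; the sketch below (with toy numbers, not the North Atlantic experiment) computes it directly.

      import numpy as np

      def crps_ensemble(members, obs):
          """Empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members X."""
          members = np.asarray(members, dtype=float)
          term1 = np.mean(np.abs(members - obs))
          term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
          return term1 - term2

      rng = np.random.default_rng(4)
      ens = 20.0 + 0.5 * rng.standard_normal(60)   # a 60-member forecast, toy values
      print("CRPS against an observation of 20.3:", round(crps_ensemble(ens, 20.3), 4))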

  7. Understanding agent-based models of financial markets: A bottom-up approach based on order parameters and phase diagrams

    NASA Astrophysics Data System (ADS)

    Lye, Ribin; Tan, James Peng Lung; Cheong, Siew Ann

    2012-11-01

    We describe a bottom-up framework, based on the identification of appropriate order parameters and determination of phase diagrams, for understanding progressively refined agent-based models and simulations of financial markets. We illustrate this framework by starting with a deterministic toy model, whereby N independent traders buy and sell M stocks through an order book that acts as a clearing house. The price of a stock increases whenever it is bought and decreases whenever it is sold. Price changes are updated by the order book before the next transaction takes place. In this deterministic model, all traders base their buy decisions on a call utility function, and all their sell decisions on a put utility function. We then make the agent-based model more realistic, by either having a fraction fb of traders buy a random stock on offer, or a fraction fs of traders sell a random stock in their portfolio. Based on our simulations, we find that it is possible to identify useful order parameters from the steady-state price distributions of all three models. Using these order parameters as a guide, we find three phases: (i) the dead market; (ii) the boom market; and (iii) the jammed market in the phase diagram of the deterministic model. Comparing the phase diagrams of the stochastic models against that of the deterministic model, we realize that the primary effect of stochasticity is to eliminate the dead market phase.
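
    A heavily simplified toy in the same spirit, deterministic traders plus a stochastic fraction fb, is sketched below; note that the decision rule here (buy the cheapest stock) is a placeholder, not the paper's call/put utility functions, and the one-tick price update only loosely mimics the order-book mechanism.

      import numpy as np

      rng = np.random.default_rng(5)

      N, M, fb, steps = 50, 10, 0.2, 2000
      prices = np.full(M, 100.0)
      for _ in range(steps):
          for _ in range(N):
              if rng.random() < fb:
                  k = rng.integers(M)          # stochastic trader: buys a random stock
              else:
                  k = int(np.argmin(prices))   # deterministic trader: buys the cheapest
              prices[k] += 1.0                 # a buy moves that price up one tick
              j = int(np.argmax(prices))
              prices[j] -= 1.0                 # a matched sell moves the dearest price down
      print("steady-state price spread:", round(float(prices.max() - prices.min()), 1))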

  8. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  9. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different from the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.

  10. The threshold of a stochastic delayed SIR epidemic model with temporary immunity

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Chen, Qingmei; Jiang, Daqing

    2016-05-01

    This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
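
    A bare-bones Euler-Maruyama sketch of an SIR model with white noise on the transmission rate conveys the flavour of such threshold results (the delay and temporary immunity of the paper's model are omitted, and all rates are toy values).

      import numpy as np

      rng = np.random.default_rng(6)

      beta, gamma, sigma, dt, T = 0.4, 0.2, 0.15, 0.01, 200.0
      S, I = 0.99, 0.01
      for _ in range(int(T / dt)):
          dW = np.sqrt(dt) * rng.standard_normal()
          new_inf = beta * S * I * dt + sigma * S * I * dW   # stochastic incidence
          S -= new_inf
          I += new_inf - gamma * I * dt
          I = max(I, 0.0)
          S = min(max(S, 0.0), 1.0)
      print("R0 =", beta / gamma, "| final infected fraction:", round(I, 4))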

  11. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model

    NASA Astrophysics Data System (ADS)

    Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber

    2016-04-01

    In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.

  12. A queuing model for road traffic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model, and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
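
    The deterministic ingredient referenced here, the Godunov scheme for a road divided into concatenated cells, can be sketched compactly with a Greenshields fundamental diagram and the demand/supply form of the interface flux (the stochastic M/G/c/c layer is omitted, and all parameters are illustrative).

      import numpy as np

      # Godunov scheme for the LWR model with Greenshields flux q = v_max * rho * (1 - rho/rho_max).
      v_max, rho_max, dx, dt, n_cells = 1.0, 1.0, 0.1, 0.05, 100   # CFL: v_max*dt/dx <= 1

      def flux(rho):
          return v_max * rho * (1 - rho / rho_max)

      def godunov_flux(rl, rr):
          rc = rho_max / 2                 # critical density (capacity)
          demand = flux(min(rl, rc))       # what the upstream cell can send
          supply = flux(max(rr, rc))       # what the downstream cell can take
          return min(demand, supply)

      rho = np.where(np.arange(n_cells) < 50, 0.8, 0.1)   # jammed half discharging into light traffic
      for _ in range(200):
          ext = np.concatenate(([rho[0]], rho, [rho[-1]]))   # zero-gradient boundaries
          F = np.array([godunov_flux(ext[i], ext[i + 1]) for i in range(n_cells + 1)])
          rho = rho + dt / dx * (F[:-1] - F[1:])
      print("densities around the initial front:", np.round(rho[45:55], 2))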

  13. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
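
    The baseline of such comparisons is easy to reproduce: for the simplest linear scheme (zeroth-order production, first-order degradation) the Gillespie mean coincides with the ODE fixed point k/g; the autoregulatory, nonlinear schemes studied in the paper are where the two diverge. The sketch below implements only this linear baseline, with toy rate constants.

      import numpy as np

      rng = np.random.default_rng(7)

      k, g, T = 10.0, 0.1, 2000.0   # production and degradation rates, time horizon
      t, n, samples = 0.0, 0, []
      while t < T:
          rates = np.array([k, g * n])          # propensities of the two reactions
          total = rates.sum()
          t += rng.exponential(1 / total)       # Gillespie: exponential waiting time
          n += 1 if rng.random() < rates[0] / total else -1
          samples.append(n)
      print("ODE fixed point k/g =", k / g)
      print("SSA long-run mean   =", round(float(np.mean(samples[len(samples) // 2:])), 1))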

  14. Arctic Sea Ice: Trends, Stability and Variability

    NASA Astrophysics Data System (ADS)

    Moon, Woosok

    A stochastic Arctic sea-ice model is derived and analyzed in detail to interpret the recent decay and associated variability of Arctic sea-ice under changes in greenhouse gas forcing widely referred to as global warming. The approach begins from a deterministic model of the heat flux balance through the air/sea/ice system, which uses observed monthly-averaged heat fluxes to drive a time evolution of sea-ice thickness. This model reproduces the observed seasonal cycle of the ice cover and it is to this that stochastic noise---representing high frequency variability---is introduced. The model takes the form of a single periodic non-autonomous stochastic ordinary differential equation. Following an introductory chapter, the two that follow focus principally on the properties of the deterministic model in order to identify the main properties governing the stability of the ice cover. In chapter 2 the underlying time-dependent solutions to the deterministic model are analyzed for their stability. It is found that the response time-scale of the system to perturbations is dominated by the destabilizing sea-ice albedo feedback, which is operative in the summer, and the stabilizing long wave radiative cooling of the ice surface, which is operative in the winter. This basic competition is found throughout the thesis to define the governing dynamics of the system. In particular, as greenhouse gas forcing increases, the sea-ice albedo feedback becomes more effective at destabilizing the system. Thus, any projections of the future state of Arctic sea-ice will depend sensitively on the treatment of the ice-albedo feedback. This in turn implies that the treatment of a fractional ice cover, as the ice areal extent changes rapidly, must be handled with the utmost care. In chapter 3, the idea of a two-season model, with just winter and summer, is revisited. By breaking the seasonal cycle up in this manner one can simplify the interpretation of the basic dynamics. Whereas in the fully time-dependent seasonal model one finds stable seasonal ice cover (vanishing in the summer but reappearing in the winter), in previous two-season models such a state could not be found. In this chapter, sufficient conditions for a stable seasonal ice cover are found; they amount to including a time variation in the shortwave radiance during summer. This provides a qualitative interpretation of the continuous and reversible shift from perennial to seasonally-varying states in the more complex deterministic model. In order to put the stochastic model into a realistic observational framework, in chapter 4, the analysis of daily satellite retrievals of ice albedo and ice extent is described. Both the basic statistics are examined and a new method, called multi-fractal temporally weighted detrended fluctuation analysis, is applied. Because the basic data are taken on daily time scales, the full fidelity of the retrieved data is accessed and we find time scales from days and weeks to seasonal and decadal. Importantly, the data show a white-noise structure on annual to biannual time scales and this provides the basis for using a Wiener process for the noise in the stochastic Arctic sea-ice model. In chapter 5 a generalized perturbation analysis of a non-autonomous stochastic differential equation is developed and then applied to interpreting the variability of Arctic sea-ice as greenhouse gas forcing increases.
The resulting analytic expressions of the statistical moments provide insight into the transient and memory-delay effects associated with the basic competition in the system: the ice-albedo feedback and long wave radiative stabilization along with the asymmetry in the nonlinearity of the deterministic contributions to the model and the magnitude and structure of the stochastic noise. A systematic study of the impact of the noise structure, from additive to multiplicative, is undertaken in chapters 6 and 7. Finally, in chapter 8 the matter of including a fractional ice cover into a deterministic model is addressed. It is found that a simple but crucial mistake is made in one of the most widely used model schemes and this has a major impact given the important role of areal fraction in the ice-albedo feedback in such a model. The thesis is summarized in chapter 9.
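
    A cartoon of the thesis's central object, a periodic non-autonomous stochastic ODE for effective ice thickness, is sketched below; the seasonal forcing, relaxation term and noise level are toy stand-ins for the calibrated flux balance, chosen only to show the model structure.

      import numpy as np

      rng = np.random.default_rng(8)

      # dh = [seasonal forcing - stabilising relaxation] dt + sigma dW
      dt, years, sigma = 1 / 365, 50, 0.1
      n = int(years / dt)
      h = np.empty(n)
      h[0] = 2.0
      for k in range(n - 1):
          t = k * dt
          seasonal = 1.5 * np.cos(2 * np.pi * t)   # winter growth / summer melt
          relax = -0.5 * (h[k] - 2.0)              # e.g. long-wave radiative stabilisation
          h[k + 1] = max(h[k] + (seasonal + relax) * dt + sigma * np.sqrt(dt) * rng.standard_normal(), 0.0)
      print("mean thickness over the last 10 years:", round(float(h[-int(10 / dt):].mean()), 2))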

  15. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
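
    For orientation, the standard (stochastic, perturbed-observation) EnKF analysis step that the deterministic mean-field approximation is benchmarked against looks as follows; the 2-state toy system is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(9)

      def enkf_update(X, y, H, R):
          """Perturbed-observation EnKF analysis: X is a d x N ensemble, y the observation."""
          d, N = X.shape
          A = X - X.mean(axis=1, keepdims=True)
          P = A @ A.T / (N - 1)                          # ensemble covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
          return X + K @ (Y - H @ X)

      # Toy usage: 2-state ensemble, observing only the first component.
      X = rng.standard_normal((2, 100)) + np.array([[1.0], [0.0]])
      H = np.array([[1.0, 0.0]])
      R = np.array([[0.1]])
      Xa = enkf_update(X, np.array([2.0]), H, R)
      print("prior mean:", np.round(X.mean(axis=1), 2), "| posterior mean:", np.round(Xa.mean(axis=1), 2))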

  16. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs the JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
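
    The JITL idea (fit a local model around each query from its nearest historical neighbours and monitor the residual) can be sketched as below; the quadratic toy process, the k = 15 neighbourhood and the plain linear local model are assumptions, not the paper's method in detail.

      import numpy as np

      rng = np.random.default_rng(10)

      def jitl_residual(query, X, y, k=15):
          """Fit a local linear model on the k nearest samples; return the residual."""
          d = np.linalg.norm(X - query[:-1], axis=1)   # distance in input space
          idx = np.argsort(d)[:k]
          A = np.column_stack([X[idx], np.ones(k)])
          coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
          pred = np.append(query[:-1], 1.0) @ coef
          return query[-1] - pred                       # large residual -> possible fault

      # Toy history: y = x1^2 + x2 plus noise; the faulty sample has a shifted output.
      X = rng.uniform(-1, 1, size=(500, 2))
      y = X[:, 0] ** 2 + X[:, 1] + 0.01 * rng.standard_normal(500)
      normal = np.array([0.2, -0.3, 0.2 ** 2 - 0.3])
      faulty = np.array([0.2, -0.3, 0.2 ** 2 - 0.3 + 0.5])
      print("residuals:", round(float(jitl_residual(normal, X, y)), 3), round(float(jitl_residual(faulty, X, y)), 3))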

  17. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  18. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487

  19. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  20. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in the city of Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'. The traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.). The model combines the general finite line source model (GFLSM) as its deterministic component, and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, i.e. the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance procedure is also developed to estimate the probability that hourly CO concentrations exceed the National Ambient Air Quality Standards (NAAQS) of India.
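
    The statistical half of such a hybrid can be sketched directly: fit a log-logistic distribution (scipy's "fisk") to hourly CO concentrations and read off the exceedance probability; the synthetic data and the 4 mg/m3 threshold below are illustrative placeholders, not the ITO measurements.

      import numpy as np
      from scipy import stats

      # Synthetic hourly CO concentrations standing in for corrected GFLSM output.
      co = stats.fisk.rvs(c=3.0, scale=2.0, size=2000, random_state=42)

      c, loc, scale = stats.fisk.fit(co, floc=0)       # log-logistic fit
      threshold = 4.0                                  # illustrative hourly standard, mg/m3
      p_exceed = stats.fisk.sf(threshold, c, loc=loc, scale=scale)
      print("fitted shape/scale:", round(c, 2), round(scale, 2),
            "| P(CO > threshold):", round(float(p_exceed), 3))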

  1. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  2. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  3. Diffusion tensor tractography of the arcuate fasciculus in patients with brain tumors: Comparison between deterministic and probabilistic models

    PubMed Central

    Li, Zhixi; Peck, Kyung K.; Brennan, Nicole P.; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I.; Young, Robert J.

    2014-01-01

    Purpose: The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. Materials and Methods: We identified 29 patients with left brain tumors <2 cm from the arcuate fasciculus who underwent pre-operative language fMRI and DTI. The arcuate fasciculus was reconstructed using a deterministic Fiber Assignment by Continuous Tracking (FACT) algorithm and a probabilistic method based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca’s and Wernicke’s areas. Tracts in tumor-affected hemispheres were examined for extension between Broca’s and Wernicke’s areas, anterior-posterior length and volume, and compared with the normal contralateral tracts. Results: Probabilistic tracts displayed more complete anterior extension to Broca’s area than did FACT tracts on the tumor-affected and normal sides (p < 0.0001). The median length ratio for tumor:normal sides was greater for probabilistic tracts than FACT tracts (p < 0.0001). The median tract volume ratio for tumor:normal sides was also greater for probabilistic tracts than FACT tracts (p = 0.01). Conclusion: Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers. PMID:25328583

  4. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    PubMed

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
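
    The stochastic framework can be illustrated with a non-homogeneous Poisson process simulated by thinning: the primary tumour grows deterministically and emits metastases at rate lambda(t) = mu * S(t)^alpha. The growth law and parameter values below are illustrative conventions from this modeling literature, not the chapter's fitted values.

      import numpy as np

      rng = np.random.default_rng(12)

      mu, alpha, T = 1e-4, 0.663, 1000.0   # alpha ~ 2/3 mimics surface shedding

      def size(t, s0=1.0, a=0.01, K=1e11):
          return K / (1 + (K / s0 - 1) * np.exp(-a * t))   # logistic primary growth

      lam_max = mu * size(T) ** alpha      # dominating rate (size is increasing)
      t, emissions = 0.0, []
      while True:
          t += rng.exponential(1 / lam_max)            # candidate event times
          if t > T:
              break
          if rng.random() < mu * size(t) ** alpha / lam_max:
              emissions.append(t)                      # accept w.p. lambda(t)/lam_max
      print("number of metastatic emissions:", len(emissions))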

  5. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

    such as the weighted sum method, the weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only the weighted sum ... different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the ... weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most ...

  6. Dynamic analysis of a stochastic rumor propagation model

    NASA Astrophysics Data System (ADS)

    Jia, Fangju; Lv, Guangying

    2018-01-01

    The rapid development of the Internet, especially the emergence of the social networks, leads rumor propagation into a new media era. In this paper, we are concerned with a stochastic rumor propagation model. Sufficient conditions for extinction and persistence in the mean of the rumor are established. The threshold between persistence in the mean and extinction of the rumor is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.

  7. Scaling theory for the quasideterministic limit of continuous bifurcations.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2012-05-01

    Deterministic rate equations are widely used in the study of stochastic, interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, belonging to the dynamic percolation class. Implementing the Ginzburg criterion, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site, κ = 2/(d_u - d), and d_u is the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.

  8. Integrating urban recharge uncertainty into standard groundwater modeling practice: A case study on water main break predictions for the Barton Springs segment of the Edwards Aquifer, Austin, Texas

    NASA Astrophysics Data System (ADS)

    Sinner, K.; Teasley, R. L.

    2016-12-01

    Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models have tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates the desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work on elicitation of problem framing to identify stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.

  9. Bayesian deterministic decision making: a normative account of the operant matching law and heavy-tailed reward history dependency of choices.

    PubMed

    Saito, Hiroshi; Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato

    2014-01-01

    The decision making behaviors of humans and animals adapt and then satisfy an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been one landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn the choice probabilities of the respective alternatives and decide stochastically with those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy or not. To answer this question, we propose several deterministic Bayesian decision making models that hold certain incorrect beliefs about the environment. We claim that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward history dependency of a choice and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.

  10. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments on cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
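
    Since the model is fully specified by two equations and a reset rule, a forward-Euler sketch fits in a few lines; the parameter values below follow the commonly cited Brette-Gerstner set but should be treated as illustrative, since different choices produce the different firing patterns.

      import numpy as np

      # C dV/dt = -gL (V-EL) + gL*DeltaT*exp((V-VT)/DeltaT) - w + I
      # tau_w dw/dt = a (V-EL) - w ;  on spike: V -> Vr, w -> w + b
      C, gL, EL, VT, DeltaT = 281.0, 30.0, -70.6, -50.4, 2.0   # pF, nS, mV, mV, mV
      a, tau_w, b, Vr, I = 4.0, 144.0, 80.5, -70.6, 800.0      # nS, ms, pA, mV, pA
      dt, T = 0.1, 500.0                                       # ms
      V, w, spikes = EL, 0.0, []
      for k in range(int(T / dt)):
          dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
          dw = (a * (V - EL) - w) / tau_w
          V += dt * dV
          w += dt * dw
          if V > 0.0:                  # threshold crossing registered as a spike
              spikes.append(k * dt)
              V, w = Vr, w + b         # reset plus spike-triggered adaptation
      print("first spike times (ms):", np.round(spikes[:8], 1))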

  11. The structure of evaporating and combusting sprays: Measurements and predictions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, F. M.

    1983-01-01

    The structure of particle-laden jets and nonevaporating and evaporating sprays was measured in order to evaluate models of these processes. Three models are being evaluated: (1) a locally homogeneous flow model, where slip between the phases is neglected and the flow is assumed to be in local thermodynamic equilibrium; (2) a deterministic separated flow model, where slip and finite interphase transport rates are considered but effects of particle/drop dispersion by turbulence and effects of turbulence on interphase transport rates are ignored; and (3) a stochastic separated flow model, where effects of interphase slip, turbulent dispersion and turbulent fluctuations are considered using random sampling of turbulence properties in conjunction with random-walk computations for particle motion. All three models use a k-ε-g turbulence model. All testing and data reduction have been completed for the particle-laden jets. Mean and fluctuating velocities of the continuous phase and mean mixture fraction were measured in the evaporating sprays.

  12. Role of demographic stochasticity in a speciation model with sexual reproduction

    NASA Astrophysics Data System (ADS)

    Lafuerza, Luis F.; McKane, Alan J.

    2016-03-01

    Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localized clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizable effect, rendering the deterministic modeling insufficient to understand the phenotypic distribution.

  13. Equilibrium reconstruction in an iron core tokamak using a deterministic magnetisation model

    NASA Astrophysics Data System (ADS)

    Appel, L. C.; Lupelli, I.; JET Contributors

    2018-02-01

    In many tokamaks ferromagnetic material, usually referred to as an iron core, is present in order to improve the magnetic coupling between the solenoid and the plasma. The presence of the iron core in proximity to the plasma changes the magnetic topology, with consequent effects on the magnetic field structure and the plasma boundary. This paper considers the problem of obtaining the free-boundary plasma equilibrium solution in the presence of ferromagnetic material based on measured constraints. The current approach employs a model described by O'Brien et al. (1992) in which the magnetisation currents at the iron-air boundary are represented by a set of free parameters and appropriate boundary conditions are enforced via a set of quasi-measurements on the material boundary. This can lead to the possibility of overfitting the data and hiding underlying issues with the measured signals. Although the model typically achieves good fits to measured magnetic signals, there are significant discrepancies in the inferred magnetic topology compared with other plasma diagnostic measurements that are independent of the magnetic field. An alternative approach for equilibrium reconstruction in iron-core tokamaks, termed the deterministic magnetisation model, is developed and implemented in EFIT++. The iron is represented by a boundary current, with the gradients in the magnetisation dipole state generating macroscopic internal magnetisation currents. A model for the boundary magnetisation currents at the iron-air interface is developed using B-splines, enabling continuity to arbitrary order; internal magnetisation currents are allocated to triangulated regions within the iron, and a method to enable adaptive refinement is implemented. The deterministic model has been validated by comparing it with a synthetic 2-D electromagnetic model of JET. It is established that the maximum field discrepancy is less than 1.5 mT throughout the vacuum region enclosing the plasma. Simulated magnetic probe signals agree to within 1% for signals with absolute magnitude greater than 100 mT; in all other cases agreement is to within 1 mT. Neglecting the internal magnetisation currents increases the maximum discrepancy in the vacuum region to >20 mT, resulting in errors of 5%-10% in the simulated probe signals. The fact that the previous model neglects the internal magnetisation currents (and also has additional free parameters when fitting the measured data) makes it unsuitable for analysing data in the absence of plasma current. The discrepancy of the poloidal magnetic flux within the vacuum vessel is within 0.1 Wb. Finally, the deterministic model is applied to an equilibrium force-balance solution of a JET discharge using experimental data. It is shown that the discrepancies of the outboard separatrix position and the outer strike-point position inferred from Thomson scattering and infrared camera data are much improved relative to the routine equilibrium reconstruction, whereas the discrepancy of the inner strike-point position is similar.

  14. Robust planning of dynamic wireless charging infrastructure for battery electric buses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhaocai; Song, Ziqi

    Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system; and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.

  16. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as that of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e., observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier skill scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
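
    For reference, the CRPS quoted above can be computed for an ensemble with the standard empirical estimator CRPS = E|X - y| - 0.5 E|X - X'|; a minimal sketch, independent of the authors' verification setup:

        import numpy as np

        def crps_ensemble(members, obs):
            """Empirical CRPS of an ensemble against one observation; lower is
            better, and it reduces to absolute error for a single-member forecast."""
            m = np.asarray(members, dtype=float)
            return np.mean(np.abs(m - obs)) - 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))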

  17. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and the relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
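
    The null-model inference sketched above is commonly expressed as a standardized effect size of beta diversity; a minimal sketch (the authors' exact null model and turnover metric may differ):

        import numpy as np

        def beta_null_deviation(obs_beta, null_betas):
            """Standardized deviation of observed turnover from a null distribution:
            values near 0 suggest stochastic assembly; strongly positive or negative
            values suggest deterministic processes."""
            null_betas = np.asarray(null_betas, dtype=float)
            return (obs_beta - null_betas.mean()) / null_betas.std(ddof=1)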

  18. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, scalability and pricing issues remain. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables; 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  19. Catching a quantum jump in mid-flight

    NASA Astrophysics Data System (ADS)

    Minev, Z. K.; Mundhada, S. O.; Zalys-Geller, E.; Shankar, S.; Rheinhold, P.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    Quantum jumps provide a fundamental manifestation of the interplay between coherent dynamics and strong continuous measurements. Interestingly, the modern theoretical vantage point of quantum trajectories (Carmichael, 1993) suggests that the jump is not instantaneous, but rather smooth, coherent, and, under the right conditions, may present a deterministic character. We revisit the original observation of quantum jumps in a V-type, three-level atom (Berquist, 1986; Sauter, 1986), in order to "deterministically" catch the jump in mid-flight. We have designed and operated a V-type superconducting artificial atom with the three needed levels: G (for Ground), B (for Bright), and D (for Dark). The atom is coupled to a continuously monitored microwave mode that can distinguish B from the manifold formed by G and D, but without distinguishing G from D. We will present preliminary results showing how this experiment can be realized. Work supported by ARO, ONR, AFOSR and YINQE. Discussions with H. Carmichael are gratefully acknowledged.

  20. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is available to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on realizing seismic hazard assessment for nuclear facilities, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Given these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show some tentative analysis results for the deterministic method, as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  1. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossings subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as uncertainty that increases for aircraft later in the schedule. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
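
    The freeze-and-adjust step described above reduces, for a single separation requirement, to a one-pass recursion; a sketch (the real scheduler enforces multiple pairwise separation criteria and runway crossings):

        def schedule_frozen_sequence(release_times, separation):
            """Earliest feasible times for a frozen sequence:
            t_i = max(release_i, t_{i-1} + separation)."""
            times = []
            for r in release_times:
                t = r if not times else max(r, times[-1] + separation)
                times.append(t)
            return times

        # e.g. schedule_frozen_sequence([0, 10, 12, 40], separation=15) -> [0, 15, 30, 45]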

  2. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.

  3. A deterministic model predicts the properties of stochastic calcium oscillations in airway smooth muscle cells.

    PubMed

    Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James

    2014-08-01

    The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
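
    As a generic illustration of such reduced channel descriptions, the sketch below simulates a two-state telegraph (open/closed) gating process; the rates are illustrative and are not the fitted IP3R modal parameters.

        import numpy as np

        def two_state_gating(k_open=2.0, k_close=10.0, T=10.0, dt=1e-3, seed=0):
            """Two-state Markov gating simulated with a first-order (rate*dt)
            switching approximation; returns the 0/1 state trajectory."""
            rng = np.random.default_rng(seed)
            state, out = 0, np.empty(int(T / dt), dtype=int)
            for i in range(out.size):
                rate = k_open if state == 0 else k_close
                if rng.random() < rate * dt:
                    state = 1 - state
                out[i] = state
            return out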

  4. Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.

    PubMed

    Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H

    2006-11-15

    The stochastic kinetics of a well-mixed chemical system, governed by the chemical master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODEs), and uses a Monte Carlo method to generate discrete reaction events according to a time-dependent propensity. Our partitioning approach improves on earlier work by dynamically repartitioning the system of reactions based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
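
    A minimal sketch of the dynamic-partitioning idea: propensities are computed, reactions above a threshold are advanced with a deterministic ODE step, and the remainder fire as stochastic events. The article's algorithm uses rigorous ODE solvers and exact time-dependent event generation; the Euler and Poisson approximations below (and the toy network and rates) are simplifications for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy network: fast reversible isomerization S1 <-> S2, plus slow
        # production and decay of P (rows of nu = reactions, columns = species).
        nu = np.array([[-1, 1, 0], [1, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
        k = np.array([50.0, 45.0, 0.5, 0.1])  # hypothetical rate constants

        def propensities(x):
            s1, s2, p = x
            return k * np.array([s1, s2, s1, p])

        def hybrid_step(x, dt, threshold=10.0):
            a = propensities(x)
            fast = a >= threshold                  # dynamic partition by propensity
            x = x + dt * nu[fast].T @ a[fast]      # continuous subset: Euler ODE step
            for j in np.where(~fast)[0]:           # discrete subset: stochastic events
                x = x + rng.poisson(a[j] * dt) * nu[j]
            return np.maximum(x, 0.0)

        x = np.array([100.0, 0.0, 0.0])
        for _ in range(1000):
            x = hybrid_step(x, 1e-3)
        print(x)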

  5. Observations, theoretical ideas and modeling of turbulent flows: Past, present and future

    NASA Technical Reports Server (NTRS)

    Chapman, G. T.; Tobak, M.

    1985-01-01

    Turbulence was analyzed in a historical context featuring the interactions between observations, theoretical ideas, and modeling within three successive movements. These are identified as predominantly statistical, structural and deterministic. The statistical movement is criticized for its failure to deal with the structural elements observed in turbulent flows. The structural movement is criticized for its failure to embody observed structural elements within a formal theory. The deterministic movement is described as having the potential of overcoming these deficiencies by allowing structural elements to exhibit chaotic behavior that is nevertheless embodied within a theory. Four major ideas of this movement are described: bifurcation theory, strange attractors, fractals, and the renormalization group. A framework for the future study of turbulent flows is proposed, based on the premises of the deterministic movement.

  6. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico

    2010-09-01

    Models of dengue fever epidemiology have previously shown critical fluctuations with power-law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well-known biological features, we found a rich dynamical structure including limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researchers. Based on these findings we will investigate the model structures further, including seasonality.

  8. The threshold of a stochastic delayed SIR epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing

    2016-11-01

    In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold, which is affected by the white noise, is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
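
    A minimal Euler-Maruyama sketch of a noise-perturbed SIR model illustrates why white noise can lower the effective epidemic threshold; the paper's model additionally includes delay and vaccination, both of which this toy omits.

        import numpy as np

        def sde_sir(beta=0.4, gamma=0.2, sigma=0.3, T=200.0, dt=0.01, seed=1):
            """Euler-Maruyama integration of an SIR model with multiplicative
            white noise on the transmission term."""
            rng = np.random.default_rng(seed)
            S, I = 0.99, 0.01
            path = np.empty((int(T / dt), 2))
            for t in range(path.shape[0]):
                dW = rng.normal(0.0, np.sqrt(dt))
                new_inf = beta * S * I * dt + sigma * S * I * dW  # stochastic incidence
                S = max(S - new_inf, 0.0)
                I = max(I + new_inf - gamma * I * dt, 0.0)
                path[t] = S, I
            return path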

  9. Predicting coexistence of plants subject to a tolerance-competition trade-off.

    PubMed

    Haegeman, Bart; Sari, Tewfik; Etienne, Rampal S

    2014-06-01

    Ecological trade-offs between species are often invoked to explain species coexistence in ecological communities. However, few mathematical models have been proposed for which coexistence conditions can be characterized explicitly in terms of a trade-off. Here we present a model of a plant community which allows such a characterization. In the model plant species compete for sites where each site has a fixed stress condition. Species differ both in stress tolerance and competitive ability. Stress tolerance is quantified as the fraction of sites with stress conditions low enough to allow establishment. Competitive ability is quantified as the propensity to win the competition for empty sites. We derive the deterministic, discrete-time dynamical system for the species abundances. We prove the conditions under which plant species can coexist in a stable equilibrium. We show that the coexistence conditions can be characterized graphically, clearly illustrating the trade-off between stress tolerance and competitive ability. We compare our model with a recently proposed, continuous-time dynamical system for a tolerance-fecundity trade-off in plant communities, and we show that this model is a special case of the continuous-time version of our model.
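
    One way to instantiate the site-based dynamics in simulation is the stochastic toy below, in which species i establishes only on sites whose stress falls below its tolerance and wins empty sites in proportion to competitive ability times abundance; this is an illustration, not the authors' discrete-time system.

        import numpy as np

        rng = np.random.default_rng(2)

        def step(occupant, stress, tol, comp, death=0.1):
            """One toy generation on a lattice of sites; occupant[s] = species
            index, or -1 if the site is empty."""
            occupant = occupant.copy()
            occupant[rng.random(occupant.size) < death] = -1          # random deaths
            abund = np.array([(occupant == i).sum() for i in range(len(tol))])
            w = comp * abund                                          # lottery weights
            if w.sum() == 0:
                return occupant
            for s in np.where(occupant == -1)[0]:
                i = rng.choice(len(tol), p=w / w.sum())
                if stress[s] < tol[i]:                                # tolerance check
                    occupant[s] = i
            return occupant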

  10. On the deterministic and stochastic use of hydrologic models

    USGS Publications Warehouse

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
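
    A minimal sketch of the residual-reintroduction idea, assuming i.i.d. residuals; real simulation residuals are typically autocorrelated and heteroscedastic, which calls for block or model-based resampling instead.

        import numpy as np

        def stochastic_responses(simulated, residuals, n_reps=100, seed=3):
            """Turn a deterministic simulation into an ensemble by resampling
            calibration residuals and adding them back to the simulated series."""
            rng = np.random.default_rng(seed)
            sim = np.asarray(simulated, dtype=float)
            draws = rng.choice(np.asarray(residuals, dtype=float),
                               size=(n_reps, sim.size), replace=True)
            return sim[None, :] + draws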

  11. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
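
    One of the simple approximations referred to above treats the decay of the lineage count as the ODE dn/dt = -n(n-1)/2 in coalescent time units; the sketch below compares its closed-form solution with direct simulation of the coalescent waiting times (illustrative, not the paper's specific formulas).

        import numpy as np

        def n_deterministic(t, n0):
            """Closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0."""
            return 1.0 / (1.0 - (1.0 - 1.0 / n0) * np.exp(-t / 2.0))

        def n_simulated(t, n0, rng):
            """Exact coalescent: with k lineages, the next merger waits an
            Exponential time with rate k*(k-1)/2."""
            k, elapsed = n0, 0.0
            while k > 1:
                elapsed += rng.exponential(2.0 / (k * (k - 1)))
                if elapsed > t:
                    break
                k -= 1
            return k

        rng = np.random.default_rng(4)
        print(n_deterministic(0.5, 100),
              np.mean([n_simulated(0.5, 100, rng) for _ in range(2000)]))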

  12. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
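
    The goal-softening and ordinal-comparison steps can be sketched as follows: rank designs with a cheap, noisy evaluation and keep a top-s subset rather than a single "best" design; order is far more robust to noise than value. A generic illustration, not the thesis's calibrated procedure.

        import numpy as np

        def ordinal_select(designs, crude_eval, s=10, noise=1.0, seed=5):
            """Rank designs by a crude, noisy model (smaller is better) and
            soften the goal: return the top-s set instead of the single best."""
            rng = np.random.default_rng(seed)
            scores = np.array([crude_eval(d) for d in designs])
            scores = scores + rng.normal(0.0, noise, size=len(designs))
            return [designs[i] for i in np.argsort(scores)[:s]]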

  13. Steps toward validity in active living research: research design that limits accusations of physical determinism.

    PubMed

    Riggs, William

    2014-03-01

    "Active living research" has been accused of being overly "physically deterministic" and this article argues that urban planners must continue to evolve research and address biases in this area. The article first provides background on how researchers have dealt with the relationship between the built environment and health over years. This leads to a presentation of how active living research might be described as overly deterministic. The article then offers lessons for researchers planning to embark in active-living studies as to how they might increase validity and minimize criticism of physical determinism. © 2013 Published by Elsevier Ltd.

  14. Deterministic nonlinear phase gates induced by a single qubit

    NASA Astrophysics Data System (ADS)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  15. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  16. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
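
    A minimal sketch of such a model: a circle map iterated deterministically while its frequency parameter undergoes rare random jumps; all parameter values are illustrative rather than fitted to RR-interval data.

        import numpy as np

        def circle_map_with_jumps(n=5000, K=0.6, omega=0.35, jump_prob=0.002,
                                  jump_scale=0.05, seed=6):
            """Circle map x -> x + omega + (K/2pi) sin(2pi x) (mod 1) whose
            parameter omega occasionally jumps, giving transient determinism."""
            rng = np.random.default_rng(seed)
            x, xs = 0.1, np.empty(n)
            for i in range(n):
                if rng.random() < jump_prob:        # rare stochastic parameter jump
                    omega += rng.normal(0.0, jump_scale)
                x = (x + omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * x)) % 1.0
                xs[i] = x
            return xs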

  17. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most of the previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure, which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying the classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
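
    One simple way to realize a non-deterministic genotype-phenotype mapping for tags on the unit interval is to perturb each genotypic tag with noise before interaction, as sketched below; the paper's exact mapping may differ.

        import numpy as np

        def phenotypic_tags(genotypic_tags, noise_sd, rng):
            """Phenotypic tag = genotypic tag plus Gaussian noise (wrapped to [0,1)),
            so genotypic similarity no longer guarantees phenotypic similarity."""
            g = np.asarray(genotypic_tags, dtype=float)
            return (g + rng.normal(0.0, noise_sd, size=g.shape)) % 1.0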

  18. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped-delay-line model. The derived time domain channel model takes into account major propagation-controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
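
    Given a tapped-delay-line CIR h(t) = sum_k a_k delta(t - tau_k), the delay-spread statistics mentioned above follow from power-weighted moments of the tap delays; a minimal sketch:

        import numpy as np

        def delay_spread(taus, amps):
            """Mean excess delay and RMS delay spread of a tapped-delay-line CIR."""
            p = np.abs(np.asarray(amps, dtype=float)) ** 2
            p /= p.sum()
            tau = np.asarray(taus, dtype=float)
            mean_tau = np.sum(p * tau)
            rms = np.sqrt(np.sum(p * (tau - mean_tau) ** 2))
            return mean_tau, rms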

  20. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is the randomness of the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ-frequency-range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
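
    The Poisson baseline discussed above is easy to reproduce: exponential inter-spike intervals generate a homogeneous Poisson spike train, whose interval histogram is the exponential the article refers to. A minimal sketch:

        import numpy as np

        def poisson_spike_train(rate_hz=20.0, duration_s=100.0, seed=7):
            """Homogeneous Poisson spike times via cumulative exponential ISIs."""
            rng = np.random.default_rng(seed)
            isis = rng.exponential(1.0 / rate_hz, size=int(3 * rate_hz * duration_s))
            times = np.cumsum(isis)
            return times[times < duration_s]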

  1. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  2. Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations

    DTIC Science & Technology

    2015-02-17

    of diminishing returns from vaccination will generally take place at smaller vaccine allocations V compared to the deterministic model. Optimal...take place and small r0 values where it does not is illustrated in Fig. 4C. As r0 is decreased, the region between the two instances of switching...approximately distribute vaccine in proportion to population size. For large r0 (r0 ≳ 2.9), two switches take place. In the deterministic optimal solution, a

  3. Magnetorheological Finishing for Imprinting Continuous Phase Plate Structure onto Optical Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menapace, J A; Dixit, S N; Genin, F Y

    2004-01-05

    Magnetorheological finishing (MRF) techniques have been developed to manufacture continuous phase plates (CPPs) and custom phase-corrective structures on polished fused silica surfaces. These phase structures are important for laser applications requiring precise manipulation and control of beam shape, energy distribution, and wavefront profile. MRF's unique deterministic sub-aperture polishing characteristics make it possible to imprint complex topographical information onto optical surfaces at spatial scale-lengths approaching 1 mm. In this study, we present the results of experiments and model calculations that explore imprinting two-dimensional sinusoidal structures. Results show how the MRF removal function impacts and limits imprint fidelity and what must be done to arrive at a high-quality surface. We also present several examples of this imprinting technology for the fabrication of phase-correction plates and CPPs for use at high fluences.

  4. From Weakly Chaotic Dynamics to Deterministic Subdiffusion via Copula Modeling

    NASA Astrophysics Data System (ADS)

    Nazé, Pierre

    2018-03-01

    Copula modeling consists in finding a probabilistic distribution, called a copula, whereby its coupling with the marginal distributions of a set of random variables produces their joint distribution. The present work aims to use this technique to connect the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion. More precisely, we decompose the jump distribution of the Geisel-Thomae map into a bivariate one and determine the marginal and copula distributions respectively by infinite ergodic theory and statistical inference techniques. We verify that the characteristic tail distribution of subdiffusion is an extreme-value copula coupling Mittag-Leffler distributions. We also present a method to calculate the exact copula and joint distributions in the case where the weakly chaotic dynamics and deterministic subdiffusion statistical distributions are already known. Numerical simulations and consistency with the dynamical aspects of the map support our results.
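
    Sklar's theorem, on which copula modeling rests, is easy to demonstrate numerically: sample the dependence structure from a copula, then impose arbitrary marginals through inverse CDFs. The sketch below uses a Gaussian copula with exponential and Pareto marginals purely for illustration; the paper couples Mittag-Leffler marginals with an extreme-value copula.

        import numpy as np
        from scipy import stats

        def gaussian_copula_sample(n=10000, rho=0.7, seed=8):
            """Draw (x, y) whose joint law is a Gaussian copula with the
            chosen marginals."""
            rng = np.random.default_rng(seed)
            z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
            u = stats.norm.cdf(z)                   # uniforms carrying the dependence
            x = stats.expon.ppf(u[:, 0])            # marginal 1: exponential
            y = stats.pareto.ppf(u[:, 1], b=2.5)    # marginal 2: heavy-tailed
            return x, y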

  5. A theoretical and experimental study of turbulent evaporating sprays

    NASA Technical Reports Server (NTRS)

    Solomon, A. S. P.; Shuen, J. S.; Zhang, Q. F.; Faeth, G. M.

    1984-01-01

    Measurements and analysis of the dilute portions of turbulent evaporating sprays injected into a still-air environment were completed. Mean and fluctuating velocities and Reynolds stress were measured in the continuous phase. Liquid-phase measurements included liquid mass fluxes, drop sizes, and drop size and velocity correlations. Initial conditions needed for model evaluation were measured at a location as close to the injector exit as possible. The test sprays showed significant effects of slip and turbulent dispersion of the discrete phase. The measurements were used to evaluate three typical models of these processes: (1) a locally homogeneous flow (LHF) model, where slip between the phases was neglected; (2) a deterministic separated flow (DSF) model, where slip was considered but effects of drop dispersion by turbulence were ignored; and (3) a stochastic separated flow (SSF) model, where effects of interphase slip and turbulent dispersion were considered using random-walk computations for drop motion. For all three models, a k-epsilon model was used to find the properties of the continuous phase. The LHF and DSF models did not provide very satisfactory predictions for the present measurements. In contrast, the SSF model performed reasonably well, with no modifications in the prescription of eddy properties from its original calibration.
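
    The random-walk treatment of drop dispersion in the SSF model can be sketched in one dimension as below: the drop sees the mean gas velocity plus an eddy fluctuation sampled from the k-epsilon turbulence energy and relaxes toward it over a drag time scale. A schematic of the idea, not the paper's calibrated formulation.

        import numpy as np

        def ssf_drop_step(v_drop, u_mean, k_turb, tau_p, dt, rng):
            """One random-walk step for drop velocity: sample an eddy fluctuation
            with variance 2k/3 per component, then apply linear drag relaxation."""
            u_gas = u_mean + rng.normal(0.0, np.sqrt(2.0 * k_turb / 3.0))
            return v_drop + dt * (u_gas - v_drop) / tau_p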

  6. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  7. Modeling disease transmission near eradication: An equation free approach

    NASA Astrophysics Data System (ADS)

    Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan

    2015-01-01

    Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
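
    A minimal sketch of the coarse projective integration used in the equation-free framework: run a short burst of the fine-scale simulator, estimate the coarse time derivative from the burst, and take a large projective step. Here micro_burst is a stand-in for the user's stochastic simulator.

        import numpy as np

        def projective_step(coarse_state, micro_burst, dt_micro, n_burst, dt_proj):
            """Equation-free coarse step: burst, estimate slope, project forward."""
            traj = micro_burst(coarse_state, n_burst)     # array of coarse states
            slope = (traj[-1] - traj[0]) / ((n_burst - 1) * dt_micro)
            return coarse_state + dt_proj * slope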

  8. A deterministic model of electron transport for electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (the M1 model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials, using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film, and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1 model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1 model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.

  9. An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu

    2004-03-01

    In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month-range (seasonal) and 17-yr-range (continuous multiannual) five-member ensemble integrations of the Center for Ocean Land Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); the skill attained therefore represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual model integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs, except over the Nordeste region where the multiannual runs show a marginal improvement. The teleconnection of the ensemble-mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature conforms more closely to observations in the seasonal runs than in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, the South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over the subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield superior simulations to those of the multiannual-type runs.

  10. On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
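
    A toy example of the metamodeling idea: a handful of runs of an "expensive" analysis code, followed by a cheap response-surface fit that can stand in for the code during optimization. Note the entry's caveat about deterministic codes: with no random error in the data, interpolating metamodels such as kriging are often preferred over least-squares surfaces like this one.

        import numpy as np

        def expensive_code(x):
            """Stand-in for a costly deterministic analysis code."""
            return np.sin(3.0 * x) + 0.5 * x ** 2

        X = np.linspace(-2.0, 2.0, 9)                     # design of experiments: 9 runs
        Y = expensive_code(X)
        surrogate = np.poly1d(np.polyfit(X, Y, deg=4))    # cheap metamodel
        print(surrogate(0.7), expensive_code(0.7))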

  11. Cognitive Developmental Biology: History, Process and Fortune's Wheel

    ERIC Educational Resources Information Center

    Balaban, Evan

    2006-01-01

    Biological contributions to cognitive development continue to be conceived predominantly along deterministic lines, with proponents of different positions arguing about the preponderance of gene-based versus experience-based influences that organize brain circuits irreversibly during prenatal or early postnatal life, and evolutionary influences…

  12. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
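
    The BMA predictive distribution for temperature is a weights-mixture of Gaussian kernels centred on the (bias-corrected) member forecasts, with weights and spread fitted by EM on the training window; the sketch below evaluates such a mixture for given weights and spread.

        import numpy as np
        from scipy import stats

        def bma_predictive_density(y, member_forecasts, weights, sigma):
            """BMA mixture density at y; the weighted mean of member_forecasts
            is the deterministic-style BMA forecast discussed above."""
            f = np.asarray(member_forecasts, dtype=float)
            return float(np.dot(weights, stats.norm.pdf(y, loc=f, scale=sigma)))

        def bma_mean(member_forecasts, weights):
            return float(np.dot(weights, member_forecasts))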

  13. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
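
    The "complementary cumulative lognormal" distance-damage function mentioned in the second problem has a simple closed form; the sketch below evaluates it for illustrative r50 and sigma values.

        from math import erf, log, sqrt

        def damage_probability(r, r50=1000.0, sigma=0.3):
            """P(damage at range r) = 1 - Phi((ln r - ln r50) / sigma), where r50
            is the range at which the damage probability is 50%."""
            z = (log(r) - log(r50)) / sigma
            return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))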

  14. Classification and unification of the microscopic deterministic traffic models.

    PubMed

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and for designing algorithms for autonomous driverless vehicles.
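
    The lowest-order content of such an expansion is the classic optimal velocity form, in which each car relaxes toward a headway-dependent preferred speed. A minimal ring-road sketch of the standard OV model follows; the tanh-shaped OV function and all parameters are illustrative choices, not the paper's expansion coefficients.

        import numpy as np

        # Classic optimal velocity (OV) model on a ring road: car i relaxes
        # toward V(headway) with sensitivity a. Dense packing plus a small
        # perturbation lets jam formation show up as speed variance.
        N, L = 30, 300.0                 # number of cars, ring length (m)
        a, dt, steps = 1.0, 0.05, 4000   # sensitivity (1/s), step (s), iterations

        def V(h):
            # Illustrative tanh-shaped optimal velocity function of headway h (m).
            return 15.0 * (np.tanh(0.1 * (h - 25.0)) + np.tanh(2.5))

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, L, N, endpoint=False) + rng.uniform(-1.0, 1.0, N)
        v = V(np.full(N, L / N))
        for _ in range(steps):
            h = (np.roll(x, -1) - x) % L        # headway to the car ahead
            v += a * (V(h) - v) * dt            # relax toward the optimal velocity
            x = (x + v * dt) % L
        print(f"mean speed {v.mean():.2f} m/s; speed std {v.std():.2f} (jam indicator)")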

  15. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
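
    The decomposition can be sketched simply: with x[t+1] - x[t] = f(x[t]) + Gaussian noise, the deterministic part f is recovered by conditionally averaging increments in bins of x. A minimal illustration on synthetic bistable data, standing in for the actual PCA projections of the BPTI trajectory:

        import numpy as np

        # Langevin-type difference equation: x[t+1] - x[t] = f(x[t]) + noise.
        # Estimate the deterministic drift f by conditional averaging of the
        # increments. Synthetic double-well data replace the real projections.
        rng = np.random.default_rng(0)
        x = np.zeros(50_000)
        for t in range(len(x) - 1):
            x[t + 1] = x[t] + 0.05 * (x[t] - x[t] ** 3) + 0.1 * rng.normal()

        dx = np.diff(x)
        edges = np.linspace(x.min(), x.max(), 25)
        idx = np.digitize(x[:-1], edges)
        drift = {i: float(dx[idx == i].mean())
                 for i in range(1, len(edges)) if np.any(idx == i)}
        print("binned drift estimates (a cubic-like nonlinear shape should emerge):")
        print({k: round(v, 4) for k, v in drift.items()})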

  16. Model reduction of multiscale chemical Langevin equations: a numerical case study.

    PubMed

    Sotiropoulos, Vassilios; Contou-Carrere, Marie-Nathalie; Daoutidis, Prodromos; Kaznessis, Yiannis N

    2009-01-01

    Two very important characteristics of biological reaction networks need to be considered carefully when modeling these systems. First, models must account for the inherent probabilistic nature of systems far from the thermodynamic limit. Often, biological systems cannot be modeled with traditional continuous-deterministic models. Second, models must take into consideration the disparate spectrum of time scales observed in biological phenomena, such as slow transcription events and fast dimerization reactions. In the last decade, significant efforts have been expended on the development of stochastic chemical kinetics models to capture the dynamics of biomolecular systems, and on the development of robust multiscale algorithms, able to handle stiffness. In this paper, the focus is on the dynamics of reaction sets governed by stiff chemical Langevin equations, i.e., stiff stochastic differential equations. These are particularly challenging systems to model, requiring prohibitively small integration step sizes. We describe and illustrate the application of a semianalytical reduction framework for chemical Langevin equations that results in significant gains in computational cost.
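
    A chemical Langevin equation is an Itô SDE whose drift is built from the reaction propensities and whose noise terms scale as the square roots of those propensities, so rate constants separated by orders of magnitude make the SDE stiff. A minimal Euler-Maruyama sketch for a reversible dimerization (illustrative rates and counts; no reduction is attempted here):

        import numpy as np

        # Chemical Langevin equation for 2A <-> B integrated by Euler-Maruyama.
        # Each reaction channel contributes noise of size sqrt(propensity).
        # Pushing k1 and k2 orders of magnitude apart reproduces the kind of
        # stiffness the reduction framework targets.
        rng = np.random.default_rng(1)
        k1, k2 = 0.005, 1.0          # dimerization / dissociation rate constants
        A, B = 200.0, 0.0
        dt, T = 1e-3, 5.0
        for _ in range(int(T / dt)):
            a1 = max(k1 * A * (A - 1) / 2.0, 0.0)   # propensity of 2A -> B
            a2 = max(k2 * B, 0.0)                   # propensity of B -> 2A
            dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
            A += (-2 * a1 + 2 * a2) * dt - 2 * np.sqrt(a1) * dW1 + 2 * np.sqrt(a2) * dW2
            B += (a1 - a2) * dt + np.sqrt(a1) * dW1 - np.sqrt(a2) * dW2
            A, B = max(A, 0.0), max(B, 0.0)         # keep populations non-negative
        print(f"A ~ {A:.1f}, B ~ {B:.1f} at T = {T}")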

  17. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though the structures obtained from a deterministic optimization problem are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis methods, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in 2D finite element truss problems and a planar beam problem is presented and discussed.
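
    The quantity at the center of these reliability analyses is a probability of failure P(g < 0) for a limit state such as g = R - S (resistance minus load effect); crude Monte Carlo provides the reference estimate that FORM/SORM approximate analytically. A sketch with invented distributions, not the Kevlar® 49 data or the LS-DYNA model:

        import numpy as np
        from statistics import NormalDist

        # Crude Monte Carlo estimate of a probability of failure for the limit
        # state g = R - S. The lognormal resistance and normal load effect are
        # illustrative placeholders, not the study's fitted material data.
        rng = np.random.default_rng(42)
        n = 1_000_000
        R = rng.lognormal(mean=np.log(3.0), sigma=0.10, size=n)   # resistance
        S = rng.normal(loc=2.0, scale=0.25, size=n)               # load effect
        pf = float(np.mean(R - S < 0.0))
        beta = -NormalDist().inv_cdf(pf)   # generalized reliability index
        print(f"P_f ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")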

  18. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  19. A soft computing-based approach to optimise queuing-inventory control problem

    NASA Astrophysics Data System (ADS)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the sum of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints, from both retailer and warehouse viewpoints. Since the proposed model is NP-hard (non-deterministic polynomial-time hard), an efficient imperialist competitive algorithm (ICA) is proposed to solve it. To benchmark the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the values of the algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.

  20. Hybrid stochastic and deterministic simulations of calcium blips.

    PubMed

    Rüdiger, S; Shuai, J W; Huisinga, W; Nagaiah, C; Warnecke, G; Parker, I; Falcke, M

    2007-09-15

    Intracellular calcium release is a prime example for the role of stochastic effects in cellular systems. Recent models consist of deterministic reaction-diffusion equations coupled to stochastic transitions of calcium channels. The resulting dynamics is of multiple time and spatial scales, which complicates far-reaching computer simulations. In this article, we introduce a novel hybrid scheme that is especially tailored to accurately trace events with essential stochastic variations, while deterministic concentration variables are efficiently and accurately traced at the same time. We use finite elements to efficiently resolve the extreme spatial gradients of concentration variables close to a channel. We describe the algorithmic approach and we demonstrate its efficiency compared to conventional methods. Our single-channel model matches experimental data and results in intriguing dynamics if calcium is used as charge carrier. Random openings of the channel accumulate in bursts of calcium blips that may be central for the understanding of cellular calcium dynamics.
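
    The hybrid idea can be reduced to its core: gating transitions are simulated stochastically, while the concentration evolves deterministically between transitions. A minimal single-channel sketch, assuming a well-mixed volume in place of the paper's finite-element reaction-diffusion model and using invented rate constants:

        import numpy as np

        # Hybrid scheme in miniature: a single channel switches stochastically
        # between closed (0) and open (1); the calcium concentration follows a
        # deterministic ODE between switches (solved exactly here).
        rng = np.random.default_rng(7)
        k_open, k_close = 0.5, 5.0            # gating rates (1/s), illustrative
        J, k_pump, c_rest = 50.0, 10.0, 0.1   # influx (uM/s), removal (1/s), rest (uM)
        c, state, t, T = c_rest, 0, 0.0, 20.0
        events = []
        while t < T:
            tau = rng.exponential(1.0 / (k_open if state == 0 else k_close))
            # Deterministic step: dc/dt = J*state - k_pump*(c - c_rest),
            # integrated exactly over the stochastic waiting time tau.
            c_inf = c_rest + (J / k_pump) * state
            c = c_inf + (c - c_inf) * np.exp(-k_pump * tau)
            t += tau
            state = 1 - state                 # stochastic channel transition
            events.append((t, state, c))
        print("last events (time s, new state, Ca in uM):")
        for t_e, s_e, c_e in events[-4:]:
            print(f"  t={t_e:6.2f}  open={s_e}  c={c_e:5.2f}")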

  1. Detecting and disentangling nonlinear structure from solar flux time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.

    1992-01-01

    Interest in solar activity has grown in the past two decades for many reasons. Most importantly for flight dynamics, solar activity changes the atmospheric density, which has important implications for spacecraft trajectory and lifetime prediction. Building upon the previously developed Rayleigh-Benard nonlinear dynamic solar model, which exhibits many dynamic behaviors observed in the Sun, this work introduces new chaotic solar forecasting techniques. Our attempt to use recently developed nonlinear chaotic techniques to model and forecast solar activity has uncovered highly entangled dynamics. Numerical techniques are presented for decoupling additive and multiplicative white noise from deterministic dynamics and for examining the falloff of the power spectrum at high frequencies as a possible means of distinguishing deterministic chaos from noise that is spectrally white or colored. The power spectral techniques presented are less cumbersome than current methods for identifying deterministic chaos, which require more computationally intensive calculations, such as those involving Lyapunov exponents and attractor dimension.
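
    The spectral diagnostic can be illustrated directly: estimate the power spectrum and fit the log-power slope in the upper frequency band, where continuous-time chaos falls off steeply and white noise stays flat. A sketch with a Lorenz-63 x-component standing in for a solar flux series (all choices illustrative):

        import numpy as np

        # High-frequency power-spectrum falloff as a chaos-vs-noise diagnostic.
        def hf_slope(sig):
            p = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
            f = np.fft.rfftfreq(len(sig))
            hi = f > 0.25 * f.max()                 # upper frequency band only
            return np.polyfit(f[hi], np.log(p[hi] + 1e-12), 1)[0]

        n, dt = 8192, 0.01
        x = np.empty(n); xx, yy, zz = 1.0, 1.0, 1.0
        for i in range(n):                          # plain Euler Lorenz-63 steps
            dx, dy, dz = 10 * (yy - xx), xx * (28 - zz) - yy, xx * yy - 8 / 3 * zz
            xx, yy, zz = xx + dt * dx, yy + dt * dy, zz + dt * dz
            x[i] = xx
        noise = np.random.default_rng(3).normal(size=n)
        print("log-power slope, chaotic signal:", round(hf_slope(x), 1))
        print("log-power slope, white noise   :", round(hf_slope(noise), 1))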

  2. Noise induced phenomena in combustion

    NASA Astrophysics Data System (ADS)

    Liu, Hongliang

    Quantitative models of combustion usually consist of systems of deterministic differential equations. However, there are reasons to suspect that noise may have a significant influence. In this thesis, our primary objective is to study the effect of noise on measurable quantities in the combustion process. Our first study involves combustion in a homogeneous gas. With a one-step reaction model, we analytically estimate the conditions under which noise is important enough to create significant differences. Our simulations show that a bi-modality phenomenon appears when appropriate parameters are applied, which agrees with our analytical result. Our second study involves steady planar flames. We use a relatively complete chemical model of the H2/air reaction system, which contains all eight reactive species (H2, O2, H, O, OH, H2O, HO2, H2O2) and N2. Our mathematical model for this system is a reacting flow model. We derive noise terms related to transport processes by a method advocated by Landau & Lifshitz, and we also derive noise terms related to chemical reactions. We develop a code to simulate this system. The numerical implementation relies on a good Riemann solver, suitable initial and boundary conditions, and so on. We also implement a continuation method, which not only can be used to study approximate properties of laminar flames under the deterministic governing equations, but also eliminates the difficulty of providing a suitable initial condition for the governing equations with noise. With numerical experiments, we find that a difference in flame speed exists when the noise is turned on, although it is small compared with the influence of other parameters, for example, the equivalence ratio. This will be a starting point for further studies that include noise in combustion.

  3. Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing

    NASA Technical Reports Server (NTRS)

    Jones, Robert L.; Goode, Plesent W. (Technical Monitor)

    2000-01-01

    The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit in modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.

  4. The Deterministic Origins of Sexism.

    ERIC Educational Resources Information Center

    Perry, Melissa J.; Albee, George W.

    1998-01-01

    Discusses the physical, sexual, and psychological ramifications of biological determinism using examples from the global status of women's health, the continuation of female genital mutilation, and the history of sexist beliefs in psychology that serve a social control function of creating and defining women's psychopathology. (Author/SLD)

  5. Counting and classifying attractors in high dimensional dynamical systems.

    PubMed

    Bagley, R J; Glass, L

    1996-12-07

    Randomly connected Boolean networks have been used as mathematical models of neural, genetic, and immune systems. A key quantity of such networks is the number of basins of attraction in the state space. The number of basins of attraction changes as a function of the size of the network, its connectivity and its transition rules. In discrete networks, a simple count of the number of attractors does not reveal the combinatorial structure of the attractors. These points are illustrated in a reexamination of dynamics in a class of random Boolean networks considered previously by Kauffman. We also consider comparisons between dynamics in discrete networks and continuous analogues. A continuous analogue of a discrete network may have a different number of attractors for many different reasons. Some attractors in discrete networks may be associated with unstable dynamics, and several different attractors in a discrete network may be associated with a single attractor in the continuous case. Special problems in determining attractors in continuous systems arise when there is aperiodic dynamics associated with quasiperiodicity or deterministic chaos.
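
    For small discrete networks the attractors can be enumerated outright, which makes the combinatorial points above concrete. A minimal sketch for a random Kauffman-style N-K network (N = 10, K = 2 are illustrative sizes, not the paper's ensembles):

        import itertools, random

        # Count attractors of a random N-K Boolean network by exhaustively
        # iterating every one of the 2^N states to its cycle (feasible only
        # for small N; a raw count hides the combinatorial structure).
        random.seed(0)
        N, K = 10, 2
        inputs = [random.sample(range(N), K) for _ in range(N)]
        tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

        def step(state):
            # Each node reads its K inputs as a binary index into its rule table.
            return tuple(tables[i][int("".join(str(state[j]) for j in inputs[i]), 2)]
                         for i in range(N))

        attractors = set()
        for init in itertools.product((0, 1), repeat=N):
            seen, s = {}, init
            while s not in seen:          # iterate until the trajectory cycles
                seen[s] = len(seen)
                s = step(s)
            cycle_start = seen[s]
            cycle = frozenset(st for st, k in seen.items() if k >= cycle_start)
            attractors.add(cycle)
        print(f"{len(attractors)} attractors in a random N={N}, K={K} network")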

  6. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    NASA Astrophysics Data System (ADS)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E, 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et caractérisation topologique de dynamiques environnementales, Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
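
    The aggregation experiment is easy to reproduce in miniature: integrate an ensemble of Rössler systems with the standard parameters (a = b = 0.2, c = 5.7, here slightly detuned in a so the oscillators differ) and spatially average their outputs. A plain Euler sketch, for illustration only:

        import numpy as np

        # Aggregate (spatially average) an ensemble of Rossler systems and
        # compare the averaged signal with one elementary system.
        rng = np.random.default_rng(5)
        M, dt, steps = 16, 0.005, 40_000
        a = 0.2 + 0.01 * rng.standard_normal(M)   # small parameter dispersion
        X = rng.normal(0.0, 0.5, (M, 3)) + np.array([0.0, -6.0, 0.0])
        agg = np.empty(steps); single = np.empty(steps)
        for k in range(steps):
            x, y, z = X[:, 0], X[:, 1], X[:, 2]
            dX = np.stack([-y - z, x + a * y, 0.2 + z * (x - 5.7)], axis=1)
            X = X + dt * dX
            agg[k] = X[:, 0].mean()      # aggregated signal across the scene
            single[k] = X[0, 0]          # one elementary system, for comparison
        print("std of aggregated signal :", round(float(agg.std()), 2))
        print("std of elementary signal :", round(float(single.std()), 2))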

  7. CFD-RANS prediction of individual exposure from continuous release of hazardous airborne materials in complex urban environments

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.; Berbekar, E.; Harms, F.; Leitl, B.

    2017-02-01

    One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict individual exposure (maximum dosages) of an airborne material which is released continuously from a point source. The present work addresses the question whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict individual exposure for various exposure times. This is feasible by providing the two RANS concentration moments (mean and variance) and a turbulent time scale to a deterministic model. The whole effort is focused on the prediction of individual exposure inside a complex real urban area. The capabilities of the proposed methodology are validated against wind-tunnel data (CUTE experiment). The present simulations were performed 'blindly', i.e. the modeller had limited information for the inlet boundary conditions and the results were kept unknown until the end of the COST Action ES1006. Thus, a high uncertainty of the results was expected. The general performance of the methodology due to this 'blind' strategy is good. The validation metrics fulfil the acceptance criteria. The effect of the grid and the turbulence model on the model performance is examined.

  8. Stochastic Analysis and Probabilistic Downscaling of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.

    2017-12-01

    Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.

  9. Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.

    PubMed

    Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M

    2016-11-17

    A multiscale modeling approach has been developed for the extended time scale long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.

  10. A deterministic width function model

    NASA Astrophysics Data System (ADS)

    Puente, C. E.; Sivakumar, B.

    Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.

  11. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere more prevalent is this notion than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and the stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  12. Nanopore Current Oscillations: Nonlinear Dynamics on the Nanoscale.

    PubMed

    Hyland, Brittany; Siwy, Zuzanna S; Martens, Craig C

    2015-05-21

    In this Letter, we describe theoretical modeling of an experimentally realized nanoscale system that exhibits the general universal behavior of a nonlinear dynamical system. In particular, we consider the description of voltage-induced current fluctuations through a single nanopore from the perspective of nonlinear dynamics. We briefly review the experimental system and its behavior observed and then present a simple phenomenological nonlinear model that reproduces the qualitative behavior of the experimental data. The model consists of a two-dimensional deterministic nonlinear bistable oscillator experiencing both dissipation and random noise. The multidimensionality of the model and the interplay between deterministic and stochastic forces are both required to obtain a qualitatively accurate description of the physical system.
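
    A minimal sketch of the model class described, assuming a double-well (bistable) potential with linear dissipation and additive noise, integrated by Euler-Maruyama; all parameters are invented for illustration, not fitted to the nanopore current data:

        import numpy as np

        # Dissipative bistable oscillator with random noise: the noise kicks
        # the system between the two wells (the two observed current levels).
        rng = np.random.default_rng(11)
        gamma, D, dt, steps = 0.5, 0.30, 1e-3, 200_000
        x, v = 1.0, 0.0                  # start in the right-hand well
        switches, well = 0, 1
        for _ in range(steps):
            force = x - x ** 3           # from the double well U(x) = -x^2/2 + x^4/4
            v += (force - gamma * v) * dt + np.sqrt(2 * D * dt) * rng.normal()
            x += v * dt
            s = 1 if x > 0 else -1       # which well (current level) we occupy
            if s != well:
                switches += 1
                well = s
        print(f"noise-induced switches between the two levels: {switches}")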

  13. A data driven nonlinear stochastic model for blood glucose dynamics.

    PubMed

    Zhang, Yan; Holt, Tim A; Khovanova, Natalia

    2016-03-01

    The development of adequate mathematical models for blood glucose dynamics may improve early diagnosis and control of diabetes mellitus (DM). We have developed a stochastic nonlinear second order differential equation to describe the response of blood glucose concentration to food intake using continuous glucose monitoring (CGM) data. A variational Bayesian learning scheme was applied to define the number and values of the system's parameters by iterative optimisation of free energy. The model has the minimal order and number of parameters to successfully describe blood glucose dynamics in people with and without DM. The model accounts for the nonlinearity and stochasticity of the underlying glucose-insulin dynamic process. Being data-driven, it takes full advantage of available CGM data and, at the same time, reflects the intrinsic characteristics of the glucose-insulin system without detailed knowledge of the physiological mechanisms. We have shown that the dynamics of some postprandial blood glucose excursions can be described by a reduced (linear) model, previously seen in the literature. A comprehensive analysis demonstrates that deterministic system parameters belong to different ranges for diabetes and controls. Implications for clinical practice are discussed. This is the first study introducing a continuous data-driven nonlinear stochastic model capable of describing both DM and non-DM profiles. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  14. The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2011-09-01

    Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic for infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size, the stochastic system can be well described by the deterministic skeleton.
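
    A deterministic skeleton of this kind can be sketched with a seasonally forced SIR-type model plus a small infection import; the single-strain stand-in below uses illustrative parameters (rates per year), not the multi-strain dengue model itself.

        import numpy as np

        # Seasonally forced SIR skeleton with a small import of infected
        # individuals, integrated by simple Euler steps.
        beta0, eps = 200.0, 0.1                  # mean transmission, forcing amplitude
        gamma, mu, rho = 52.0, 1 / 65.0, 1e-6    # recovery, demography, import
        S, I = 0.99, 1e-4                        # susceptible / infected fractions
        dt, T = 1e-4, 20.0
        for k in range(int(T / dt)):
            beta = beta0 * (1.0 + eps * np.cos(2 * np.pi * k * dt))  # seasonality
            dS = mu * (1 - S) - beta * S * (I + rho)
            dI = beta * S * (I + rho) - (gamma + mu) * I
            S, I = S + dt * dS, I + dt * dI
        print(f"after {T:.0f} years: S = {S:.4f}, I = {I:.2e}")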

  15. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).

  16. Estimate Tsunami Flow Conditions and Large-Debris Tracks for the Design of Coastal Infrastructures along Coastlines of the U.S. Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.

    2017-12-01

    The increasing potential tsunami hazards pose great challenges for infrastructures along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios, two deterministic (XXL1 and L1) and one 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow water model for offshore propagation and a Boussinesq-type model for onshore inundation. We detail the methods and procedure used to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building, which will also be designated as a tsunami vertical evacuation shelter, at Newport, Oregon. We show that the ASCE 7-16 consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we utilize the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and estimate whether these vessels will impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios. A two-step study is carried out: first, tracks of massless particles are studied, and then large vessels with assigned mass, considering drag force, inertial force, ship grounding, and mooring. The simulation results show that none of the large vessels impact the building site in any of the tested scenarios.

  17. The Stochastic Modelling of Endemic Diseases

    NASA Astrophysics Data System (ADS)

    Susvitasari, Kurnia; Siswantining, Titin

    2017-01-01

    Epidemics have been studied for a long time, but genuine progress was hardly forthcoming until the end of the 19th century (Bailey, 1975). Both deterministic and stochastic models have been used to describe them. Then, from 1927 to 1939, Kermack and McKendrick introduced a generalization of this model, including variables such as the rates of infection and recovery. The purpose of this project is to investigate the behaviour of the models when we set the basic reproduction number, R0. This quantity is defined as the expected number of contacts made by a typical infective to susceptibles in the population. According to the epidemic threshold theory, when R0 ≤ 1, a minor epidemic occurs with probability one in both approaches, but when R0 > 1, the deterministic and stochastic models have different interpretations. In the deterministic approach, a major epidemic occurs with probability one when R0 > 1, and the model predicts that the disease will settle down to an endemic equilibrium. Stochastic models, on the other hand, show that a minor epidemic can still occur; if it does, the epidemic dies out quickly. Moreover, if we let the population size be large and a major epidemic occurs, then it will take off, reach the endemic level, and move randomly around the deterministic equilibrium.
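
    The threshold contrast is easy to reproduce: for R0 > 1 the deterministic model always predicts a major epidemic, while a stochastic chain started from a single infective still goes extinct early with probability roughly 1/R0. A minimal Gillespie-style SIR sketch (illustrative parameters):

        import numpy as np

        # Only the embedded jump chain matters for the final size, so no clock
        # is needed: each event is an infection with probability proportional
        # to beta*S*I, else a recovery.
        rng = np.random.default_rng(17)
        N, R0, gamma = 1000, 2.0, 1.0
        beta = R0 * gamma / N

        def final_size():
            S, I = N - 1, 1
            while I > 0:
                a_inf, a_rec = beta * S * I, gamma * I
                if rng.random() < a_inf / (a_inf + a_rec):
                    S, I = S - 1, I + 1      # infection event
                else:
                    I -= 1                   # recovery event
            return (N - 1) - S               # secondary infections generated

        runs = [final_size() for _ in range(2000)]
        minor = np.mean([r < 0.1 * N for r in runs])
        print(f"fraction of minor epidemics: {minor:.2f} (theory ~ 1/R0 = {1 / R0:.2f})")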

  18. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originated from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.

  20. Economic analysis of interventions to improve village chicken production in Myanmar.

    PubMed

    Henning, J; Morton, J; Pym, R; Hla, T; Sunn, K; Meers, J

    2013-07-01

    A cost-benefit analysis using deterministic and stochastic modelling was conducted to identify the net benefits for households that adopt (1) vaccination of individual birds against Newcastle disease (ND) or (2) improved management of chick rearing by providing coops for the protection of chicks from predation and chick starter feed inside a creep feeder to support chicks' nutrition in village chicken flocks in Myanmar. Partial budgeting was used to assess the additional costs and benefits associated with each of the two interventions tested relative to neither strategy. In the deterministic model, over the first 3 years after the introduction of the interventions, the cumulative sum of the net differences from neither strategy was 13,189Kyat for ND vaccination and 77,645Kyat for improved chick management (effective exchange rate in 2005: 1000Kyat=1$US). Both interventions were also profitable after discounting over a 10-year period; Net Present Values for ND vaccination and improved chick management were 30,791 and 167,825Kyat, respectively. The Benefit-Cost Ratio for ND vaccination was very high (28.8). This was lower for improved chick management, due to greater costs of the intervention, but still favourable at 4.7. Using both interventions concurrently yielded a Net Present Value of 470,543Kyat and a Benefit-Cost Ratio of 11.2 over the 10-year period in the deterministic model. Using the stochastic model, for the first 3 years following the introduction of the interventions, the mean cumulative sums of the net difference were similar to those values obtained from the deterministic model. Sensitivity analysis indicated that the cumulative net differences were strongly influenced by grower bird sale income, particularly under improved chick management. The effects of the strategies on odds of households selling and consuming birds after 7 months, and numbers of birds being sold or consumed after this period also influenced profitability. Cost variations for equipment used under improved chick management were not markedly associated with profitability. Net Present Values and Benefit-Cost Ratios discounted over a 10-year period were also similar to the deterministic model when mean values obtained through stochastic modelling were used. In summary, the study showed that ND vaccination and improved chick management can improve the viability and profitability of village chicken production in Myanmar. Copyright © 2013 Elsevier B.V. All rights reserved.
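
    The discounting arithmetic behind the Net Present Values and Benefit-Cost Ratios is straightforward; the sketch below shows the mechanics with invented cash flows and an assumed 10% discount rate, since the study's discount rate and year-by-year streams are not reproduced here.

        # Partial-budget discounting sketch: NPV and Benefit-Cost Ratio from
        # annual benefit and cost streams (all figures are placeholders).
        rate, years = 0.10, 10
        benefits = [30_000] * years                 # extra bird income per year (Kyat)
        costs = [12_000] + [2_000] * (years - 1)    # start-up cost, then recurrent costs

        disc = [(1 + rate) ** -t for t in range(1, years + 1)]
        pv_b = sum(b * d for b, d in zip(benefits, disc))
        pv_c = sum(c * d for c, d in zip(costs, disc))
        print(f"NPV = {pv_b - pv_c:,.0f} Kyat")
        print(f"BCR = {pv_b / pv_c:.2f}")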

  1. Operator product expansion in Liouville field theory and Seiberg-type transitions in log-correlated random energy models

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul

    2018-04-01

    We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a. pre-freezing). By means of the LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We examine also the problem by a replica symmetry breaking analysis. It complements the previous methods and reveals a rich large deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM, by a replica-Coulomb gas integral approach. The related problem of common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a preceding work.

  2. A parametric LQ approach to multiobjective control system design

    NASA Technical Reports Server (NTRS)

    Kyr, Douglas E.; Buchner, Marc

    1988-01-01

    The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.

  3. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  4. Variational principles for stochastic fluid dynamics

    PubMed Central

    Holm, Darryl D.

    2015-01-01

    This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083

  5. Demographic noise can reverse the direction of deterministic selection

    PubMed Central

    Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.

    2016-01-01

    Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085

  6. Tiedeman's Approach to Career Development.

    ERIC Educational Resources Information Center

    Harren, Vincent A.

    Basic to Tiedeman's approach to career development and decision making is the assumption that one is responsible for one's own behavior because one has the capacity for choice and lives in a world which is not deterministic. Tiedeman, a cognitive-developmental theorist, views continuity of development as internal or psychological while…

  7. 76 FR 72220 - Incorporation of Risk Management Concepts in Regulatory Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ... and support the adoption of improved designs or processes. A deterministic approach to regulation... longstanding goal to move toward more risk-informed, performance-based approaches in its regulatory programs... regulatory approach that would continue to ensure the safe and secure use of nuclear material. As part of...

  8. From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities

    NASA Astrophysics Data System (ADS)

    Kunjwal, Ravi; Spekkens, Robert W.

    2018-05-01

    The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It extends significantly previous techniques that worked only for logical proofs, which are based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment, to the case of statistical proofs, which are based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics.

  9. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use Albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  10. Digital flow model of the Chowan River estuary, North Carolina

    USGS Publications Warehouse

    Daniel, C.C.

    1977-01-01

    A one-dimensional deterministic flow model based on the continuity equation has been developed to provide estimates of daily flow past a number of points on the Chowan River estuary of northeast North Carolina. The digital model, programmed in Fortran IV, computes daily average discharge for nine sites; four of these represent inflow at the mouths of major tributaries, and the other five sites are at stage stations along the estuary. Because flows within the Chowan River and the lower reaches of its tributaries are tidally affected, flows occur in both upstream and downstream directions. The period of record generated by the model extends from April 1, 1974, to March 31, 1976. During the two years of model operation, the average discharge at Edenhouse near the mouth of the estuary was 5,830 cfs (cubic feet per second). Daily average flows during this period ranged from 55,900 cfs in the downstream direction on July 17, 1975, to 14,200 cfs in the upstream direction on November 30, 1974.
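
    A minimal sketch, in Python rather than the report's Fortran IV, of the continuity bookkeeping such a model performs: daily discharge past a station is estimated as the sum of tributary inflows minus the rate of change of channel storage inferred from stage. The data layout, function name, and constant surface area are illustrative assumptions, not the report's implementation.

      import numpy as np

      def daily_discharge(inflows, stage, surface_area):
          """inflows: (ndays, ntribs) tributary inflows, cfs; stage: (ndays,)
          daily mean stage, ft; surface_area: reach surface area, ft^2."""
          total_inflow = inflows.sum(axis=1)            # cfs
          dstage = np.gradient(stage)                   # ft per day
          dstorage = dstage * surface_area / 86400.0    # ft^3/s equivalent
          return total_inflow - dstorage                # + downstream, - upstream

      rng = np.random.default_rng(0)                    # invented example data
      q = daily_discharge(rng.uniform(500, 2000, (30, 4)),
                          rng.normal(2.0, 0.3, 30), 5e7)
      print(q[:5])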

  11. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology combines analytical models, which describe the corresponding optimization problem, with an exact global optimization software package named IBBA, developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  12. Rare event computation in deterministic chaotic systems using genealogical particle analysis

    NASA Astrophysics Data System (ADS)

    Wouters, J.; Bouchet, F.

    2016-09-01

    In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
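
    A minimal sketch of a genealogical particle estimator of the kind described, applied to the paper's Ornstein-Uhlenbeck illustration: particles are cloned or killed according to exponential weights on their increments, and the telescoping product of mean weights unbiases the final over-threshold indicator. All parameter values are illustrative assumptions, not the authors' settings.

      import numpy as np

      rng = np.random.default_rng(1)
      N, T, dt, alpha, thresh = 10_000, 5.0, 0.01, 3.0, 3.5
      x = np.zeros(N)
      norm = 1.0                                    # running product of mean weights
      for _ in range(int(T / dt)):
          x_old = x.copy()
          x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(N)   # OU step
          w = np.exp(alpha * (x - x_old))           # tilt toward rising paths
          norm *= w.mean()
          x = rng.choice(x, size=N, p=w / w.sum())  # genealogical selection
      # unbias: reweight by exp(-alpha * x), since every path started at x = 0
      p_hat = norm * np.mean(np.exp(-alpha * x) * (x > thresh))
      print(f"estimated P(X_T > {thresh}) ~ {p_hat:.2e}")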

  13. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  14. Deterministic multi-zone ice accretion modeling

    NASA Technical Reports Server (NTRS)

    Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael

    1991-01-01

    The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.

  15. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented parallel to and of similar scale as the basins and ranges at the surface. This structure would result in an anisotropic lower crust.

  16. A systematic review of models used in cost-effectiveness analyses of preventing osteoporotic fractures.

    PubMed

    Si, L; Winzenberg, T M; Palmer, A J

    2014-01-01

    This review examined the evolution of health economic models used in evaluations of clinical approaches for preventing osteoporotic fractures. Models have improved, with medical continuance increasingly recognized as a contributor to health and economic outcomes, and with advances in epidemiological data. Model-based health economic evaluation studies are increasingly used to investigate the cost-effectiveness of osteoporotic fracture preventions and treatments. The objective of this study was to carry out a systematic review of the evolution of health economic models used in the evaluation of osteoporotic fracture preventions. Electronic searches within MEDLINE and EMBASE were carried out using a predefined search strategy. Inclusion and exclusion criteria were used to select relevant studies. Reference lists of included studies were searched to identify any potential study not captured by our electronic search. Data on country, interventions, type of fracture prevention, evaluation perspective, type of model, time horizon, fracture sites, expressed costs, types of costs included, and effectiveness measurement were extracted. Seventy-four models were described in 104 publications, of which 69% were European. Earlier models focused mainly on hip, vertebral, and wrist fractures, but later models included multiple fracture sites (humerus, pelvis, tibia, and other fractures). Modeling techniques have evolved from simple decision trees, through deterministic Markov processes, to individual patient simulation models accounting for uncertainty in multiple parameters. Treatment continuance has increasingly been taken into account in the models over the last decade. Models have evolved in their complexity and emphasis, with medical continuance increasingly recognized as a contributor to health and economic outcomes. This evolution may be driven in part by the desire to capture all the important differentiating characteristics of the medications under scrutiny, as well as by the advancement of epidemiological data relevant to osteoporotic fractures.

  17. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from five wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet, and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk and revealed that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire study area for the month of September as "sublethal significant risk for standard species". The probabilistic approach, using Monte Carlo analysis, was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.

  18. Developing a stochastic conflict resolution model for urban runoff quality management: Application of info-gap and bargaining theories

    NASA Astrophysics Data System (ADS)

    Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra

    2016-02-01

    In this paper, two multilateral, multi-issue, non-cooperative bargaining methodologies, one deterministic and one stochastic, are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing the location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties in some key parameters of SWMM are analyzed using info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on the utility functions of the different stakeholders. Then, to find the best solution, the bargaining model is run considering a combination of the robustness and opportuneness criteria for each scenario, based on the utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed, located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model when the robustness and opportuneness criteria are considered. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, provides more reliable results.

  19. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.

  20. Time-programmable drug dosing allows the manipulation, suppression and reversal of antibiotic drug resistance in vitro

    NASA Astrophysics Data System (ADS)

    Yoshida, Mari; Reyes, Sabrina Galiñanes; Tsuda, Soichiro; Horinouchi, Takaaki; Furusawa, Chikara; Cronin, Leroy

    2017-06-01

    Multi-drug strategies have been attempted to prolong the efficacy of existing antibiotics, but with limited success. Here we show that the evolution of multi-drug-resistant Escherichia coli can be manipulated in vitro by administering pairs of antibiotics and switching between them in an ON/OFF manner. Using a multiplexed cell culture system, we find that switching between certain combinations of antibiotics completely suppresses the development of resistance to one of the antibiotics. Using these data, we develop a simple deterministic model, which allows us to predict the fate of multi-drug evolution in this system. Furthermore, we are able to reverse established drug resistance, based on the model's predictions, by modulating antibiotic selection stresses. Our results support the idea that the development of antibiotic resistance may potentially be controlled via continuous switching of drugs.
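
    A minimal sketch, under assumed parameter values, of the kind of simple deterministic model described: subpopulations sensitive to both drugs or resistant to drug A or drug B grow logistically while the applied drug is switched ON/OFF on a fixed schedule. This is not the authors' fitted model; the rates and switching period are invented.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, period=12.0, kill=2.0, growth=1.0, K=1e9):
          s, ra, rb = y                              # sensitive, res-A, res-B
          drug_a = int(t // period) % 2 == 0         # alternate drugs each period (h)
          g = growth * (1 - (s + ra + rb) / K)       # shared logistic growth
          return [s * (g - kill),                    # sensitive dies under either drug
                  ra * (g - (0.0 if drug_a else kill)),   # ra survives drug A only
                  rb * (g - (kill if drug_a else 0.0))]   # rb survives drug B only

      sol = solve_ivp(rhs, (0, 96), [1e6, 1e2, 1e2], max_step=0.05)
      print(sol.y[:, -1])                            # final subpopulation sizes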

  1. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control machines is crucial to guaranteeing a high convergence ratio for the optical surface error. Numerical dwell time algorithms must therefore account for machine dynamics limitations. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
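
    A minimal sketch of the standard linear-algebra formulation behind such dwell-time algorithms: removal is the tool influence function convolved with the dwell time, so positivity can be enforced with non-negative least squares. The Gaussian influence function and error profile are invented, and the paper's equal-extra-removal term is not reproduced here.

      import numpy as np
      from scipy.optimize import nnls

      n = 200                                       # 1-D surface samples
      xs = np.linspace(-1, 1, n)
      error = 0.5 + 0.3 * np.sin(3 * xs)            # desired removal, kept positive
      tif = lambda d: np.exp(-(d / 0.08) ** 2)      # assumed Gaussian influence fn
      A = tif(xs[:, None] - xs[None, :])            # removal per unit dwell time
      t, rnorm = nnls(A, error)                     # positive dwell time map
      print(f"max dwell {t.max():.3f}, rms residual {rnorm / np.sqrt(n):.2e}")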

  2. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels in Poiseuille flow are presented. The random walk (stochastic) and uniform dispersion (deterministic) models were used to compute flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, were applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate were determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.

  3. Stochastic von Bertalanffy models, with applications to fish recruitment.

    PubMed

    Lv, Qiming; Pitchford, Jonathan W

    2007-02-21

    We consider three individual-based models describing growth in stochastic environments. Stochastic differential equations (SDEs) with identical von Bertalanffy deterministic parts are formulated, with a stochastic term which decreases, remains constant, or increases with organism size, respectively. Probability density functions for hitting times are evaluated in the context of fish growth and mortality. Solving the hitting time problem analytically or numerically shows that stochasticity can have a large positive impact on fish recruitment probability. It is also demonstrated that the observed mean growth rate of surviving individuals always exceeds the mean population growth rate, which itself exceeds the growth rate of the equivalent deterministic model. The consequences of these results in more general biological situations are discussed.
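
    A minimal Euler-Maruyama sketch, with illustrative parameters, of one of the three SDE variants: von Bertalanffy drift with a noise term that increases with organism size. Recruitment is scored when a path reaches a threshold size before a fixed horizon (which the deterministic model alone would not do here); mortality is ignored for brevity.

      import numpy as np

      rng = np.random.default_rng(2)
      k, L_inf, sigma, L_rec = 0.5, 100.0, 0.4, 90.0    # illustrative values
      n_paths, dt, T = 20_000, 0.01, 2.0
      L = np.full(n_paths, 5.0)                          # initial size
      hit = np.zeros(n_paths, dtype=bool)
      for _ in range(int(T / dt)):
          dW = np.sqrt(dt) * rng.standard_normal(n_paths)
          L += k * (L_inf - L) * dt + sigma * L * dW     # size-increasing noise
          hit |= L >= L_rec                              # record threshold hits
      print("recruitment probability ~", hit.mean())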

  4. Deterministic SLIR model for tuberculosis disease mapping

    NASA Astrophysics Data System (ADS)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted directly to others through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the infectious diseases that most often lead to death. Disease mapping is one method that can support prevention strategies, since it displays a clear picture of high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR (Susceptible-Latent-Infectious-Recovered) models are used to estimate the relative risk of TB transmission.
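
    A minimal sketch, with assumed rates, of a deterministic SLIR compartment model of the kind used in the paper: Susceptible -> Latent -> Infectious -> Recovered with standard-incidence transmission. The parameter values and population sizes are invented for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, delta, gamma, mu = 0.3, 0.1, 0.05, 0.01     # assumed rates (per month)

      def slir(t, y):
          S, L, I, R = y
          N = S + L + I + R
          return [mu * N - beta * S * I / N - mu * S,    # births - infection - death
                  beta * S * I / N - (delta + mu) * L,   # latent pool
                  delta * L - (gamma + mu) * I,          # activation to infectious
                  gamma * I - mu * R]                    # recovery

      sol = solve_ivp(slir, (0, 120), [9990, 0, 10, 0],
                      t_eval=np.linspace(0, 120, 7))
      print(np.round(sol.y.T, 1))                        # S, L, I, R over time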

  5. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jürgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.

  6. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used since the late 1940s to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts, as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, the normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose D = X^V, in units of Oklad (where D is pronounced "deh"), with V the shape parameter in the Weibull model; (4) for X <= X0, where X0 is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected number of deaths via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted as ln(2)·B·M[D], where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and M[D] is the average (mean) value of D over the worker population at risk, for Site A, for the time period considered. These results can be used to facilitate a Phase-II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
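
    A minimal sketch of the dose bookkeeping laid out in the conclusions above: the normalized dose X, the dimensionless dose D = X^V (zero at or below the threshold X0), and expected hematopoietic-mode deaths ln(2)·B·M[D]. The shape parameter, threshold, and worker doses are invented for illustration, not the study's values.

      import numpy as np

      V, X0, B = 6.0, 0.5, 500                 # shape, threshold, workers at risk
      rng = np.random.default_rng(3)
      X = rng.uniform(0.0, 0.9, size=B)        # invented normalized doses (LD50 fractions)
      D = np.where(X <= X0, 0.0, X ** V)       # dimensionless dose in Oklad units
      expected_deaths = np.log(2) * B * D.mean()   # ln(2) * B * M[D]
      print(f"M[D] = {D.mean():.4f}, expected deaths ~ {expected_deaths:.2f}")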

  7. Interactive Reliability Model for Whisker-toughened Ceramics

    NASA Technical Reports Server (NTRS)

    Palko, Joseph L.

    1993-01-01

    Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.

  8. Nonclassical point of view of the Brownian motion generation via fractional deterministic model

    NASA Astrophysics Data System (ADS)

    Gilardi-Velázquez, H. E.; Campos-Cantón, E.

    In this paper, we present a dynamical system based on the Langevin equation without a stochastic term and using fractional derivatives that exhibits properties of Brownian motion, i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by considering an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from time series analysis, we find that the behavior of the system exhibits statistical properties of Brownian motion, such as linear growth in time of the mean square displacement and a Gaussian distribution of displacements. Furthermore, we use detrended fluctuation analysis to prove the Brownian character of this motion.

  9. Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex.

    PubMed

    Murakami, Masayoshi; Shteingart, Hanan; Loewenstein, Yonatan; Mainen, Zachary F

    2017-05-17

    The selection and timing of actions are subject to determinate influences such as sensory cues and internal state as well as to effectively stochastic variability. Although stochastic choice mechanisms are assumed by many theoretical models, their origin and mechanisms remain poorly understood. Here we investigated this issue by studying how neural circuits in the frontal cortex determine action timing in rats performing a waiting task. Electrophysiological recordings from two regions necessary for this behavior, medial prefrontal cortex (mPFC) and secondary motor cortex (M2), revealed an unexpected functional dissociation. Both areas encoded deterministic biases in action timing, but only M2 neurons reflected stochastic trial-by-trial fluctuations. This differential coding was reflected in distinct timescales of neural dynamics in the two frontal cortical areas. These results suggest a two-stage model in which stochastic components of action timing decisions are injected by circuits downstream of those carrying deterministic bias signals.

  10. Developing deterioration models for Wyoming bridges.

    DOT National Transportation Integrated Search

    2016-05-01

    Deterioration models for the Wyoming Bridge Inventory were developed using both stochastic and deterministic models. The selection of explanatory variables is investigated, and a new method using LASSO regression to eliminate human bias in explana...

  11. Short-range solar radiation forecasts over Sweden

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra

    2018-04-01

    In this article, the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with that of an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) under clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble model shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not shown good results when compared to the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in terms of forecasts of both the amount of solar power and its probabilities.

  12. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble's computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
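
    A minimal sketch of the analog-ensemble idea compared in the paper: for a new coarse-model forecast, the k most similar past forecasts are found, and the observations paired with them form an ensemble whose mean serves as a deterministic downscaled estimate. The arrays and the Euclidean similarity metric are illustrative assumptions.

      import numpy as np

      def analog_ensemble(train_fcst, train_obs, new_fcst, k=20):
          """Return the k observations whose paired past forecasts most
          resemble new_fcst (Euclidean metric over the predictors)."""
          d = np.linalg.norm(train_fcst - new_fcst, axis=1)
          return train_obs[np.argsort(d)[:k]]

      rng = np.random.default_rng(5)                 # invented training data
      F = rng.normal(size=(5000, 3))
      O = F @ np.array([0.7, 0.2, 0.1]) + rng.normal(0, 0.3, 5000)
      ens = analog_ensemble(F, O, np.array([0.5, -0.2, 1.0]))
      print(ens.mean(), ens.std())                   # deterministic mean + spread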

  13. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point which has not been visited in the preceding mu steps (the deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (the transient) and a final periodic part of p steps (the attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(mu,d)}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_inf^{(1,d)}(t,p) = [Gamma(1 + I_d^{-1}) Gamma(t + I_d^{-1}) / Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) proportional to p^{-1}]: S_N^{(0,rm)}(t,p) = Gamma(N) / {Gamma[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.
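
    A minimal sketch of the mu = 1 deterministic tourist walk the distribution describes: on N uniform points in the unit d-cube, the walker repeatedly jumps to the nearest point other than its current one, and each trajectory splits into a transient of t steps followed by a period-p attractor (a pair of mutual nearest neighbours, matching the delta_{p,2} factor).

      import numpy as np

      rng = np.random.default_rng(4)
      N, d = 1000, 2
      pts = rng.uniform(size=(N, d))
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      np.fill_diagonal(dist, np.inf)                # forbid staying put (mu = 1)
      nxt = dist.argmin(axis=1)                     # deterministic jump map

      def transient_and_period(start):
          seen, cur = {}, start
          while cur not in seen:
              seen[cur] = len(seen)
              cur = nxt[cur]
          t = seen[cur]                             # steps before entering the cycle
          return t, len(seen) - t                   # (transient t, period p)

      print([transient_and_period(s) for s in range(5)])   # p is always 2 here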

  14. Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.

    PubMed

    Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F

    2017-02-01

    We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.

  15. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    USGS Publications Warehouse

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.

  16. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.

  17. CalTOX (registered trademark), A multimedia total exposure model spreadsheet user's guide. Version 4.0(Beta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, T.E.; Enoch, K.G.

    2002-08-01

    CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous waste sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than as point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.

  18. Application of MUSLE for the prediction of phosphorus losses.

    PubMed

    Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi

    2010-01-01

    Soil erosion in forestlands affects not only land productivity but also downstream water bodies. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its versions. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model reproduced the observed values well. The results of the study confirmed the efficient application of the revised MUSLE to estimating storm-wise P losses in the study area, with a level of agreement beyond 93% and an acceptable estimation error of about 35%.
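
    A minimal sketch of the storm-wise MUSLE calculation underlying the study, in Williams' familiar form; the K, LS, C, and P factors, the storm data, and the phosphorus-enrichment step are invented illustrations, not the paper's calibrated values.

      runoff_volume = 12_000.0      # Q: storm runoff volume, m^3 (invented)
      peak_discharge = 1.8          # q_p: peak flow, m^3/s (invented)
      K, LS, C, P = 0.32, 1.4, 0.09, 1.0            # soil, slope, cover, practice
      sediment_t = 11.8 * (runoff_volume * peak_discharge) ** 0.56 * K * LS * C * P
      p_loss_kg = sediment_t * 1000 * 0.0008 * 1.5  # assumed soil P content x enrichment
      print(f"sediment yield {sediment_t:.1f} t, P loss ~ {p_loss_kg:.1f} kg")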

  19. Evolutionary suicide through a non-catastrophic bifurcation: adaptive dynamics of pathogens with frequency-dependent transmission.

    PubMed

    Boldin, Barbara; Kisdi, Éva

    2016-03-01

    Evolutionary suicide is a riveting phenomenon in which adaptive evolution drives a viable population to extinction. Gyllenberg and Parvinen (Bull Math Biol 63(5):981-993, 2001) showed that, in a wide class of deterministic population models, a discontinuous transition to extinction is a necessary condition for evolutionary suicide. An implicit assumption of their proof is that the invasion fitness of a rare strategy is well-defined also in the extinction state of the population. Epidemic models with frequency-dependent incidence, which are often used to model the spread of sexually transmitted infections or the dynamics of infectious diseases within herds, violate this assumption. In these models, evolutionary suicide can occur through a non-catastrophic bifurcation whereby pathogen adaptation leads to a continuous decline of host (and consequently pathogen) population size to zero. Evolutionary suicide of pathogens with frequency-dependent transmission can occur in two ways, with pathogen strains evolving either higher or lower virulence.

  20. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  1. Real-time flood forecasting

    USGS Publications Warehouse

    Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.

    2009-01-01

    Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model aimed at predicting flows in the Tamsui River Basin. The model was designed on a deterministic approach, with mathematical modeling of complex phenomena and specific parameter values operated on to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of state of the river, and is further related to another typhoon-rainfall model. Geographic information system (GIS) data, based on a precise contour model of the terrain, are used to estimate the regions most vulnerable to flooding. The HIRDT, in response to the project's progress, also applied a deterministic model of unsteady flow hydrodynamics to help river authorities issue timely warnings and take other emergency measures.

  2. Fuzzy linear model for production optimization of mining systems with multiple entities

    NASA Astrophysics Data System (ADS)

    Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar

    2011-12-01

    Planning and production optimization in mining systems with multiple mines or several work sites (entities), using fuzzy linear programming (LP), was studied. LP is among the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while noting that seeking the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
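
    A minimal sketch of the deterministic LP setting the paper reviews: maximize the value of production across five mines subject to a shared processing capacity and per-mine limits. All coefficients are invented; the fuzzy variant would soften the capacity and bounds with membership functions rather than treat them as crisp constraints.

      import numpy as np
      from scipy.optimize import linprog

      value = np.array([4.0, 3.5, 5.0, 4.2, 3.8])   # value per kt from mines 1-5
      A_ub = [[1.0, 1.0, 1.0, 1.0, 1.0]]            # shared processing capacity
      b_ub = [900.0]                                # kt/year
      bounds = [(50.0, 300.0)] * 5                  # per-mine output limits
      res = linprog(-value, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      print(res.x, -res.fun)                        # optimal plan and total value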

  3. Spatio-temporal modelling of rainfall in the Murray-Darling Basin

    NASA Astrophysics Data System (ADS)

    Nowak, Gen; Welsh, A. H.; O'Neill, T. J.; Feng, Lingbing

    2018-02-01

    The Murray-Darling Basin (MDB) is a large geographical region in southeastern Australia that contains many rivers and creeks, including Australia's three longest rivers, the Murray, the Murrumbidgee and the Darling. Understanding rainfall patterns in the MDB is very important due to the significant impact major events such as droughts and floods have on agricultural and resource productivity. We propose a model for modelling a set of monthly rainfall data obtained from stations in the MDB and for producing predictions in both the spatial and temporal dimensions. The model is a hierarchical spatio-temporal model fitted to geographical data that utilises both deterministic and data-derived components. Specifically, rainfall data at a given location are modelled as a linear combination of these deterministic and data-derived components. A key advantage of the model is that it is fitted in a step-by-step fashion, enabling appropriate empirical choices to be made at each step.

  4. Individualism in plant populations: using stochastic differential equations to model individual neighbourhood-dependent plant growth.

    PubMed

    Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W

    2008-08-01

    We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.

  5. Stochastic Watershed Models for Risk Based Decision Making

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters, and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.

  6. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results show that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
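
    A minimal sketch of the normalized forward sensitivity index commonly used in such analyses, Upsilon_p = (p / R0) * (dR0/dp), evaluated by central finite differences. The closed-form R0 below is an invented stand-in, not the paper's Lassa fever expression.

      import numpy as np

      def R0(p):
          beta, gamma, mu, pi = p                   # contact, recovery, death, recruitment
          return beta * pi / (mu * (gamma + mu))    # invented stand-in expression

      base = np.array([0.4, 0.2, 0.02, 1.0])
      for i, name in enumerate(["beta", "gamma", "mu", "pi"]):
          h = 1e-6 * base[i]
          hi, lo = base.copy(), base.copy()
          hi[i] += h; lo[i] -= h
          index = base[i] / R0(base) * (R0(hi) - R0(lo)) / (2 * h)
          print(name, round(index, 3))              # normalized sensitivity index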

  7. A stochastic chemostat model with an inhibitor and noise independent of population sizes

    NASA Astrophysics Data System (ADS)

    Sun, Shulin; Zhang, Xiaolu

    2018-02-01

    In this paper, a stochastic chemostat model with an inhibitor is considered; the inhibitor is input from an external source, and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded when the noise is relatively small. That is, the asymptotic behavior of the stochastic system around the equilibrium points of the deterministic system is studied. However, sufficiently large noise can drive the microorganisms extinct with probability one, even though the solutions of the original deterministic model may be persistent. Finally, the analytical results are illustrated by computer simulations.

  8. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical, well-behaved trends with the inputs and are in good agreement with previous interfacial models.
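
    A minimal sketch of the iterative grow-then-spall bookkeeping such a summation series formalizes: each cycle the scale thickens parabolically, then a constant area fraction of the thickest (oldest) segment spalls to bare metal, and the net weight change combines retained oxygen with cumulative metal loss. All constants are illustrative assumptions, not the paper's fitted values.

      import numpy as np

      kp, Fa, n_cycles, dt = 0.01, 0.02, 500, 1.0   # growth const, spall frac, cycles, h
      fm = 0.53                                     # assumed metal mass fraction of oxide
      segments = [(1.0, 0.0)]                       # (area fraction, oxide wt per area)
      metal_lost = 0.0
      for _ in range(n_cycles):
          segments = [(a, np.sqrt(w * w + kp * dt)) for a, w in segments]  # parabolic growth
          a_max, w_max = max(segments, key=lambda s: s[1])                 # thickest segment
          spalled = Fa * a_max
          metal_lost += spalled * w_max * fm        # metal leaving with the spall
          segments.remove((a_max, w_max))
          segments += [(a_max - spalled, w_max), (spalled, 0.0)]           # bare patch
      net = sum(a * w for a, w in segments) * (1 - fm) - metal_lost        # O gained - metal lost
      print(f"net weight change after {n_cycles} cycles: {net:.3f} mg/cm^2")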

  9. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    NASA Astrophysics Data System (ADS)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250m grid, while the hourly analyses will feature this same 250m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.

  10. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise with arbitrary distributions of jump sizes. The method combines recent developments in two disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. It uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism with a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory that describes a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We accurately model the jump responses of both neuron models to both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.

  11. Kinetics of Thermal Unimolecular Decomposition of Acetic Anhydride: An Integrated Deterministic and Stochastic Model.

    PubMed

    Mai, Tam V-T; Duong, Minh V; Nguyen, Hieu T; Lin, Kuang C; Huynh, Lam K

    2017-04-27

    An integrated deterministic and stochastic model within the master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework was first used to characterize the temperature- and pressure-dependent behavior of the thermal decomposition of acetic anhydride over a wide range of conditions (i.e., 300-1500 K and 0.001-100 atm). In particular, using the potential energy surface and molecular properties obtained from high-level electronic structure calculations at the CCSD(T)/CBS level, macroscopic thermodynamic properties and rate coefficients of the title reaction were derived with corrections for hindered internal rotation and tunneling. Being in excellent agreement with the scattered experimental data, the results from the deterministic and stochastic frameworks confirmed and complemented each other, revealing that the main decomposition pathway proceeds via a 6-membered-ring transition state with a 0 K barrier of 35.2 kcal·mol^-1. This observation was further understood and confirmed by sensitivity analysis of the time-resolved species profiles and the derived rate coefficients with respect to the ab initio barriers. Such agreement suggests the integrated model can be confidently used over a wide range of conditions as a powerful post facto and predictive tool in detailed chemical kinetic modeling and simulation for the title reaction, and thus can be extended to complex chemical reactions.
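
    For a sense of the magnitudes involved, an Arrhenius evaluation built on the quoted barrier can be sketched as follows; the prefactor is an assumed generic value, and the hindered-rotor and tunneling corrections central to the paper are deliberately omitted:

        # Illustrative rate estimate using the reported 0 K barrier as an
        # Arrhenius activation energy; the prefactor A is an assumed generic
        # value, and the paper's hindered-rotor and tunneling corrections are
        # not included.
        import math

        R  = 1.987e-3     # kcal/(mol K)
        Ea = 35.2         # kcal/mol, the 0 K barrier quoted above
        A  = 1.0e13       # 1/s, assumed unimolecular prefactor

        for T in (300.0, 600.0, 1000.0, 1500.0):
            k = A * math.exp(-Ea / (R * T))
            print(f"T = {T:6.0f} K   k ~ {k:.3e} 1/s")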

  12. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from the posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect previously reported posterior distributions from Bayesian calibration in the mouse. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show relatively little difference (<10%) in central tendency and upper-percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for the variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
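
    The overall Monte Carlo workflow can be sketched roughly as follows; everything numerical in the sketch is an invented placeholder, and the one-hit fit is a crude stand-in for the study's dose-response modeling:

        # Heavily simplified sketch of the probabilistic workflow: Monte Carlo
        # draws of PBPK inputs and of tumor incidence, a crude one-hit
        # dose-response fit, and a potency factor 0.1/ED10 per iteration.
        # Every distribution and number below is an invented placeholder.
        import numpy as np

        rng = np.random.default_rng(0)
        n_iter, n_animals = 12_500, 50                 # group size assumed
        p_obs = np.array([0.3, 0.6])                   # tumor fractions (assumed)
        potencies = np.empty(n_iter)

        for i in range(n_iter):
            # dose metric per treatment group from lognormal "PBPK" draws
            dose = rng.lognormal(mean=np.log([10.0, 30.0]), sigma=0.2)
            tumors = rng.binomial(n_animals, p_obs) / n_animals
            tumors = np.clip(tumors, 0.02, 0.98)       # keep the log finite
            b = np.mean(-np.log(1.0 - tumors) / dose)  # one-hit: P = 1 - exp(-b d)
            ed10 = -np.log(1.0 - 0.10) / b             # dose giving 10% extra risk
            potencies[i] = 0.1 / ed10

        print(np.mean(potencies), np.percentile(potencies, 95))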

  13. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
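
    A minimal sketch of the deterministic-adaptation step, under invented assumptions: a toy process, a small tanh network, and ordinary least-squares gradient descent rather than the record's scaled-equation-error technique:

        # Minimal numpy sketch of the hybrid neuro-analytic idea: a fixed
        # analytic model carries the known process physics, and a small neural
        # network is trained on the residuals to absorb unknown behavior. The
        # process, network size, and training scheme are invented placeholders.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 2.0, 200)[:, None]
        truth = 1.5 * x + 0.3 * np.sin(4.0 * x)   # "real" process (assumed)
        analytic = 1.5 * x                        # known analytic part
        resid = truth - analytic                  # what the network must learn

        H = 16                                    # one hidden tanh layer
        W1 = rng.normal(0.0, 2.0, (1, H)); b1 = np.zeros(H)
        W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
        lr = 0.05
        for step in range(5000):                  # full-batch gradient descent
            h = np.tanh(x @ W1 + b1)
            err = (h @ W2 + b2) - resid
            dW2 = h.T @ err / len(x); db2 = err.mean(0)
            dh = (err @ W2.T) * (1.0 - h * h)     # backprop through tanh
            dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        hybrid = analytic + np.tanh(x @ W1 + b1) @ W2 + b2
        print("RMS error of hybrid model:", float(np.sqrt(np.mean((hybrid - truth) ** 2))))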

  14. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

    PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform both deterministic and probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure for creating input files and updating/modifying the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.

  15. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border, in particular quantum measurement. Although containing nonlinear dynamics, and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness, instead of being universally present, emerges in the measurement process from the deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern-Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint, skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence plays no role in the scenario. Through stochastic modelling, based on Pearle's 'gambler's ruin' scheme, outcome probabilities are shown to obey Born's rule under a no-drift or 'fair-game' condition. This fully reproduces the quantum statistical predictions, implying that the proposed nonlinear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.
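
    The gambler's-ruin mechanism invoked here admits a compact toy illustration: a fair (zero-drift) random walk of an outcome weight between absorbing barriers is absorbed at 1 with probability equal to its initial value, which is exactly the Born-rule frequency. The sketch below is generic, not the paper's dynamics:

        # Toy illustration of the 'gambler's ruin' route to Born's rule: the
        # weight of one outcome performs a fair random walk between absorbing
        # barriers at 0 and 1 and ends at 1 with probability equal to its
        # initial value (a standard martingale result).
        import random

        def collapse(p0, step=0.05):
            k = round(p0 / step)                 # weight position on an integer grid
            n = round(1.0 / step)
            while 0 < k < n:
                k += 1 if random.random() < 0.5 else -1   # fair game: zero drift
            return k == n                        # absorbed at weight 1?

        random.seed(42)
        p0, trials = 0.3, 10_000
        wins = sum(collapse(p0) for _ in range(trials))
        print(wins / trials)   # ~0.3: outcome frequency matches the initial weight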

  16. Toward a Deterministic Model of Planetary Formation. IV. Effects of Type I Migration

    NASA Astrophysics Data System (ADS)

    Ida, S.; Lin, D. N. C.

    2008-01-01

    In a further development of a deterministic planet formation model (Ida & Lin), we consider the effect of type I migration of protoplanetary embryos due to their tidal interaction with their nascent disks. During the early phase of protostellar disks, although embryos rapidly emerge in regions interior to the ice line, uninhibited type I migration leads to their efficient self-clearing. But embryos continue to form from residual planetesimals, repeatedly migrate inward, and provide a main channel of heavy-element accretion onto their host stars. During the advanced stages of disk evolution (a few Myr), the gas surface density declines to values comparable to or smaller than that of the minimum mass nebula model, and type I migration is no longer effective for Mars-mass embryos. Over wide ranges of initial disk surface densities and type I migration efficiencies, the surviving population of embryos interior to the ice line has a total mass of several M⊕. With this reservoir, there is an adequate inventory of residual embryos to subsequently assemble into rocky planets similar to those around the Sun. However, the onset of efficient gas accretion requires the emergence and retention of cores more massive than a few M⊕ prior to the severe depletion of the disk gas. The formation probability of gas giant planets and hence the predicted mass and semimajor axis distributions of extrasolar gas giants are sensitively determined by the strength of type I migration. We suggest that the distributions consistent with observations can be reproduced only if the actual type I migration timescale is at least an order of magnitude longer than that deduced from linear theories.

  17. Assessment of SWE data assimilation for ensemble streamflow predictions

    NASA Astrophysics Data System (ADS)

    Franz, Kristie J.; Hogue, Terri S.; Barik, Muhammad; He, Minxue

    2014-11-01

    An assessment of data assimilation (DA) for Ensemble Streamflow Prediction (ESP) is undertaken using seasonal water supply hindcasting in the North Fork of the American River Basin (NFARB) and the National Weather Service (NWS) hydrologic forecast models. Two parameter sets, one from the California Nevada River Forecast Center (RFC) and one from the Differential Evolution Adaptive Metropolis (DREAM) algorithm, are tested. For each parameter set, hindcasts are generated using initial conditions derived with and without the inclusion of a DA scheme that integrates snow water equivalent (SWE) observations. The DREAM-DA scenario uses an Integrated Uncertainty and Ensemble-based data Assimilation (ICEA) framework that also considers model and parameter uncertainty. Hindcasts are evaluated using deterministic and probabilistic forecast verification metrics. In general, the impact of DA on the skill of the seasonal water supply predictions is mixed. For deterministic (ensemble mean) predictions, the Percent Bias (PBias) is improved with integration of the DA. DREAM-DA and RFC-DA have the lowest biases, and RFC-DA has the lowest Root Mean Squared Error (RMSE); however, the RFC and DREAM-DA have similar RMSE scores. For the probabilistic predictions, the RFC and DREAM have the highest Continuous Ranked Probability Skill Scores (CRPSS), and the RFC has the best discrimination for low flows. Reliability results are similar between the non-DA and DA tests, and DREAM and DREAM-DA have better reliability than the RFC and RFC-DA for forecast dates of February 1 and later. Although the DA method tested produced improved streamflow simulations in previous studies, the hindcast analysis suggests that it may not result in obvious improvements in streamflow forecasts. We advocate that integrating hindcasting and probabilistic metrics provides more rigorous insight into model performance for forecasting applications such as this study.
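
    For reference, the sample-based CRPS estimator that underlies this kind of probabilistic verification takes only a few lines; the ensemble values below are invented stand-ins for seasonal water-supply volumes:

        # Standard sample-based estimator of the continuous ranked probability
        # score (CRPS) for an ensemble forecast against a scalar observation:
        # CRPS = E|X - y| - 0.5 E|X - X'|; lower is better.
        import numpy as np

        def crps_ensemble(members, obs):
            members = np.asarray(members, dtype=float)
            term1 = np.abs(members - obs).mean()
            term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
            return term1 - term2

        ens = np.random.default_rng(0).normal(100.0, 15.0, size=50)  # invented ensemble
        print(crps_ensemble(ens, obs=110.0))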

  18. DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING

    EPA Science Inventory

    A deterministic model was used to estimate the dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...

  19. Mesoscopic and continuum modelling of angiogenesis

    PubMed Central

    Spill, F.; Guerrero, P.; Alarcon, T.; Maini, P. K.; Byrne, H. M.

    2016-01-01

    Angiogenesis is the formation of new blood vessels from pre-existing ones in response to chemical signals secreted by, for example, a wound or a tumour. In this paper, we propose a mesoscopic lattice-based model of angiogenesis, in which processes including proliferation and cell movement are treated as stochastic events. By studying the dependence of the model on the lattice spacing and the number of cells involved, we are able to derive the deterministic continuum limit of our equations and compare it to similar existing models of angiogenesis. We further identify conditions under which the use of continuum models is justified, and others for which stochastic or discrete effects dominate. We also compare different stochastic models for the movement of endothelial tip cells which have the same macroscopic, deterministic behaviour but lead to markedly different behaviour in terms of the production of new vessel cells. PMID:24615007
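
    The continuum-limit point can be illustrated with a much simpler stochastic process than the angiogenesis model itself; the following generic sketch shows a logistic birth-death process approaching its deterministic limit as the number of cells grows:

        # Generic illustration (not the paper's angiogenesis model) of a
        # stochastic cell-level model approaching its deterministic continuum
        # limit as cell numbers grow: a logistic birth-death process simulated
        # with the Gillespie algorithm settles near the ODE equilibrium K, with
        # fluctuations shrinking roughly like 1/sqrt(K).
        import random

        def gillespie_logistic(n0, K, b=1.0, d=0.1, t_end=5.0):
            t, n = 0.0, n0
            while t < t_end and n > 0:
                birth = b * n
                death = d * n + (b - d) * n * n / K   # crowding gives capacity K
                total = birth + death
                t += random.expovariate(total)        # time to next event
                n += 1 if random.random() < birth / total else -1
            return n

        random.seed(0)
        for K in (50, 500, 2000):
            runs = [gillespie_logistic(n0=K // 10, K=K) for _ in range(100)]
            mean = sum(runs) / len(runs)
            print(K, round(mean / K, 3))   # ratio tends to 1 as K grows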

  20. The role of adaptations in two-strain competition for sylvatic Trypanosoma cruzi transmission.

    PubMed

    Kribs-Zaleta, Christopher M; Mubayi, Anuj

    2012-01-01

    This study presents a continuous-time model for the sylvatic transmission dynamics of two strains of Trypanosoma cruzi enzootic in North America, in order to study the role that adaptations of each strain to distinct modes of transmission (classical stercorarian transmission on the one hand, and vertical and oral transmission on the other) may play in the competition between the two strains. A deterministic model incorporating contact process saturation predicts competitive exclusion, and reproductive numbers for the infection provide a framework for evaluating the competition in terms of an adaptive trade-off between distinct transmission modes. Results highlight the importance of oral transmission in mediating the competition between horizontal (stercorarian) and vertical transmission; its presence as a competing contact process favors vertical transmission even without adaptation to oral transmission, but such adaptation appears necessary to explain the persistence of (vertically adapted) T. cruzi IV in raccoons and woodrats in the southeastern United States.

  1. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states reward rate 0. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
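
    As a minimal concrete example, a three-state availability model with the reward structure described above can be solved as follows; the failure and repair rates are illustrative:

        # Small sketch of a Markov reward model: a three-state availability
        # model with CTMC generator Q, reward rate 1 in up states and 0 when
        # down. The expected steady-state reward is sum_i pi_i * r_i, where pi
        # solves pi Q = 0 with sum(pi) = 1.
        import numpy as np

        lam, mu = 0.01, 1.0                       # failure / repair rates (assumed)
        # states: 0 = both units up, 1 = one unit up, 2 = system down
        Q = np.array([[-2 * lam,      2 * lam,  0.0],
                      [      mu, -(mu + lam),   lam],
                      [     0.0,          mu,   -mu]])
        r = np.array([1.0, 1.0, 0.0])             # reward rates per state

        A = np.vstack([Q.T, np.ones(3)])          # stationarity plus normalization
        rhs = np.array([0.0, 0.0, 0.0, 1.0])
        pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
        print("pi =", pi.round(6), "  steady-state reward:", float(pi @ r))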

  2. Hidden order in crackling noise during peeling of an adhesive tape.

    PubMed

    Kumar, Jagadish; Ciccotti, M; Ananthakrishna, G

    2008-04-01

    We address the longstanding problem of recovering dynamical information from noisy acoustic emission signals arising from the peeling of an adhesive tape subject to constant traction velocity. Using the phase space reconstruction procedure, we demonstrate deterministic chaotic dynamics by establishing the existence of a correlation dimension as well as a positive Lyapunov exponent in a midrange of traction velocities. The results are explained on the basis of a model that also emphasizes the deterministic origin of acoustic emission by clarifying its connection to stick-slip dynamics.

  3. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  4. Continuous generation and stabilization of mesoscopic field superposition states in a quantum circuit

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Leghtas, Zaki; Stone, A. Douglas; Devoret, Michel; Mirrahimi, Mazyar

    2015-01-01

    While dissipation is widely considered to be harmful for quantum coherence, it can, when properly engineered, lead to the stabilization of nontrivial pure quantum states. We propose a scheme for continuous generation and stabilization of Schrödinger cat states in a cavity using dissipation engineering. We first generate nonclassical photon states with definite parity by means of a two-photon drive and dissipation, and then stabilize these transient states against single-photon decay. The single-photon stabilization is autonomous, and is implemented through a second engineered bath, which exploits the photon-number-dependent frequency splitting due to Kerr interactions in the strongly dispersive regime of circuit QED. Starting with the Hamiltonian of the baths plus cavity, we derive an effective model of only the cavity photon states along with analytic expressions for relevant physical quantities, such as the stabilization rate. The deterministic generation of such cat states is one of the key ingredients in performing universal quantum computation.

  5. Combining Particle Filters and Consistency-Based Approaches for Monitoring and Diagnosis of Stochastic Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

    2004-01-01

    Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and its models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.

  6. Symmetry aspects in emergent quantum mechanics

    NASA Astrophysics Data System (ADS)

    Elze, Hans-Thomas

    2009-06-01

    We discuss an explicit realization of the dissipative dynamics anticipated in the proof of 't Hooft's existence theorem, which states that 'for any quantum system there exists at least one deterministic model that reproduces all its dynamics after prequantization'. There is an energy-parity symmetry hidden in the Liouville equation, which mimics the Kaplan-Sundrum protective symmetry for the cosmological constant. This symmetry may be broken by the coarse-graining inherent in physics at scales much larger than the Planck length. We correspondingly modify classical ensemble theory by incorporating dissipative fluctuations (information loss), which are caused by discrete spacetime continually 'measuring' matter. In this way, aspects of quantum mechanics, such as the von Neumann equation including a Lindblad term, arise dynamically, and expectations of observables agree with the Born rule. However, the resulting quantum coherence is accompanied by an intrinsic decoherence and continuous localization mechanism. Our proposal leads towards a theory that is linear and local at the quantum mechanical level, but the relation to the underlying classical degrees of freedom is nonlocal.

  7. Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria

    NASA Astrophysics Data System (ADS)

    Joo, Jaewook; Eric, Harvill; Albert, Reka

    2007-03-01

    Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we showed in [Joo et al. 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters, such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become prominently large when the phage are more pathological to the invading bacterial strain.

  8. Analysis of stochastic model for non-linear volcanic dynamics

    NASA Astrophysics Data System (ADS)

    Alexandrov, D.; Bashkirtseva, I.; Ryashko, L.

    2014-12-01

    Motivated by important geophysical applications, we consider a dynamic model of the magma-plug system previously derived by Iverson et al. (2006) under the influence of stochastic forcing. Due to the strong nonlinearity of the friction force for the solid plug along its margins, the initial deterministic system exhibits impulsive oscillations. Two types of dynamic behavior of the system under parametric stochastic forcing have been found: random trajectories are either scattered on both sides of the deterministic cycle or grouped on its internal side only. It is shown that dispersions are highly inhomogeneous along cycles in the presence of noise. The effects of noise-induced shifts, pressure stabilization and localization of random trajectories have been revealed with increasing noise intensity. The plug velocity, pressure and displacement are highly dependent on the noise intensity as well. These new stochastic phenomena are related to the nonlinear peculiarities of the deterministic phase portrait. It is demonstrated that the repetitive stick-slip motions of the magma-plug system in the case of stochastic forcing can be connected with drumbeat earthquakes.

  9. Modelling the interaction between flooding events and economic growth

    NASA Astrophysics Data System (ADS)

    Grames, J.; Prskawetz, A.; Grass, D.; Blöschl, G.

    2015-06-01

    Socio-hydrology describes the interaction between the socio-economy and water. Recent models analyze the interplay of community risk-coping culture, flooding damage and economic growth (Di Baldassarre et al., 2013; Viglione et al., 2014). These models descriptively explain the feedbacks between socio-economic development and natural disasters such as floods. In contrast to these descriptive models, our approach develops an optimization model in which the intertemporal decisions of an economic agent interact with the hydrological system. In order to build this first economic growth model describing the interaction between the consumption and investment decisions of an economic agent and the occurrence of flooding events, we transform an existing descriptive stochastic model into an optimal deterministic model. The intermediate step is to formulate and simulate a descriptive deterministic model. We develop a periodic water function to approximate the former discrete stochastic time series of rainfall events. Due to the non-autonomous, exogenous periodic rainfall function, the long-term path of consumption and investment will be periodic.

  10. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed with both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to the different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
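
    The essential difference between the schemes can be caricatured with a toy scalar model, where the same analysis increment is inserted either in one jolt or spread across an update window; the model and window placements below are toys, not the ocean model configuration:

        # Schematic of the incremental analysis update idea: rather than adding
        # the analysis increment all at once, distribute it over an update
        # window during the model integration.
        def integrate(x0, increment, n_steps=100, window=(0, 50), dt=0.1):
            x, (w0, w1) = x0, window
            per_step = increment / (w1 - w0)
            for k in range(n_steps):
                x += -0.1 * x * dt              # toy model tendency (linear decay)
                if w0 <= k < w1:
                    x += per_step               # gentle increment insertion
            return x

        jolt   = integrate(1.0, 0.5, window=(0, 1))     # IAU 0-like: all at once
        spread = integrate(1.0, 0.5, window=(0, 100))   # IAU 100-like: full window
        print(jolt, spread)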

  11. An operational real-time flood forecasting system in Southern Italy

    NASA Astrophysics Data System (ADS)

    Ortiz, Enrique; Coccia, Gabriele; Todini, Ezio

    2015-04-01

    A real-time flood forecasting system has been operating since 2012 as a non-structural measure for mitigating flood risk in the Campania Region (Southern Italy), within the Sele river basin (3,240 km2). The Sele Flood Forecasting System (SFFS) has been built within the FEWS (Flood Early Warning System) platform developed by Deltares, and it assimilates the numerical weather predictions of the COSMO LAM family: the deterministic COSMO-LAMI I2, the deterministic COSMO-LAMI I7 and the ensemble numerical weather predictions of COSMO-LEPS (16 members). The Sele FFS is composed of a cascade of three main models. The first is a fully continuous, physically based, distributed hydrological model named TOPKAPI-eXtended (Idrologia&Ambiente s.r.l., Naples, Italy), simulating the dominant processes controlling soil water dynamics, runoff generation and discharge at a spatial resolution of 250 m. The second is a set of artificial neural networks (ANNs) built to forecast river stages at a set of monitored cross-sections. The third is a Model Conditional Processor (MCP), which provides the predictive uncertainty (i.e., the probability of occurrence of a future flood event) within the framework of a multi-temporal forecast, according to the most recent advances on this topic (Coccia and Todini, HESS, 2011). The MCP provides information about the probability of exceeding a maximum river stage within the forecast lead time, by means of a discrete time function representing the variation of the cumulative probability of exceeding a river stage during the forecast lead time and the distribution of the time of occurrence of the flood peak, starting from one or more model forecasts. This work shows the performance of the Sele FFS after two years of operation, demonstrating the added value it can provide to a flood early warning and emergency management system.

  12. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, as a means of approximating the parameters that can mimic the overall system behavior. Here, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity, and the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set in order to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel; this model exhibits a high level of ill-posedness in the mathematical problem. Minimizing the objective function with the candidate optimization methods reveals a failure of some of the gradient-based methods, such as Newton conjugate gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, truncated Newton conjugate gradient, and trust-region Newton conjugate gradient. Still others, such as Levenberg-Marquardt and least-squares root methods, yield parameter solutions outside the physical limits. Moreover, there is a significant computational demand for evolutionary and stochastic global optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming (SLSQP) method and the stochastic Bayesian inference approach produce the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
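
    The calibration setup described here, synthetic measurements generated from known parameters plus a bounded SLSQP minimization, can be sketched with SciPy as follows; the forward model is a simple stand-in, not the hydro-morphological code:

        # Hedged sketch of automated calibration against synthetic measurements:
        # known "true" parameters generate the data, guaranteeing a solution
        # exists, and SciPy's SLSQP minimizes the squared-error objective under
        # physical bounds. The forward model here is an invented stand-in.
        import numpy as np
        from scipy.optimize import minimize

        def forward(params, x):
            """Toy stand-in for the hydro-morphological model."""
            roughness, slope = params
            return roughness * np.sqrt(x) + slope * x

        x = np.linspace(1.0, 10.0, 30)
        true_params = np.array([0.03, 0.8])
        synthetic = forward(true_params, x)            # replaces observed data

        objective = lambda p: np.sum((forward(p, x) - synthetic) ** 2)
        result = minimize(objective, x0=[0.1, 0.1], method="SLSQP",
                          bounds=[(0.001, 1.0), (0.0, 2.0)])   # physical limits
        print(result.x, result.fun)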

  13. Climate change threatens polar bear populations: a stochastic demographic analysis.

    PubMed

    Hunter, Christine M; Caswell, Hal; Runge, Michael C; Regehr, Eric V; Amstrup, Steve C; Stirling, Ian

    2010-10-01

    The polar bear (Ursus maritimus) depends on sea ice for feeding, breeding, and movement. Significant reductions in Arctic sea ice are forecast to continue because of climate warming. We evaluated the impacts of climate change on polar bears in the southern Beaufort Sea by means of a demographic analysis, combining deterministic, stochastic, environment-dependent matrix population models with forecasts of future sea ice conditions from IPCC general circulation models (GCMs). The matrix population models classified individuals by age and breeding status; mothers and dependent cubs were treated as units. Parameter estimates were obtained from a capture-recapture study conducted from 2001 to 2006. Candidate statistical models allowed vital rates to vary with time and as functions of a sea ice covariate. Model averaging was used to produce the vital rate estimates, and a parametric bootstrap procedure was used to quantify model selection and parameter estimation uncertainty. Deterministic models projected population growth in years with more extensive ice coverage (2001-2003) and population decline in years with less ice coverage (2004-2005). LTRE (life table response experiment) analysis showed that the reduction in λ in years with low sea ice was due primarily to reduced adult female survival, and secondarily to reduced breeding. A stochastic model with two environmental states, good and poor sea ice conditions, projected a declining stochastic growth rate, log λs, as the frequency of poor ice years increased. The observed frequency of poor ice years since 1979 would imply log λs ≈ -0.01, which agrees with available (albeit crude) observations of population size. The stochastic model was linked to a set of 10 GCMs compiled by the IPCC; the models were chosen for their ability to reproduce historical observations of sea ice and were forced with "business as usual" (A1B) greenhouse gas emissions. The resulting stochastic population projections showed drastic declines in the polar bear population by the end of the 21st century. These projections were instrumental in the decision to list the polar bear as a threatened species under the U.S. Endangered Species Act.
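
    Both growth-rate computations named above are short in code; the sketch below uses invented projection matrices, not the estimated polar bear vital rates:

        # Sketch of the two growth-rate analyses: deterministic lambda as the
        # dominant eigenvalue of a stage-structured projection matrix, and the
        # stochastic growth rate log lambda_s estimated from a random sequence
        # of good/poor-ice-year matrices. Matrices are invented placeholders.
        import numpy as np

        A_good = np.array([[0.00, 0.40, 0.90],
                           [0.55, 0.00, 0.00],
                           [0.00, 0.70, 0.93]])
        A_poor = 0.6 * A_good            # poor ice year: all vital rates reduced

        lam = max(np.linalg.eigvals(A_good).real)
        print("deterministic lambda, good years:", round(lam, 3))

        rng = np.random.default_rng(0)
        T = 5000
        for f_poor in (0.1, 0.5, 0.9):   # frequency of poor ice years
            n, log_growth = np.ones(3), 0.0
            for _ in range(T):
                A = A_poor if rng.random() < f_poor else A_good
                n = A @ n
                s = n.sum()
                log_growth += np.log(s)
                n /= s                    # renormalize to avoid under/overflow
            print(f"poor-ice frequency {f_poor:.1f}: log lambda_s ~ {log_growth / T:+.3f}")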

  14. Climate change threatens polar bear populations: A stochastic demographic analysis

    USGS Publications Warehouse

    Hunter, C.M.; Caswell, H.; Runge, M.C.; Regehr, E.V.; Amstrup, Steven C.; Stirling, I.

    2010-01-01

    The polar bear (Ursus maritimus) depends on sea ice for feeding, breeding, and movement. Significant reductions in Arctic sea ice are forecast to continue because of climate warming. We evaluated the impacts of climate change on polar bears in the southern Beaufort Sea by means of a demographic analysis, combining deterministic, stochastic, environment-dependent matrix population models with forecasts of future sea ice conditions from IPCC general circulation models (GCMs). The matrix population models classified individuals by age and breeding status; mothers and dependent cubs were treated as units. Parameter estimates were obtained from a capture-recapture study conducted from 2001 to 2006. Candidate statistical models allowed vital rates to vary with time and as functions of a sea ice covariate. Model averaging was used to produce the vital rate estimates, and a parametric bootstrap procedure was used to quantify model selection and parameter estimation uncertainty. Deterministic models projected population growth in years with more extensive ice coverage (2001-2003) and population decline in years with less ice coverage (2004-2005). LTRE (life table response experiment) analysis showed that the reduction in λ in years with low sea ice was due primarily to reduced adult female survival, and secondarily to reduced breeding. A stochastic model with two environmental states, good and poor sea ice conditions, projected a declining stochastic growth rate, log λs, as the frequency of poor ice years increased. The observed frequency of poor ice years since 1979 would imply log λs ≈ -0.01, which agrees with available (albeit crude) observations of population size. The stochastic model was linked to a set of 10 GCMs compiled by the IPCC; the models were chosen for their ability to reproduce historical observations of sea ice and were forced with "business as usual" (A1B) greenhouse gas emissions. The resulting stochastic population projections showed drastic declines in the polar bear population by the end of the 21st century. These projections were instrumental in the decision to list the polar bear as a threatened species under the U.S. Endangered Species Act. © 2010 by the Ecological Society of America.

  15. About the discrete-continuous nature of a hematopoiesis model for Chronic Myeloid Leukemia.

    PubMed

    Gaudiano, Marcos E; Lenaerts, Tom; Pacheco, Jorge M

    2016-12-01

    Blood of mammals is composed of a variety of cells suspended in a fluid medium known as plasma. Hematopoiesis is the biological process of birth, replication and differentiation of blood cells. Despite being essentially a stochastic phenomenon involving a huge number of discrete entities, blood formation naturally has an associated continuous dynamics, because the cellular populations can, on average, easily be described by, e.g., differential equations. This deterministic dynamics by no means captures some important stochastic aspects of abnormal hematopoiesis that are especially significant for studying certain blood cancer diseases. For instance, by mere stochastic competition against the normal cells, leukemic cells sometimes do not reach the population threshold needed to kill the organism. Of course, a purely discrete model able to follow the stochastic paths of billions of cells is computationally impossible. In order to avoid this difficulty, we seek a trade-off between the computationally feasible and the biologically realistic, deriving an equation able to conveniently size both the discrete and continuous parts of a model of hematopoiesis in terrestrial mammals, in the context of Chronic Myeloid Leukemia. Assuming the cancer originates from a single stem cell inside the bone marrow, we also deduce a theoretical formula for the probability of non-diagnosis as a function of the average adult mass of the mammal. In addition, the cellular dynamics analysis in this work may shed light on Peto's paradox, which appears here as an emergent property of the discrete-continuous nature of the system. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Modeling small cell lung cancer (SCLC) biology through deterministic and stochastic mathematical models.

    PubMed

    Salgia, Ravi; Mambetsariev, Isa; Hewelt, Blake; Achuthan, Srisairam; Li, Haiqing; Poroyko, Valeriy; Wang, Yingyu; Sattler, Martin

    2018-05-25

    Mathematical cancer models are immensely powerful tools that are based in part on the fractal nature of biological structures, such as the geometry of the lung. Cancers of the lung provide an opportune model for developing and applying algorithms that capture changes and disease phenotypes. We reviewed mathematical models that have been developed for the biological sciences and applied them in the context of small cell lung cancer (SCLC) growth, mutational heterogeneity, and mechanisms of metastasis. The ultimate goal is to characterize the stochastic and deterministic nature of this disease, to link this comprehensive set of tools back to its fractal nature, and to provide a platform for accurate biomarker development. These techniques may be particularly useful in drug development research, for example in combination with existing omics approaches. The integration of these tools will be important for further understanding the biology of SCLC and ultimately developing novel therapeutics.

  17. A stochastic spatiotemporal model of a response-regulator network in the Caulobacter crescentus cell cycle

    NASA Astrophysics Data System (ADS)

    Li, Fei; Subramanian, Kartik; Chen, Minghan; Tyson, John J.; Cao, Yang

    2016-06-01

    The asymmetric cell division cycle in Caulobacter crescentus is controlled by an elaborate molecular mechanism governing the production, activation and spatial localization of a host of interacting proteins. In previous work, we proposed a deterministic mathematical model for the spatiotemporal dynamics of six major regulatory proteins. In this paper, we study a stochastic version of the model, which takes into account molecular fluctuations of these regulatory proteins in space and time during the early stages of the cell cycle of wild-type Caulobacter cells. We test the stochastic model against experimental observations of increased variability of cycle time in cells depleted of the divJ gene product. The deterministic model predicts that overexpression of the divK gene blocks cell cycle progression in the stalked stage; however, stochastic simulations suggest that a small fraction of the mutant cells do complete the cell cycle normally.

  18. Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques

    NASA Astrophysics Data System (ADS)

    Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.

    2017-11-01

    Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.

  19. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis

    Verification and validation of our solutions for calculating the neutron reactivity of nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g., the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers, and the general conclusion is that when the problems are well posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data and are therefore not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions: multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances can occur in the cross sections that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect whereby the average cross section in groups with strong resonances can be strongly altered, as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
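
    The effect is easy to reproduce numerically; the sketch below compares a flat-flux group average with a narrow-resonance-weighted one for an invented Lorentzian resonance:

        # Numerical illustration of self-shielding: the multigroup average of a
        # resonant cross section depends on the flux used to weight it. With a
        # narrow-resonance flux shape phi(E) ~ 1/(E*sigma_t(E)), the flux dip
        # inside the resonance pulls the group average far below the flat-flux
        # value. The resonance parameters here are invented for illustration.
        import numpy as np

        E = np.linspace(1.0, 100.0, 200_000)     # eV, treated as one group
        sigma = 10.0 + 5000.0 / (1.0 + ((E - 50.0) / 0.5) ** 2)  # barns

        flat_avg = sigma.mean()                  # unweighted (flat-flux) average
        phi = 1.0 / (E * sigma)                  # narrow-resonance flux shape
        shielded_avg = (sigma * phi).sum() / phi.sum()
        print(f"flat-flux average: {flat_avg:.1f} b, self-shielded: {shielded_avg:.1f} b")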

  20. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops

    NASA Astrophysics Data System (ADS)

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  1. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops.

    PubMed

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  2. Deterministic separation of cancer cells from blood at 10 mL/min

    NASA Astrophysics Data System (ADS)

    Loutherback, Kevin; D'Silva, Joseph; Liu, Liyu; Wu, Amy; Austin, Robert H.; Sturm, James C.

    2012-12-01

    Circulating tumor cells (CTCs) and circulating clusters of cancer and stromal cells have been identified in the blood of patients with malignant cancer and can be used as a diagnostic for disease severity, to assess the efficacy of different treatment strategies, and possibly to determine the eventual location of metastatic invasions for possible treatment. There is thus a critical need to isolate, propagate and characterize viable CTCs and clusters of cancer cells with their associated stromal cells. Here, we present a microfluidic device for continuous-flow capture of viable CTCs from blood at mL/min flow rates using deterministic lateral displacement (DLD) arrays. We show that a DLD array device can isolate CTCs from blood with a capture efficiency greater than 85% at volumetric flow rates of up to 10 mL/min with no effect on cell viability.
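
    For context, the critical size separating displaced from zigzagging particles in a DLD array is commonly estimated with an empirical fit attributed to Davis; the geometry values below are illustrative only, not those of the reported device:

        # Common empirical fit (attributed to Davis) for the critical diameter
        # of a deterministic lateral displacement array: Dc = 1.4 * g * eps**0.48,
        # with g the post gap and eps the row-shift fraction. Particles larger
        # than Dc are displaced along the post rows; smaller ones zigzag through.
        gap = 40.0         # um, post gap (assumed)
        eps = 1.0 / 20.0   # row shift fraction (assumed)

        Dc = 1.4 * gap * eps ** 0.48
        print(f"critical diameter ~ {Dc:.1f} um")   # CTC-sized cells would be displaced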

  3. Amplification of intrinsic fluctuations by the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Fox, Ronald F.; Elston, T. C.

    1993-07-01

    Macroscopic systems (e.g., hydrodynamics, chemical reactions, electrical circuits, etc.) manifest intrinsic fluctuations of molecular and thermal origin. When the macroscopic dynamics is deterministically chaotic, the intrinsic fluctuations may become amplified by several orders of magnitude. Numerical studies of this phenomenon are presented in detail for the Lorenz model. Amplification to macroscopic scales is exhibited, and quantitative methods (binning and a difference norm) are presented for measuring macroscopically subliminal amplification effects. In order to test the quality of the numerical results, noise-induced chaos is studied around a deterministically nonchaotic state, where the scaling law relating the Lyapunov exponent to noise strength obtained for maps is confirmed for the Lorenz model, a system of ordinary differential equations.
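
    The basic numerical experiment is easy to reproduce in sketch form: integrate two copies of the Lorenz system, inject a tiny noise into one, and track their separation. The noise strength below is an assumed stand-in for molecular/thermal fluctuations:

        # Sketch of intrinsic-fluctuation amplification: two copies of the
        # Lorenz system start identically; one receives a tiny additive noise
        # (Euler-Maruyama step), and the chaotic dynamics amplifies the
        # microscopic difference to the size of the attractor.
        import numpy as np

        def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = v
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        rng = np.random.default_rng(0)
        dt, eps = 1e-3, 1e-10                    # step size and noise strength
        a = b = np.array([1.0, 1.0, 1.0])
        for step in range(1, 40_001):            # 40 time units, forward Euler
            a = a + lorenz(a) * dt
            b = b + lorenz(b) * dt + eps * np.sqrt(dt) * rng.normal(size=3)
            if step % 8000 == 0:
                print(f"t = {step * dt:5.1f}   |a - b| = {np.linalg.norm(a - b):.3e}")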

  4. Uniqueness of Nash equilibrium in vaccination games.

    PubMed

    Bai, Fan

    2016-12-01

    One crucial condition for the uniqueness of the Nash equilibrium set in vaccination games is that the attack ratio decreases monotonically as the vaccine coverage level increases. We consider several deterministic vaccination models in homogeneously mixing populations and in heterogeneously mixing populations. Based on the final size relations obtained from the deterministic epidemic models, we prove that the attack ratios can be expressed in terms of the vaccine coverage levels, and that the attack ratios are decreasing functions of the vaccine coverage levels. Some thresholds are presented, which depend on the vaccine efficacy. It is proved that for vaccination games in a homogeneously mixing population, there is a unique Nash equilibrium for each game.
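
    The monotonicity condition can be checked numerically from the standard homogeneous final-size relation; the sketch below uses an assumed R0 and vaccine efficacy and is not the paper's heterogeneous-mixing model:

        # Generic final-size calculation behind this kind of argument: with an
        # all-or-nothing vaccine of efficacy e and coverage p, the infected
        # fraction z solves z = (1 - e*p) * (1 - exp(-R0*z)), and the attack
        # ratio among the susceptible, 1 - exp(-R0*z), falls as coverage rises.
        import math

        def final_size(R0, p, e, tol=1e-12):
            s0 = 1.0 - e * p                     # effectively susceptible fraction
            z = s0                               # start at the upper bound
            while True:                          # monotone fixed-point iteration
                z_new = s0 * (1.0 - math.exp(-R0 * z))
                if abs(z_new - z) < tol:
                    return z_new
                z = z_new

        R0, e = 2.5, 0.8                         # assumed illustrative values
        for p in (0.0, 0.2, 0.4, 0.6, 0.8):
            z = final_size(R0, p, e)
            print(f"coverage {p:.1f}: attack ratio {1.0 - math.exp(-R0 * z):.3f}")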

  5. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  6. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffness reductions of 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton (Registered Trademark) HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  7. INTEGRATED PLANNING MODEL - EPA APPLICATIONS

    EPA Science Inventory

    The Integrated Planning Model (IPM) is a multi-regional, dynamic, deterministic linear programming (LP) model of the electric power sector in the continental lower 48 states and the District of Columbia. It provides forecasts up to year 2050 of least-cost capacity expansion, elec...

  8. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  9. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself, and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty, using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined compared to real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal the details and variability hidden in current deterministic and binarized microstate assignment and analyses.
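
    A minimal numpy sketch of plain Fuzzy C-means, the soft-assignment analog of K-means used above; it is not the authors' full pipeline (no GFP peak extraction or topography polarity handling), and the fuzzifier m = 2 is the conventional illustrative choice.

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-6, seed=0):
            """Plain Fuzzy C-means. X: (n_samples, n_features). Returns
            (centers, U), where U[i, k] is sample i's membership in cluster k."""
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(n_clusters), size=X.shape[0])  # rows sum to 1
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]         # weighted means
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
                d = np.fmax(d, 1e-12)                                # avoid divide-by-zero
                inv = d ** (-2.0 / (m - 1.0))
                U_new = inv / inv.sum(axis=1, keepdims=True)
                if np.abs(U_new - U).max() < tol:
                    U = U_new
                    break
                U = U_new
            return centers, U

        # Soft microstate-style labeling: each sample gets a membership vector
        # over clusters rather than a single hard label.
        X = np.random.default_rng(1).normal(size=(500, 64))          # fake topographies
        centers, U = fuzzy_c_means(X, n_clusters=4)
        print(U[:3].round(3))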

  10. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself, and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty, using the closest probabilistic analog to KM, Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined compared to real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal the details and variability hidden in current deterministic and binarized microstate assignment and analyses. PMID:29163110

  11. The forecasting research of early warning systems for atmospheric pollutants: A case in Yangtze River Delta region

    NASA Astrophysics Data System (ADS)

    Song, Yiliao; Qin, Shanshan; Qu, Jiansheng; Liu, Feng

    2015-10-01

    The issue of air quality regarding PM pollution levels in China is a focus of public attention. To address that issue, a series of studies is in progress, including PM monitoring programs, PM source apportionment, and the enactment of new ambient air quality index standards. However, related research on computer modeling for estimating future PM trends is rare, despite its significance for forecasting and early warning systems. A study of deterministic and interval forecasts of PM is therefore performed. In this study, data on hourly and 12 h-averaged air pollutants are applied to forecast PM concentrations within the Yangtze River Delta (YRD) region of China. The characteristics of PM emissions are first examined and analyzed using different distribution functions. To improve the distribution fitting that is crucial for estimating PM levels, an artificial intelligence algorithm is incorporated to select the optimal parameters. Following that step, an ANF model is used to conduct deterministic forecasts of PM. With the identified distributions and deterministic forecasts, different levels of PM intervals are estimated. The results indicate that the lognormal or gamma distributions are highly representative of the recorded PM data, with a goodness-of-fit R2 of approximately 0.998. Furthermore, the evaluation metrics (MSE, MAPE and CP, AW) also show high accuracy of the deterministic and interval forecasts of PM, indicating that this method enables informative and effective quantification of future PM trends.
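
    The distribution-fitting step can be sketched as follows, assuming synthetic PM data and scipy's maximum-likelihood fit in place of the paper's artificial-intelligence parameter selection; the CDF-based R2 here is only a rough analog of the reported goodness-of-fit score.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pm = rng.lognormal(mean=3.5, sigma=0.5, size=2000)   # synthetic hourly PM

        for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
            params = dist.fit(pm, floc=0)                    # MLE, location pinned at 0
            ks = stats.kstest(pm, dist.cdf, args=params)
            # R2 between empirical and fitted CDF on a grid, as a rough fit score.
            grid = np.linspace(pm.min(), pm.max(), 200)
            ecdf = np.searchsorted(np.sort(pm), grid, side="right") / pm.size
            resid = ecdf - dist.cdf(grid, *params)
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((ecdf - ecdf.mean()) ** 2)
            print(f"{name:9s}  KS p-value {ks.pvalue:.3f}  CDF R2 {r2:.4f}")

        # Interval estimate at a given level, read from the fitted distribution:
        lo, hi = stats.lognorm.interval(0.90, *stats.lognorm.fit(pm, floc=0))
        print(f"90% interval from fitted lognormal: [{lo:.1f}, {hi:.1f}]")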

  12. Two Different Template Replicators Coexisting in the Same Protocell: Stochastic Simulation of an Extended Chemoton Model

    PubMed Central

    Zachar, István; Fedor, Anna; Szathmáry, Eörs

    2011-01-01

    The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called the chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. In addition to the well-known chemoton, we also implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist, providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example of hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix. PMID:21818258
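
    To make the stochastic side concrete, here is a minimal Gillespie direct-method loop for two template replicators drawing on a shared monomer pool. It is a toy, not the BlenX chemoton: the metabolic coupling that enables coexistence in the paper is not modeled, and all rate constants and counts are illustrative.

        import numpy as np

        def gillespie(state, reactions, t_max, seed=0):
            """Gillespie direct method. state: dict of molecule counts;
            reactions: list of (propensity_function, change_dict)."""
            rng = np.random.default_rng(seed)
            t, history = 0.0, [(0.0, dict(state))]
            while t < t_max:
                props = np.array([f(state) for f, _ in reactions])
                total = props.sum()
                if total <= 0.0:
                    break                                  # no reaction can fire
                t += rng.exponential(1.0 / total)          # waiting time to next event
                _, change = reactions[rng.choice(len(reactions), p=props / total)]
                for species, delta in change.items():
                    state[species] += delta
                history.append((t, dict(state)))
            return history

        # Two competing template cycles sharing one monomer pool M:
        #   T1 + M -> 2 T1  (rate k1),   T2 + M -> 2 T2  (rate k2)
        k1, k2 = 0.002, 0.0018
        reactions = [
            (lambda s: k1 * s["T1"] * s["M"], {"T1": +1, "M": -1}),
            (lambda s: k2 * s["T2"] * s["M"], {"T2": +1, "M": -1}),
        ]
        hist = gillespie({"T1": 10, "T2": 10, "M": 500}, reactions, t_max=50.0)
        print("final state:", hist[-1][1])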

  13. Two different template replicators coexisting in the same protocell: stochastic simulation of an extended chemoton model.

    PubMed

    Zachar, István; Fedor, Anna; Szathmáry, Eörs

    2011-01-01

    The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called the chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. In addition to the well-known chemoton, we also implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist, providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example of hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix.

  14. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    PubMed

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  15. General implementation of arbitrary nonlinear quadrature phase gates

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We propose a general methodology for deterministic single-mode quantum interactions that nonlinearly modify a single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems that are subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables, as well as of qubits encoded in continuous-variable systems.

  16. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  17. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USDA-ARS?s Scientific Manuscript database

    The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...

  18. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003).

    Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates.

    An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) resemble neither a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful, as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected.

    Obviously, a natural system is far more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless.

    If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
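
    The toy-model argument can be illustrated with an even simpler chaotic system. The sketch below uses the fully chaotic logistic map in place of the lecture's hydrological toy model (an assumption, not the lecture's actual example): a deterministic forecast started from a near-perfect initial condition beats the climatological mean at short horizons and loses to it at long ones.

        import numpy as np

        def logistic(x0, n):
            """Fully deterministic chaotic dynamics: x_{t+1} = 4 x_t (1 - x_t)."""
            x = np.empty(n)
            x[0] = x0
            for t in range(n - 1):
                x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
            return x

        origin = 500
        truth = logistic(0.2, 1000)
        forecast = logistic(truth[origin] + 1e-9, 500)  # tiny initial-condition error
        climatology = truth[:origin].mean()             # naive statistical prediction

        for horizon in (5, 20, 50, 200):
            window = truth[origin:origin + horizon]
            mse_det = np.mean((forecast[:horizon] - window) ** 2)
            mse_clim = np.mean((climatology - window) ** 2)
            print(f"horizon {horizon:3d}: deterministic MSE {mse_det:.2e}, "
                  f"climatology MSE {mse_clim:.2e}")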

  19. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determining the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models, such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, the active-sites representation, and the deterministic model. Parameter sets derived from freezing data of single-component mineral dust samples are evaluated for prediction of cooling-rate-dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
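
    The stochastic versus deterministic distinction for isothermal freezing can be sketched as below: a time-dependent, Jhet-based description predicts a frozen fraction that keeps rising while droplets are held at constant temperature, whereas a deterministic (ice-active-site) description is time-independent. Both parameterizations here are invented illustrative forms, not fits from the record.

        import numpy as np

        A = 1e-5       # dust surface area per droplet (cm^2), illustrative
        T = 248.0      # isothermal hold temperature (K)
        T0 = 273.15

        def J_het(T):
            """Illustrative heterogeneous ice nucleation rate coefficient
            (cm^-2 s^-1), log-linear in supercooling."""
            return 10.0 ** (-5.5 + 0.3 * (T0 - T))

        def n_s(T):
            """Illustrative deterministic ice-active site density (cm^-2)."""
            return 10.0 ** (-0.2 * (T - T0))

        t = np.array([0.0, 60.0, 300.0, 1800.0])                      # seconds
        f_stochastic = 1.0 - np.exp(-J_het(T) * A * t)                # grows with time
        f_deterministic = np.full_like(t, 1.0 - np.exp(-n_s(T) * A))  # constant in time
        print("time (s):        ", t)
        print("stochastic f(t): ", f_stochastic.round(3))
        print("deterministic f: ", f_deterministic.round(3))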

  20. Transient deterministic shallow landslide modeling: Requirements for susceptibility and hazard assessments in a GIS framework

    USGS Publications Warehouse

    Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.

    2008-01-01

    Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold, and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
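
    The cell-by-cell calculation can be sketched with one common infinite-slope form that accepts a transient pressure head, in the spirit of such analyses; all soil parameters and the toy slope grid below are illustrative assumptions, not values from the study.

        import numpy as np

        def factor_of_safety(slope_deg, depth, psi, c=4000.0, phi_deg=33.0,
                             gamma_s=19000.0, gamma_w=9810.0):
            """Infinite-slope factor of safety with transient pressure head psi (m)
            at failure depth `depth` (m):
                FS = tan(phi)/tan(beta)
                     + (c - psi*gamma_w*tan(phi)) / (gamma_s*depth*sin(beta)*cos(beta))
            c in Pa, unit weights in N/m^3; everything broadcasts cell by cell."""
            beta = np.radians(slope_deg)
            phi = np.radians(phi_deg)
            return (np.tan(phi) / np.tan(beta)
                    + (c - psi * gamma_w * np.tan(phi))
                    / (gamma_s * depth * np.sin(beta) * np.cos(beta)))

        # Toy 2x2 "grid" of slope angles, e.g. derived from a LiDAR DEM.
        slope = np.array([[28.0, 35.0],
                          [40.0, 45.0]])
        fs_dry = factor_of_safety(slope, depth=1.5, psi=0.0)   # static, dry
        fs_wet = factor_of_safety(slope, depth=1.5, psi=1.0)   # transient wet front
        print("FS dry:\n", fs_dry.round(2))
        print("FS storm:\n", fs_wet.round(2))   # cells with FS < 1 predicted unstable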

  1. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using a recent extension of robust optimization theory. We determine the decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario in the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider them in the planning approach.

  2. Robust Planning for Effects-Based Operations

    DTIC Science & Technology

    2006-06-01

    [Only table-of-contents fragments are available for this record: Robust Optimization Literature; Deterministic EBO Model Formulation, Example and Performance; Greedy Algorithm; Conclusions on Robust EBO Model Performance; Greedy Algorithm versus EBO Models.]

  3. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...

  4. Tight-binding chains with off-diagonal disorder: Bands of extended electronic states induced by minimal quasi-one-dimensionality

    NASA Astrophysics Data System (ADS)

    Nandy, Atanu; Pal, Biplab; Chakrabarti, Arunava

    2016-08-01

    It is shown that an entire class of off-diagonally disordered linear lattices, composed of two basic building blocks and described within a tight-binding model, can be tailored to generate absolutely continuous energy bands. This can be achieved if linear atomic clusters of an appropriate size are side-coupled to a suitable subset of sites in the backbone, and if the nearest-neighbor hopping integrals, in the backbone and in the side-coupled cluster, bear a certain ratio. We work out the precise relationship between the number of atoms in one of the building blocks in the backbone and that in the side attachment. In addition, we also evaluate the definite correlation between the numerical values of the hopping integrals at different subsections of the chain that can convert an otherwise point spectrum (or a singular continuous one for deterministically disordered lattices) with exponentially (or power-law) localized eigenfunctions into an absolutely continuous spectrum comprising one or more bands (subbands) populated by extended, totally transparent eigenstates. The results, which are analytically exact, put forward a non-trivial variation of Anderson localization (Anderson P. W., Phys. Rev., 109 (1958) 1492), pointing towards its unusual sensitivity to the numerical values of the system parameters, and go well beyond other related models such as the Random Dimer Model (RDM) (Dunlap D. H. et al., Phys. Rev. Lett., 65 (1990) 88).

  5. Deep Unfolding for Topic Models.

    PubMed

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrating probabilistic generative models and deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, unsupervised and supervised topic models are inferred via a variational inference algorithm where the model parameters are estimated by maximizing the lower bound on the logarithm of the marginal likelihood, using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and by model parameters tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.

  6. FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...

  7. Why Are You Wearing a Watch? Complicating the Narrative of Economic and Social Progress in Britain with Year 9

    ERIC Educational Resources Information Center

    Sibona, Hannah

    2017-01-01

    Frustrated by the traditional narrative of the industrial revolution as a steady march of progress, and disappointed by her students' dull and deterministic statements about historical change, Hannah Sibona decided to complicate the tidy narrative of continual improvement. Inspired by an article by E.P. Thompson, Sibona reflected that introducing…

  8. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  9. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features, as confirmed from detailed study of the adjacent quarry wall.
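
    The core operation can be sketched as frequency-domain division by the measured source wavelet, stabilized with a water level; the Ricker-style wavelet and spike reflectivity below are synthetic stand-ins for the air-measured wavelet and field traces.

        import numpy as np

        def deconvolve(trace, wavelet, water_level=0.05):
            """Deterministic frequency-domain deconvolution of a trace by a known
            source wavelet, stabilized with a water level on |W|."""
            n = len(trace)
            W = np.fft.rfft(wavelet, n)
            D = np.fft.rfft(trace, n)
            floor = water_level * np.abs(W).max()
            W = np.where(np.abs(W) < floor, floor * np.exp(1j * np.angle(W)), W)
            return np.fft.irfft(D / W, n)

        # Synthetic check: spike reflectivity convolved with a Ricker-style wavelet.
        dt, f0 = 1e-10, 400e6                     # 0.1 ns sampling, 400 MHz center
        tw = np.arange(-64, 64) * dt
        wavelet = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)
        reflectivity = np.zeros(512)
        reflectivity[[100, 140, 300]] = [1.0, -0.6, 0.4]
        trace = np.convolve(reflectivity, wavelet, mode="same")
        # Roll the wavelet so its peak sits at sample 0 (zero-phase for the FFT).
        estimate = deconvolve(trace, np.roll(wavelet, -len(wavelet) // 2))
        print("largest recovered spikes at samples:",
              np.sort(np.argsort(-np.abs(estimate))[:3]))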

  10. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
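
    A minimal stochastic-EnKF analysis step (with perturbed observations, the variant the deterministic filters above avoid) looks as follows; the three-component state and the sum observation operator are a toy stand-in for W3RA storages observed through a TWS-like total, not the study's setup.

        import numpy as np

        def enkf_update(ens, y_obs, H, R, seed=0):
            """Stochastic EnKF analysis step with perturbed observations.
            ens: (n_ens, n_state); y_obs: (n_obs,); H: (n_obs, n_state);
            R: (n_obs, n_obs) observation error covariance."""
            rng = np.random.default_rng(seed)
            n_ens = ens.shape[0]
            X = ens - ens.mean(axis=0)                       # ensemble anomalies
            P = X.T @ X / (n_ens - 1)                        # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
            y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens)
            return ens + (y_pert - ens @ H.T) @ K.T

        # Toy example: 3 storage components, observing only their sum.
        rng = np.random.default_rng(1)
        ens = rng.normal([10.0, 5.0, 2.0], [2.0, 1.0, 0.5], size=(100, 3))
        H = np.array([[1.0, 1.0, 1.0]])
        R = np.array([[0.5]])
        ens_a = enkf_update(ens, y_obs=np.array([15.0]), H=H, R=R)
        print("prior mean:    ", ens.mean(axis=0).round(2))
        print("posterior mean:", ens_a.mean(axis=0).round(2))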

  11. Relation Between the Cell Volume and the Cell Cycle Dynamics in Mammalian cell

    NASA Astrophysics Data System (ADS)

    Magno, A. C. G.; Oliveira, I. L.; Hauck, J. V. S.

    2016-08-01

    The main goal of this work is to add and analyze an equation representing the cell volume in a dynamical model of the mammalian cell cycle proposed by Gérard and Goldbeter (2011) [1]. Cell division occurs when the cyclin B/Cdk1 complex is totally degraded (Tyson and Novak, 2011) [2] and reaches a minimum value. At this point, the cell divides into two newborn daughter cells, each containing half of the cytoplasmic content of the mother cell. The equations of our base model are only valid if the cell volume, where the reactions occur, is constant. If the cell volume is not constant, that is, if the rate of change of its volume with respect to time is explicitly taken into account in the mathematical model, then the equations of the original model are no longer valid. Therefore, all equations were modified, following the mass conservation principle, to account for a volume that changes with time. Through this approach, the cell volume affects all model variables. Two different dynamic simulation methods were carried out: deterministic and stochastic. In the stochastic simulation, the volume affects all model parameters that have molar units, whereas in the deterministic one, it is incorporated into the differential equations. In the deterministic simulation, the biochemical species may be in concentration units, while in the stochastic simulation such species must be converted to numbers of molecules, which are directly proportional to the cell volume. To understand the influence of the new equation, a stability analysis was performed, which elucidates how the growth factor impacts the stability of the model's limit cycles. In conclusion, a more precise model, in comparison to the base model, was created for the cell cycle, as it now takes the cell volume variation into consideration.
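
    How an explicit volume equation feeds back on the concentration variables follows from mass conservation: with amount N = C*V, one gets dC/dt = f(C) - (1/V)(dV/dt)*C, so volume growth dilutes every concentration. A sketch with a single production-degradation species and exponential volume growth (all rates are illustrative, not the cell cycle model's):

        import numpy as np
        from scipy.integrate import solve_ivp

        k_syn, k_deg, mu = 1.0, 0.1, 0.03   # synthesis, degradation, volume growth

        def rhs(t, y):
            C, V = y
            dV = mu * V                               # exponential cell growth
            dC = k_syn - k_deg * C - (dV / V) * C     # dilution term from dV/dt
            return [dC, dV]

        sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 1.0])
        C_end, V_end = sol.y[0, -1], sol.y[1, -1]
        # Dilution shifts the steady state from k_syn/k_deg to k_syn/(k_deg + mu).
        print(f"C(100) = {C_end:.3f}; no-dilution steady state = {k_syn / k_deg:.1f}, "
              f"with dilution = {k_syn / (k_deg + mu):.3f}; V(100) = {V_end:.1f}")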

  12. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
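
    The mechanics of a bootstrap-based probabilistic sensitivity analysis can be sketched as below, with synthetic patient-level cost and cure data standing in for the H. pylori model's inputs: resample patients with replacement, recompute the incremental cost-effectiveness ratio (ICER) each time, and read the uncertainty off the resulting distribution.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        # Synthetic patient-level data for two eradication strategies.
        cost_a = rng.gamma(shape=4.0, scale=250.0, size=n)   # costs, strategy A
        cure_a = rng.binomial(1, 0.80, size=n)               # cure indicator
        cost_b = rng.gamma(shape=4.0, scale=300.0, size=n)
        cure_b = rng.binomial(1, 0.90, size=n)

        icers = np.empty(5000)
        for i in range(icers.size):
            ia = rng.integers(0, n, n)       # bootstrap resample, strategy A
            ib = rng.integers(0, n, n)       # bootstrap resample, strategy B
            d_cost = cost_b[ib].mean() - cost_a[ia].mean()
            d_eff = cure_b[ib].mean() - cure_a[ia].mean()
            icers[i] = d_cost / d_eff        # incremental cost per extra cure
        print("ICER median and 95% interval:",
              np.percentile(icers, [50, 2.5, 97.5]).round(0))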

  13. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.

  14. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  15. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. In addition, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, the technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.

  16. Long-Term Cost-Effectiveness of Transanal Irrigation in Patients with Neurogenic Bowel Dysfunction.

    PubMed

    Emmanuel, Anton; Kumar, Gayathri; Christensen, Peter; Mealing, Stuart; Størling, Zenia M; Andersen, Frederikke; Kirshblum, Steven

    2016-01-01

    People with neurogenic bowel dysfunction (NBD) and an ineffective bowel regimen often suffer from fecal incontinence (FI) and related symptoms, which have a huge impact on their quality of life. In these situations, transanal irrigation (TAI) has been shown to reduce these symptoms and improve quality of life. The objective was to investigate the long-term cost-effectiveness of initiating TAI in patients with NBD who have failed standard bowel care (SBC). A deterministic Markov decision model was developed to project the lifetime health economic outcomes, including quality-adjusted life years (QALYs), episodes of FI, urinary tract infections (UTIs), and stoma surgery, when initiating TAI relative to continuing SBC. A data set consisting of 227 patients with NBD due to spinal cord injury (SCI), multiple sclerosis, spina bifida and cauda equina syndrome was used in the analysis. In the model, a 30-year-old individual with SCI was used as the base case. A probabilistic sensitivity analysis was applied to evaluate the robustness of the model. The model predicts that a 30-year-old SCI patient with a life expectancy of 37 years who initiates TAI will experience a 36% reduction in FI episodes, a 29% reduction in UTIs, a 35% reduction in the likelihood of stoma surgery and a 0.4 improvement in QALYs, compared with patients continuing SBC. A lifetime cost saving of £21,768 per patient was estimated for TAI versus continuing SBC alone. TAI is a cost-saving treatment strategy, reducing the risk of stoma surgery, UTIs and episodes of FI and improving QALYs for NBD patients who have failed SBC.
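
    The skeleton of such a deterministic Markov cohort model is a transition matrix advanced one cycle at a time while discounted costs and QALYs accumulate; every number below (states, probabilities, utilities, costs, discount rate) is an invented placeholder, not a value from the study.

        import numpy as np

        # States: 0 = doing well on treatment, 1 = back to baseline care,
        #         2 = stoma, 3 = dead. Annual transition probabilities.
        P = np.array([[0.92, 0.05, 0.02, 0.01],
                      [0.00, 0.93, 0.05, 0.02],
                      [0.00, 0.00, 0.98, 0.02],
                      [0.00, 0.00, 0.00, 1.00]])
        utility = np.array([0.75, 0.60, 0.65, 0.0])   # QALY weight per state-year
        cost = np.array([2000.0, 2500.0, 1500.0, 0.0])

        dist = np.array([1.0, 0.0, 0.0, 0.0])          # cohort starts on treatment
        disc = 0.035                                   # annual discount rate
        qalys = costs = 0.0
        for year in range(37):                         # horizon = life expectancy
            w = 1.0 / (1.0 + disc) ** year
            qalys += w * dist @ utility
            costs += w * dist @ cost
            dist = dist @ P                            # advance the cohort one cycle
        print(f"discounted lifetime QALYs {qalys:.2f}, costs {costs:,.0f}")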

  17. Langevin approach to a chemical wave front: Selection of the propagation velocity in the presence of internal noise

    NASA Astrophysics Data System (ADS)

    Lemarchand, A.; Lesne, A.; Mareschal, M.

    1995-05-01

    The reaction-diffusion equation associated with the Fisher chemical model A+B-->2A admits wave-front solutions by replacing an unstable stationary state with a stable one. The deterministic analysis concludes that their propagation velocity is not prescribed by the dynamics. For a large class of initial conditions the velocity which is spontaneously selected is equal to the minimum allowed velocity vmin, as predicted by the marginal stability criterion. In order to test the relevance of this deterministic description we investigate the macroscopic consequences, on the velocity and the width of the front, of the intrinsic stochasticity due to the underlying microscopic dynamics. We solve numerically the Langevin equations, deduced analytically from the master equation within a system size expansion procedure. We show that the mean profile associated with the stochastic solution propagates faster than the deterministic solution at a velocity up to 25% greater than vmin.
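
    A crude finite-difference Langevin sketch of the front experiment, assuming multiplicative internal noise of amplitude proportional to sqrt(u(1-u)/N) added to the Fisher equation; the deterministic minimum velocity for comparison is v_min = 2*sqrt(D*k). Discretization choices are illustrative and the velocity fit is deliberately rough.

        import numpy as np

        D, k, N = 1.0, 1.0, 1e4     # diffusion, reaction rate, particles per cell
        dx, dt, cells, T = 0.5, 0.01, 400, 30.0
        x = np.arange(cells) * dx
        rng = np.random.default_rng(0)

        def front_velocity(noisy):
            u = (x < 20.0).astype(float)                  # step initial profile
            positions, times = [], []
            for step in range(int(T / dt)):
                lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
                lap[0] = lap[-1] = 0.0                    # crude fixed ends
                du = dt * (D * lap + k * u * (1.0 - u))
                if noisy:                                 # internal noise term
                    g = np.sqrt(np.clip(u * (1.0 - u), 0.0, None) / N)
                    du += g * rng.standard_normal(cells) * np.sqrt(dt / dx)
                u = np.clip(u + du, 0.0, 1.0)
                if step % 100 == 0:
                    positions.append(x[np.argmax(u < 0.5)])   # u crosses 1/2
                    times.append(step * dt)
            return np.polyfit(times, positions, 1)[0]     # linear fit -> velocity

        print("v_min = 2*sqrt(D*k) =", 2.0 * np.sqrt(D * k))
        print("measured deterministic velocity ~", round(front_velocity(False), 3))
        print("measured stochastic velocity    ~", round(front_velocity(True), 3))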

  18. Optimal design for robust control of uncertain flexible joint manipulators: a fuzzy dynamical system approach

    NASA Astrophysics Data System (ADS)

    Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang

    2018-04-01

    A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator model and the control scheme are deterministic and not based on heuristic IF-THEN rules. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.

  19. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology can be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
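
    The DSC idea can be sketched in a few lines: fit the deterministic part (a polynomial trend plus an annual Fourier pair) by least squares, model the residual stochastically with a SARIMA, and add the two forecasts. The synthetic series and model orders below are illustrative, not the paper's fitted specification.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        n = 360                                           # 30 years of monthly data
        t = np.arange(n)
        series = (0.002 * t + 10 * np.sin(2 * np.pi * t / 12)
                  + rng.normal(0, 0.5, n) + np.cumsum(rng.normal(0, 0.05, n)))

        # Deterministic part: trend plus annual Fourier pair, by least squares.
        X = np.column_stack([np.ones(n), t, t**2,
                             np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
        beta, *_ = np.linalg.lstsq(X, series, rcond=None)

        # Stochastic part: SARIMA on the residuals.
        resid = series - X @ beta
        sarima = SARIMAX(resid, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)

        # Combined forecast for the next decade (120 months).
        t_f = np.arange(n, n + 120)
        X_f = np.column_stack([np.ones(120), t_f, t_f**2,
                               np.sin(2 * np.pi * t_f / 12), np.cos(2 * np.pi * t_f / 12)])
        forecast = X_f @ beta + sarima.forecast(120)
        print("first 12 forecast months:", np.round(forecast[:12], 2))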

  20. ASSESSMENT OF TWO PHYSICALLY BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING SEDIMENT MOVEMENT OVER SMALL WATERSHEDS

    EPA Science Inventory


    Abstract: Two physically based and deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...

  1. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah "Fred"; Bekey, George A.

    1988-01-01

    Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.

  2. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  3. Adequacy assessment of composite generation and transmission systems incorporating wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Gao, Yi

    The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
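
    The thesis mentions a method for generating correlated uniformly distributed random numbers with a specified correlation coefficient for state sampling; the record does not give the construction, so the sketch below uses one standard option, a Gaussian copula with a rank-correlation pre-warp. The target correlation and sample size are illustrative.

        import numpy as np
        from scipy.stats import norm

        def correlated_uniforms(n, rho, seed=0):
            """Uniform(0,1) pairs with product-moment correlation ~rho, via a
            Gaussian copula; the Gaussian correlation is pre-warped with
            r = 2*sin(pi*rho/6) so the uniforms land near the target."""
            r = 2.0 * np.sin(np.pi * rho / 6.0)
            cov = [[1.0, r], [r, 1.0]]
            z = np.random.default_rng(seed).multivariate_normal([0.0, 0.0], cov, n)
            return norm.cdf(z)

        u = correlated_uniforms(200_000, rho=0.7)
        print("achieved correlation:", np.corrcoef(u[:, 0], u[:, 1])[0, 1].round(3))
        # Each column can then drive state sampling for one wind farm: a capacity
        # state is drawn by comparing u against that farm's state probabilities.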

  4. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.

  5. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    USGS Publications Warehouse

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

    An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and the 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was apparent. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
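
    A deterministic ensemble prediction based on a trimmed mean, as used in the study, simply averages the member predictions after discarding the extremes. A minimal sketch (the member values and the 20% trim fraction are illustrative, not the paper's):

        import numpy as np
        from scipy.stats import trim_mean

        # Ten hypothetical members' mean-annual-discharge predictions (mm/yr).
        members = np.array([412., 455., 430., 391., 520., 444., 402., 489., 468., 610.])
        # Trim 20% from each tail before averaging.
        print(trim_mean(members, proportiontocut=0.2))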

  6. A stochastic flow-capturing model to optimize the location of fast-charging stations with uncertain electric vehicle flows

    DOE PAGES

    Wu, Fei; Sioshansi, Ramteen

    2017-05-04

    Here, we develop a model to optimize the location of public fast-charging stations for electric vehicles (EVs). A difficulty in planning the placement of charging stations is uncertainty in where EV charging demands appear. For this reason, we use a stochastic flow-capturing location model (SFCLM). A sample-average approximation method and an averaged two-replication procedure are used to solve the problem and estimate the solution quality. We demonstrate the use of the SFCLM in a Central Ohio-based case study. We find that most of the stations built are concentrated around the urban core of the region. As the number of stations built increases, some appear on the outskirts of the region to provide an extended charging network. We find that the sets of optimal charging station locations as a function of the number of stations built are approximately nested. We demonstrate the benefits of the charging-station network in terms of how many EVs are able to complete their daily trips by charging midday: six public charging stations allow at least 60% of the EVs that could not otherwise complete their daily tours to do so. We finally compare the SFCLM to a deterministic model in which EV flows are set equal to their expected values. We show that if a limited number of charging stations are to be built, the SFCLM outperforms the deterministic model. As the number of stations to be built increases, the SFCLM and the deterministic model select very similar station locations.

  7. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations of a mountain river in southern Poland, the Raba River.

  8. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as the experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. By modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions, enabling the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure, intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude non-structural masses at predefined points, and updated to provide a deterministic damage prediction together with information on parameter uncertainty via fuzzy updating.

  9. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    The anaerobic digestion (AD) process involves numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models need to be developed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from poorly known constants, complexity and weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can be modelled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) developed in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, from which the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach steady state is greater, since fewer reactions are chosen. When the simulation time step is reduced, the results are similar to those of the ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimum selection of the tau value.
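
    The tau-leap method referenced above advances all reaction channels together by a fixed step tau, drawing Poisson-distributed firing counts from the current propensities. The sketch below uses a toy two-step chain (glucose to acid to methane) with assumed rate constants, not the ADM1 reaction set:

        import numpy as np

        rng = np.random.default_rng(1)

        x = np.array([1000, 0, 0])          # counts: [glucose, acid, methane]
        stoich = np.array([[-1, 1, 0],      # channel 1: glucose -> acid
                           [0, -1, 1]])     # channel 2: acid -> methane
        k = np.array([0.02, 0.01])          # assumed rate constants (1/h)
        tau, t, t_end = 0.5, 0.0, 400.0

        while t < t_end:
            a = k * x[:2]                                # channel propensities
            n = np.minimum(rng.poisson(a * tau), x[:2])  # cap at available molecules
            x = x + n @ stoich
            t += tau

        print(x)    # most of the initial glucose should end up as methane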

  10. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due to only the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario is revisited as well, because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
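
    For reference, the trade-off described above replaces the Shannon entropy of a partition-based (symbolic) description with the differential entropy of a continuous predictand; in standard notation (not necessarily the paper's):

        H(X) = -\sum_i p_i \log p_i
        \quad\longrightarrow\quad
        h(X) = -\int f(x)\,\log f(x)\,dx ,

    with the ignorance score corresponding to a conditional entropy,

        h(X \mid Y) = -\iint f(x,y)\,\log f(x \mid y)\,dx\,dy .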

  11. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

    High-frequency (~10 Hz) deterministic ground motion simulations are limited by our incomplete understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0 and 0.2, vertical correlation lengths, a_z, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations and combined it with leading seismic velocity models to perform a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying by up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults. We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4 to 7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events and provide source spectra comparable with an omega^-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully deterministic ground motion simulations, as compared to NGA West2 GMPE relationships up to 0.2 seconds.
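
    The von Karman correlation model fitted above has a standard closed form involving the modified Bessel function of the second kind; a sketch using the abstract's parameter ranges (sigma ~ 5%, nu ~ 0.0-0.2, correlation lengths of 15-150 m; the specific values below are illustrative):

        import numpy as np
        from scipy.special import gamma, kv

        def von_karman(r, sigma, nu, a):
            # C(r) = sigma^2 * 2^(1-nu)/Gamma(nu) * (r/a)^nu * K_nu(r/a)
            x = np.asarray(r, dtype=float) / a
            c = np.full_like(x, sigma**2)        # C(0) = sigma^2 by continuity
            m = x > 0
            c[m] = sigma**2 * 2**(1 - nu) / gamma(nu) * x[m]**nu * kv(nu, x[m])
            return c

        r = np.linspace(0.0, 300.0, 7)           # lag distances in metres
        print(von_karman(r, sigma=0.05, nu=0.1, a=50.0))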

  12. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigating human movement variability. However, before applying entropy, it can be beneficial to employ analyses that confirm the observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with data produced using surrogate methods. Unlike for continuous movement, no appropriate surrogate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying the fine-scale dynamics of the observed signal while maintaining its macro-structural characteristics. Comparison of entropy estimates indicated that observed signals had greater regularity than surrogates and were the result of not only stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique for investigating determinism in other discrete human movement time series.
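
    The paper's discrete-movement surrogate is specific, but the general logic of surrogate testing can be illustrated with the simplest shuffle surrogate, which preserves the amplitude distribution of a series while destroying its temporal dynamics; the observed entropy is then compared against the surrogate ensemble. A sketch under those assumptions (not the authors' algorithm), with a minimal sample-entropy estimator:

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            # Minimal sample-entropy estimate with tolerance r = r_frac * std.
            x = np.asarray(x, float)
            r = r_frac * x.std()
            def matches(mm):
                emb = np.lib.stride_tricks.sliding_window_view(x, mm)
                d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=-1)
                return ((d <= r).sum() - len(emb)) / 2   # pairs, no self-matches
            return -np.log(matches(m + 1) / matches(m))

        rng = np.random.default_rng(2)
        t = np.linspace(0, 8 * np.pi, 400)
        obs = np.sin(t) + 0.2 * rng.normal(size=t.size)  # structured toy signal

        surr = [sample_entropy(rng.permutation(obs)) for _ in range(19)]
        print(sample_entropy(obs), "vs surrogates:", min(surr), "-", max(surr))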

  13. Dynamic speckle - Interferometry of micro-displacements

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2012-06-01

    The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. We consider three cases: 1) during the observation the points move at random but constant speeds; 2) the relative displacement of any pair of points is a continuous random process; and 3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases 1) and 2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion and the relaxation time. For case 3), it is shown that under certain conditions the optical signal contains a periodic component, with the number of periods proportional to the deterministic displacements. The results of experiments conducted to test and apply the theory are given.

  14. Deterministically Entangling Two Remote Atomic Ensembles via Light-Atom Mixed Entanglement Swapping

    PubMed Central

    Liu, Yanhong; Yan, Zhihui; Jia, Xiaojun; Xie, Changde

    2016-01-01

    Entanglement of two distant macroscopic objects is a key element for implementing large-scale quantum networks consisting of quantum channels and quantum nodes. Entanglement swapping can entangle two spatially separated quantum systems without direct interaction. Here we propose a scheme for deterministically entangling two remote atomic ensembles via continuous-variable entanglement swapping between two independent quantum systems involving light and atoms. Each of the two stationary atomic ensembles, placed at two remote nodes in a quantum network, is prepared in a mixed entangled state of light and atoms. Then, entanglement swapping is unconditionally implemented between the two prepared quantum systems by means of balanced homodyne detection of the light and feedback of the measured results. Finally, the established entanglement between the two macroscopic atomic ensembles is verified by the inseparability criterion for the correlation variances between the two anti-Stokes optical beams coming respectively from the two atomic ensembles. PMID:27165122

  15. Uncertainty in clinical data and stochastic model for in vitro fertilization.

    PubMed

    Yenkie, Kirti M; Diwekar, Urmila

    2015-02-21

    In vitro fertilization (IVF) is the most widely used technique in assisted reproductive technologies (ART). It is divided into four stages: (i) superovulation, (ii) egg retrieval, (iii) insemination/fertilization and (iv) embryo transfer. The first stage, superovulation, is a drug-induced method to enable multiple ovulation, i.e., multiple follicle growth to oocytes or matured follicles in a single menstrual cycle. IVF, being a medical procedure that aims at manipulating biological functions in the human body, is subject to inherent sources of uncertainty and variability. Also, the interplay of hormones with the natural functioning of the ovaries to stimulate multiple ovulation, as against single ovulation in a normal menstrual cycle, makes the procedure dependent on several factors, such as the patient's condition in terms of cause of infertility, actual ovarian function, and responsiveness to the medication. The treatment requires continuous monitoring and testing, which can give rise to errors in observations and reports. These uncertainties are present in the form of measurement noise in the clinical data. Thus, it becomes essential to examine the process noise and account for it in order to build more representative models of follicle growth. The purpose of this work is to develop a robust model that can better project the superovulation cycle outcome from the hormonal doses and patient response in the presence of uncertainty. The stochastic model gives better projections of the cycle outcomes for patients where the deterministic model deviates from the clinical observations and the growth term value is not within the range of '0.3-0.6'. It was found that the prediction accuracy was enhanced by more than 70% for two patients by using the stochastic model projections. Also, in patients where the prediction accuracy did not increase significantly, the stochastic model projections matched the trend of the clinical data better than their deterministic counterparts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Equation-free analysis of agent-based models and systematic parameter determination

    NASA Astrophysics Data System (ADS)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence constraint with a corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems, and giving a deep understanding of their dynamical behaviour in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for configuring the system-specific EF parameters.

  17. A theoretical and experimental study of turbulent nonevaporating sprays

    NASA Technical Reports Server (NTRS)

    Solomon, A. S. P.; Shuen, J. S.; Zhang, Q. F.; Faeth, G. M.

    1984-01-01

    Measurements and analysis limited to the dilute portions of turbulent nonevaporating sprays injected into a still air environment were completed. Mean and fluctuating velocities and Reynolds stress were measured in the continuous phase. Liquid phase measurements included liquid mass fluxes, drop sizes, and drop size and velocity correlations. Initial conditions needed for model evaluation were measured at a location as close to the injector exit as possible. The test sprays showed significant effects of slip and turbulent dispersion of the discrete phase. The measurements were used to evaluate three typical models of these processes: (1) a locally homogeneous flow (LHF) model, where slip between the phases was neglected; (2) a deterministic separated flow (DSF) model, where slip was considered but effects of drop dispersion by turbulence were ignored; and (3) a stochastic separated flow (SSF) model, where effects of interphase slip and turbulent dispersion were considered using random-walk computations for drop motion. The LHF and DSF models did not provide very satisfactory predictions for the present measurements. In contrast, the SSF model performed reasonably well with no modifications in the prescription of eddy properties from its original calibration. Some effects of drops on turbulence properties were observed near the dense regions of the sprays.

  18. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    NASA Astrophysics Data System (ADS)

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-08-01

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. The fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable, depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is sufficiently thin and slow. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head, even for propagation in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the resulting branches agree qualitatively with experimental images of streamer branching.

  19. Evidence for determinism in species diversification and contingency in phenotypic evolution during adaptive radiation.

    PubMed

    Burbrink, Frank T; Chen, Xin; Myers, Edward A; Brandley, Matthew C; Pyron, R Alexander

    2012-12-07

    Adaptive radiation (AR) theory predicts that groups sharing the same source of ecological opportunity (EO) will experience deterministic species diversification and morphological evolution. Thus, deterministic ecological and morphological evolution should be correlated with deterministic patterns in the tempo and mode of speciation for groups in similar habitats and time periods. We test this hypothesis using well-sampled phylogenies of four squamate groups that colonized the New World (NW) in the Late Oligocene. We use both standard and coalescent models to assess species diversification, as well as likelihood models to examine morphological evolution. All squamate groups show similar early pulses of speciation, as well as diversity-dependent ecological limits on clade size at a continental scale. In contrast, processes of morphological evolution are not easily predictable and do not show similar pulses of early and rapid change. Patterns of morphological and species diversification thus appear uncoupled across these groups. This indicates that the processes that drive diversification and disparification are not mechanistically linked, even among similar groups of taxa experiencing the same sources of EO. It also suggests that processes of phenotypic diversification cannot be predicted solely from the existence of an AR or knowledge of the process of diversification.

  20. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both spend about 1% of travel time crossing congested en route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
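
    The backward recursion behind such a router can be sketched on a toy corridor: at each stage the flight occupies one of a few lateral lanes, each lane has some probability of being blocked by weather, and the policy minimizes expected cost. All numbers below are invented for illustration, not taken from the study:

        import numpy as np

        rng = np.random.default_rng(7)

        n_stage, n_lane = 10, 5
        p_blocked = rng.uniform(0.0, 0.4, size=(n_stage, n_lane))
        BLOCK_COST, STEP_COST, SHIFT_COST = 50.0, 1.0, 0.5

        # Expected stage cost: step cost plus probability-weighted weather penalty.
        stage_cost = STEP_COST + p_blocked * BLOCK_COST

        V = np.zeros(n_lane)                         # terminal cost-to-go
        policy = np.zeros((n_stage, n_lane), dtype=int)
        for t in reversed(range(n_stage)):
            Q = np.full((n_lane, n_lane), np.inf)
            for j in range(n_lane):                  # current lane
                for k in (j - 1, j, j + 1):          # lanes reachable in one step
                    if 0 <= k < n_lane:
                        Q[j, k] = stage_cost[t, k] + SHIFT_COST * abs(k - j) + V[k]
            policy[t] = Q.argmin(axis=1)
            V = Q.min(axis=1)

        print(V)          # expected cost-to-go from each starting lane
        print(policy[0])  # first-stage decision for each starting lane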

  2. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important, as the content of videos varies a lot, especially for tracking implementations. In contrast to the image processing field, problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD, a deterministic and a probabilistic approach, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both the RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226

  3. Matrix Population Model for Estimating Effects from Time-Varying Aquatic Exposures: Technical Documentation

    EPA Science Inventory

    The Office of Pesticide Programs models daily aquatic pesticide exposure values for 30 years in its risk assessments. However, only a fraction of that information is typically used in these assessments. The population model employed herein is a deterministic, density-dependent pe...

  4. Gulf of Mexico dissolved oxygen model (GoMDOM) research and quality assurance project plan

    EPA Science Inventory

    An integrated high resolution mathematical modeling framework is being developed that will link hydrodynamic, atmospheric, and water quality models for the northern Gulf of Mexico. This Research and Quality Assurance Project Plan primarily focuses on the deterministic Gulf of Me...

  5. A combinatorial model of malware diffusion via bluetooth connections.

    PubMed

    Merler, Stefano; Jurman, Giuseppe

    2013-01-01

    We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions.

  6. Chaotic Lagrangian models for turbulent relative dispersion.

    PubMed

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
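
    The FSLE used above measures, scale by scale, the mean time tau(delta) for a pair separation to grow from delta to r*delta, giving lambda(delta) = ln(r) / <tau(delta)>. A minimal estimator over an ensemble of separation time series (the exponentially growing toy data below stand in for model trajectories):

        import numpy as np

        def fsle(sep, dt, deltas, r=np.sqrt(2)):
            # sep: (n_pairs, n_steps) separations; returns lambda(delta).
            lam = []
            for d in deltas:
                taus = []
                for s in sep:
                    i = np.argmax(s >= d)            # first crossing of delta
                    if s[i] < d:
                        continue                     # delta never reached
                    j = np.argmax(s[i:] >= r * d)    # first crossing of r*delta
                    if s[i:][j] >= r * d:
                        taus.append(j * dt)
                lam.append(np.log(r) / np.mean(taus) if taus else np.nan)
            return np.array(lam)

        rng = np.random.default_rng(3)
        t = np.arange(0, 40, 0.01)
        sep = 1e-6 * np.exp(0.5 * t) * rng.lognormal(0.0, 0.1, (200, t.size))
        print(fsle(sep, dt=0.01, deltas=np.logspace(-5, -2, 4)))   # ~0.5 each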

  8. The Shock and Vibration Digest. Volume 15, Number 8

    DTIC Science & Technology

    1983-08-01

    Digest contents (fragmentary): detection of cracks in rotor shafts of turbogenerator systems; vibration of bearing-foundation systems caused by electrical system faults (IFToMM, p. 177); and coverage of vibration fundamentals, deterministic and random signals, convolution integrals, wave motion, continuous systems, and outdoor sound propagation.

  9. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
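
    A two-point estimate in the spirit described above evaluates the model at each uncertain input's mean plus or minus one standard deviation and weights the 2^n corner evaluations equally (for symmetric, uncorrelated inputs). A sketch against Monte Carlo for an invented head function and moments, not the paper's aquifer model:

        import itertools
        import numpy as np

        def head(S, K):
            # Toy nonlinear water-table response to storage coefficient S
            # and hydraulic conductivity K (illustrative only).
            return 100.0 / (1.0 + K) + 5.0 * np.sqrt(S)

        mu = np.array([0.2, 10.0])       # means of S, K
        sigma = np.array([0.02, 1.0])    # small coefficients of variation

        # Two-point estimate: 2^n equally weighted corner points.
        pts = [head(*(mu + np.array(s) * sigma))
               for s in itertools.product((-1, 1), repeat=2)]
        m1, m2 = np.mean(pts), np.mean(np.square(pts))
        print("two-point:  ", m1, np.sqrt(m2 - m1**2))

        # Monte Carlo reference.
        rng = np.random.default_rng(0)
        mc = head(rng.normal(mu[0], sigma[0], 100_000),
                  rng.normal(mu[1], sigma[1], 100_000))
        print("monte carlo:", mc.mean(), mc.std())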

  11. Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcini, Alessandro; Barland, Stephane

    2006-12-15

    The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic values of the parameters, are investigated below the solitary laser threshold, where low-frequency fluctuations (LFFs) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor alpha, reproduces the LFFs only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSELs can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high alpha values.

  12. Stochastic modelling of slow-progressing tumors: Analysis and applications to the cell interplay and control of low grade gliomas

    NASA Astrophysics Data System (ADS)

    Rodríguez, Clara Rojas; Fernández Calvo, Gabriel; Ramis-Conde, Ignacio; Belmonte-Beitia, Juan

    2017-08-01

    Tumor-normal cell interplay defines the course of a neoplastic malignancy, the outcome of this dual relation being the ultimate prevailing of one of the cell populations and the death or retreat of the other. In this paper we study the mathematical principles that underlie one important scenario: that of slow-progressing cancers. To this end, we develop, within a stochastic framework, a mathematical model of tumor-normal cell interaction in such a clinically relevant situation and derive a number of deterministic approximations from the stochastic model. We consider in detail the existence and uniqueness of the solutions of the deterministic model and study their stability. We then apply our model to the specific case of low-grade gliomas, where we introduce an optimal control problem for different objective functionals under the administration of chemotherapy. We derive the conditions under which singular and bang-bang controls exist and calculate the optimal controls and states.

  13. FACTORS INFLUENCING TOTAL DIETARY EXPOSURE OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify critical input parameters to assess dietary intake of young children. The model was used as a framework for understanding important factors in data collection and analysis. Factors incorporated included transfer efficiencies of pest...

  14. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  15. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software package. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight versus reliability traces out an inverted-S-shaped graph. The center of the graph corresponds to a 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, corresponding to a reliability of unity. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool, the fast probabilistic integrator (the FPI module of the NESSUS software) was the probabilistic calculator, and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life airframe component made of metallic and composite materials.

  16. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, their failure mechanisms must be highlighted. The degradation was examined by means of tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms were modelled on 3D periodic cells by finite element calculations. The breakage of an element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loading. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell: a Weibull distribution of the stress at failure was assigned to the integration points using Monte Carlo simulation. This stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry of the deterministic modelling. In particular, the stochastic simulations exhibited variation in the stress and strain at failure as well as in the failure modes of the yarn.
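
    The stochastic step described above amounts to drawing a Weibull-distributed stress at failure for every integration point instead of one shared deterministic value. A minimal sketch of one Monte Carlo realisation (the Weibull modulus, scale stress and stand-in stress field are assumptions, not the paper's values):

        import numpy as np

        rng = np.random.default_rng(5)

        n_points = 10_000        # integration points in the yarn mesh
        m_weibull = 8.0          # assumed Weibull modulus (shape)
        sigma0 = 1200.0          # assumed scale stress, MPa

        # Pointwise strengths for one realisation.
        sigma_f = sigma0 * rng.weibull(m_weibull, size=n_points)

        # An element would be eroded where local stress exceeds its strength.
        local_stress = rng.normal(900.0, 80.0, size=n_points)  # stand-in FE output
        print("broken fraction:", np.mean(local_stress > sigma_f))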

  17. The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation

    ERIC Educational Resources Information Center

    Brown, Scott D.; Heathcote, Andrew

    2008-01-01

    We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…

  18. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Treesearch

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  19. Application of Game Theory to Improve the Defense of the Smart Grid

    DTIC Science & Technology

    2012-03-01

    Report fragments: trust models for computer systems and networks; developers assumed deterministic communications mediums rather than the "best effort" models provided in most modern networks; the use of models or computational models to validate SPS designs; and concerns about the performance of load rejection schemes.

  20. Scaling in the Donangelo-Sneppen model for evolution of money

    NASA Astrophysics Data System (ADS)

    Stauffer, Dietrich; Radomski, Jan P.

    2001-03-01

    The evolution of money from unsuccessful barter attempts, as modeled by Donangelo and Sneppen, is modified by a deterministic instead of a probabilistic selection of the most desired product as money. We check in particular the characteristic times of the model as a function of system size.

  2. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  3. A General Cognitive Diagnosis Model for Expert-Defined Polytomous Attributes

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy

    2013-01-01

    Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, "and" gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts…

  4. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    EPA Science Inventory

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the f...

  5. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos

    2013-09-05

    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of the biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation; (ii) development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor; (iii) development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations; and (iv) parallelization of the models of (i)-(iii) to take advantage of petaflop computing and enable real-world applications of complex multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.

  6. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  7. A spatial stochastic programming model for timber and core area management under risk of stand-replacing fire

    Treesearch

    Dung Tuan Nguyen

    2012-01-01

    Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...

  8. Deterministic analysis of processes at corroding metal surfaces and the study of electrochemical noise in these systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latanision, R.M.

    1990-12-01

    Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.

  9. Fumonisin B1 Toxicity in Grower-Finisher Pigs: A Comparative Analysis of Genetically Engineered Bt Corn and non-Bt Corn by Using Quantitative Dietary Exposure Assessment Modeling

    PubMed Central

    Delgado, James E.; Wolt, Jeffrey D.

    2011-01-01

    In this study, we investigate long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for dietary comparative analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn and distiller's dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Results from both the deterministic and semi-stochastic models demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for a diet of Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored, but tended to slightly under-predict, the mean results of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios in which the grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298
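
    The semi-stochastic part of such a QEA typically samples the contaminant concentration from a distribution while keeping the diet composition fixed; the deterministic estimate is the same calculation at point values. The sketch below is illustrative only (the lognormal choice and all numbers are invented, not the study's inputs); note how the skewed distribution pushes the stochastic mean above the deterministic point value, mirroring the under-prediction noted in the abstract.

        import numpy as np

        rng = np.random.default_rng(11)

        conc_median = 1.5        # mg FB1 per kg corn (invented)
        corn_fraction = 0.65     # corn share of the diet (invented)

        # Deterministic point estimate.
        print("deterministic:", conc_median * corn_fraction, "mg FB1/kg diet")

        # Semi-stochastic: lognormal concentration, fixed diet composition.
        conc = rng.lognormal(np.log(conc_median), 0.6, size=100_000)
        exposure = conc * corn_fraction
        print("mean:", exposure.mean(), "P95:", np.quantile(exposure, 0.95))
        # Compare with the 1.0 mg FB1/kg-diet chronic level of concern.
        print("fraction above LOC:", np.mean(exposure > 1.0))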

  10. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.

  11. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  12. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic based on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
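
    For reference, the three benchmark functions have standard closed forms; a minimal Python sketch (the function names and vectorized layout are ours):

        import numpy as np

        def rastrigin(x):
            # f(x) = 10 n + sum_i (x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at x = 0
            x = np.asarray(x, dtype=float)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def griewank(x):
            # f(x) = 1 + sum_i x_i^2 / 4000 - prod_i cos(x_i / sqrt(i)); minimum 0 at x = 0
            x = np.asarray(x, dtype=float)
            i = np.arange(1, x.size + 1)
            return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

        def rosenbrock(x):
            # f(x) = sum_i [100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2]; minimum 0 at x = (1, ..., 1)
            x = np.asarray(x, dtype=float)
            return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)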

  13. Probabilistic safety assessment of the design of tall buildings under extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The Monte Carlo, LHS and RSM probabilistic methods are compared with deterministic results. The example of a probabilistic safety analysis of a tall building demonstrates the effectiveness of probabilistic structural design using finite element methods.

  14. Probabilistic safety assessment of the design of tall buildings under extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The Monte Carlo, LHS and RSM probabilistic methods are compared with deterministic results. The example of a probabilistic safety analysis of a tall building demonstrates the effectiveness of probabilistic structural design using finite element methods.

  15. How synapses can enhance sensibility of a neural network

    NASA Astrophysics Data System (ADS)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules. The learning rules are related to neuroplasticity, which describes changes to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensitivity of the neural network, an effect that can become even more pronounced if the learning rules are applied to the chemical synapses.

  16. Understanding and Predicting Geomagnetic Dipole Reversals Via Low Dimensional Models and Data Assimilation

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Fournier, A.; Hulot, G.

    2014-12-01

    We investigate the geophysical relevance of low-dimensional models of the geomagnetic dipole field by comparing these models to the signed relative paleomagnetic intensity over the past 2 Myr. The comparison is done via Bayesian statistics, implemented numerically by Monte Carlo (MC) sampling. We consider several MC schemes, as well as two data sets, to show the robustness of our approach with respect to its numerical implementation and to the details of how the data are collected. The data we consider are the Sint-2000 [1] and PADM2M [2] data sets. We consider three stochastic differential equation (SDE) models and one deterministic model. Experiments with synthetic data show that it is feasible for a low-dimensional model to learn the geophysical state from data of only the dipole field, and they reveal the limitations of the low-dimensional models. For example, the G12 model [3] (a deterministic model that generates dipole reversals by crisis-induced intermittency) can match only one of the two important time scales we find in the data. The MC sampling approach also allows us to use the models to make predictions of the dipole field. We assess how reliably dipole reversals can be predicted with our approach by hind-casting five reversals documented over the past 2 Myr. We find that, despite its limitations, G12 can be used to predict reversals reliably, though only with short lead times and over short horizons. The scalar SDE models, on the other hand, are not useful for predicting dipole reversals. References: [1] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802-805. [2] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089. [3] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85, 137.

  17. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE PAGES

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; ...

    2018-04-12

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  18. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation, as well as seismic-to-well tie procedures, is at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well ties. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. For both wavelet estimation methods, our algorithms introduce a semi-automatic approach to determine the optimum estimation parameters and to estimate the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield qualitative conclusions, drawn from a detailed comparison of deterministic and statistical wavelet estimation, that are likely useful for seismic inversion and the interpretation of field data. The feasibility of the approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.

  19. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome.

    PubMed

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; Veach, Allison; Ialonardi, Florencia; Iribarne, Oscar; Silliman, Brian

    2018-06-01

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  20. Nitrogen enrichment suppresses other environmental drivers and homogenizes salt marsh leaf microbiome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleo, Pedro; Alberti, Juan; Jumpponen, Ari

    Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly, whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.

  1. A deterministic model of nettle caterpillar life cycle

    NASA Astrophysics Data System (ADS)

    Syukriyah, Y.; Nuraini, N.; Handayani, D.

    2018-03-01

    Palm oil is a leading product of the plantation sector in Indonesia. Palm oil productivity has the potential to increase every year; however, actual productivity remains below that potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. Pest outbreaks can be triggered by various factors, so measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests of oil palm, and nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the nettle caterpillar and its mitigation by means of a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations illustrate how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
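
    The record does not reproduce the published equations. Purely as an illustration of this model class, a stage-structured caterpillar model with a predator attacking larvae might be integrated as below; every stage, rate constant, and the mass-action predation term a*L*P are our assumptions, not the authors' model:

        import numpy as np

        def step(state, dt, b=5.0, gE=0.3, mE=0.1, gL=0.15, mL=0.2, mA=0.1,
                 a=0.01, c=0.2, mP=0.1):
            # Eggs E mature into larvae L, larvae into adults A; a predator P
            # attacks larvae with a linear (mass-action) predation term a*L*P.
            E, L, A, P = state
            dE = b * A - (gE + mE) * E
            dL = gE * E - (gL + mL) * L - a * L * P
            dA = gL * L - mA * A
            dP = c * a * L * P - mP * P
            return np.maximum(state + dt * np.array([dE, dL, dA, dP]), 0.0)

        state = np.array([100.0, 50.0, 10.0, 5.0])   # initial E, L, A, P
        for _ in range(60000):                       # forward Euler to T = 300
            state = step(state, dt=0.005)
        print("state (E, L, A, P) after integrating to T = 300:", np.round(state, 1))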

  2. Pest persistence and eradication conditions in a deterministic model for sterile insect release.

    PubMed

    Gordillo, Luis F

    2015-01-01

    The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and numerically explore deterministic population models that include sterile release together with scarce mating encounters, in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and for the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might induce sudden suppression of the pest population.
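
    The equations are not given in the abstract; as a hedged sketch of the model class, a single-species Knipling-style equation combining sterile matings, a mating-encounter (Allee) factor, and logistic density dependence can be integrated as follows (the functional forms and all parameter values are our illustrative choices):

        import numpy as np

        def simulate(N0=500.0, S=0.0, b=0.6, m=0.3, theta=200.0, K=10000.0,
                     dt=0.01, T=300.0):
            # dN/dt = b N (N/(N+S)) (N/(N+theta)) (1 - N/K) - m N
            # N/(N+S):     fraction of matings involving a fertile male
            # N/(N+theta): chance of finding a mate when encounters are scarce
            # (1 - N/K):   logistic density-dependent regulation
            N = N0
            for _ in range(int(T / dt)):
                dN = b * N * (N / (N + S)) * (N / (N + theta)) * (1 - N / K) - m * N
                N = max(N + dt * dN, 0.0)
            return N

        # Increasing the sterile release S pushes the persistence equilibrium
        # below the extinction threshold and the pest collapses.
        for S in (0.0, 800.0, 4000.0):
            print(f"S = {S:6.0f}  ->  N(T) = {simulate(S=S):8.2f}")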

  3. Enhanced Estimation of Terrestrial Loadings for TMDLs: Normalization Approach

    USDA-ARS?s Scientific Manuscript database

    TMDL implementation plans to remediate pathogen-impaired streams are usually based on deterministic terrestrial fate and transport (DTFT) models. A novel protocol is proposed that can effectively, efficiently, and explicitly capture the predictive uncertainty of DTFT models used to establish terres...

  4. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
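
    A hedged numerical sketch of the discretized Karhunen-Loève step is given below; the exponential covariance kernel, correlation length, and the bending-rigidity parametrization are our assumptions for concreteness, not the paper's setup:

        import numpy as np

        n, L, sigma, ell = 200, 1.0, 0.10, 0.3
        x = np.linspace(0.0, L, n)
        # Assumed exponential covariance of the random field along the beam.
        C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

        # Discrete Karhunen-Loeve decomposition: eigenpairs of the covariance.
        lam, phi = np.linalg.eigh(C)
        order = np.argsort(lam)[::-1]
        lam, phi = lam[order], phi[:, order]

        # Truncated expansion of, e.g., bending rigidity about its mean EI0:
        #   EI(x) = EI0 * (1 + sum_j sqrt(lam_j) * xi_j * phi_j(x)),
        # where the xi_j play the role of the discretized parameters that a
        # sensitivity-based updating procedure would estimate from measured FRFs.
        m = 5
        xi = np.random.default_rng(0).standard_normal(m)
        EI = 1.0 * (1.0 + phi[:, :m] @ (np.sqrt(lam[:m]) * xi))
        print(f"variance captured by {m} KL terms: {lam[:m].sum() / lam.sum():.2%}")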

  5. Correlated disorder in the Kuramoto model: Effects on phase coherence, finite-size scaling, and dynamic fluctuations.

    PubMed

    Hong, Hyunsuk; O'Keeffe, Kevin P; Strogatz, Steven H

    2016-10-01

    We consider a mean-field model of coupled phase oscillators with quenched disorder in the natural frequencies and coupling strengths. A fraction p of oscillators are positively coupled, attracting all others, while the remaining fraction 1-p are negatively coupled, repelling all others. The frequencies and couplings are deterministically chosen in a manner which correlates them, thereby correlating the two types of disorder in the model. We first explore the effect of this correlation on the system's phase coherence. We find that there is a critical width γc in the frequency distribution below which the system spontaneously synchronizes. Moreover, this γc is independent of p. Hence, our model and the traditional Kuramoto model (recovered when p = 1) have the same critical width γc. We next explore the critical behavior of the system by examining the finite-size scaling and the dynamic fluctuation of the traditional order parameter. We find that the model belongs to the same universality class as the Kuramoto model with deterministically (not randomly) chosen natural frequencies for the case of p < 1.
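
    A minimal mean-field simulation of this model class is sketched below; the paper's deterministic, correlated construction of frequencies and couplings is only approximated here by Lorentzian quantiles and a fixed split of coupling signs, and all parameter values are ours:

        import numpy as np

        rng = np.random.default_rng(0)
        N, p, K, gamma = 2000, 0.7, 2.0, 0.3
        # Deterministically spaced natural frequencies: quantiles of a
        # Lorentzian of width gamma (a stand-in for the paper's construction).
        omega = gamma * np.tan(np.pi * (np.arange(1, N + 1) / (N + 1) - 0.5))
        # Fraction p of oscillators couple positively, the rest negatively.
        Ki = np.where(np.arange(N) < p * N, K, -K)

        theta = rng.uniform(0, 2 * np.pi, N)
        dt = 0.01
        for _ in range(20000):
            z = np.exp(1j * theta).mean()            # complex order parameter r e^{i psi}
            r, psi = np.abs(z), np.angle(z)
            theta += dt * (omega + Ki * r * np.sin(psi - theta))
        print(f"phase coherence r = {np.abs(np.exp(1j * theta).mean()):.3f}")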

  6. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  7. A nonlinear dynamic age-structured model of e-commerce in spain: Stability analysis of the equilibrium by delay and stochastic perturbations

    NASA Astrophysics Data System (ADS)

    Burgos, C.; Cortés, J.-C.; Shaikhet, L.; Villanueva, R.-J.

    2018-11-01

    First, we propose a deterministic age-structured epidemiological model to study the diffusion of e-commerce in Spain. We then determine the parameters (death, birth and growth rates) of the underlying demographic model, as well as the parameters (rates of transmission of e-commerce use) of the proposed epidemiological model, that best fit real data retrieved from the Spanish National Statistical Institute. Motivated by two facts, namely that the dynamics of adopting a new technology such as e-commerce are mainly driven by feedback from interaction with our peers (family, friends, colleagues, mass media, etc.) and hence involve a certain delay, and that sampled real data carry inherent uncertainty while the phenomenon under analysis is socially complex, we introduce aftereffect and stochastic perturbations into the initial deterministic model. This leads to a delayed stochastic model for e-commerce. We then investigate sufficient conditions that guarantee the stability in probability of the equilibrium point of the dynamic e-commerce delayed stochastic model. Our theoretical findings are numerically illustrated using real data.

  8. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems such as Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN, and we were able to extend the system to more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions, which is a significant advantage for modeling perioperative processes.

  9. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  10. Forecasting transitions in systems with high-dimensional stochastic complex dynamics: a linear stability analysis of the tangled nature model.

    PubMed

    Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft

    2014-12-31

    We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.

  11. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
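
    To make the algorithmic comparison concrete, a minimal Gillespie direct-method simulation of a birth-death expression model (a toy example of ours, not one of the paper's systems) shows the stochastic trajectory fluctuating around the deterministic fixed point k/gamma:

        import numpy as np

        def gillespie_birth_death(k=10.0, gamma=0.1, x0=0, t_end=100.0, seed=1):
            """Direct-method SSA for  0 --k--> X,  X --gamma*x--> 0.
            The stationary law is Poisson(k/gamma), so for this single
            steady-state system the deterministic fixed point k/gamma
            tracks the mean, as the expansion arguments predict."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, x0
            times, states = [t], [x]
            while t < t_end:
                a = np.array([k, gamma * x])       # reaction propensities
                a0 = a.sum()
                t += rng.exponential(1.0 / a0)     # waiting time to next event
                x += 1 if rng.random() < a[0] / a0 else -1
                times.append(t)
                states.append(x)
            return np.array(times), np.array(states)

        t, x = gillespie_birth_death()
        print(f"late-time mean = {x[len(x)//2:].mean():.1f} "
              f"(deterministic fixed point = {10.0/0.1:.0f})")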

  12. Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Yezhe

    This work concerns simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. The primordial, metal-poor interstellar medium (ISM) is of particular interest, as the so-called "Population II" stars could have formed in this environment during the Epoch of Reionization in the early universe. We build a gas-phase model using the RE scheme to describe ionization-powered interstellar chemistry and demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain-surface reactions play an important role in astrochemistry, but the lack of an accurate yet efficient simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations to treat the surface chemistry and deterministic rate equations to treat the gas-phase chemistry, to simulate astrochemical systems such as the ISM in the Milky Way, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations, and discuss how moment equations improve the theoretical modeling and how the abundances of the various species change with metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observations than the pure RE method.

  13. Determining Methane Budgets with Eddy Covariance Data ascertained in a heterogeneous Footprint

    NASA Astrophysics Data System (ADS)

    Rößger, N.; Wille, C.; Kutzbach, L.

    2016-12-01

    Amplified climate change in the Arctic may cause methane emissions to increase considerably due to more suitable production conditions. With a focus on methane, we studied the carbon turnover on the modern flood plain of Samoylov Island, situated in the Lena River Delta (72°22'N, 126°28'E), using eddy covariance data. In contrast to the ice-wedge polygonal tundra on the delta's river terraces, the flood plains have to date received little attention. During the warm seasons of 2014 and 2015, the mean methane flux amounted to 0.012 μmol m^-2 s^-1. This average results from a large variability in methane fluxes, which is attributed to the complexity of the footprint, where methane sources are unevenly distributed. We explain this variability with three modelling approaches: a deterministic model using exponential relationships for flux drivers, a multilinear model created through stepwise regression, and a neural network relying on machine learning techniques. A substantial boost in model performance was achieved by inputting footprint information in the form of the contributions of vegetation classes, indicating that the vegetation serves as an integrated proxy for potential methane flux drivers. The neural network performed best; however, a robust validation revealed that the deterministic model best captured ecosystem-intrinsic features. Furthermore, the deterministic model allowed a downscaling of the net flux by allocating fractions to three vegetation classes, which in turn form the basis for upscaling methane fluxes in order to obtain the budget for the entire flood plain. Arctic methane emissions occur in a spatio-temporally complex pattern, and employing fine-scale information is crucial to understanding the flux dynamics.

  14. Exploiting Fast-Variables to Understand Population Dynamics and Evolution

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2018-07-01

    We describe a continuous-time modelling framework for biological population dynamics that accounts for demographic noise. In the spirit of the methodology used by statistical physicists, transitions between the states of the system are caused by individual events while the dynamics are described in terms of the time-evolution of a probability density function. In general, the application of the diffusion approximation still leaves a description that is quite complex. However, in many biological applications one or more of the processes happen slowly relative to the system's other processes, and the dynamics can be approximated as occurring within a slow low-dimensional subspace. We review these time-scale separation arguments and analyse the simpler stochastic dynamics that result in a number of cases. We stress that it is important to retain the demographic noise derived in this way, and emphasise this point by showing that it can alter the direction of selection compared to the prediction made from an analysis of the corresponding deterministic model.

  15. Exploiting Fast-Variables to Understand Population Dynamics and Evolution

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2017-11-01

    We describe a continuous-time modelling framework for biological population dynamics that accounts for demographic noise. In the spirit of the methodology used by statistical physicists, transitions between the states of the system are caused by individual events while the dynamics are described in terms of the time-evolution of a probability density function. In general, the application of the diffusion approximation still leaves a description that is quite complex. However, in many biological applications one or more of the processes happen slowly relative to the system's other processes, and the dynamics can be approximated as occurring within a slow low-dimensional subspace. We review these time-scale separation arguments and analyse the simpler stochastic dynamics that result in a number of cases. We stress that it is important to retain the demographic noise derived in this way, and emphasise this point by showing that it can alter the direction of selection compared to the prediction made from an analysis of the corresponding deterministic model.

  16. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods for computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs) corresponding to the large-volume limit of a reaction network to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
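
    A self-contained toy version of the parametric-bootstrap idea, for the pure-death network A -> 0 with rate constant k, is sketched below; the reaction, the grid-search least squares fit, and the percentile interval are our simplifications, resimulating the exact jump process rather than the paper's diffusion or linear-noise schemes:

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate_death(x0, k, t_grid):
            # Exact simulation: each of the x0 molecules dies independently
            # at rate k, so counts at time t are the survivors among Exp(k) lifetimes.
            lifetimes = rng.exponential(1.0 / k, size=x0)
            return np.array([(lifetimes > t).sum() for t in t_grid])

        def fit_k(x, t_grid, x0):
            # Nonlinear LSE of k against the ODE solution x0 * exp(-k t),
            # done by a fine grid search to stay dependency-free.
            ks = np.linspace(0.01, 2.0, 2000)
            sse = ((x[None, :] - x0 * np.exp(-np.outer(ks, t_grid)))**2).sum(axis=1)
            return ks[sse.argmin()]

        x0, k_true = 500, 0.5
        t_grid = np.linspace(0.1, 8.0, 40)
        data = simulate_death(x0, k_true, t_grid)
        k_hat = fit_k(data, t_grid, x0)

        # Parametric bootstrap: resimulate the fitted model, refit, and read
        # off percentile confidence bounds for the rate constant.
        boot = np.array([fit_k(simulate_death(x0, k_hat, t_grid), t_grid, x0)
                         for _ in range(200)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"k_hat = {k_hat:.3f}, 95% CI approx. ({lo:.3f}, {hi:.3f})")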

  17. High dynamic range infrared radiometry and imaging

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Karunasiri, R. P. G.; Bandara, K. M. S. V.

    1988-01-01

    The use of cryogenically cooled, extrinsic silicon infrared detectors in an unconventional mode of operation, which offers an unusually large dynamic range, is described. The system performs intensity-to-frequency conversion at the focal plane via simple circuits with very low power consumption. The incident IR intensity controls the repetition rate of short-duration output pulses over a pulse-rate dynamic range of about 10^6. Theory indicates the possibility of a monotonic and approximately linear response over the full dynamic range. A comparison between theoretical and experimental results shows that the model provides a reasonably good description of the experimental data. Measurements of survivability with a very intense IR source were made on these devices and found to be very encouraging. Evidence continues to indicate that some variations in interpulse time intervals are deterministic rather than probabilistic.

  18. ASP-based method for the enumeration of attractors in non-deterministic synchronous and asynchronous multi-valued networks.

    PubMed

    Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan

    2017-01-01

    This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general formalism well suited to the study of complex interactions between different components (genes, proteins, etc.). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors of the model in order to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics; in this paper, we present the two most widespread non-deterministic semantics, the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network in order to find the attractors, without needing to construct the whole state-transition graph. We perform extensive computational experiments, which show good performance and fit the theoretical results expected from the literature. The originality of our approach lies in the exhaustive enumeration, thanks to ASP, of all possible (sets of) states verifying the properties of an attractor. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all attractors up to a given size (20 states) in a large model (100 components); this size is limited only by memory and computation time.
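
    Outside ASP, the graph-theoretic content of the definition is easy to state: attractors are the minimal trap domains, i.e. the terminal strongly connected parts of the state-transition graph. A brute-force Python illustration on a toy asynchronous Boolean network (the network and the reachability-based enumeration are invented for this example and do not scale like the authors' ASP method):

        from itertools import product

        # Toy asynchronous Boolean network on (a, b, c):
        #   a* = not b,  b* = not a,  c* = a and c
        rules = [lambda a, b, c: int(not b),
                 lambda a, b, c: int(not a),
                 lambda a, b, c: int(a and c)]

        def successors(state):
            # Asynchronous semantics: update one component at a time.
            succ = set()
            for i, f in enumerate(rules):
                new = list(state)
                new[i] = f(*state)
                succ.add(tuple(new))
            return succ

        def reachable(state):
            seen, stack = {state}, [state]
            while stack:
                for t in successors(stack.pop()):
                    if t not in seen:
                        seen.add(t)
                        stack.append(t)
            return seen

        # A state lies in an attractor iff every state it reaches can reach it
        # back; attractors are the equivalence classes of such states.
        states = list(product((0, 1), repeat=3))
        reach = {s: reachable(s) for s in states}
        in_attr = [s for s in states if all(s in reach[t] for t in reach[s])]
        attractors = {frozenset(reach[s]) for s in in_attr}
        for A in sorted(attractors, key=min):
            print(sorted(A))   # three steady states for this toy network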

  19. Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

    PubMed Central

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-01-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277

  20. Fast mix table construction for material discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, S. R.

    2013-07-01

    An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.

  1. Fast Mix Table Construction for Material Discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Seth R

    2013-01-01

    An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in $O(\text{number of voxels} \times \log \text{number of mixtures})$ time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
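
    In essence, the sparse mix table reduces to canonicalizing each voxel's volume-fraction vector and deduplicating it through a map keyed on the canonical form; with an ordered map each lookup costs O(log M), matching the paper's bound, while the hash map below is amortized O(1) per voxel. This sketch uses our own data layout and is not the ADVANTG implementation:

        from collections import OrderedDict

        def build_mix_table(voxel_fractions, tol=1e-6):
            """Deduplicate per-voxel material volume fractions into a mix table.
            voxel_fractions: list of dicts {material_id: volume_fraction}.
            Returns (mix_table, voxel_to_mixture)."""
            table = OrderedDict()      # canonical composition -> mixture index
            assignment = []
            for fracs in voxel_fractions:
                # Canonical key: sort by material id and round to a tolerance
                # so near-identical compositions reference the same mixture.
                key = tuple(sorted((m, round(v / tol) * tol)
                                   for m, v in fracs.items()))
                assignment.append(table.setdefault(key, len(table)))
            mix_table = [dict(k) for k in table]
            return mix_table, assignment

        voxels = [{1: 1.0}, {1: 0.25, 2: 0.75}, {2: 0.75, 1: 0.25}, {1: 1.0}]
        table, idx = build_mix_table(voxels)
        print(idx)      # [0, 1, 1, 0]: four voxels share two mixtures
        print(table)    # [{1: 1.0}, {1: 0.25, 2: 0.75}]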

  2. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.

    PubMed

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several qualitative biological observations. The model incorporates stem cell death (apoptosis) after a certain number of cell divisions, demonstrates that a single HSC can potentially populate the entire bone marrow, and shows that a sufficient number of differentiated cells (RBCs, WBCs, etc.) are produced. We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the proposed bone marrow model and include the parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website.

  3. Stochastic population dynamic models as probability networks

    Treesearch

    Borsuk, M.E.; Lee, D.C.

    2009-01-01

    The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...

  4. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
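
    For context, the deterministic-input noisy-AND response function that the random-effect extension builds on is compact; a sketch in standard notation (alpha: attribute mastery vector, q: item attribute vector, s: slip, g: guess):

        import numpy as np

        def dina_prob(alpha, q, slip, guess):
            """DINA item response probability.
            eta = 1 iff the examinee masters every attribute the item requires:
                eta = prod_k alpha_k ** q_k
            P(X = 1 | alpha) = (1 - slip) ** eta * guess ** (1 - eta)
            """
            alpha, q = np.asarray(alpha), np.asarray(q)
            eta = int(np.all(alpha[q == 1] == 1))
            return (1 - slip) ** eta * guess ** (1 - eta)

        # Examinee masters attributes 1 and 3 but not 2; the item requires 1 and 2,
        # so a correct answer can only come from guessing.
        print(dina_prob([1, 0, 1], [1, 1, 0], slip=0.1, guess=0.2))  # -> 0.2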

  5. Jitter and phase noise of ADPLL due to PSN with deterministic frequency

    NASA Astrophysics Data System (ADS)

    Deng, Xiaoying; Yang, Jun; Wu, Jianhui

    2011-09-01

    In this article, the jitter and phase noise of an all-digital phase-locked loop (ADPLL) due to power supply noise (PSN) with deterministic frequency are analysed. The analysis leads to the conclusion that jitter and phase noise depend heavily on the noise frequency. Compared with jitter, phase noise is much less affected by deterministic PSN. Our method is used to study a CMOS ADPLL designed and simulated in a SMIC 0.13 µm standard CMOS process. A comparison between the results obtained by our method and those obtained by simulation and measurement confirms the accuracy of the predicted model. When the digitally controlled oscillator was corrupted by PSN of 100 mV peak-to-peak, the measured jitters were 33.9 ps at fG = 192 MHz and 148.5 ps at fG = 40 MHz; however, the measured phase noise was exactly the same except for two impulses appearing at 192 MHz and 40 MHz, respectively.

  6. JCCRER Project 2.3 -- Deterministic effects of occupational exposure to radiation. Phase 1: Feasibility study; Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okladnikova, N.; Pesternikova, V.; Sumina, M.

    1998-12-01

    Phase 1 of Project 2.3, a short-term collaborative Feasibility Study, was funded for 12 months starting on 1 February 1996. The overall aim of the study was to determine the practical feasibility of using the dosimetric and clinical data on the MAYAK worker population to study the deterministic effects of exposure to external gamma radiation and to internal alpha radiation from inhaled plutonium. Phase 1 efforts were limited to the period of greatest worker exposure (1948-1954) and focused on collaboratively: assessing the comprehensiveness, availability, quality, and suitability of the Russian clinical and dosimetric data for the study of deterministic effects; creating an electronic database containing complete clinical and dosimetric data on a small, representative sample of MAYAK workers; developing computer software for the testing of a currently used health risk model of hematopoietic effects; and familiarizing the US team with the Russian diagnostic criteria and techniques used in the identification of Chronic Radiation Sickness.

  7. How the growth rate of host cells affects cancer risk in a deterministic way

    NASA Astrophysics Data System (ADS)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are encountered significantly more often in some tissues than in others. In this paper, using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells, or "bad luck". Strategies to limit cancer-induced mortality could therefore well be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.

  8. Deterministic multi-step rotation of magnetic single-domain state in Nickel nanodisks using multiferroic magnetoelastic coupling

    NASA Astrophysics Data System (ADS)

    Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.

    2017-10-01

    We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating a SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy efficient magnetic manipulation techniques at the nanoscale.

  9. Assimilation of lightning data by nudging tropospheric water vapor and applications to numerical forecasts of convective events

    NASA Astrophysics Data System (ADS)

    Dixon, Kenneth

    A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
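
    The nudging step itself is simple to state: wherever a flash is observed within 10 km of a grid column, relax the water vapor mixing ratio toward its saturation value. A schematic gridded version follows (the flat column grid, full-weight relaxation, and function signature are ours, not the WRF implementation):

        import numpy as np

        def nudge_qv(qv, qv_sat, grid_xy, flashes_xy, radius_km=10.0, weight=1.0):
            """Relax water vapor mixing ratio qv toward saturation qv_sat in
            every grid column within radius_km of a lightning observation.
            qv, qv_sat: (ncol,) mixing ratios [kg/kg];
            grid_xy, flashes_xy: (n, 2) coordinates in km;
            weight=1 replaces qv by qv_sat outright."""
            if len(flashes_xy) == 0:
                return qv
            d = np.linalg.norm(grid_xy[:, None, :]
                               - np.asarray(flashes_xy)[None, :, :], axis=2)
            near = (d.min(axis=1) <= radius_km) & (qv < qv_sat)
            out = qv.copy()
            out[near] += weight * (qv_sat[near] - qv[near])
            return out

        grid = np.array([[0.0, 0.0], [5.0, 0.0], [30.0, 0.0]])
        qv = np.array([0.008, 0.009, 0.008])
        qv_sat = np.array([0.012, 0.012, 0.012])
        print(nudge_qv(qv, qv_sat, grid, [[2.0, 1.0]]))  # first two columns saturate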

  10. Deterministic Creation of Macroscopic Cat States

    PubMed Central

    Lombardo, Daniel; Twamley, Jason

    2015-01-01

    Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157

  11. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we discuss progress towards a fully three-dimensional and computationally efficient deterministic code, for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes, using standard Finite Element Method (FEM) geometry common to engineering design practice and thereby enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in the ISS FEM geometry requires 14 milliseconds and severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method's optimized design.

  12. Lévy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Boyer, D.; Miramontes, O.; Larralde, H.

    2009-10-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k^(-β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
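
    A minimal sketch of such a deterministic walker, under stated assumptions: target weights follow a power law, and the cost function (distance divided by weight) is an illustrative choice, not necessarily the rule used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 1000, 2.0
    sites = rng.uniform(0, 100, size=(n, 2))        # random positions on the plane
    weights = rng.pareto(beta - 1, size=n) + 1.0    # p(k) ~ k^(-beta) for k >= 1

    pos, visited, path = sites[0], {0}, [0]
    for _ in range(200):
        unvisited = [i for i in range(n) if i not in visited]
        dist = np.linalg.norm(sites[unvisited] - pos, axis=1)
        cost = dist / weights[unvisited]            # cost (distance) vs. gain (weight)
        nxt = unvisited[int(np.argmin(cost))]       # deterministic greedy choice
        visited.add(nxt); path.append(nxt); pos = sites[nxt]
    ```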

  13. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.

  14. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchini, G.; Burgio, N.; Carta, M.

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  15. Hybrid stochastic simulations of intracellular reaction-diffusion systems.

    PubMed

    Kalantzis, Georgios

    2009-06-01

    With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
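
    A hedged sketch of the partitioning idea described above: high-propensity reaction channels are advanced deterministically, while low-propensity channels fire via Gillespie's exact algorithm. The threshold, the one-shot classification, and the simple Euler update are illustrative simplifications, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def hybrid_step(x, stoich, propensity, dt, threshold=100.0):
        """One hybrid step; stoich has shape (n_reactions, n_species).
        Classification could be refreshed each step in an adaptive scheme."""
        a = propensity(x)
        fast, slow = a > threshold, a <= threshold
        # Deterministic (rate-equation) update for fast channels.
        x = x + dt * stoich[fast].T @ a[fast]
        # Exact stochastic firing for slow channels within [0, dt).
        t = 0.0
        while True:
            a_slow = propensity(x)[slow]
            a0 = a_slow.sum()
            if a0 == 0:
                break
            tau = rng.exponential(1.0 / a0)
            if t + tau > dt:
                break
            t += tau
            j = rng.choice(np.flatnonzero(slow), p=a_slow / a0)
            x = x + stoich[j]
        return x
    ```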

  16. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. The use of deterministic and probabilistic forecasts, with sometimes widely divergent predicted future streamflow values, makes it especially complicated for decision makers to sift out the relevant information. In this study, multiple sources of streamflow forecast information are aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, non-homogeneous Gaussian regression (NGR, also known as the ensemble model output statistics, EMOS, technique), and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared, and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
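
    For illustration, a small sketch of a linear pool of two predictive distributions scored with a sample-based CRPS estimate (CRPS(F, y) = E|X − y| − ½E|X − X′|). The member distributions and weights are toy assumptions; BMA, NGR/EMOS, and BLP studied in the paper are more elaborate combinations.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def crps_from_samples(samples, obs):
        """Kernel CRPS estimate from an i.i.d. sample of the predictive law."""
        s = np.sort(samples)
        term1 = np.mean(np.abs(s - obs))
        term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
        return term1 - term2

    # Linear pool: draw from each member with probability equal to its weight.
    weights = np.array([0.6, 0.4])
    members = [lambda n: rng.normal(10.0, 2.0, n),   # e.g. a raw ensemble forecast
               lambda n: rng.normal(12.0, 1.0, n)]   # e.g. a post-processed forecast
    idx = rng.choice(2, size=2000, p=weights)
    pooled = np.concatenate([members[k](np.sum(idx == k)) for k in range(2)])
    print(crps_from_samples(pooled, obs=11.0))
    ```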

  17. Modelling the stochastic nature of the available coefficient of friction at footwear-floor interfaces.

    PubMed

    Gragg, Jared; Klose, Ellison; Yang, James

    2017-07-01

    The available coefficient of friction (ACOF) is a measure of the friction available between two surfaces, which for human gait is the footwear-floor interface. It is often compared to the required coefficient of friction (RCOF) to determine the likelihood of a slip during gait. Both the ACOF and RCOF are stochastic by nature, meaning that neither should be represented by a deterministic value such as the sample mean. Previous research has determined that the RCOF can be modelled well by either the normal or lognormal distribution, but previous research aimed at determining an appropriate distribution for the ACOF was inconclusive. This study focuses on modelling the stochastic nature of the ACOF by fitting eight continuous probability distributions to ACOF data for six scenarios. In addition, the data were used to study the effect that a simple housekeeping action such as sweeping could have on the ACOF. Practitioner Summary: Previous research aimed at determining an appropriate distribution for the ACOF was inconclusive. The study addresses this issue as well as examining the effect that an act such as sweeping has on the ACOF.
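
    As an illustrative sketch of distribution fitting of this kind (the candidate list and synthetic data below are assumptions, not the study's eight distributions or measurements):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    acof = rng.lognormal(mean=-0.7, sigma=0.25, size=200)  # placeholder ACOF data

    # Fit each candidate by maximum likelihood and rank by a goodness-of-fit test.
    candidates = [stats.norm, stats.lognorm, stats.gamma, stats.weibull_min]
    for dist in candidates:
        params = dist.fit(acof)
        ks = stats.kstest(acof, dist.cdf, args=params)
        print(dist.name, ks.pvalue)
    ```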

  18. Diagnostic Models as Partially Ordered Sets

    ERIC Educational Resources Information Center

    Tatsuoka, Curtis

    2009-01-01

    In this commentary, the author addresses what is referred to as the deterministic input, noisy "and" gate (DINA) model. The author mentions concerns with how this model has been formulated and presented. In particular, the author points out that there is a lack of recognition of the confounding of profiles that generally arises and then discusses…

  19. PLANNING MODELS FOR URBAN WATER SUPPLY EXPANSION. VOLUME 1. PLANNING FOR THE EXPANSION OF REGIONAL WATER SUPPLY SYSTEMS

    EPA Science Inventory

    A three-volume report was developed relative to the modelling of investment strategies for regional water supply planning. Volume 1 is the study of capacity expansion over time. Models to aid decision making for the deterministic case are presented, and a planning process under u...

  20. Equivalency of the DINA Model and a Constrained General Diagnostic Model. Research Report. ETS RR-11-37

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2011-01-01

    This report shows that the deterministic-input noisy-AND (DINA) model is a special case of more general compensatory diagnostic models by means of a reparameterization of the skill space and the design (Q-) matrix of item by skills associations. This reparameterization produces a compensatory model that is equivalent to the (conjunctive) DINA…

  1. An error-dependent model of instrument-scanning behavior in commercial airline pilots. Ph.D. Thesis - May 1983

    NASA Technical Reports Server (NTRS)

    Jones, D. H.

    1985-01-01

    A new flexible model of pilot instrument-scanning behavior is presented, which assumes that the pilot selects among a set of deterministic scanning patterns based on the pilot's perception of error in the state of the aircraft and on the pilot's knowledge of the interactive nature of the aircraft's systems. Statistical analyses revealed that a three-stage Markov process composed of the pilot's three predicted lookpoints (LP), occurring 1/30, 2/30, and 3/30 of a second prior to each LP, accurately modelled the scanning behavior of 14 commercial airline pilots while flying steep turn maneuvers in a Boeing 737 flight simulator. The modelled scanning data for each pilot were not statistically different from the observed scanning data in comparisons of mean dwell time, entropy, and entropy rate. These findings represent the first direct evidence that pilots use deterministic scanning patterns during instrument flight. The results are interpreted as direct support for the error-dependent model, and suggestions are made for further research that could allow identification of the specific scanning patterns suggested by the model.

  2. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    USGS Publications Warehouse

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

    Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).

  3. Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling

    NASA Astrophysics Data System (ADS)

    Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.

    2005-12-01

    Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.

  4. Use of Remote Sensing and Dust Modelling to Evaluate Ecosystem Phenology and Pollen Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.; Sprigg, William A.; Watts, Carol; Shaw, Patrick

    2007-01-01

    The impact of pollen release and downwind concentrations can be evaluated using remote sensing. Previous NASA studies of the airborne dust prediction system PHAiRS (Public Health Applications in Remote Sensing) have shown that pollen forecasts and simulations are possible. By adapting the deterministic dust model used in PHAiRS (run in-line with the National Weather Service operational forecast model) to simulate the downwind dispersal of pollen, initializing the model with pollen source regions from MODIS, and assessing the results, a rapid prototype concept can be produced. We will present the results of our effort to develop a deterministic model for predicting and simulating pollen emission and downwind concentration, to study the details of phenology and meteorology and their dependencies, and to assess the promise of a credible real-time forecast system to support public health and agricultural science and services. This effort builds on previous PHAiRS research, the use of NASA data, the dust model, and the potential of PHAiRS to improve public health and environmental services long into the future.

  5. Dynamics of a stochastic multi-strain SIS epidemic model driven by Lévy noise

    NASA Astrophysics Data System (ADS)

    Chen, Can; Kang, Yanmei

    2017-01-01

    A stochastic multi-strain SIS epidemic model is formulated by introducing Lévy noise into the disease transmission rate of each strain. First, we prove that the stochastic model admits a unique global positive solution, and, by the comparison theorem, we show that the solution remains within a positively invariant set almost surely. Next we investigate stochastic stability of the disease-free equilibrium, including stability in probability and pth moment asymptotic stability. Then sufficient conditions for persistence in the mean of the disease are established. Finally, based on an Euler scheme for Lévy-driven stochastic differential equations, numerical simulations for a stochastic two-strain model are carried out to verify the theoretical results. Moreover, numerical comparison results of the stochastic two-strain model and the deterministic version are also given. Lévy noise can cause the two strains to become extinct almost surely, even though there is a dominant strain that persists in the deterministic model. It can be concluded that the introduction of Lévy noise reduces the disease extinction threshold, which indicates that Lévy noise may suppress the disease outbreak.
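
    To illustrate the kind of Euler scheme mentioned above, a minimal one-strain SIS sketch with Gaussian diffusion plus compound-Poisson (Lévy-type) jumps in the transmission term; all parameters and the jump law are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    beta, gamma, sigma = 0.5, 0.2, 0.1      # transmission, recovery, diffusion
    lam, jump_scale = 1.0, 0.05             # jump intensity and jump size scale
    dt, T, I = 0.01, 100.0, 0.1             # time step, horizon, infected fraction

    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))                               # Brownian part
        dJ = rng.normal(0.0, jump_scale) if rng.random() < lam * dt else 0.0  # jump part
        drift = beta * I * (1 - I) - gamma * I
        I += drift * dt + sigma * I * (1 - I) * dW + I * (1 - I) * dJ
        I = min(max(I, 0.0), 1.0)           # keep the fraction in [0, 1]
    ```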

  6. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  7. Health safety nets can break cycles of poverty and disease: a stochastic ecological model.

    PubMed

    Plucinski, Mateusz M; Ngonghala, Calistus N; Bonds, Matthew H

    2011-12-07

    The persistence of extreme poverty is increasingly attributed to dynamic interactions between biophysical processes and economics, though there remains a dearth of integrated theoretical frameworks that can inform policy. Here, we present a stochastic model of disease-driven poverty traps. Whereas deterministic models can result in poverty traps that can only be broken by substantial external changes to the initial conditions, in the stochastic model there is always some probability that a population will leave or enter a poverty trap. We show that a 'safety net', defined as an externally enforced minimum level of health or economic conditions, can guarantee ultimate escape from a poverty trap, even if the safety net is set within the basin of attraction of the poverty trap, and even if the safety net is only in the form of a public health measure. Whereas the deterministic model implies that small improvements in initial conditions near the poverty-trap equilibrium are futile, the stochastic model suggests that the impact of changes in the location of the safety net on the rate of development may be strongest near the poverty-trap equilibrium.

  8. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias

    With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
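
    A minimal sketch of the two-layer structure described above: several first-layer learners produce forecasts, and a second-layer blender combines them. The models, features, and synthetic data are placeholders, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 6))                 # e.g. lagged wind-speed features
    y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)

    # Hold out part of the data to train the second-layer blender.
    X_tr, y_tr, X_bl, y_bl = X[:300], y[:300], X[300:], y[300:]
    layer1 = [RandomForestRegressor(random_state=0), SVR(), Ridge()]
    for m in layer1:
        m.fit(X_tr, y_tr)

    # Second layer: blend the first-layer forecasts with a simple linear model.
    Z = np.column_stack([m.predict(X_bl) for m in layer1])
    blender = Ridge().fit(Z, y_bl)
    final_forecast = blender.predict(Z)
    ```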

  9. DISFRAC Version 2.0 Users Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Kristine B; Erickson, Marjorie A; Williams, Paul T

    2013-01-01

    DISFRAC is the implementation of a theoretical, multi-scale model for the prediction of fracture toughness in the ductile-to-brittle transition temperature (DBTT) region of ferritic steels. Empirically derived models of the DBTT region cannot legitimately be extrapolated beyond the range of existing fracture toughness data. DISFRAC requires only tensile properties and microstructural information as input, and thus allows for a wider range of application than empirical models that depend on toughness data. DISFRAC is also a framework for investigating the roles of various microstructural and macroscopic effects on fracture behavior, including carbide particle sizes, grain sizes, strain rates, and material condition. DISFRAC's novel approach is to assess the interaction effects of macroscopic conditions (geometry, loading conditions) with variable microstructural features on cleavage crack initiation and propagation. The model addresses all stages of the fracture process, from microcrack initiation within a carbide particle, to propagation of that crack through grains and across grain boundaries, and finally to catastrophic failure of the material. The DISFRAC procedure repeatedly performs a deterministic analysis of microcrack initiation and propagation within a macroscopic crack plastic zone to calculate a critical fracture toughness value for each microstructural geometry set. The current version of DISFRAC, version 2.0, is a research code for developing and testing models related to cleavage fracture and transition toughness. The various models and computations have evolved significantly over the course of development and are expected to continue to evolve as testing and data collection continue. This document serves as a guide to the usage and theoretical foundations of DISFRAC v2.0. Feedback is welcomed and encouraged.

  10. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index was derived from the combined safety factors, and the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
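
    For reference, the standard first-order form of a safety (reliability) index relating resistive stress R and applied stress S, assuming both normally distributed; this is the textbook expression, not necessarily the exact index derived in the report:

    ```latex
    % First-order safety index and the corresponding failure probability,
    % with mu and sigma the means and standard deviations of R and S.
    \[
      \beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}},
      \qquad P_f \approx \Phi(-\beta)
    \]
    ```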

  11. Monte Carlo Study Elucidates the Type 1/Type 2 Choice in Apoptotic Death Signaling in Healthy and Cancer Cells

    PubMed Central

    Raychaudhuri, Subhadip; Raychaudhuri, Somkanya C

    2013-01-01

    Apoptotic cell death is coordinated through two distinct (type 1 and type 2) intracellular signaling pathways. How the type 1/type 2 choice is made remains a central problem in the biology of apoptosis and has implications for apoptosis-related diseases and therapy. We study the problem of type 1/type 2 choice in silico utilizing a kinetic Monte Carlo model of cell death signaling. Our results show that the type 1/type 2 choice is linked to deterministic versus stochastic cell death activation, elucidating a unique regulatory control of the apoptotic pathways. Consistent with previous findings, our results indicate that the caspase 8 activation level is a key regulator of the choice between the deterministic type 1 and stochastic type 2 pathways, irrespective of cell type. Expression levels of downstream signaling molecules also regulate the type 1/type 2 choice. A simplified model of DISC clustering elucidates the mechanism of increased active caspase 8 generation and type 1 activation in cancer cells having increased sensitivity to death receptor activation. We demonstrate that rapid deterministic activation of the type 1 pathway can selectively target such cancer cells, especially if XIAP is also inhibited, while inherent cell-to-cell variability would allow normal cells to stay protected. PMID:24709706

  12. Development of TIF based figuring algorithm for deterministic pitch tool polishing

    NASA Astrophysics Data System (ADS)

    Yi, Hyun-Su; Kim, Sug-Whan; Yang, Ho-Soon; Lee, Yun-Woo

    2007-12-01

    Pitch is perhaps the oldest material used for optical polishing; it leaves superior surface texture and has been used widely on the optics shop floor. However, because of the unpredictable controllability of its removal characteristics, pitch tool polishing has rarely been analysed quantitatively, and many optics shops rely heavily on the optician's "feel" even today. In order to bring a degree of process controllability to pitch tool polishing, we added motorized tool motions to a conventional Draper-type polishing machine and modelled the tool path in the absolute machine coordinate system. We then produced a number of Tool Influence Functions (TIFs), both from an analytical model and from a series of experimental polishing runs using the pitch tool. The theoretical TIFs agreed well with the experimental TIFs, matching the profile shape to an accuracy of 79%. A surface figuring algorithm was then developed in-house utilizing both theoretical and experimental TIFs. We are currently undertaking a series of trial figuring experiments to prove the performance of the polishing algorithm, and the early results indicate that highly deterministic material removal control with the pitch tool can be achieved down to a certain level of form error. The machine renovation, TIF theory and experimental confirmation, and figuring simulation results are reported, together with implications for deterministic polishing.
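
    A hedged sketch of the core of TIF-based figuring: the predicted removal is the convolution of the tool influence function with the dwell-time map, and the dwell map is chosen to cancel the measured form error. The Gaussian TIF and the naive regularized frequency-domain deconvolution are illustrative assumptions, not the in-house algorithm.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_tif(size=21, sigma=3.0, peak_rate=1.0):
        """Toy rotationally symmetric TIF (removal rate per unit dwell time)."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        return peak_rate * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

    tif = gaussian_tif()
    error_map = np.random.default_rng(6).normal(0, 1, (128, 128))  # measured form error

    # Naive deconvolution for the dwell map, regularized to avoid division by ~0;
    # a real dwell solution would also enforce non-negativity.
    E = np.fft.fft2(error_map)
    H = np.fft.fft2(tif, s=error_map.shape)
    dwell = np.real(np.fft.ifft2(E * np.conj(H) / (np.abs(H)**2 + 1e-3)))
    predicted_removal = fftconvolve(dwell, tif, mode="same")
    ```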

  13. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program NESTEM to perturb loads and geometries and provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.

  14. Distribution and regulation of stochasticity and plasticity in Saccharomyces cerevisiae

    DOE PAGES

    Dar, R. D.; Karig, D. K.; Cooke, J. F.; ...

    2010-09-01

    Stochasticity is an inherent feature of complex systems with nanoscale structure. In such systems information is represented by small collections of elements (e.g. a few electrons on a quantum dot), and small variations in the populations of these elements may lead to big uncertainties in the information. Unfortunately, little is known about how to work within this inherently noisy environment to design robust functionality into complex nanoscale systems. Here, we look to the biological cell as an intriguing model system where evolution has mediated the trade-offs between fluctuations and function, and in particular we look at the relationships and trade-offs between stochastic and deterministic responses in the gene expression of budding yeast (Saccharomyces cerevisiae). We find gene regulatory arrangements that control the stochastic and deterministic components of expression, and show that genes that have evolved to respond to stimuli (stress) in the most strongly deterministic way exhibit the most noise in the absence of the stimuli. We show that this relationship is consistent with a bursty 2-state model of gene expression, and demonstrate that this regulatory motif generates the most uncertainty in gene expression when there is the greatest uncertainty in the optimal level of gene expression.
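
    A minimal Gillespie-style sketch of the bursty two-state (telegraph) model invoked above: a promoter toggles ON/OFF, transcribes only when ON, and mRNA decays. All rate constants are illustrative, not fitted values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    k_on, k_off, k_tx, k_deg = 0.1, 1.0, 50.0, 1.0   # toy rates
    state, mrna, t, T = 0, 0, 0.0, 1000.0
    trace = []

    while t < T:
        rates = np.array([k_on * (state == 0), k_off * (state == 1),
                          k_tx * (state == 1), k_deg * mrna], float)
        total = rates.sum()
        t += rng.exponential(1.0 / total)             # time to next event
        r = rng.choice(4, p=rates / total)            # which event fires
        if r == 0:   state = 1                        # promoter turns ON
        elif r == 1: state = 0                        # promoter turns OFF
        elif r == 2: mrna += 1                        # transcription burst event
        else:        mrna -= 1                        # mRNA degradation
        trace.append((t, mrna))
    ```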

  15. A family of small-world network models built by complete graph and iteration-function

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Yao, Bing

    2018-02-01

    Small-world networks are common in real-life complex systems. In the past few decades, researchers have presented many small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical feature for uncovering the functional organization of a network, so building and studying models with both community structure and small-world character is a worthwhile task. Hence, in this article, we build a family of sparse networks N(t) that differs from previous deterministic models, even though our models are established in the same way, by iterative generation. Because of the random connecting manner at each time step, no resulting member of N(t) has the absolutely self-similar feature widely shared by a large number of previous models. This shifts our focus from discussing one specific model to investigating a group of various models spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess similar characteristics: (a) sparsity, (b) an exponential-scale degree distribution P(k) ∼ α^(-k), and (c) the small-world property. Here we stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connecting procedure has no great effect on the APL. At the end of this article, the number of spanning trees on a representative member NB(t) of N(t) is studied in detail, as a topological parameter correlated with the reliability, synchronization capability, and diffusion properties of networks; an exact analytical solution for its spanning tree entropy is also obtained.

  16. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.

  17. A MODEL OF ESTUARY RESPONSE TO NITROGEN LOADING AND FRESHWATER RESIDENCE TIME

    EPA Science Inventory

    We have developed a deterministic model that relates average annual nitrogen loading rate and water residence time in an estuary to in-estuary nitrogen concentrations and loss rates (e.g. denitrification and incorporation in sediments), and to rates of nitrogen export across the ...

  18. SIMULATED CLIMATE CHANGE EFFECTS ON DISSOLVED OXYGEN CHARACTERISTICS IN ICE-COVERED LAKES. (R824801)

    EPA Science Inventory

    A deterministic, one-dimensional model is presented which simulates daily dissolved oxygen (DO) profiles and associated water temperatures, ice covers and snow covers for dimictic and polymictic lakes of the temperate zone. The lake parameters required as model input are surface ...

  19. Modeling heterogeneous responsiveness of intrinsic apoptosis pathway

    PubMed Central

    2013-01-01

    Background Apoptosis is a cell suicide mechanism that enables multicellular organisms to maintain homeostasis and to eliminate individual cells that threaten the organism's survival. Depending on the type of stimulus, apoptosis can be propagated by the extrinsic pathway or the intrinsic pathway. The comprehensive understanding of the molecular mechanism of apoptotic signaling allows for the development of mathematical models aiming to elucidate dynamical and systems properties of apoptotic signaling networks. There have been extensive efforts in modeling the deterministic apoptosis network, accounting for the average behavior of a population of cells. Cellular networks, however, are inherently stochastic, and significant cell-to-cell variability in the apoptosis response has been observed at the single-cell level. Results To address the inevitable randomness in the intrinsic apoptosis mechanism, we develop a theoretical and computational modeling framework of the intrinsic apoptosis pathway at the single-cell level, accounting for both deterministic and stochastic behavior. Our deterministic model, adapted from the well-accepted Fussenegger model, shows that an additional positive feedback between the executioner caspase and the initiator caspase plays a fundamental role in yielding the desired property of bistability. We then examine the impact of intrinsic fluctuations of biochemical reactions, viewed as intrinsic noise, and natural variation of protein concentrations, viewed as extrinsic noise, on the behavior of the intrinsic apoptosis network. Histograms of the steady-state output at varying input levels show that the intrinsic noise can elicit a wider region of bistability than that of the deterministic model. However, the system stochasticity due to intrinsic fluctuations, such as the noise of the steady-state response and the randomness of the response delay, shows that intrinsic noise is in general insufficient to produce significant cell-to-cell variations at physiologically relevant molecular numbers. Furthermore, the extrinsic noise, represented by random variations of two key apoptotic proteins, namely Cytochrome C and inhibitor of apoptosis proteins (IAP), is modeled separately or in combination with intrinsic noise. The resultant stochasticity in the timing of the intrinsic apoptosis response shows that the fluctuating protein variations can induce cell-to-cell stochastic variability at a quantitative level that agrees with experiments. Finally, simulations illustrate that the mean abundance of the fluctuating IAP protein is positively correlated with the degree of cellular stochasticity of the intrinsic apoptosis pathway. Conclusions Our theoretical and computational study shows that the pronounced non-genetic heterogeneity in intrinsic apoptosis responses among individual cells plausibly arises from fluctuations of extrinsic rather than intrinsic origin. In addition, it predicts that the IAP protein could serve as a potential therapeutic target for suppression of the cell-to-cell variation in intrinsic apoptosis responsiveness. PMID:23875784

  20. Engineering Risk Assessment of Space Thruster Challenge Problem

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Mattenberger, Christopher J.; Go, Susie

    2014-01-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center utilizes dynamic models with linked physics-of-failure analyses to produce quantitative risk assessments of space exploration missions. This paper applies the ERA approach to the baseline and extended versions of the PSAM Space Thruster Challenge Problem, which investigates mission risk for a deep space ion propulsion system with time-varying thruster requirements and operations schedules. The dynamic mission is modeled using a combination of discrete and continuous-time reliability elements within the commercially available GoldSim software. Loss-of-mission (LOM) probability results are generated via Monte Carlo sampling performed by the integrated model. Model convergence studies are presented to illustrate the sensitivity of integrated LOM results to the number of Monte Carlo trials. A deterministic risk model was also built for the three baseline and extended missions using the Ames Reliability Tool (ART), and results are compared to the simulation results to evaluate the relative importance of mission dynamics. The ART model did a reasonable job of matching the simulation models for the baseline case, while a hybrid approach using offline dynamic models was required for the extended missions. This study highlighted that state-of-the-art techniques can adequately adapt to a range of dynamic problems.

  1. Discrete stochastic analogs of Erlang epidemic models.

    PubMed

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

    Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE^m I^n R models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE^m I^n R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges, but sacrifices best-fit likelihood scores by at least 7%.
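
    A minimal sketch of a discrete-time deterministic SE^m I^n R update with geometric (constant-probability) stage advances per sub-compartment; the paper's time-to-go distributions generalize this simple choice, and all parameter values below are illustrative.

    ```python
    import numpy as np

    m, n = 3, 2
    beta, p_e, p_i = 0.4, 0.5, 0.4       # infection scale, stage-advance probabilities
    S, E, I, R, N = 990.0, np.zeros(m), np.zeros(n), 0.0, 1000.0
    E[0] = 10.0                          # initial exposed cohort

    for day in range(100):
        new_inf = S * (1 - np.exp(-beta * I.sum() / N))  # S -> E1 (deterministic analog)
        adv_E = p_e * E                                  # E_k -> E_{k+1} (last: -> I1)
        adv_I = p_i * I                                  # I_k -> I_{k+1} (last: -> R)
        S -= new_inf
        E += -adv_E + np.concatenate(([new_inf], adv_E[:-1]))
        I += -adv_I + np.concatenate(([adv_E[-1]], adv_I[:-1]))
        R += adv_I[-1]
    ```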

  2. An application of ensemble/multi model approach for wind power production forecast.

    NASA Astrophysics Data System (ADS)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the 3-day-ahead period are becoming ever more useful and important in reducing the problems of grid integration and energy price trading caused by increasing wind power penetration. It is therefore clear that the accuracy of these forecasts is one of the most important requirements for a successful application. The wind power forecast is based on a mesoscale meteorological model that provides the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. The corrected wind data are then used as input to the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a Neural Network (NN) is trained on the past data and then applied to the forecast task. Since anemometer measurements are not always available in a wind farm, a different approach has also been adopted, in which the NN is trained to link the forecasted meteorological data directly to the power data. The normalized RMSE forecast error seems to be lower in most cases with the second approach. We have examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared with the single-model approach. Moreover, the use of a deterministic global model (e.g. the ECMWF deterministic model) seems to reach levels of accuracy similar to those of the mesoscale models (LAMI and RAMS). Finally, we have focused on the possibility of using the ensemble model (ECMWF) to estimate the accuracy of the hourly, three-day-ahead power forecast. Contingency diagrams between the RMSE of the deterministic power forecast and the spread of the ensemble members' wind forecasts have been produced. From this first analysis it seems that the ensemble spread could be used as an indicator of the forecast's accuracy, at least for the first day-ahead period; in fact, low spreads often correspond to low forecast errors. For longer forecast horizons the correlation between RMSE and ensemble spread decreases, becoming too low to be used for this purpose.

  3. Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.

    PubMed

    Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E

    2014-01-01

    Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community structure with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
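
    An illustrative sketch of a "core-in-modules" blockmodel: three sparsely inter-connected modules plus a small, densely connected core. The block sizes and connection probabilities below are assumptions for illustration, not the fitted C. elegans values.

    ```python
    import networkx as nx

    sizes = [40, 40, 40, 10]                # three modules and a small core
    p = [[0.30, 0.02, 0.02, 0.20],
         [0.02, 0.30, 0.02, 0.20],
         [0.02, 0.02, 0.30, 0.20],
         [0.20, 0.20, 0.20, 0.60]]          # core connects densely to everything
    G = nx.stochastic_block_model(sizes, p, seed=0)  # one stochastic realisation
    print(G.number_of_nodes(), G.number_of_edges())
    ```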

  4. Paleoclimatic significance of δD and δ13C values in pinon pine needles from packrat middens spanning the last 40,000 years

    USGS Publications Warehouse

    Pendall, Elise; Betancourt, Julio L.; Leavitt, Steven W.

    1999-01-01

    We compared two approaches to interpreting δD of cellulose nitrate in piñon pine needles (Pinus edulis) preserved in packrat middens from central New Mexico, USA. One approach was based on linear regression between modern δD values and climate parameters, and the other on a deterministic isotope model, modified from Craig and Gordon's terminal lake evaporation model, that assumes steady-state conditions and constant isotope effects. One such effect, the net biochemical fractionation factor, was determined for a new species, piñon pine. Regressions showed that δD values in cellulose nitrate from annual cohorts of needles (1989–1996) were strongly correlated with growing-season (May–August) precipitation amount, and δ13C values in the same samples were correlated with June relative humidity. The deterministic model reconstructed δD values of the meteoric water used by plants after constraining relative humidity effects with δ13C values; growing-season temperatures were estimated via modern correlations with δD values of meteoric water. Variations of this modeling approach have been applied to tree-ring cellulose before, but not to macrofossil cellulose, and comparisons to empirical relationships have not been provided. Results from fossil piñon needles spanning the last ∼40,000 years showed no significant trend in δD values of cellulose nitrate, suggesting no change either in the amount of summer precipitation (based on the transfer function) or in the δD values of meteoric water and temperature (based on the deterministic model). However, there were significant differences in δ13C values, and therefore in relative humidity, between the Pleistocene and the Holocene.

  5. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in the fields of hydrogeology and environmental sciences have developed significantly during the last two decades, particularly for identifying groundwater flows. Although few authors address the solution of the forward problem, especially in the geophysics literature, various inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole, i.e. the field generated by a single vector, into a distribution of electrical monopoles. Two simple aquifer models were then generated with specific boundary conditions, and head potentials, velocity fields, and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) the histogram of its magnitude. Finally, it was concluded that inversion procedures improve when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than for confined ones. McMC has restricted applications and requires a lot of information (particularly on potential fields), while MRF gives a remarkable response, especially when dealing with confined aquifers.
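
    A hedged sketch of the deterministic inversion step described above: Tikhonov-regularized least squares recovering a source vector from surface potentials. The forward operator G, the identity stabilizer L, and the synthetic data are placeholders; the paper adapts the stabilizing operator to the finite element mesh.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    G = rng.normal(size=(50, 200))            # placeholder forward operator: source -> potentials
    m_true = np.zeros(200); m_true[90:110] = 1.0
    d = G @ m_true + rng.normal(0, 0.01, 50)  # noisy surface self-potential data

    # Tikhonov solution: minimize ||G m - d||^2 + alpha ||L m||^2.
    alpha = 1e-2
    L = np.eye(200)                           # identity stabilizer (could be a gradient operator)
    m_est = np.linalg.solve(G.T @ G + alpha * L.T @ L, G.T @ d)
    ```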

  6. Deterministic radiative coupling of two semiconductor quantum dots to the optical mode of a photonic crystal nanocavity.

    PubMed

    Calic, M; Jarlov, C; Gallo, P; Dwir, B; Rudra, A; Kapon, E

    2017-06-22

    A system of two site-controlled semiconductor quantum dots (QDs) is deterministically integrated with a photonic crystal membrane nano-cavity. The two QDs are identified via their reproducible emission spectral features, and their coupling to the fundamental cavity mode is established by emission co-polarization and cavity feeding features. A theoretical model accounting for phonon interaction and pure dephasing reproduces the observed results and permits extraction of the light-matter coupling constant for this system. The demonstrated approach offers a platform for scaling up the integration of QD systems and nano-photonic elements for integrated quantum photonics applications.

  7. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  8. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to sensitivity in revealing the DRTT and additional fiber tracts, and with regard to processing time. Two sets of regions of interest (ROIs) guided deterministic tractography of the DRTT and the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine hemispheres (12 directions) and in only two (64 directions). Probabilistic tracking was more sensitive in detecting additional fibers (e.g. the ansa lenticularis and medial forebrain bundle) than deterministic tracking, but took substantially longer. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent, but consistently more posterior, to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than for the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Soil erosion assessment - Mind the gap

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone

    2016-12-01

    Accurate assessment of erosion rates remains an elusive problem because soil loss is strongly nonunique with respect to the main drivers. In addressing the mechanistic causes of erosion responses, we discriminate between macroscale effects of external factors - long studied and referred to as "geomorphic external variability", and microscale effects, introduced as "geomorphic internal variability." The latter source of erosion variations represents the knowledge gap, an overlooked but vital element of geomorphic response, significantly impacting the low predictability skill of deterministic models at field-catchment scales. This is corroborated with experiments using a comprehensive physical model that dynamically updates the soil mass and particle composition. As complete knowledge of microscale conditions for arbitrary location and time is infeasible, we propose that new predictive frameworks of soil erosion should embed stochastic components in deterministic assessments of external and internal types of geomorphic variability.

  10. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.

  11. Influence of the hypercycle on the error threshold: a stochastic approach.

    PubMed

    García-Tejedor, A; Sanz-Nuño, J C; Olarrea, J; Javier de la Rubia, F; Montero, F

    1988-10-21

The role of fluctuations on the error threshold of the hypercycle has been studied by a stochastic approach on a very simplified model. For this model, the master equation was derived and its unique steady state calculated. This state implies the extinction of the system, but the actual time necessary to reach the steady state may be astronomically long, whereas for times of experimental interest the system could be near some quasi-stationary states. In order to explore this possibility, a Gillespie simulation of the stochastic process has been carried out. These quasi-stationary states correspond to the deterministic steady states of the system. The error threshold shifts towards higher values of the quality factor Q. Moreover, information about the fluctuations around the quasi-stationary states is obtained. The results are discussed in relation to the deterministic states.
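    For readers unfamiliar with the simulation technique, Gillespie's direct method draws the waiting time to the next reaction from an exponential distribution whose rate is the total propensity, then picks the firing reaction in proportion to its propensity. A minimal generic sketch in Python; the toy birth-death rates stand in for the hypercycle propensities:

      import numpy as np

      def gillespie(x0, stoich, propensities, t_max, seed=0):
          """Exact stochastic simulation (Gillespie's direct method).
          stoich: (n_reactions, n_species) state-update matrix."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.array(x0, dtype=float)
          times, states = [t], [x.copy()]
          while t < t_max:
              a = propensities(x)
              a0 = a.sum()
              if a0 == 0:                       # absorbing state, e.g. extinction
                  break
              t += rng.exponential(1.0 / a0)    # waiting time to next event
              r = rng.choice(len(a), p=a / a0)  # which reaction fires
              x += stoich[r]
              times.append(t); states.append(x.copy())
          return np.array(times), np.array(states)

      # toy example: X -> 2X at rate b*X, X -> 0 at rate d*X
      t, s = gillespie([50], np.array([[1], [-1]]),
                       lambda x: np.array([1.0 * x[0], 1.1 * x[0]]), 100.0)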

  12. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single-photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high-intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
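    The split-step structure can be sketched as an operator (Strang) splitting: a deterministic half-step, a stochastic step, then a second deterministic half-step. The drift and collision operators below are illustrative placeholders, not the paper's multiple rate equations or its Monte Carlo collision kernel:

      import numpy as np

      def split_step(n, dt, drift, collide, rng):
          """One Strang-split step: deterministic half-step, stochastic kick,
          deterministic half-step."""
          n = n + 0.5 * dt * drift(n)   # deterministic part (rate equations)
          n = collide(n, dt, rng)       # stochastic part (Monte Carlo collisions)
          n = n + 0.5 * dt * drift(n)
          return n

      rng = np.random.default_rng(0)
      drift = lambda n: 1.0 - 0.1 * n                      # toy gain/loss terms
      collide = lambda n, dt, rng: n + np.sqrt(dt) * rng.normal(0, 0.05, n.shape)
      n = np.ones(8)                                       # toy distribution
      for _ in range(1000):
          n = split_step(n, 0.01, drift, collide, rng)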

  13. Modelling uncertainties in the diffusion-advection equation for radon transport in soil using interval arithmetic.

    PubMed

    Chakraverty, S; Sahoo, B K; Rao, T D; Karunakar, P; Sapra, B K

    2018-02-01

Modelling radon transport in the Earth's crust is a useful tool to investigate changes in geophysical processes prior to an earthquake event. Radon transport is generally modelled through the deterministic advection-diffusion equation. However, in order to determine the magnitudes of the parameters governing these processes from experimental measurements, it is necessary to investigate the role of uncertainties in these parameters. The present paper investigates this aspect by incorporating interval uncertainties in the transport parameters, such as soil diffusivity and advection velocity, occurring in the radon transport equation as applied to the soil matrix. The predictions made with interval arithmetic are compared and discussed against the results of the classical deterministic model. The practical applicability of the model is demonstrated through a case study involving radon flux measurements at the soil surface with an accumulator deployed in steady-state mode. It is possible to detect the presence of very low levels of advection by applying uncertainty bounds on the variations in the observed concentration data in the accumulator. The results are further discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
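    The interval idea itself takes only a few lines: represent each uncertain parameter as a [lo, hi] pair and propagate guaranteed bounds through the model. The sketch below applies it to the textbook steady-state diffusive radon flux J = C_inf * sqrt(lambda * D); the paper's full diffusion-advection treatment is richer, and the parameter intervals here are invented:

      import math

      # minimal interval arithmetic: a quantity is a (lo, hi) pair
      def imul(a, b):
          p = [x * y for x in a for y in b]
          return (min(p), max(p))

      def isqrt(a):                       # valid for non-negative intervals
          return (math.sqrt(a[0]), math.sqrt(a[1]))

      lam = (2.1e-6, 2.1e-6)              # Rn-222 decay constant, 1/s
      D = (1e-7, 5e-6)                    # effective diffusivity interval, m^2/s
      C_inf = (5e3, 2e4)                  # deep-soil concentration, Bq/m^3

      J = imul(C_inf, isqrt(imul(lam, D)))   # flux bounds, Bq m^-2 s^-1
      print(J)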

  14. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more or less) ad hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with a simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
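    The cycle-topology reservoir in question (the Simple Cycle Reservoir) is fully specified by two scalars: a cycle weight r and an input weight v. A minimal Python sketch with a ridge-regression readout; note that the original construction fixes the input signs with a deterministic aperiodic sequence, whereas a seeded pseudo-random sign pattern is used here for brevity:

      import numpy as np

      def scr_states(u, n_res=100, r=0.9, v=0.5, seed=0):
          """Run a Simple Cycle Reservoir over a 1-D input sequence u."""
          signs = np.sign(np.random.default_rng(seed).random(n_res) - 0.5)
          w_in = v * signs
          x, X = np.zeros(n_res), np.empty((len(u), n_res))
          for t, ut in enumerate(u):
              x = np.tanh(r * np.roll(x, 1) + w_in * ut)  # cycle = unit shift
              X[t] = x
          return X

      # ridge readout trained to predict u(t+1) from the reservoir state
      u = np.sin(0.2 * np.arange(2000))
      X, y = scr_states(u[:-1]), u[1:]
      W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
      pred = X @ W_out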

  15. FAUST: Flexible Acquisition and Understanding System for Text

    DTIC Science & Technology

    2013-07-01

second version is still underway and it will continue in development as part of the DARPA DEFT program; it is written in Java and Clojure with MySQL and... SUTime, a Java library that recognizes and normalizes temporal expressions using deterministic patterns [101]. UIUC made another such framework... a Java-based, large-scale inference engine called Tuffy. It leverages the full power of a relational optimizer in an RDBMS to perform the grounding of MLN

  16. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.
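    The mixed deterministic-probabilistic logic can be caricatured in a few lines: a deterministic reservoir model propagates the eruption rate through each deflation event, while the unknown event duration is drawn from an empirical distribution, so the forecast is a distribution over erupted volume rather than a point value. Every functional form and rate below is an invented placeholder, not the paper's calibrated model:

      import numpy as np

      rng = np.random.default_rng(1)
      tau, q0, dq = 6.0, 2.0, 1.5          # relaxation time (h), baseline rate,
                                           # deflation deficit (all assumed)
      durations = rng.lognormal(np.log(10.0), 0.5, 5000)  # stand-in for the
                                                          # empirical event record

      def erupted_volume(T, horizon=24.0):
          """Integrate a deterministic rate history for one event of duration T:
          the rate drops during the event and recovers exponentially after it."""
          t = np.linspace(0.0, horizon, 500)
          drop = dq * (1 - np.exp(-np.minimum(t, T) / tau))
          rate = q0 - np.where(t < T, drop, drop * np.exp(-(t - T) / tau))
          return np.trapz(rate, t)

      volumes = np.array([erupted_volume(T) for T in durations])
      print(np.percentile(volumes, [5, 50, 95]))   # probabilistic forecast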

  17. Stochastic maps, continuous approximation, and stable distribution

    NASA Astrophysics Data System (ADS)

    Kessler, David A.; Burov, Stanislav

    2017-10-01

A continuous approximation framework for general nonlinear stochastic as well as deterministic discrete maps is developed. For a stochastic map with uncorrelated Gaussian noise, by successively applying the Itô lemma we obtain a Langevin-type equation. Specifically, we show how nonlinear maps give rise to a Langevin description that involves multiplicative noise. The multiplicative nature of the noise induces an additional effective force, not present in the absence of noise. We further exploit the continuum description and provide an explicit formula for the stable distribution of the stochastic map and conditions for its existence. Our results are in good agreement with numerical simulations of several maps.
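    The agreement claimed here is easy to probe numerically: iterate a nonlinear map with multiplicative Gaussian noise and inspect the empirical stationary histogram, which is the object the continuum (Langevin) formula predicts. The Ricker-type map and noise level below are arbitrary illustrative choices:

      import numpy as np

      rng = np.random.default_rng(0)
      sigma, n_steps = 0.1, 10**6
      f = lambda x: x * np.exp(1.0 - x)        # example nonlinearity (Ricker map)

      x, samples = 1.0, np.empty(n_steps)
      for i in range(n_steps):
          x = f(x) * (1.0 + sigma * rng.normal())   # multiplicative noise
          samples[i] = x

      # empirical stationary density, to compare with the analytic prediction
      hist, edges = np.histogram(samples[1000:], bins=200, density=True)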

  18. Exploring Reading Comprehension Skill Relationships through the G-DINA Model

    ERIC Educational Resources Information Center

    Chen, Huilin; Chen, Jinsong

    2016-01-01

    By analysing the test data of 1029 British secondary school students' performance on 20 Programme for International Student Assessment English reading items through the generalised deterministic input, noisy "and" gate (G-DINA) model, the study conducted two investigations on exploring the relationships among the five reading…

  19. The "Chaos" Pattern in Piaget's Theory of Cognitive Development.

    ERIC Educational Resources Information Center

    Lindsay, Jean S.

    Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…

  20. PROJECTED POPULATION-LEVEL EFFECTS OF THIOBENCARB EXPOSURE ON THE MYSID, AMERICAMYSIS BAHIA, AND EXTINCTION PROBABILITY IN A CONCENTRATION-DECAY EXPOSURE SYSTEM

    EPA Science Inventory



Population-level effects of the mysid, Americamysis bahia, exposed to varying thiobencarb concentrations were estimated using stage-structured matrix models. A deterministic density-independent matrix model estimated the decrease in population growth rate, λ, with increas...
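    In a stage-structured matrix model, the population growth rate λ is the dominant eigenvalue of the projection matrix, and a toxicant effect enters by scaling the vital rates. A generic sketch; the matrix entries and effect size are invented, not the mysid life table:

      import numpy as np

      # hypothetical 3-stage projection matrix (juvenile, subadult, adult)
      A = np.array([[0.0, 0.8, 4.0],    # stage-specific fecundities
                    [0.3, 0.0, 0.0],    # juvenile -> subadult survival
                    [0.0, 0.5, 0.9]])   # subadult -> adult, adult survival

      def growth_rate(A):
          """Dominant eigenvalue = asymptotic population growth rate lambda."""
          return max(np.linalg.eigvals(A).real)

      print(growth_rate(A))                    # baseline lambda
      A_exposed = A.copy()
      A_exposed[1:, :] *= 0.8                  # e.g. 20% survival reduction
      print(growth_rate(A_exposed))            # lambda under exposure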

  1. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 1 EXPOSURE MODELING

    EPA Science Inventory

    Exposure to contaminants originating in the domestic water supply is influenced by a number of factors, including human activities, water use behavior, and physical and chemical processes. The key role of human activities is very apparent in exposure related to volatile water-...

  2. Stable cycling in discrete-time genetic models.

    PubMed

    Hastings, A

    1981-11-01

    Examples of stable cycling are discussed for two-locus, two-allele, deterministic, discrete-time models with constant fitnesses. The cases that cycle were found by using numerical techniques to search for stable Hopf bifurcations. One consequence of the results is that apparent cases of directional selection may be due to stable cycling.

  3. Comparison of Four Probabilistic Models (CARES, Calendex, ConsEspo, SHEDS) to Estimate Aggregate Residential Exposures to Pesticides

    EPA Science Inventory

    Two deterministic models (US EPA’s Office of Pesticide Programs Residential Standard Operating Procedures (OPP Residential SOPs) and Draft Protocol for Measuring Children’s Non-Occupational Exposure to Pesticides by all Relevant Pathways (Draft Protocol)) and four probabilistic mo...

  4. An Equilibrium Flow Model of a University Campus.

    ERIC Educational Resources Information Center

    Oliver, Robert M.; Hopkins, David S. P.

    This paper develops a simple deterministic model that relates student admissions and enrollments to the final demand for educated students. It includes the effects of dropout rates and student-teacher ratios on student enrollments and faculty staffing levels. Certain technological requirements are assumed known and given. These, as well as the…

  5. Neural nets with terminal chaos for simulation of non-deterministic patterns

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1993-01-01

Models for simulating some aspects of neural intelligence are presented and discussed. Special attention is given to terminal neurodynamics as a particular architecture of terminal dynamics suitable for modeling information flows. Applications of terminal chaos to information fusion, as well as to planning and modeling coordination among neurons in biological systems, are discussed.

  6. Creating a stage-based deterministic PVA model - the western prairie fringed orchid [Exercise 12

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    Contemporary efforts to conserve populations and species often employ population viability analysis (PVA), a specific application of population modeling that estimates the effects of environmental and demographic processes on population growth rates. These models can also be used to estimate probabilities that a population will fall below a certain level. This...

  7. Refinement of the Arc-Habcap model to predict habitat effectiveness for elk

    Treesearch

    Lakhdar Benkobi; Mark A. Rumble; Gary C. Brundige; Joshua J. Millspaugh

    2004-01-01

    Wildlife habitat modeling is increasingly important for managers who need to assess the effects of land management activities. We evaluated the performance of a spatially explicit deterministic habitat model (Arc-Habcap) that predicts habitat effectiveness for elk. We used five years of radio-telemetry locations of elk from Custer State Park (CSP), South Dakota, to...

  8. Assessing Potential Climate Change Effects on Loblolly Pine Growth: A Probabilistic Regional Modeling Approach

    Treesearch

    Peter B. Woodbury; James E. Smith; David A. Weinstein; John A. Laurence

    1998-01-01

    Most models of the potential effects of climate change on forest growth have produced deterministic predictions. However, there are large uncertainties in data on regional forest condition, estimates of future climate, and quantitative relationships between environmental conditions and forest growth rate. We constructed a new model to analyze these uncertainties...

  9. Application of a Cognitive Diagnostic Model to a High-Stakes Reading Comprehension Test

    ERIC Educational Resources Information Center

    Ravand, Hamdollah

    2016-01-01

    General cognitive diagnostic models (CDM) such as the generalized deterministic input, noisy, "and" gate (G-DINA) model are flexible in that they allow for both compensatory and noncompensatory relationships among the subskills within the same test. Most of the previous CDM applications in the literature have been add-ons to simulation…

  10. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI, credible interval; PSA, polar surface area.
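    The three layers described (a shared deterministic retention equation, analyte-level parameters tied to physicochemical descriptors through QSRR-type relations, and intra-/interanalyte variability) are what make the model multilevel. A minimal sketch of such a hierarchy in PyMC, with an invented linear retention equation and simulated data standing in for the paper's chromatographic model:

      import numpy as np
      import pymc as pm

      # toy data: one retention time per analyte, with a descriptor (log P)
      rng = np.random.default_rng(0)
      n, logP = 20, np.linspace(0.0, 4.0, 20)
      tR_obs = 2.0 + 1.5 * logP + rng.normal(0, 0.3, n)

      with pm.Model():
          a = pm.Normal("a", 0, 5)                 # QSRR intercept hyperprior
          b = pm.Normal("b", 0, 5)                 # QSRR slope hyperprior
          s_analyte = pm.HalfNormal("s_analyte", 1)
          # analyte level: partially pooled retention parameters
          k = pm.Normal("k", mu=a + b * logP, sigma=s_analyte, shape=n)
          s_obs = pm.HalfNormal("s_obs", 1)        # intra-analyte variability
          pm.Normal("tR", mu=k, sigma=s_obs, observed=tR_obs)
          idata = pm.sample(1000, tune=1000, chains=2)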

  11. Nuclear and radiological terrorism: continuing education article.

    PubMed

    Anderson, Peter D; Bokor, Gyula

    2013-06-01

Terrorism involving radioactive materials includes improvised nuclear devices, radiation exposure devices, contamination of food sources, radiation dispersal devices, or an attack on a nuclear power plant or a facility/vehicle that houses radioactive materials. Ionizing radiation removes electrons from atoms, changing their valence and enabling chemical reactions that normally do not occur. Ionizing radiation includes alpha rays, beta rays, gamma rays, and neutron radiation. The effects of radiation consist of stochastic and deterministic effects. Cancer is the typical example of a stochastic effect of radiation. Deterministic effects include acute radiation syndrome (ARS). The hallmarks of ARS are damage to the skin, gastrointestinal tract, hematopoietic tissue, and in severe cases the neurovascular structures. Radiation produces psychological effects in addition to physiological effects. Radioisotopes relevant to terrorism include tritium, americium 241, cesium 137, cobalt 60, iodine 131, plutonium 238, californium 252, iridium 192, uranium 235, and strontium 90. Medications used for treating a radiation exposure include antiemetics, colony-stimulating factors, antibiotics, electrolytes, potassium iodide, and chelating agents.

  12. Hardware-efficient Bell state preparation using Quantum Zeno Dynamics in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Flurin, Emmanuel; Blok, Machiel; Hacohen-Gourgy, Shay; Martin, Leigh S.; Livingston, William P.; Dove, Allison; Siddiqi, Irfan

By performing a continuous joint measurement on a two-qubit system, we restrict the qubit evolution to a chosen subspace of the total Hilbert space. This extension of the quantum Zeno effect, called Quantum Zeno Dynamics, has already been explored in various physical systems such as superconducting cavities, single Rydberg atoms, atomic ensembles and Bose-Einstein condensates. In this experiment, two superconducting qubits are strongly dispersively coupled to a high-Q cavity (χ >> κ), allowing the doubly excited state | 11 〉 to be selectively monitored. The Quantum Zeno Dynamics in the complementary subspace enables us to coherently prepare a Bell state. As opposed to dissipation engineering schemes, we emphasize that our protocol is deterministic, does not rely on direct coupling between qubits, and functions using only single-qubit controls and cavity readout. Such Quantum Zeno Dynamics can be generalized to larger Hilbert spaces, enabling deterministic generation of many-body entangled states, and thus realizes a decoherence-free subspace allowing alternative noise-protection schemes.

  13. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression is often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level, then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
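    One of the stochastic hybrid systems mentioned, the piecewise deterministic Markov process, can be simulated exactly: a gene toggles on and off at exponentially distributed times, and the protein concentration follows a linear ODE (solvable in closed form) between switches. The rates below are arbitrary illustrative values:

      import numpy as np

      rng = np.random.default_rng(0)
      k_on, k_off = 0.2, 0.5     # gene switching rates (arbitrary)
      beta, gamma = 5.0, 1.0     # synthesis (gene on) and degradation rates

      t, T_end, state, x = 0.0, 200.0, 0, 0.0
      ts, xs = [t], [x]
      while t < T_end:
          dt = rng.exponential(1.0 / (k_on if state == 0 else k_off))
          # deterministic flow between switches: dx/dt = beta*state - gamma*x
          x_inf = beta * state / gamma
          x = x_inf + (x - x_inf) * np.exp(-gamma * dt)
          t += dt
          state = 1 - state      # the gene toggles at the switching time
          ts.append(t); xs.append(x)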

  14. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency relative to using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
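    Stripped of the Rao-Blackwellisation and the DPM delay model, the underlying particle filter reduces to propagate, weight, resample. The sketch below therefore uses a plain bootstrap filter with a Gaussian delay likelihood and invented noise scales; it illustrates the mechanism only and is not the authors' algorithm:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 1000
      particles = np.column_stack([rng.normal(0, 1, N),      # clock offset
                                   rng.normal(1, 0.01, N)])  # clock skew

      def pf_step(particles, t_send, t_recv, delay_sd=0.05):
          # propagate: offset drifts by the skew over one message interval,
          # plus small process noise (interval assumed to be one time unit)
          particles[:, 0] += (particles[:, 1] - 1.0) + rng.normal(0, 0.01, N)
          # weight: Gaussian delay likelihood stands in for the DPM model
          pred = t_send + particles[:, 0]
          w = np.exp(-0.5 * ((t_recv - pred) / delay_sd) ** 2)
          w /= w.sum()
          return particles[rng.choice(N, N, p=w)]            # resample

      # posterior mean offset and skew after one timestamp exchange
      particles = pf_step(particles, t_send=10.0, t_recv=10.12)
      print(particles.mean(axis=0))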

  15. Non-Deterministic Modelling of Food-Web Dynamics

    PubMed Central

    Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam

    2014-01-01

    A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245

  16. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage predictions and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
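    At its core, BMA fits a weighted mixture of member predictive densities over a training period; the weights then define both a deterministic prediction (the weighted mean) and a predictive distribution. A compact sketch of the standard EM update for Gaussian member densities with a common spread (array names are illustrative):

      import numpy as np

      def bma_em(F, y, n_iter=200):
          """EM estimation of BMA weights and a common Gaussian spread.
          F: (n_members, n_times) training forecasts; y: (n_times,) observations."""
          K, T = F.shape
          w = np.full(K, 1.0 / K)
          var = np.var(y - F.mean(axis=0))
          for _ in range(n_iter):
              # E-step: responsibility of member k for observation t
              dens = np.exp(-0.5 * (y - F) ** 2 / var) / np.sqrt(2 * np.pi * var)
              z = w[:, None] * dens
              z /= z.sum(axis=0, keepdims=True)
              # M-step: update weights and spread
              w = z.mean(axis=1)
              var = np.sum(z * (y - F) ** 2) / T
          return w, var

      # deterministic BMA prediction for new forecasts F_new: w @ F_new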

  17. A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.

    PubMed

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "track". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach to or detach from various regions along the track. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from statistical mechanics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory, we show that the model admits a unique steady state, and that every solution converges to this steady state. Furthermore, we show that the model entrains (or phase-locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop-off in mRNA translation.
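    For the unidirectional special case, the dynamic mean-field equations have a standard form: the occupancy of site i gains from hopping in from the left and from attachment, and loses from hopping out to the right and from detachment. A sketch integrating the lattice ODEs to steady state (entry/exit rates alpha, beta and the Langmuir rates are arbitrary; the paper treats the more general bidirectional case):

      import numpy as np
      from scipy.integrate import solve_ivp

      n, alpha, beta, k_a, k_d = 50, 0.3, 0.4, 0.02, 0.02

      def rhs(t, rho):
          left = np.concatenate(([1.0], rho[:-1]))     # rho_{i-1} (left[0] unused)
          right = np.concatenate((rho[1:], [0.0]))     # rho_{i+1} (right[-1] unused)
          J_in = left * (1 - rho)
          J_in[0] = alpha * (1 - rho[0])               # injection at the left end
          J_out = rho * (1 - right)
          J_out[-1] = beta * rho[-1]                   # extraction at the right end
          return J_in - J_out + k_a * (1 - rho) - k_d * rho

      sol = solve_ivp(rhs, (0.0, 2000.0), np.zeros(n), method="LSODA")
      rho_ss = sol.y[:, -1]                            # steady-state density profile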

  18. Modelling the protocol stack in NCS with deterministic and stochastic Petri nets

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication services and improved system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation of protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time-constraint, task-interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  19. Modeling a SI epidemic with stochastic transmission: hyperbolic incidence rate.

    PubMed

    Christen, Alejandra; Maulén-Yañez, M Angélica; González-Olivares, Eduardo; Curé, Michel

    2018-03-01

In this paper a stochastic susceptible-infectious (SI) epidemic model is analysed, based on the model proposed by Roberts and Saha (Appl Math Lett 12: 37-41, 1999) and considering a hyperbolic-type nonlinear incidence rate. Assuming the proportion of the infected population varies with time, our new model is described by an ordinary differential equation, analogous to the equation that describes the double Allee effect. The limit of the solution of this equation (the deterministic model) is found as time tends to infinity. Then, the asymptotic behaviour of a stochastic fluctuation due to environmental variation in the coefficient of disease transmission is studied. Thus a stochastic differential equation (SDE) is obtained and the existence of a unique solution is proved. Moreover, the SDE is analysed through the associated Fokker-Planck equation to obtain the invariant measure when the proportion of the infected population reaches steady state. An explicit expression for the invariant measure is found and we study some of its properties. The long-time behaviour of the deterministic and stochastic models is compared by simulations. To our knowledge, this incidence rate has not previously been used for this type of epidemic model.
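    The deterministic/stochastic comparison reported here is straightforward to reproduce with an Euler-Maruyama discretisation of the SDE for the infected proportion. The hyperbolic incidence g and the multiplicative noise term below are generic stand-ins, since the abstract does not reproduce the paper's exact rate and noise structure:

      import numpy as np

      rng = np.random.default_rng(0)
      beta, a, mu, sigma = 1.5, 2.0, 0.5, 0.3
      g = lambda i: beta * i * (1 - i) / (1 + a * i)   # assumed hyperbolic incidence

      dt, n_steps = 0.01, 20000
      i_det = i_sto = 0.05
      for _ in range(n_steps):
          i_det += (g(i_det) - mu * i_det) * dt                    # ODE path
          i_sto += ((g(i_sto) - mu * i_sto) * dt                   # SDE path
                    + sigma * i_sto * (1 - i_sto) * np.sqrt(dt) * rng.normal())
          i_sto = min(max(i_sto, 0.0), 1.0)        # keep the proportion in [0,1]
      print(i_det, i_sto)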

  20. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters, as well as the four-qubit Dicke states, can provide a quantum advantage for sending information in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ class that are deterministically dense codeable is higher than that of states from the W class.
