Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
We have completed a short program of theoretical research...on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of... Keywords: quantum probability, quantum stochastic differential equations.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... An essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
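The graph-based isometric mapping described above is closely related to the Isomap algorithm. As a minimal illustrative sketch (not the authors' implementation), the reduction step F: M → A can be mimicked with scikit-learn's Isomap on vectorized microstructure samples; the synthetic data and all parameters below are hypothetical stand-ins.

    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    # Hypothetical stand-in data: 200 "microstructures", each flattened to n = 1024 pixels.
    n_samples, n = 200, 1024
    latent = rng.uniform(0.0, 1.0, size=(n_samples, 3))   # unknown low-dimensional coordinates
    basis = rng.standard_normal((3, n))
    M = np.tanh(latent @ basis)                           # samples lying near a 3-d manifold in R^n

    # Map M to a low-dimensional set A, analogous to the isometric mapping F: M -> A.
    F = Isomap(n_neighbors=10, n_components=3)
    A = F.fit_transform(M)                                # reduced coordinates, one row per sample
    print(A.shape)                                        # (200, 3)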
Stochastic Human Exposure and Dose Simulation Model for Pesticides
SHEDS-Pesticides (Stochastic Human Exposure and Dose Simulation Model for Pesticides) is a physically-based stochastic model developed to quantify exposure and dose of humans to multimedia, multipathway pollutants. Probabilistic inputs are combined in physical/mechanistic algorit...
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
Machine learning from computer simulations with applications in rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Taheri, Mehdi; Ahmadian, Mehdi
2016-05-01
The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or with models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. Because the training data are acquired prior to the development of the stochastic model, conventional sampling-plan strategies such as Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan, cannot be used. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a given number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
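A greedy maximin selection is one common way to extract a space-filling subset from previously logged simulation data; the sketch below is an assumption, not necessarily the authors' exact criterion. It picks k points that keep the minimum distance to the already-chosen set as large as possible.

    import numpy as np

    def greedy_maximin_subset(X, k):
        """Greedily pick k rows of X that are maximally spread out (maximin distance)."""
        X = np.asarray(X, dtype=float)
        # Start from the point farthest from the data centroid.
        chosen = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
        d = np.linalg.norm(X - X[chosen[0]], axis=1)   # distance of every point to the chosen set
        for _ in range(k - 1):
            nxt = int(np.argmax(d))                    # farthest point from the current subset
            chosen.append(nxt)
            d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
        return np.array(chosen)

    rng = np.random.default_rng(1)
    logged = rng.standard_normal((5000, 2))            # simulation I/O data gathered beforehand
    train_idx = greedy_maximin_subset(logged, 100)     # indices of a space-filling training subset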
The ISI distribution of the stochastic Hodgkin-Huxley neuron.
Rowat, Peter F; Greenwood, Priscilla E
2014-01-01
The simulation of ion-channel noise has an important role in computational neuroscience. In recent years several approximate methods of carrying out this simulation have been published, based on stochastic differential equations, and all giving slightly different results. The obvious, and essential, question is: which method is the most accurate and which is most computationally efficient? Here we make a contribution to the answer. We compare interspike interval histograms from simulated data using four different approximate stochastic differential equation (SDE) models of the stochastic Hodgkin-Huxley neuron, as well as the exact Markov chain model simulated by the Gillespie algorithm. One of the recent SDE models is the same as the Kurtz approximation first published in 1978. All the models considered give similar ISI histograms over a wide range of deterministic and stochastic input. Three features of these histograms are an initial peak, followed by one or more bumps, and then an exponential tail. We explore how these features depend on deterministic input and on level of channel noise, and explain the results using the stochastic dynamics of the model. We conclude with a rough ranking of the four SDE models with respect to the similarity of their ISI histograms to the histogram of the exact Markov chain model.
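A hedged sketch of the comparison machinery: the full stochastic Hodgkin-Huxley and Gillespie simulations are too long to reproduce here, so an Euler-Maruyama integration of a simple noisy integrate-and-fire stand-in shows how spike times are detected and an interspike-interval histogram is built; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    dt, T = 1e-3, 200.0                  # time step and total duration (s)
    tau, v_th, v_reset = 0.02, 1.0, 0.0  # membrane time constant, threshold, reset
    mu, sigma = 1.1, 0.5                 # deterministic drive and noise stand-in

    v, t_last, isis = 0.0, 0.0, []
    for i in range(int(T / dt)):
        # Euler-Maruyama step of dv = ((mu - v)/tau) dt + sigma dW
        v += (mu - v) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:                    # spike detected: log the interspike interval
            t = (i + 1) * dt
            isis.append(t - t_last)
            t_last, v = t, v_reset

    hist, edges = np.histogram(isis, bins=50, density=True)
    # The shape of 'hist' (initial peak, possible bumps, exponential tail) is what
    # would be compared across SDE variants and an exact Markov chain model.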
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Suprathreshold stochastic resonance in neural processing tuned by correlation.
Durrant, Simon; Kang, Yanmei; Stocks, Nigel; Feng, Jianfeng
2011-07-01
Suprathreshold stochastic resonance (SSR) is examined in the context of integrate-and-fire neurons, with an emphasis on the role of correlation in the neuronal firing. We employed a model based on a network of spiking neurons which received synaptic inputs modeled by Poisson processes stimulated by a stepped input signal. The smoothed ensemble firing rate provided an output signal, and the mutual information between this signal and the input was calculated for networks with different noise levels and different numbers of neurons. It was found that an SSR effect was present in this context. We then examined a more biophysically plausible scenario where the noise was not controlled directly, but instead was tuned by the correlation between the inputs. The SSR effect remained present in this scenario with nonzero noise providing improved information transmission, and it was found that negative correlation between the inputs was optimal. Finally, an examination of SSR in the context of this model revealed its connection with more traditional stochastic resonance and showed a trade-off between suprathreshold and subthreshold components. We discuss these results in the context of existing empirical evidence concerning correlations in neuronal firing.
Stochastic Multiscale Analysis and Design of Engine Disks
2010-07-28
shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, IsoMap, etc.). Need for scalable exascale computing algorithms.
Pirozzi, Enrica
2018-04-01
High variability in the neuronal response to stimulations and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs involved in the model are not realistic. There exists some form of dependency between the inputs, and it can be interpreted as memory effects. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Because the model is difficult to treat analytically, we perform extensive simulations to investigate its output. A Gauss-Markov process is constructed for approximating its non-Markovian dynamics. The first passage time probability density of such a process can be numerically evaluated, and it can be used to fit the histograms of simulated firing times. Some estimates of the moments of firing times are also provided. The effect of the correlation time of the inputs on firing densities and on firing rates is shown. An exponential probability density of the first firing time is estimated for low values of input current and high values of correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that preserves the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of spike frequency is attained, albeit through different modalities. Comparisons and discussion of the obtained results are provided.
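As an illustrative sketch under assumed (hypothetical) parameters, the correlated-input setting can be simulated by driving a leaky integrate-and-fire equation with an Ornstein-Uhlenbeck process whose correlation time is tau_c, collecting first passage (firing) times to threshold:

    import numpy as np

    rng = np.random.default_rng(3)
    dt, n_trials, t_max = 1e-3, 500, 3.0
    tau_m, tau_c = 0.02, 0.1         # membrane time constant, input correlation time
    mu, sigma, v_th = 0.8, 0.6, 1.0  # subthreshold mean drive, noise strength, threshold

    firing_times = []
    for _ in range(n_trials):
        v, eta, t = 0.0, 0.0, 0.0
        while v < v_th and t < t_max:
            # Ornstein-Uhlenbeck input: correlated noise with correlation time tau_c
            eta += -eta / tau_c * dt + sigma * np.sqrt(2.0 * dt / tau_c) * rng.standard_normal()
            v += (mu + eta - v) / tau_m * dt
            t += dt
        if v >= v_th:
            firing_times.append(t)   # first passage time for this trial

    print("mean, std of firing times:", np.mean(firing_times), np.std(firing_times))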
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin
A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.
A stochastic model of input effectiveness during irregular gamma rhythms.
Dumont, Grégory; Northoff, Georg; Longtin, André
2016-02-01
Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
Modelling ecosystem service flows under uncertainty with stochastic SPAN
Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.
2012-01-01
Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model for the GTE is therefore developed to formulate the relation of the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
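A minimal bootstrap particle filter sketch conveys the PF scheme on a standard nonlinear benchmark (a univariate stand-in for the growth-model validation above); the GTE thermodynamic model itself is not reproduced, and all noise levels are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)

    def f(x, k):   # classic nonlinear growth dynamics (stand-in for the system model)
        return 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * k)

    def h(x):      # measurement function
        return x**2 / 20.0

    T, N = 50, 1000                       # time steps, particles
    q, r = np.sqrt(10.0), 1.0             # process / measurement noise standard deviations

    # Simulate a truth trajectory and noisy observations.
    x_true, ys = 0.1, []
    for k in range(T):
        x_true = f(x_true, k) + q * rng.standard_normal()
        ys.append(h(x_true) + r * rng.standard_normal())

    # Bootstrap particle filter: propagate, weight by likelihood, resample.
    particles = rng.normal(0.0, 2.0, N)
    estimates = []
    for k, y in enumerate(ys):
        particles = f(particles, k) + q * rng.standard_normal(N)   # stochastic input enters here
        w = np.exp(-0.5 * ((y - h(particles)) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))                    # posterior mean estimate
        particles = particles[rng.choice(N, size=N, p=w)]          # multinomial resampling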
Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers
NASA Astrophysics Data System (ADS)
Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J.
2014-12-01
Banks play an essential role in economic development, and banks need to be efficient; otherwise, they may create blockage in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks which were traded on the Kuala Lumpur Stock Exchange (KLSE) market during the period 2005-2009. All data were analyzed using the maximum likelihood method to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005-2009, suggesting minimal input waste of 3.8%. Among the banks, COMS (Cimb Group Holdings) is found to be highly efficient with a score of 0.9715, and BIMB (Bimb Holdings) is noted to have the lowest efficiency with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with a truncated normal distributional assumption is preferable to the Translog stochastic frontier model.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection.
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters, and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
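A compact sketch of the SWM recipe, with a toy linear-reservoir model standing in for a real deterministic watershed model; stochastic meteorology, a stochastic parameter, and AR(1) model errors (all with hypothetical values) combine to yield an ensemble of streamflow traces.

    import numpy as np

    rng = np.random.default_rng(5)
    n_days, n_ensemble = 365, 100

    def watershed_model(precip, k):
        """Toy deterministic watershed model: a single linear reservoir."""
        storage, flow = 0.0, np.empty_like(precip)
        for t, p in enumerate(precip):
            storage += p
            flow[t] = k * storage
            storage -= flow[t]
        return flow

    traces = np.empty((n_ensemble, n_days))
    for i in range(n_ensemble):
        precip = rng.gamma(shape=0.3, scale=10.0, size=n_days)   # stochastic meteorology
        k = rng.uniform(0.05, 0.15)                              # stochastic model parameter
        q = watershed_model(precip, k)
        err = np.zeros(n_days)                                   # AR(1) multiplicative model error
        for t in range(1, n_days):
            err[t] = 0.8 * err[t - 1] + 0.1 * rng.standard_normal()
        traces[i] = q * np.exp(err)

    # 'traces' is the ensemble of possible future streamflows used for risk analysis.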
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.
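The tool itself is driven by a graphical interface, but the underlying Monte Carlo idea can be sketched in a few lines: scatter the design inputs, push them through a response function (a stand-in for the finite element model), and rank inputs by their influence on the output. Everything below is a hypothetical illustration, not MSC.Robust Design's implementation.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 5000

    # Scatter in design input variables (hypothetical stand-ins for thickness, modulus, load).
    thickness = rng.normal(5.0, 0.25, n)       # mm
    modulus   = rng.normal(70e3, 3.5e3, n)     # MPa
    pressure  = rng.normal(0.12, 0.018, n)     # MPa

    # Stand-in response function for the finite element model (plate-bending-like scaling).
    stress = 0.75 * pressure * (100.0 / thickness) ** 2 * (70e3 / modulus) ** 0.1

    # Rank inputs by influence on the response via correlation coefficients.
    for name, x in [("thickness", thickness), ("modulus", modulus), ("pressure", pressure)]:
        r = np.corrcoef(x, stress)[0, 1]
        print(f"{name:10s} correlation with stress: {r:+.2f}")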
NASA Astrophysics Data System (ADS)
Liu, Jian; Ruan, Xiaoe
2017-07-01
This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delay occurred in input and output channels and modelled as 0-1 Bernoulli-type stochastic variable. In the two schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration, whilst for the delayed signal of the system output the one scheme substitutes it by the synchronous predetermined desired trajectory and the other takes it by the synchronous output at the previous operation, respectively. In virtue of the mathematical expectation, the tracking performance is analysed which exhibits that for both the linear time-invariant and nonlinear affine systems the two kinds of NILCs are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is full-column rank. Last, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009-2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
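At the heart of SELDM's stochastic population is a mass-balance dilution calculation over Monte Carlo draws. The sketch below shows that core step with hypothetical lognormal input statistics; the actual model selects statistics from national datasets and uses its own sampling scheme.

    import numpy as np

    rng = np.random.default_rng(7)
    n_storms = 10_000

    # Monte Carlo draws for one highway site and its upstream basin (hypothetical statistics).
    q_runoff   = rng.lognormal(mean=-1.0, sigma=0.8, size=n_storms)   # highway runoff flow
    c_runoff   = rng.lognormal(mean=3.0, sigma=0.7, size=n_storms)    # runoff EMC, ug/L
    q_upstream = rng.lognormal(mean=1.0, sigma=0.9, size=n_storms)    # prestorm streamflow
    c_upstream = rng.lognormal(mean=1.5, sigma=0.5, size=n_storms)    # upstream EMC, ug/L

    # Mass-balance dilution: downstream event mean concentration for each storm.
    c_downstream = (q_runoff * c_runoff + q_upstream * c_upstream) / (q_runoff + q_upstream)

    # Rank results to read off risk levels, e.g. the storm-scale 90th percentile.
    print(np.percentile(c_downstream, 90))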
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
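A sketch of the first-order autoregressive output-error description mentioned above: the log-likelihood of the residuals between observed and simulated flows under an AR(1) error model, which can then be embedded in any Bayesian sampler. The function is a generic textbook form, not the authors' exact formulation.

    import numpy as np

    def ar1_log_likelihood(obs, sim, phi, sigma):
        """Gaussian log-likelihood of residuals under an AR(1) error model.

        eta_t = phi * eta_{t-1} + eps_t,  eps_t ~ N(0, sigma^2),  |phi| < 1
        """
        eta = np.asarray(obs, float) - np.asarray(sim, float)
        # Innovations: first residual from the stationary distribution, rest conditional.
        innov = eta[1:] - phi * eta[:-1]
        var0 = sigma**2 / (1.0 - phi**2)
        ll = -0.5 * (np.log(2 * np.pi * var0) + eta[0] ** 2 / var0)
        ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2))
        return ll

    # This log-likelihood can be plugged into an MCMC sampler to infer the hydrological
    # model parameters jointly with the error parameters (phi, sigma).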
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
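A hedged sketch of the workflow: Latin hypercube samples of the material inputs train a neural network surrogate, which then replaces the finite element solve in the Monte Carlo loop. The frequency function and all bounds below are hypothetical stand-ins for the FGM plate model.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.neural_network import MLPRegressor

    # Latin hypercube samples of the stochastic inputs: E, G, nu, rho (bounds hypothetical).
    sampler = qmc.LatinHypercube(d=4, seed=8)
    lower = np.array([60e9, 22e9, 0.26, 2600.0])
    upper = np.array([80e9, 30e9, 0.34, 2900.0])
    X = qmc.scale(sampler.random(n=500), lower, upper)

    def fe_natural_frequency(x):
        """Stand-in for the finite element eigenvalue solve (plate-like scaling)."""
        E, G, nu, rho = x
        return np.sqrt((E + 0.4 * G) / (rho * (1.0 + nu)))

    y = np.apply_along_axis(fe_natural_frequency, 1, X)

    # Train the ANN surrogate on the LHS design, then use it in place of the FE model.
    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    ann.fit(X / upper, y)                       # simple input scaling
    mc = qmc.scale(qmc.LatinHypercube(d=4, seed=9).random(20_000), lower, upper)
    freqs = ann.predict(mc / upper)             # cheap stochastic frequency population
    print(freqs.mean(), freqs.std())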
An agent-based stochastic Occupancy Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Hong, Tianzhen; Luo, Xuan
2017-06-01
Occupancy has significant impacts on building performance. However, in current building performance simulation programs, occupancy inputs are static and lack diversity, contributing to discrepancies between the simulated and actual building performance. This work presents an Occupancy Simulator that simulates the stochastic behavior of occupant presence and movement in buildings, capturing the spatial and temporal occupancy diversity. Each occupant and each space in the building are explicitly simulated as an agent with their profiles of stochastic behaviors. The occupancy behaviors are represented with three types of models: (1) the status transition events (e.g., first arrival in office) simulated with a probability distribution model, (2) the random moving events (e.g., from one office to another) simulated with a homogeneous Markov chain model, and (3) the meeting events simulated with a new stochastic model. A hierarchical data model was developed for the Occupancy Simulator, which reduces the amount of data input by using the concepts of occupant types and space types. Finally, a case study of a small office building is presented to demonstrate the use of the Simulator to generate detailed annual sub-hourly occupant schedules for individual spaces and the whole building. The Simulator is a web application freely available to the public and capable of performing a detailed stochastic simulation of occupant presence and movement in buildings. Future work includes enhancements in the meeting event model, consideration of personal absent days, verification and validation of the simulated occupancy results, and expansion for use with residential buildings.
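The homogeneous Markov chain movement model (type 2 above) can be sketched as follows, with spaces as chain states and a hypothetical transition matrix; the actual Simulator derives such matrices from occupant-type and space-type inputs.

    import numpy as np

    rng = np.random.default_rng(10)
    spaces = ["own office", "other office", "meeting room", "corridor", "out"]
    # Hypothetical 10-minute transition probabilities (each row sums to 1).
    P = np.array([
        [0.90, 0.02, 0.03, 0.03, 0.02],
        [0.30, 0.60, 0.04, 0.04, 0.02],
        [0.20, 0.02, 0.70, 0.06, 0.02],
        [0.40, 0.10, 0.10, 0.30, 0.10],
        [0.15, 0.01, 0.02, 0.02, 0.80],
    ])

    def simulate_day(start=4, steps=60):
        """Sample one occupant-day (10-minute steps) from the homogeneous Markov chain."""
        path, state = [], start            # the occupant starts 'out' of the office
        for _ in range(steps):
            state = rng.choice(len(spaces), p=P[state])
            path.append(spaces[state])
        return path

    schedule = simulate_day()
    occupied = sum(s == "own office" for s in schedule) / len(schedule)
    print(f"fraction of day in own office: {occupied:.2f}")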
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
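The key adaptation is that the simulator's output variance is itself emulated as a function of the inputs and added to the implausibility denominator. A minimal sketch of that measure (standard history-matching form, with illustrative numbers):

    import numpy as np

    def implausibility(z, mean_em, var_em, var_stoch, var_obs):
        """History-matching implausibility for one output at input x.

        z         : observed value
        mean_em   : emulator mean of the simulator output at x
        var_em    : emulator (code) uncertainty at x
        var_stoch : emulated variance of the stochastic simulator output at x
        var_obs   : observation error variance
        """
        return np.abs(z - mean_em) / np.sqrt(var_em + var_stoch + var_obs)

    # Inputs x are ruled out as implausible when I(x) exceeds a cutoff, commonly 3;
    # emulating var_stoch as a function of x is what extends history matching
    # from deterministic to stochastic simulators.
    print(implausibility(z=0.12, mean_em=0.10, var_em=1e-4, var_stoch=4e-4, var_obs=1e-4))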
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N
2001-04-01
Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
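The sensory-neurite setting can be sketched with the Fitzhugh-Nagumo equations driven by a subthreshold signal plus noise, counting threshold crossings as spikes; the constants below are standard textbook values and the noise level is illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(11)
    dt, T = 1e-3, 200.0
    eps, a, b = 0.08, 0.7, 0.8         # standard FitzHugh-Nagumo constants
    signal, noise = 0.25, 0.15         # subthreshold drive amplified by noise

    v, w = -1.2, -0.6
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        # Euler-Maruyama step: noise scaled so its variance grows linearly in time.
        inp = signal + noise * rng.standard_normal() / np.sqrt(dt)
        v += (v - v**3 / 3.0 - w + inp) * dt
        w += eps * (v + a - b * w) * dt
        if v > 1.0 and not above:      # upward threshold crossing counts one spike
            spikes += 1
            above = True
        elif v < 0.0:
            above = False

    print("mean firing rate:", spikes / T)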
A stochastic chemostat model with an inhibitor and noise independent of population sizes
NASA Astrophysics Data System (ADS)
Sun, Shulin; Zhang, Xiaolu
2018-02-01
In this paper, a stochastic chemostat model with an inhibitor is considered; the inhibitor is input from an external source, and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded when the noise is relatively small. That is, the asymptotic behaviors of the stochastic system around the equilibrium points of the deterministic system are studied. However, sufficiently large noise can make the microorganisms become extinct with probability one, even though the solutions to the original deterministic model may be persistent. Finally, the obtained analytical results are illustrated by computer simulations.
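An Euler-Maruyama sketch of a chemostat of this general shape: two competitors with Monod kinetics (one inhibitor-sensitive), an externally input inhibitor, and additive noise whose intensity is independent of the population sizes. The drift terms are a plausible assumption, not necessarily the paper's exact equations.

    import numpy as np

    rng = np.random.default_rng(12)
    dt, steps = 0.01, 200_000
    D, S0, p0 = 0.3, 2.0, 0.5           # dilution rate, input nutrient, input inhibitor
    sigma = 0.02                        # additive noise, independent of population sizes

    def monod(s, m, a):                 # Monod uptake kinetics
        return m * s / (a + s)

    S, x1, x2, p = 1.0, 0.4, 0.4, 0.1   # nutrient, two competitors, inhibitor
    for _ in range(steps):
        dW = np.sqrt(dt) * rng.standard_normal(4)
        f1 = monod(S, 2.0, 0.5) * np.exp(-3.0 * p)   # competitor 1 is inhibitor-sensitive
        f2 = monod(S, 1.6, 0.4)
        S  += (D * (S0 - S) - f1 * x1 - f2 * x2) * dt + sigma * dW[0]
        x1 += x1 * (f1 - D) * dt + sigma * dW[1]
        x2 += x2 * (f2 - D) * dt + sigma * dW[2]
        p  += D * (p0 - p) * dt + sigma * dW[3]
        S, x1, x2, p = (max(v, 0.0) for v in (S, x1, x2, p))

    print(S, x1, x2, p)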
A dual theory of price and value in a meso-scale economic model with stochastic profit rate
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2014-12-01
The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Uncertainty analysis of geothermal energy economics
NASA Astrophysics Data System (ADS)
Sener, Adil Caner
This dissertation research endeavors to explore geothermal energy economics by assessing and quantifying the uncertainties associated with the nature of geothermal energy and energy investments overall. The study introduces a stochastic geothermal cost model and a valuation approach for different geothermal power plant development scenarios. The Monte Carlo simulation technique is employed to obtain probability distributions of geothermal energy development costs and project net present values. In the study a stochastic cost model with incorporated dependence structure is defined and compared with the model where random variables are modeled as independent inputs. One of the goals of the study is to attempt to shed light on the long-standing modeling problem of dependence modeling between random input variables. The dependence between random input variables will be modeled by employing the method of copulas. The study focuses on four main types of geothermal power generation technologies and introduces a stochastic levelized cost model for each technology. Moreover, we also compare the levelized costs of natural gas combined cycle and coal-fired power plants with geothermal power plants. The input data used in the model relies on the cost data recently reported by government agencies and non-profit organizations, such as the Department of Energy, National Laboratories, California Energy Commission and Geothermal Energy Association. The second part of the study introduces the stochastic discounted cash flow valuation model for the geothermal technologies analyzed in the first phase. In this phase of the study, the Integrated Planning Model (IPM) software was used to forecast the revenue streams of geothermal assets under different price and regulation scenarios. These results are then combined to create a stochastic revenue forecast of the power plants. The uncertainties in gas prices and environmental regulations will be modeled and their potential impacts will be captured in the valuation model. Finally, the study will compare the probability distributions of development cost and project value and discusses the market penetration potential of the geothermal power generation. There is a recent world wide interest in geothermal utilization projects. There are several reasons for the recent popularity of geothermal energy, including the increasing volatility of fossil fuel prices, need for domestic energy sources, approaching carbon emission limitations and state renewable energy standards, increasing need for baseload units, and new technology to make geothermal energy more attractive for power generation. It is our hope that this study will contribute to the recent progress of geothermal energy by shedding light on the uncertainty of geothermal energy project costs.
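Two ingredients of the study, the Gaussian copula dependence structure and the Monte Carlo levelized-cost calculation, can be sketched together; every distribution and cost figure below is illustrative, not data from the dissertation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(13)
    n = 50_000

    # Gaussian copula: correlated uniforms from a correlated standard normal draw.
    corr = np.array([[1.0, 0.6], [0.6, 1.0]])       # drilling cost tracks plant cost
    z = rng.multivariate_normal(np.zeros(2), corr, size=n)
    u = stats.norm.cdf(z)

    # Marginal distributions (illustrative): capital and drilling cost, O&M cost.
    capex_plant = stats.lognorm(s=0.25, scale=2500.0).ppf(u[:, 0])   # $/kW
    capex_drill = stats.lognorm(s=0.40, scale=1500.0).ppf(u[:, 1])   # $/kW
    om = rng.triangular(0.01, 0.02, 0.035, size=n)                   # $/kWh

    # Levelized cost of energy with a fixed charge rate and capacity factor.
    fcr, cf = 0.10, 0.90
    kwh_per_kw_year = 8760.0 * cf
    lcoe = fcr * (capex_plant + capex_drill) / kwh_per_kw_year + om  # $/kWh

    print(np.percentile(lcoe, [5, 50, 95]))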
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model average or (2) median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases, while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1D, 14D, and 40D random spaces.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
How input fluctuations reshape the dynamics of a biological switching system
NASA Astrophysics Data System (ADS)
Hu, Bo; Kessler, David A.; Rappel, Wouter-Jan; Levine, Herbert
2012-12-01
An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a non-negative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
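As a rough illustration of the setup described in this abstract, the sketch below simulates a non-negative mean-reverting (CIR-type) input driving the on-rate of a two-state switch; the parameter values and the Euler-Maruyama discretization are our assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 50.0
n = int(T / dt)

# Non-negative mean-reverting input (CIR-type), Euler-Maruyama discretized:
# ds = k*(mu - s)*dt + sigma*sqrt(s)*dW  (all parameter values are assumptions)
k, mu, sigma = 1.0, 1.0, 0.5
s = np.empty(n)
s[0] = mu
for i in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt))
    s[i] = max(s[i-1] + k * (mu - s[i-1]) * dt
               + sigma * np.sqrt(max(s[i-1], 0.0)) * dW, 0.0)

# Two-state switch whose "on" rate is proportional to the noisy input
k_on, k_off = 2.0, 1.0
state = np.zeros(n, dtype=int)
for i in range(1, n):
    rate = k_on * s[i-1] if state[i-1] == 0 else k_off
    state[i] = 1 - state[i-1] if rng.random() < rate * dt else state[i-1]

print("fraction of time in the on state:", state.mean())
```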
Bidirectional Classical Stochastic Processes with Measurements and Feedback
NASA Technical Reports Server (NTRS)
Hahne, G. E.
2005-01-01
A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
A hierarchical stress release model for synthetic seismicity
NASA Astrophysics Data System (ADS)
Bebbington, Mark
1997-06-01
We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
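The following toy simulation illustrates the stress-release mechanism sketched in this abstract (random stress input at nodes, threshold-triggered events, transfer of excess stress to neighbours); the parameters and transfer fractions are invented for illustration and are not Bebbington's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, steps = 50, 10_000
strength = rng.uniform(0.8, 1.2, n_nodes)   # heterogeneous node strengths
stress = np.zeros(n_nodes)
events = []

for t in range(steps):
    stress += rng.exponential(1e-3, n_nodes)       # slow tectonic stress input
    over = np.flatnonzero(stress > strength)
    while over.size:                               # cascade of stress transfers
        i = over[0]
        drop = stress[i] - 0.1 * strength[i]       # node keeps a small residual
        stress[i] -= drop
        for j in (i - 1, i + 1):                   # excess goes to neighbours,
            if 0 <= j < n_nodes:                   # 10% is dissipated
                stress[j] += 0.45 * drop
        events.append((t, i, drop))
        over = np.flatnonzero(stress > strength)

print(len(events), "events; largest stress drop:",
      round(max(e[2] for e in events), 3))
```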
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
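As a minimal example of the non-intrusive polynomial chaos idea referred to above, the sketch below projects the output of a scalar decay ODE with a log-normally distributed rate onto probabilists' Hermite polynomials using Gauss-Hermite quadrature; the model problem and expansion order are our own choices, not those of the study.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def solver(k, u0=1.0, T=1.0):
    # stand-in for an expensive deterministic solver: u' = -k u, return u(T)
    return u0 * np.exp(-k * T)

nodes, weights = hermegauss(20)             # rule for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)    # normalize to the N(0,1) density

outputs = solver(np.exp(0.1 * nodes))       # log-normal random decay rate
coeffs = []
for p in range(6):                          # expansion order 5
    e = np.zeros(p + 1)
    e[p] = 1.0
    Hp = hermeval(nodes, e)                 # probabilists' Hermite He_p
    coeffs.append(np.sum(weights * outputs * Hp) / math.factorial(p))

print("gPC coefficients of u(T):", np.round(coeffs, 6))
print("mean of u(T) (0th coefficient):", round(coeffs[0], 6))
```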
Characteristic operator functions for quantum input-plant-output models and coherent control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gough, John E.
We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of this definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work of Bouten and Silberfarb on limit theorems for quantum stochastic differential equations [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.
An imaging-based stochastic model for simulation of tumour vasculature
NASA Astrophysics Data System (ADS)
Adhikarla, Vikram; Jeraj, Robert
2012-10-01
A mathematical model which reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input was developed. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the resultant oxygen map modelled from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R2 ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R2 = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship relating the vessel volume to the tissue oxygen tension. Apart from providing an excellent framework for visualizing the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and the vasculature in a patient-specific manner and has an application in the simulation of anti-angiogenic therapies.
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with a periodic input signal) to arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between the two measures, notwithstanding their apparent similarity in the limit of weak signals.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Detailed numerical investigation of the dissipative stochastic mechanics based neuron model.
Güler, Marifi
2008-10-01
Recently, a physical approach for the description of neuronal dynamics under the influence of ion channel noise was proposed in the realm of dissipative stochastic mechanics (Güler, Phys Rev E 76:041918, 2007). Motivated by the presence of multiple gates in an ion channel, the approach establishes the viewpoint that ion channels are exposed to two kinds of noise: the intrinsic noise, associated with the stochasticity in the movement of gating particles between the inner and the outer faces of the membrane, and the topological noise, associated with the uncertainty in accessing the permissible topological states of open gates. Renormalizations of the membrane capacitance and of a membrane voltage dependent potential function were found to arise from the mutual interaction of the two noisy systems. The formalism therein was scrutinized using a special membrane with some tailored properties giving the Rose-Hindmarsh dynamics in the deterministic limit. In this paper, the resultant computational neuron model of the above approach is investigated in detail numerically for its dynamics using time-independent input currents. The following are the major findings obtained. The intrinsic noise gives rise to two significant coexisting effects: it initiates spiking activity even in some range of input currents for which the corresponding deterministic model is quiet, and it causes bursting in some other range of input currents for which the deterministic model fires tonically. The renormalization corrections are found to augment the above behavioral transitions from quiescence to spiking and from tonic firing to bursting, and, therefore, the bursting activity is found to take place in a wider range of input currents for larger values of the correction coefficients. Some findings concerning the diffusive behavior in the voltage space are also reported.
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
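The variance-based measures mentioned above can be illustrated compactly: the sketch below estimates first-order and total Sobol indices with a pick-freeze (Saltelli/Jansen) estimator on the standard Ishigami test function, which stands in here for the SHEDS testbed.

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 3, 20_000

def model(x):
    # Ishigami function, a common sensitivity-analysis benchmark
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # "freeze" all inputs but x_i
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / V          # first-order index (Saltelli)
    ST = 0.5 * np.mean((fA - fABi)**2) / V      # total index (Jansen)
    print(f"x{i+1}: S1 = {S1:.2f}   ST = {ST:.2f}")
```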
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1, 14 and 40 random dimensions.
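To make the over-fitting issue concrete, the sketch below fits a 20-term one-dimensional Legendre expansion to only 12 samples, comparing plain least squares with a ridge-regularized fit; ridge regression is used merely as a stand-in for the Bayesian regularization machinery, which this sketch does not reproduce, and the target function is invented.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(11)
f = lambda x: np.exp(x) * np.sin(3.0 * x)   # stand-in stochastic solution
x_train = rng.uniform(-1, 1, 12)            # few "expensive solver" runs
y_train = f(x_train)
V = legvander(x_train, 19)                  # 20 Legendre terms > 12 samples

c_ls = np.linalg.lstsq(V, y_train, rcond=None)[0]   # interpolates; may over-fit
c_rr = np.linalg.solve(V.T @ V + 1e-3 * np.eye(20), V.T @ y_train)  # ridge

x_test = np.linspace(-1, 1, 200)
Vt = legvander(x_test, 19)
for name, c in (("least squares", c_ls), ("ridge", c_rr)):
    rmse = np.sqrt(np.mean((Vt @ c - f(x_test))**2))
    print(f"{name:13s} test RMSE: {rmse:.3f}")
```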
NASA Astrophysics Data System (ADS)
Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel
2017-04-01
Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more reliable ranking of input influences and a more reliable interpretation of the mathematical model results.
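A small self-contained comparison in the spirit of the study above: plain Monte Carlo versus a scrambled Sobol sequence for a multidimensional integral with known value. The integrand is our own toy choice, not the Unified Danish Eulerian Model, and plain Monte Carlo replaces the Latin hypercube and lattice-rule competitors.

```python
import numpy as np
from scipy.stats import qmc

d, N = 6, 2**12
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # exact integral = 1

mc_est = f(np.random.default_rng(4).random((N, d))).mean()
qmc_est = f(qmc.Sobol(d, scramble=True, seed=4).random(N)).mean()
print(f"plain MC error:  {abs(mc_est - 1.0):.2e}")
print(f"Sobol QMC error: {abs(qmc_est - 1.0):.2e}")
```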
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved in the two-model approach, introduced in the first of the above-mentioned references, to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
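One ingredient of the scheme, recursive least squares with an exponential forgetting factor, can be sketched as follows; the system, noise level, and the omission of covariance resetting are simplifications for illustration, not the tokamak model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam = 2, 0.98                    # parameter dimension, forgetting factor
theta_true = np.array([0.9, 0.3])   # would drift slowly in a real application
theta = np.zeros(n)
P = 1e3 * np.eye(n)                 # large initial covariance

for t in range(500):
    phi = rng.normal(size=n)                        # regressor
    y = phi @ theta_true + 0.05 * rng.normal()      # noisy measurement
    K = P @ phi / (lam + phi @ P @ phi)             # gain
    theta = theta + K * (y - phi @ theta)           # update estimate
    P = (P - np.outer(K, phi @ P)) / lam            # update covariance

print("identified parameters:", theta.round(3))
```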
Liberti, M; Paffi, A; Maggio, F; De Angelis, A; Apollonio, F; d'Inzeo, G
2009-01-01
A number of experimental investigations have demonstrated the extraordinary sensitivity of neuronal cells to weak input stimulations, including electromagnetic (EM) fields. Moreover, it has been shown that biological noise, due to random channel gating, acts as a tuning factor in neuronal processing, according to the stochastic resonance (SR) paradigm. In this work the attention is focused on noise arising from the stochastic gating of ionic channels in a model of the Ranvier node of acoustic fibers. The small number of channels gives rise to a high noise level, which is able to cause spike-train generation even in the absence of stimulation. An SR behavior has been observed in the model for the detection of sinusoidal signals at frequencies typical of speech.
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or on highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are governed by traffic signal control settings, and effective traffic light control would realize both smooth traffic flow and minimal fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. The queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-Spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, making the traffic queueing length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor concerned.
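A toy version of the recursive input-output queueing idea reads as follows; the arrival rate and service capacity are invented numbers, and the B-spline PDF model and closed-loop controller are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
cycles = 10_000
green_capacity = 12         # vehicles served per green phase (control input)
q, lengths = 0, np.empty(cycles)

for k in range(cycles):
    arrivals = rng.poisson(10)                   # random demand per cycle
    q = max(q + arrivals - green_capacity, 0)    # recursive queue update
    lengths[k] = q

print("mean queue length:", lengths.mean())
print("95th percentile:", np.percentile(lengths, 95))
```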
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
Thermodynamic efficiency of learning a rule in neural networks
NASA Astrophysics Data System (ADS)
Goldt, Sebastian; Seifert, Udo
2017-11-01
Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that approach, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this restriction, as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
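The POD step named above can be sketched in a few lines via the singular value decomposition of a snapshot matrix; the snapshot data here are synthetic, not jet-noise measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 10.0, 500)
# synthetic two-mode field plus noise; snapshots stored as columns
U = (np.outer(np.sin(np.pi * x), np.cos(2.0 * t))
     + 0.3 * np.outer(np.sin(2.0 * np.pi * x), np.sin(5.0 * t))
     + 0.01 * rng.normal(size=(200, 500)))

modes, sing, coeffs = np.linalg.svd(U, full_matrices=False)
energy = sing**2 / np.sum(sing**2)
print("energy captured by first two POD modes:", round(energy[:2].sum(), 4))
# coeffs[k] is the time series of mode k, i.e. the low-dimensional input
# that a HOSE-style kernel analysis would operate on
```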
NASA Astrophysics Data System (ADS)
Bukoski, Alex; Steyn-Ross, D. A.; Pickett, Ashley F.; Steyn-Ross, Moira L.
2018-06-01
The dynamics of a stochastic type-I Hodgkin-Huxley-like point neuron model exposed to inhibitory synaptic noise are investigated as a function of distance from spiking threshold and the inhibitory influence of the general anesthetic agent propofol. The model is biologically motivated and includes the effects of intrinsic ion-channel noise via a stochastic differential equation description, as well as inhibitory synaptic noise modeled as multiple Poisson-distributed impulse trains with saturating response functions. The effect of propofol on these synapses is incorporated through this drug's principal influence on fast inhibitory neurotransmission mediated by γ-aminobutyric acid (GABA) type-A receptors, via reduction of the synaptic response decay rate. As the neuron model approaches spiking threshold from below, we track membrane voltage fluctuation statistics of numerically simulated stochastic trajectories. We find that for a given distance from spiking threshold, increasing the magnitude of anesthetic-induced inhibition is associated with augmented signatures of critical slowing: fluctuation amplitudes and correlation times grow as spectral power is increasingly focused at 0 Hz. Furthermore, as a function of distance from threshold, anesthesia significantly modifies the power-law exponents for variance and correlation time divergences observable in stochastic trajectories. Compared to the inverse square root power-law scaling of these quantities anticipated for the saddle-node bifurcation of type-I neurons in the absence of anesthesia, increasing anesthetic-induced inhibition results in an observable exponent below -0.5 for the variance divergence and above -0.5 for the correlation time divergence. However, these behaviors eventually break down as the distance from threshold goes to zero, with both the variance and the correlation time converging to common values independent of anesthesia. Compared to the case of no synaptic input, linearization of an approximating multivariate Ornstein-Uhlenbeck model reveals these effects to be the consequence of an additional slow eigenvalue associated with synaptic activity that competes with those of the underlying point neuron in a manner that depends on distance from spiking threshold.
NASA Astrophysics Data System (ADS)
Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua
2017-07-01
Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds of extinction and persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.
The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process
NASA Astrophysics Data System (ADS)
Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko
2012-06-01
A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as those for ultrasonic NDT, the empirical information needed for POD estimation can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom-hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
A simple model of bipartite cooperation for ecological and organizational networks.
Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian
2009-01-22
In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
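A schematic Markov-modulated AR(1) generator in the spirit of the runoff model described above; the two hidden states, transition probabilities and AR parameters are invented for illustration, not calibrated Daule Peripa values, and the ENSO-dependent transition probabilities are held constant here.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 600                                    # months of synthetic record
trans = np.array([[0.95, 0.05],            # hidden-state transition matrix;
                  [0.20, 0.80]])           # state 1 ~ "El Niño", less persistent
phi, mu, sig = (0.7, 0.5), (0.0, 1.5), (0.3, 0.8)

state, x = 0, 0.0
anomaly = np.empty(T)
for t in range(T):
    state = rng.choice(2, p=trans[state])  # climate-state Markov chain
    x = mu[state] + phi[state] * (x - mu[state]) + sig[state] * rng.normal()
    anomaly[t] = x                         # runoff anomaly for month t

print("mean anomaly:", round(anomaly.mean(), 2),
      " std:", round(anomaly.std(), 2))
```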
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control-input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained.
Uncertainty quantification for personalized analyses of human proximal femurs.
Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar
2016-02-29
Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify this influence, we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 is attributable to the uncertainty in the E-ρ relationship, while ≈10% is caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial-inferior sides of the neck exhibited the largest probabilities for tensile and compressive failure; however, all were very small (pf < 0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs.
Coronal heating by stochastic magnetic pumping
NASA Technical Reports Server (NTRS)
Sturrock, P. A.; Uchida, Y.
1980-01-01
Recent observational data cast serious doubt on the widely held view that the Sun's corona is heated by traveling waves (acoustic or magnetohydrodynamic). It is proposed instead that the energy responsible for heating the corona is drawn from the free energy of the coronal magnetic field, which builds up through motion of the 'feet' of magnetic field lines in the photosphere. Stochastic motion of the feet of magnetic field lines leads, on average, to a linear increase of magnetic free energy with time. This rate of energy input is calculated for a simple model of a single thin flux tube. The model appears to agree well with observational data if the magnetic flux originates in small regions of high magnetic field strength. On combining this energy input with estimates of energy loss by radiation and of energy redistribution by thermal conduction, we obtain scaling laws for density and temperature in terms of length and coronal magnetic field strength.
Löwe, Roland; Mikkelsen, Peter Steen; Rasmussen, Michael R; Madsen, Henrik
2013-01-01
Merging of radar rainfall data with rain gauge measurements is a common approach to overcome problems in deriving rain intensities from radar measurements. We extend an existing approach for adjustment of C-band radar data using state-space models and use the resulting rainfall intensities as input for forecasting outflow from two catchments in the Copenhagen area. Stochastic grey-box models are applied to create the runoff forecasts, providing us with not only a point forecast but also a quantification of the forecast uncertainty. Evaluating the results, we can show that using the adjusted radar data improves runoff forecasts compared with using the original radar data and that rain gauge measurements as forecast input are also outperformed. Combining the data merging approach with short-term rainfall forecasting algorithms may result in further improved runoff forecasts that can be used in real time control.
Mejlholm, Ole; Bøknæs, Niels; Dalgaard, Paw
2015-02-01
A new stochastic model for the simultaneous growth of Listeria monocytogenes and lactic acid bacteria (LAB) was developed and validated on data from naturally contaminated samples of cold-smoked Greenland halibut (CSGH) and cold-smoked salmon (CSS). During industrial processing, acetic and/or lactic acid had been added to these samples. The stochastic model was developed from an existing deterministic model including the effect of 12 environmental parameters and microbial interaction (O. Mejlholm and P. Dalgaard, Food Microbiology, submitted for publication). Observed maximum population density (MPD) values of L. monocytogenes in naturally contaminated samples of CSGH and CSS were accurately predicted by the stochastic model based on measured variability in product characteristics and storage conditions. Results comparable to those from the stochastic model were obtained when product characteristics of the least and most preserved samples of CSGH and CSS were used as input for the existing deterministic model. For both modelling approaches, it was shown that lag time and the effect of microbial interaction need to be included to accurately predict MPD values of L. monocytogenes. Addition of organic acids to CSGH and CSS was confirmed as a suitable mitigation strategy against the risk of growth of L. monocytogenes, as both types of products were in compliance with the EU regulation on ready-to-eat foods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
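The Parzen-windowing step named above can be sketched as follows: a Gaussian-kernel density estimate of the tracking errors and a resubstitution estimate of their entropy. The error sample below is synthetic, and the control law itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
e = 0.3 * rng.normal(size=500)            # stand-in tracking errors
h = 1.06 * e.std() * e.size ** (-1 / 5)   # Silverman bandwidth rule

def parzen_pdf(x, sample, h):
    # Gaussian-kernel density estimate evaluated at points x
    z = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

entropy = -np.mean(np.log(parzen_pdf(e, e, h)))   # resubstitution estimate
print("estimated error entropy:", round(entropy, 3))
```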
Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity
Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny
2015-01-01
Recent experimental breakthroughs have finally allowed the implementation of in-vitro reaction kinetics (the so-called enzyme-based logic) which code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can work for this scope. Here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences with classical cooperativity (and anti-cooperativity).
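As a small illustration of the allosteric building block discussed above, the sketch below evaluates the fraction of active proteins in a two-ligand Monod-Wyman-Changeux model; the constants are invented so that the response behaves like a smooth, noisy OR gate, and the statistical-mechanical analysis of the paper is not reproduced.

```python
import numpy as np

def mwc_active(c1, c2, L=100.0, K1_T=1.0, K1_R=0.01,
               K2_T=1.0, K2_R=0.01, n=2):
    """Fraction of proteins in the active (R) state for ligand inputs c1, c2."""
    R = (1 + c1 / K1_R)**n * (1 + c2 / K2_R)**n   # active-state binding weight
    T = L * (1 + c1 / K1_T)**n * (1 + c2 / K2_T)**n  # inactive-state weight
    return R / (R + T)

# truth table of the smoothed gate for binary ligand inputs
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(f"inputs ({a:.0f}, {b:.0f}) -> output {mwc_active(a, b):.2f}")
```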
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions, and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and in the WWTP effluent follow a log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after a supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
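As a rough sketch of how such a Monte Carlo mixing computation looks in practice: the EQS value C(90)=0.2 mg/l and the variation coefficient 0.42 are taken from the abstract, while all discharges, means, and the candidate effluent limit below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def lognormal(mean, cv, size):
    """Draw log-normal samples parameterized by arithmetic mean and CV."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# Independent inputs (independence was verified for dry weather).
q_creek = lognormal(mean=0.50, cv=0.60, size=N)   # creek discharge, m3/s (assumed)
c_creek = lognormal(mean=0.13, cv=0.42, size=N)   # upstream P_tot, mg/l
q_eff   = lognormal(mean=0.05, cv=0.20, size=N)   # effluent discharge (assumed)
c_eff   = lognormal(mean=1.00, cv=0.30, size=N)   # candidate emission limit (assumed)

# Mass-balance mixing equation below the outfall.
c_mix = (q_creek * c_creek + q_eff * c_eff) / (q_creek + q_eff)

# Probabilistic EQS check: the 90th percentile must stay below 0.2 mg/l.
print("downstream C90:", np.quantile(c_mix, 0.90), "mg/l")
```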
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied. The upset tests involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code executed on the microprocessor during the tests was designed to exercise all of the machine cycles and memory addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.
Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft
2014-12-31
We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM: a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
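To illustrate what "a low-dimensional, discrete approximation to a continuous random element" means operationally, here is a minimal one-dimensional SROM construction: sample locations are fixed at quantiles of the target and the probabilities are optimized to match the target's CDF and first two moments. This is a simplified sketch of the general idea, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
target = rng.lognormal(0.0, 0.5, 20_000)   # stand-in continuous random element
m = 10                                     # SROM size (reduced model dimension)
x = np.quantile(target, np.linspace(0.05, 0.95, m))  # fixed sample locations

def srom_error(p):
    """Squared mismatch of the SROM's CDF and first two moments vs. target."""
    cdf_err = sum((p[x <= xi].sum() - np.mean(target <= xi))**2 for xi in x)
    mom_err = sum((np.sum(p * x**q) - np.mean(target**q))**2 for q in (1, 2))
    return cdf_err + mom_err

constraints = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
res = minimize(srom_error, np.full(m, 1.0 / m),
               bounds=[(0.0, 1.0)] * m, constraints=constraints)
print("samples:", x.round(3))
print("probabilities:", res.x.round(3))
```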
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has been proposed earlier, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
On the probabilistic structure of water age
NASA Astrophysics Data System (ADS)
Porporato, Amilcare; Calabrese, Salvatore
2015-05-01
The age distribution of water in hydrologic systems has received renewed interest recently, especially in relation to watershed response to rainfall inputs. The purpose of this contribution is first to draw attention to existing theories of age distributions in population dynamics, fluid mechanics and stochastic groundwater, and in particular to the McKendrick-von Foerster equation and its generalizations and solutions. A second and more important goal is to clarify that, when hydrologic fluxes are modeled by means of time-varying stochastic processes, the age distributions must themselves be treated as random functions. Once their probabilistic structure is obtained, it can be used to characterize the variability of age distributions in real systems and thus help quantify the inherent uncertainty in the field determination of water age. We illustrate these concepts with reference to a stochastic storage model, which has been used as a minimalist model of soil moisture and streamflow dynamics.
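To make the last point concrete, the toy script below simulates a particle-based stochastic storage model under assumptions of our own choosing (Poisson rainfall arrivals, exponentially distributed pulse depths, a well-mixed store losing a fixed fraction per step) and tracks the volume-weighted age of outflowing water. It is a minimal sketch of a storage model of this kind, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T, lam, k = 1.0, 5_000, 0.3, 0.05   # time step, horizon, rain rate, loss rate
ages, volumes = [], []                  # water parcels currently in storage
out_age_sum, out_vol = 0.0, 0.0

for _ in range(T):
    # Poisson rainfall input: a new age-zero parcel with random depth.
    if rng.random() < lam * dt:
        ages.append(0.0)
        volumes.append(rng.exponential(10.0))
    # Well-mixed outflow removes fraction k of every parcel; accumulate
    # the volume-weighted age of the water leaving storage.
    for i in range(len(volumes)):
        out = k * volumes[i]
        out_age_sum += ages[i] * out
        out_vol += out
        volumes[i] -= out
    ages = [a + dt for a in ages]

print("mean age of outflowing water:", out_age_sum / out_vol)
```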
J. Alan Yeakley; Ron A. Moen; David D. Breshears; Martha K. Nungesser
1994-01-01
Ecosystem models typically use input temperature and precipitation data generated stochastically from weather station means and variances. Although the weather station data are based on measurements taken over a few decades, model simulations are usually on the order of centuries. Consequently, observed periodicities in temperature and precipitation at the continental...
ERIC Educational Resources Information Center
Sillah, B. M. S.
2012-01-01
This paper employs a stochastic production frontier model to assess the efficiency of the senior secondary schools in the Gambia. It examines their efficiency in using and mixing the educational inputs of average teacher salary, average teacher education, average teacher experience and students-to-teacher ratio in producing the number of students…
Clinical Applications of Stochastic Dynamic Models of the Brain, Part I: A Primer.
Roberts, James A; Friston, Karl J; Breakspear, Michael
2017-04-01
Biological phenomena arise through interactions between an organism's intrinsic dynamics and stochastic forces: random fluctuations due to external inputs, thermal energy, or other exogenous influences. Dynamic processes in the brain derive from neurophysiology and anatomical connectivity; stochastic effects arise through sensory fluctuations, brainstem discharges, and random microscopic states such as thermal noise. The dynamic evolution of systems composed of both dynamic and random effects can be studied with stochastic dynamic models (SDMs). This article, Part I of a two-part series, offers a primer of SDMs and their application to large-scale neural systems in health and disease. The companion article, Part II, reviews the application of SDMs to brain disorders. SDMs generate a distribution of dynamic states, which (we argue) represent ideal candidates for modeling how the brain represents states of the world. When augmented with variational methods for model inversion, SDMs represent a powerful means of inferring neuronal dynamics from functional neuroimaging data in health and disease. Together with deeper theoretical considerations, this work suggests that SDMs will play a unique and influential role in computational psychiatry, unifying empirical observations with models of perception and behavior. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed E. Hassan
2006-01-24
Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2016-11-01
Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive polynomial chaos scheme to multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive collocation methods such as Monte-Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
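The core idea, aggregating unitary currents from independent channels each governed by a small Markov chain, can be sketched as follows. The 3-state structure follows the abstract, but the state labels, rate values, voltage dependence, and unitary current are illustrative assumptions, not MACACO's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)
C, O, I = 0, 1, 2   # closed, open, inactivated

def step_matrix(v):
    """Per-step transition probabilities; the voltage dependence of the
    opening rate is an assumed functional form."""
    k_co = 0.05 * np.exp(v / 40.0)        # opening speeds up with depolarization
    k_oc, k_oi, k_io = 0.02, 0.01, 0.002
    return np.array([[1 - k_co, k_co,            0.0 ],
                     [k_oc,     1 - k_oc - k_oi, k_oi],
                     [0.0,      k_io,            1 - k_io]])

n_channels, n_steps, v_m, i_unit = 500, 2000, 0.0, -0.3  # i_unit in pA per open channel
P = step_matrix(v_m)
state = np.zeros(n_channels, dtype=int)
macro = np.empty(n_steps)

for t in range(n_steps):
    u = rng.random(n_channels)
    cum = np.cumsum(P[state], axis=1)          # row-wise transition CDFs
    state = (u[:, None] > cum).sum(axis=1)     # sample next state per channel
    macro[t] = i_unit * np.count_nonzero(state == O)

print("mean macroscopic current after settling (pA):", macro[n_steps // 2:].mean())
```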
Restoring the encoding properties of a stochastic neuron model by an exogenous noise
Paffi, Alessandra; Camera, Francesca; Apollonio, Francesca; d'Inzeo, Guglielmo; Liberti, Micaela
2015-01-01
Here we evaluate the possibility of improving the encoding properties of an impaired neuronal system by superimposing an exogenous noise on an external electric stimulation signal. The approach is based on the use of mathematical neuron models consisting of stochastic HH-like circuits, where the impairment of the endogenous presynaptic inputs is described as a subthreshold injected current and the exogenous stimulation signal is a sinusoidal voltage perturbation across the membrane. Our results indicate that a correlated Gaussian noise, added to the sinusoidal signal, can significantly increase the encoding properties of the impaired system through the Stochastic Resonance (SR) phenomenon. These results suggest that an exogenous noise, suitably tailored, could improve the efficacy of those stimulation techniques used in neuronal systems where the presynaptic sensory neurons are impaired and have to be artificially bypassed. PMID:25999845
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
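A minimal one-dimensional version of the Hermite-polynomial machinery described above is sketched below: a stand-in model with a standard normal parameter is projected onto probabilists' Hermite polynomials by Gaussian quadrature, giving expansion coefficients from which mean and variance follow. The model function and expansion order are our own illustrative choices, not the study's hydrological model.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(theta):
    """Stand-in response driven by one N(0,1) random parameter."""
    return np.exp(0.3 * theta) + 0.1 * theta**2

order = 4
nodes, weights = hermegauss(order + 1)     # Gauss quadrature, weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)     # renormalize to the N(0,1) measure
norms = np.array([math.factorial(k) for k in range(order + 1)])  # E[He_k^2] = k!

# Spectral projection: c_k = E[model(xi) * He_k(xi)] / E[He_k^2].
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis_k = hermeval(nodes, np.eye(order + 1)[k])   # He_k evaluated at nodes
    coeffs[k] = np.sum(weights * model(nodes) * basis_k) / norms[k]

# Moments read off the expansion: mean = c_0, variance = sum_{k>=1} c_k^2 * k!.
print("PCE mean:", coeffs[0])
print("PCE variance:", np.sum(coeffs[1:]**2 * norms[1:]))
```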
Stochastic methods for analysis of power flow in electric networks
NASA Astrophysics Data System (ADS)
1982-09-01
The modeling and effects of probabilistic behavior on steady-state power system operation were analyzed. A solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained, using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful data to serve as input is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which gives stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
The Ising Decision Maker: a binary stochastic network for choice response time.
Verdonck, Stijn; Tuerlinckx, Francis
2014-07-01
The Ising Decision Maker (IDM) is a new formal model for speeded two-choice decision making derived from the stochastic Hopfield network or dynamic Ising model. On a microscopic level, it consists of 2 pools of binary stochastic neurons with pairwise interactions. Inside each pool, neurons excite each other, whereas between pools, neurons inhibit each other. The perceptual input is represented by an external excitatory field. Using methods from statistical mechanics, the high-dimensional network of neurons (microscopic level) is reduced to a two-dimensional stochastic process, describing the evolution of the mean neural activity per pool (macroscopic level). The IDM can be seen as an abstract, analytically tractable multiple attractor network model of information accumulation. In this article, the properties of the IDM are studied, the relations to existing models are discussed, and it is shown that the most important basic aspects of two-choice response time data can be reproduced. In addition, the IDM is shown to predict a variety of observed psychophysical relations such as Piéron's law, the van der Molen-Keuss effect, and Weber's law. Using Bayesian methods, the model is fitted to both simulated and real data, and its performance is compared to the Ratcliff diffusion model. (c) 2014 APA, all rights reserved.
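To give a feel for the macroscopic level described above, here is a toy two-pool accumulator with self-excitation, cross-inhibition, and noise, integrated by Euler-Maruyama until one pool's mean activity crosses a threshold. The dynamics, parameter values, and stopping rule are our simplified stand-ins, not the IDM's exact mean-field equations.

```python
import numpy as np

rng = np.random.default_rng(11)

def trial(drift=(0.6, 0.4), w_self=1.2, w_cross=1.5, noise=0.3,
          dt=0.01, threshold=1.0, t_max=20.0):
    """One decision trial; returns the index of the winning pool, or -1."""
    x = np.zeros(2)                      # mean activity of the two pools
    for _ in range(int(t_max / dt)):
        inp = np.asarray(drift) + w_self * x - w_cross * x[::-1]
        x += dt * (-x + inp) + noise * np.sqrt(dt) * rng.normal(size=2)
        x = np.clip(x, 0.0, None)        # activities stay nonnegative
        if x.max() >= threshold:
            return int(np.argmax(x))
    return -1                            # no decision within t_max

choices = [trial() for _ in range(500)]
print("P(pool 0 wins):", np.mean([c == 0 for c in choices]))
```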
Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.
2015-01-01
Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
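History matching hinges on an implausibility measure: an input x is retained only if the standardized distance between each observed output and the emulator's prediction stays below a cutoff (3 is a common convention). The sketch below shows the test for a single toy output; the stand-in emulator, cutoff, and all numbers are illustrative assumptions, not the HIV case study's.

```python
import numpy as np

def implausibility(em_mean, em_var, obs, obs_var):
    """Standardized distance between observation and emulator prediction."""
    return np.abs(obs - em_mean) / np.sqrt(em_var + obs_var)

# Toy stand-in emulator of a simulator f(x) = x^2 with constant emulator variance.
xs = np.linspace(0.0, 3.0, 301)          # candidate simulator inputs
I = implausibility(em_mean=xs**2, em_var=0.05, obs=4.0, obs_var=0.1)

non_implausible = xs[I < 3.0]            # inputs surviving this wave
print("retained input range: "
      f"[{non_implausible.min():.2f}, {non_implausible.max():.2f}]")
```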
Benefits of an ultra large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~ 100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions serve as input so far, with limited spatial resolution (~10 - 80 km) and member number (~ 50). Perturbations related to the specific merits of wind power production are yet missing. Thus, single extreme error events which are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
Mino, H
2007-01-01
The aim is to estimate the parameters, namely the impulse response (IR) functions, of the linear time-invariant systems generating the intensity processes of a shot-noise-driven doubly stochastic Poisson process (SND-DSPP), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation-maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
Developing stochastic model of thrust and flight dynamics for small UAVs
NASA Astrophysics Data System (ADS)
Tjhai, Chandra
This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed which relates the thrust generated by a small motor-driven propeller to throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors which arise from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show a favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients, developed on the basis of wind tunnel experiments, is discussed at the end of this thesis.
NASA Astrophysics Data System (ADS)
Kozel, Tomas; Stary, Milos
2017-12-01
The main advantage of stochastic forecasting is the fan of possible values that a deterministic forecast cannot provide; the future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be treated as a random process. The content of this article is the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, and it forecasts values of average monthly flow from input values of average monthly flow, the learned neural network, and random numbers. Part of the data was sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to this zone. The model was compiled to forecast 1 to 12 months ahead, using backward monthly flows (NS inputs) from 2 to 11 months for model construction. The data were rid of asymmetry with the help of the Box-Cox rule (Box, Cox, 1964), with the value r found by optimization, and in the next step the data were transformed to a standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration of the model (the matrix of input-output relationships), and the last 15 years were used only for validation. Outputs of the model were compared with the real flow series. For comparison between the real flow series (100% forecast success) and the forecasts, both were applied to the management of an artificial reservoir. The course of water reservoir management using a genetic algorithm (GE) plus the real flow series was compared with a fuzzy model (Fuzzy) plus the forecast made by the moving zone model. During the evaluation process the best size of the zone was sought. Results show that the highest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, where the course of management was almost the same for all numbers in that interval. The resulting course of management was compared with the course obtained using GE plus the real flow series. The comparison showed that the fuzzy model with forecasted values was able to manage the main malfunctions, and the artificial disturbances introduced by the model were found to be essential after the values of water volume during management were evaluated. The forecasting model in combination with the fuzzy model provides very good results in the management of a water reservoir with a storage function and can be recommended for this purpose.
Noise-induced escape in an excitable system
NASA Astrophysics Data System (ADS)
Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.
2013-03-01
We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
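For comparison with the large-fluctuation analysis, noise-induced escape in the FHN model can also be probed by direct simulation of the Langevin equation, as the authors do numerically. The sketch below uses a standard excitable parameterization and a simple threshold-crossing count; all numerical values are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
eps, a, D = 0.08, 1.05, 0.02          # time-scale ratio, excitability, noise
dt, T = 0.001, 200.0
v, w = -a, -a + a**3 / 3              # deterministic stable fixed point

spikes, above = 0, False
for _ in range(int(T / dt)):
    # Euler-Maruyama step of dv = (v - v^3/3 - w)dt + sqrt(2D)dW, dw = eps(v + a)dt.
    v += dt * (v - v**3 / 3 - w) + np.sqrt(2 * D * dt) * rng.normal()
    w += dt * eps * (v + a)
    # Count an escape each time v crosses the excursion threshold upward.
    if v > 0.5 and not above:
        spikes += 1
        above = True
    elif v < -0.5:
        above = False

print("estimated escape rate:", spikes / T)
```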
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
Evaluating growth of the Porcupine Caribou Herd using a stochastic model
Walsh, Noreen E.; Griffith, Brad; McCabe, Thomas R.
1995-01-01
Estimates of the relative effects of demographic parameters on population rates of change, and of the level of natural variation in these parameters, are necessary to address potential effects of perturbations on populations. We used a stochastic model, based on survival and reproduction estimates of the Porcupine Caribou (Rangifer tarandus granti) Herd (PCH), during 1983-89 and 1989-92 to obtain distributions of potential population rates of change (r). The distribution of r produced by 1,000 trajectories of our simulation model (1983-89, r̄ = 0.013; 1989-92, r̄ = 0.003) encompassed the rate of increase calculated from an independent series of photo-survey data over the same years (1983-89, r = 0.048; 1989-92, r = -0.035). Changes in adult female survival had the largest effect on r, followed by changes in calf survival. We hypothesized that petroleum development on calving grounds, or changes in calving and post-calving habitats due to global climate change, would affect model input parameters. A decline in annual adult female survival from 0.871 to 0.847, or a decline in annual calf survival from 0.518 to 0.472, would be sufficient to cause a declining population, if all other input estimates remained the same. We then used these lower survival rates, in conjunction with our estimated amount of among-year variation, to determine a range of resulting population trajectories. Stochastic models can be used to better understand dynamics of populations, optimize sampling investment, and evaluate potential effects of various factors on population growth.
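To show what such a stochastic projection looks like in code, the toy below draws annual adult-female and calf survival around the means reported in the abstract (0.871 and 0.518) and records the distribution of realized growth rates r. The two-stage structure, fecundity value, and survival standard deviations are our illustrative assumptions, not the PCH model's.

```python
import numpy as np

rng = np.random.default_rng(9)

def realized_r(years=10, s_adult=0.871, s_calf=0.518, sd=0.03, fecundity=0.75):
    """One stochastic trajectory; returns the realized annual rate of change r."""
    n_adult, n_calf = 1000.0, 400.0
    n0 = n_adult + n_calf
    for _ in range(years):
        sa = np.clip(rng.normal(s_adult, sd), 0.0, 1.0)  # annual adult survival
        sc = np.clip(rng.normal(s_calf, sd), 0.0, 1.0)   # annual calf survival
        n_adult = sa * n_adult + sc * n_calf             # survivors plus recruits
        n_calf = fecundity * n_adult                     # new calf cohort
    return np.log((n_adult + n_calf) / n0) / years

rates = np.array([realized_r() for _ in range(1000)])
print(f"distribution of r: mean {rates.mean():.3f}, sd {rates.std():.3f}")
```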
Non-Gaussian, non-dynamical stochastic resonance
NASA Astrophysics Data System (ADS)
Szczepaniec, Krzysztof; Dybiec, Bartłomiej
2013-11-01
The classical model revealing stochastic resonance is the motion of an overdamped particle in a double-well fourth-order potential, where the combined action of noise and external periodic driving results in the amplification of weak signals. Resonance behavior can also be observed in non-dynamical systems. The simplest example is a threshold-triggered device. It consists of a periodically modulated input and noise. Every time the output crosses the threshold, the signal is recorded. Such a digitally filtered signal is sensitive to the noise intensity. There exists an optimal value of the noise intensity resulting in the "most" periodic output. Here, we explore properties of the non-dynamical stochastic resonance in non-equilibrium situations, i.e. when the Gaussian noise is replaced by an α-stable noise. We demonstrate that non-equilibrium α-stable noises, depending on noise parameters, can either weaken or enhance the non-dynamical stochastic resonance.
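The threshold-triggered device described above fits in a few lines. The sketch below uses Gaussian noise (the equilibrium case, whereas the paper's focus is α-stable noise) and exhibits the resonance by scanning the noise intensity; all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def power_at_drive(noise_sigma, f0=0.01, amp=0.6, threshold=1.0, n=20_000):
    """Spectral power of the thresholded output at the drive frequency."""
    t = np.arange(n)
    x = amp * np.sin(2 * np.pi * f0 * t) + noise_sigma * rng.normal(size=n)
    out = (x > threshold).astype(float)       # threshold-triggered recording
    spectrum = np.abs(np.fft.rfft(out - out.mean()))**2 / n
    return spectrum[int(f0 * n)]              # rfft bin k corresponds to freq k/n

# The subthreshold signal (amp < threshold) is transmitted best at an
# intermediate noise level: the stochastic resonance signature.
for sigma in (0.1, 0.3, 0.6, 1.0, 2.0):
    print(f"sigma = {sigma:4.1f}: power at f0 = {power_at_drive(sigma):8.2f}")
```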
Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting
Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan
2013-01-01
In the present paper, we investigate the blast induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test, in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. Longitudinal, transversal and the vertical ground motion component are analyzed at three monitoring points that are located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
Considering inventory distributions in a stochastic periodic inventory routing system
NASA Astrophysics Data System (ADS)
Yadollahi, Ehsan; Aghezzaf, El-Houssaine
2017-07-01
Dealing with the stochasticity of parameters is one of the critical issues in business and industry nowadays. Supply chain planners have difficulties in forecasting the stochastic parameters of a distribution system. Demand rates of customers during their lead time are one of these parameters. In addition, holding a huge level of inventory at the retailers is costly and inefficient. To cover the uncertainty of forecasting demand rates, researchers have proposed the usage of safety stock to avoid stock-out. However, finding the precise level of safety stock depends on forecasting the statistical distribution of demand rates and their variations in different settings over the planning horizon. In this paper the demand rate distributions and their parameters are taken into account for each time period in a stochastic periodic IRP. An analysis of the obtained statistical distribution of the inventory and safety stock level is provided to measure the effects of input parameters on the output indicators. Different values of the coefficient of variation are applied to the customers' demand rate in the optimization model. The outcome of the deterministic equivalent model of SPIRP is simulated in the form of an illustrative case.
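As a small worked example of how the demand-rate distribution feeds the safety stock level, the classical normal-approximation formula is shown below. The service level, lead time, and demand parameters are invented for illustration; the paper's SPIRP model determines these levels within the optimization rather than by this closed form.

```python
import math
from scipy.stats import norm

# Per-period demand rate distribution (mean and coefficient of variation).
mean_rate, cv = 20.0, 0.4          # units/day; illustrative values
lead_time, alpha = 3.0, 0.95       # days; target cycle service level

sigma_rate = cv * mean_rate
z = norm.ppf(alpha)                                   # safety factor z_alpha
safety_stock = z * sigma_rate * math.sqrt(lead_time)  # independent periods assumed
reorder_point = mean_rate * lead_time + safety_stock

print(f"safety stock: {safety_stock:.1f} units, reorder point: {reorder_point:.1f}")
```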
NASA Astrophysics Data System (ADS)
Havaej, Mohsen; Coggan, John; Stead, Doug; Elmo, Davide
2016-04-01
Rock slope geometry and discontinuity properties are among the most important factors in realistic rock slope analysis, yet they are often oversimplified in numerical simulations. This is primarily due to the difficulties in obtaining accurate structural and geometrical data as well as the stochastic representation of discontinuities. Recent improvements in both digital data acquisition and incorporation of discrete fracture network data into numerical modelling software have provided better tools to capture rock mass characteristics, slope geometries and digital terrain models, allowing more effective modelling of rock slopes. The advantages of improved data acquisition technology, such as safer and faster data collection, greater areal coverage, and accurate data geo-referencing, far exceed the limitations due to orientation bias and occlusion. A key benefit of a detailed point cloud dataset is the ability to measure and evaluate discontinuity characteristics such as orientation, spacing/intensity and persistence. This data can be used to develop a discrete fracture network which can be imported into the numerical simulations to study the influence of the stochastic nature of the discontinuities on the failure mechanism. We demonstrate the application of digital terrestrial photogrammetry in discontinuity characterization and distinct element simulations within a slate quarry. An accurately geo-referenced photogrammetry model is used to derive the slope geometry and to characterize geological structures. We first show how a discontinuity dataset, obtained from a photogrammetry model, can be used to characterize discontinuities and to develop discrete fracture networks. A deterministic three-dimensional distinct element model is then used to investigate the effect of some key input parameters (friction angle, spacing and persistence) on the stability of the quarry slope model. Finally, adopting a stochastic approach, discrete fracture networks are used as input for 3D distinct element simulations to better understand the stochastic nature of the geological structure and its effect on the quarry slope failure mechanism. The numerical modelling results highlight the influence of discontinuity characteristics and kinematics on the slope failure mechanism and the variability in the size and shape of the failed blocks.
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
NASA Astrophysics Data System (ADS)
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The effect of randomness in the input on the stability of the stochastic process solution is also studied.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
MONALISA for stochastic simulations of Petri net models of biochemical systems.
Balazki, Pavel; Lindauer, Klaus; Einloft, Jens; Ackermann, Jörg; Koch, Ina
2015-07-10
The concept of Petri nets (PN) is widely used in systems biology and allows modeling of complex biochemical systems like metabolic systems, signal transduction pathways, and gene expression networks. In particular, PN allows topological analysis based on structural properties, which is important and useful when quantitative (kinetic) data are incomplete or unknown. Knowing the kinetic parameters, the simulation of the time evolution of such models can help to study the dynamic behavior of the underlying system. If the number of involved entities (molecules) is low, a stochastic simulation should be preferred over the classical deterministic approach of solving ordinary differential equations. The Stochastic Simulation Algorithm (SSA) is a common method for such simulations. The combination of qualitative and semi-quantitative PN modeling and stochastic analysis techniques provides a valuable approach in the field of systems biology. Here, we describe the implementation of stochastic analysis in a PN environment. We extended MONALISA, an open-source software tool for the creation, visualization and analysis of PN, by several stochastic simulation methods. The simulation module offers four simulation modes, among them the stochastic mode with constant firing rates and Gillespie's algorithm in exact and approximate versions. The simulator is operated by a user-friendly graphical interface and accepts input data such as concentrations and reaction rate constants that are common parameters in the biological context. The key features of the simulation module are visualization of simulations, interactive plotting, export of results into a text file, mathematical expressions for describing simulation parameters, and up to 500 parallel simulations of the same parameter set. To illustrate the method we discuss a model for insulin receptor recycling as a case study. We present software that combines the modeling power of Petri nets with stochastic simulation of dynamic processes in a user-friendly environment supported by an intuitive graphical interface. The program offers a valuable alternative to modeling with ordinary differential equations, especially when simulating single-cell experiments with low molecule counts. The ability to use mathematical expressions provides additional flexibility in describing the simulation parameters. The open-source distribution allows further extensions by third-party developers. The software is cross-platform and is licensed under the Artistic License 2.0.
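Gillespie's algorithm, mentioned above as the exact SSA, is short enough to sketch. The following is a minimal Python illustration of the direct method on a toy production/degradation network; the reaction system and rate constants are placeholders, not MONALISA's implementation.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensity, t_end, rng=None):
    """Gillespie's direct method: exact sample paths of a well-mixed
    chemical reaction network (state x, one stoichiometry row per reaction)."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)                 # propensity of each reaction channel
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)  # which channel fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy network: production 0 -> S at rate k1, degradation S -> 0 at rate k2*S.
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
prop = lambda x: np.array([k1, k2 * x[0]])
t, s = gillespie_direct([0], stoich, prop, t_end=100.0)
print("final molecule count:", s[-1, 0])   # fluctuates around k1/k2 = 100
```

Each step draws an exponential waiting time from the total propensity and picks the firing channel in proportion to its share of that total, which is what makes the sampled paths statistically exact.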
NASA Technical Reports Server (NTRS)
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
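A minimal sketch of the construction described in this abstract, white noise driving a linear multi-channel filter to realize prescribed correlation between channels, is given below. The one-pole filter, the mixing matrix, and the coherence value are illustrative assumptions, not the instrument model of the paper.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n = 100_000

# Two independent white-noise input streams.
w = rng.standard_normal((2, n))

# A static mixing matrix sets the cross-channel correlation (~rho),
# a stand-in for the coherence/polarization-state coupling of two channels.
rho = 0.6
mix = np.array([[1.0, 0.0],
                [rho, np.sqrt(1 - rho**2)]])
v = mix @ w

# A common low-pass filter shapes the spectrum (temporal correlation).
b, a = [0.1], [1.0, -0.9]          # simple one-pole IIR filter
x = np.vstack([lfilter(b, a, v[0]), lfilter(b, a, v[1])])

print("sample cross-channel correlation:", np.corrcoef(x)[0, 1])  # ~rho
```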
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
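The first step above, summarizing a spike train by the shape and scale parameters of a Gamma process, can be illustrated with a simple moment-based estimate. The state-space estimator and the two conversion formulas of the paper are not reproduced here; the LIF step below uses only the standard noiseless mean-ISI relation, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spike train: inter-spike intervals drawn from a Gamma process.
true_shape, true_scale = 3.0, 0.02             # illustrative values (s)
isis = rng.gamma(true_shape, true_scale, size=500)

# Method-of-moments estimates of the two spiking characteristics
# (a simple stand-in for the paper's state-space estimation).
m, v = isis.mean(), isis.var()
shape_hat = m**2 / v
scale_hat = v / m
print(f"shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.3f}")

# Second step (sketch): choose the constant input current mu of a leaky
# integrate-and-fire neuron so that its noiseless mean ISI matches m,
# using v_th = mu * tau * (1 - exp(-T/tau)) solved for mu at T = m.
tau, v_th = 0.01, 1.0                          # membrane time constant, threshold
mu = v_th / (tau * (1 - np.exp(-m / tau)))
print(f"reconstructed input current mu ~ {mu:.1f}")
```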
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typically slow O(N^{-1/2}) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
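The core idea, feeding a low-discrepancy uniform stream through the inverse Poisson CDF to drive tau-leaping, can be sketched for a pure-death process. This toy setup (the reaction, rates, and plain scrambled-Sobol randomization) is an illustrative assumption, much simpler than the paper's test problems.

```python
import numpy as np
from scipy.stats import poisson, qmc

def tau_leap_death(x0, k, tau, n_steps, u):
    """tau-leaping for S -> 0 with rate k*S; Poisson increments are taken
    from a stream u of uniforms (pseudo-random or low-discrepancy)."""
    x = np.full(u.shape[0], float(x0))
    for step in range(n_steps):
        lam = k * x * tau                    # expected firings per path
        x -= poisson.ppf(u[:, step], lam)    # inverse-CDF Poisson sampling
        x = np.maximum(x, 0.0)
    return x

x0, k, tau, n_steps, n_paths = 1000, 1.0, 0.01, 50, 1024
rng = np.random.default_rng(2)

u_mc = rng.random((n_paths, n_steps))                        # plain Monte Carlo
u_qmc = qmc.Sobol(d=n_steps, scramble=True).random(n_paths)  # randomized QMC

target = x0 * (1 - k * tau) ** n_steps       # mean of the tau-leap discretization
for name, u in [("MC ", u_mc), ("QMC", u_qmc)]:
    est = tau_leap_death(x0, k, tau, n_steps, u).mean()
    print(f"{name}: mean = {est:.1f} (tau-leap target {target:.1f})")
```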
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
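A generic version of such a screening step can be sketched as follows: sample all inputs, run the model, and rank each input by the magnitude of its rank (Spearman) correlation with the outcome. The toy model and the distributions are placeholders, not the multimedia fate model of the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 10_000

# Placeholder inputs: lognormal "rates" with very different spreads.
inputs = {
    "emission":  rng.lognormal(0.0, 1.0, n),
    "half_life": rng.lognormal(0.0, 0.5, n),
    "dilution":  rng.lognormal(0.0, 0.1, n),
}

# Placeholder model outcome (e.g., an exposure concentration).
y = inputs["emission"] * inputs["half_life"] / inputs["dilution"]

# Rank inputs by |Spearman correlation| with the outcome.
ranking = sorted(
    ((name, abs(spearmanr(x, y).correlation)) for name, x in inputs.items()),
    key=lambda item: -item[1],
)
for name, score in ranking:
    print(f"{name:10s} |rho| = {score:.2f}")
```

Inputs whose rank correlation is near zero are candidates for being fixed at point values, concentrating distribution-building effort on the influential few.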
NASA Astrophysics Data System (ADS)
Keller, J. Y.; Chabir, K.; Sauter, D.
2016-03-01
State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied, but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such a disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.
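The classical building block underlying this approach, estimating a constant bias by augmenting the state vector of a Kalman filter, can be sketched on a scalar plant. The intermittent unknown-input machinery and the Bernoulli data-loss analysis of the paper are beyond this toy example; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar plant x_{k+1} = a*x_k + d + w_k, measured with noise;
# the constant bias d is estimated by augmenting the state: z = [x, d].
a, q, r, d_true = 0.9, 0.01, 0.04, 0.5
F = np.array([[a, 1.0], [0.0, 1.0]])   # bias modeled as a random constant
H = np.array([[1.0, 0.0]])
Q = np.diag([q, 1e-6])                 # tiny process noise keeps d adaptive
R = np.array([[r]])

z = np.array([0.0, 0.0])               # state estimate [x_hat, d_hat]
P = np.eye(2)
x = 0.0
for k in range(200):
    x = a * x + d_true + rng.normal(0, np.sqrt(q))   # simulate the plant
    y = x + rng.normal(0, np.sqrt(r))
    # Kalman predict/update on the augmented state.
    z = F @ z
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (np.array([y]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated bias d ~ {z[1]:.2f} (true {d_true})")
```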
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
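The Galerkin projection step described above can be shown on a scalar toy problem: for du/dt = -(k0 + k1*xi)u with xi standard normal, expanding u in probabilists' Hermite polynomials and projecting yields a coupled linear deterministic system. The sketch below uses that toy equation, not the Navier-Stokes equations of the paper; the order and coefficients are illustrative.

```python
import numpy as np
from scipy.linalg import expm

k0, k1, P, t = 1.0, 0.3, 8, 1.0   # mean/std of k, PC truncation order, end time

# Galerkin coupling matrix M[i, j] = E[xi He_i He_j] / E[He_i^2] for
# probabilists' Hermite polynomials, using xi*He_j = He_{j+1} + j*He_{j-1}:
# nonzero entries are M[i, i-1] = 1 and M[i, i+1] = i + 1.
M = np.zeros((P + 1, P + 1))
for i in range(P + 1):
    if i >= 1:
        M[i, i - 1] = 1.0
    if i + 1 <= P:
        M[i, i + 1] = i + 1.0

# Coupled deterministic system du/dt = -(k0*I + k1*M) u, u(0) deterministic.
u0 = np.zeros(P + 1)
u0[0] = 1.0
u = expm(-(k0 * np.eye(P + 1) + k1 * M) * t) @ u0

# The zeroth coefficient is the mean; compare with E[exp(-(k0+k1*xi)t)].
exact_mean = np.exp(-k0 * t + 0.5 * (k1 * t) ** 2)
print(f"gPC mean {u[0]:.6f} vs exact {exact_mean:.6f}")
```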
Direct connections assist neurons to detect correlation in small amplitude noises
Bolhasani, E.; Azizi, Y.; Valizadeh, A.
2013-01-01
We address a question on the effect of common stochastic inputs on the correlation of the spike trains of two neurons when they are coupled through direct connections. We show that the change in the correlation of small amplitude stochastic inputs can be better detected when the neurons are connected by direct excitatory couplings. Depending on whether intrinsic firing rate of the neurons is identical or slightly different, symmetric or asymmetric connections can increase the sensitivity of the system to the input correlation by changing the mean slope of the correlation transfer function over a given range of input correlation. In either case, there is also an optimum value for synaptic strength which maximizes the sensitivity of the system to the changes in input correlation.
Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin
2015-06-01
We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection.
Zhang, Haibin; He, Qingbo; Kong, Fanrang
2015-08-28
Stochastic resonance (SR) has been proved to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on a SR in an underdamped system, which consists of a pinning potential model. The model was firstly discovered from magnetic domain wall (DW) in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation and system performance. We also propose the strategy of selecting the proper damping factor and other system parameters to match a weak signal, input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than the conventional SR (CSR) with merits of higher output SNR, better anti-noise and frequency response capability. Besides, the system can be designed accurately and efficiently owing to the sensibility of parameters and potential diversity. The features also weaken the limitation of small parameters on SR system.
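The mechanism can be illustrated with an Euler-Maruyama simulation of an underdamped bistable oscillator driven by a weak sinusoid plus noise. This sketch uses a standard quartic double well instead of the paper's pinning potential, with illustrative parameters; whether a clear resonance peak appears depends on matching the noise-induced hopping rate to the drive frequency.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 1e-3, 2**19
gamma, A, w, D = 0.5, 0.3, 0.5, 0.2    # damping, drive amplitude/freq, noise

t = np.arange(n) * dt
x, v = 1.0, 0.0
xs = np.empty(n)
for i in range(n):
    # Quartic double well U(x) = -x^2/2 + x^4/4 plus a weak periodic drive.
    acc = -gamma * v + x - x**3 + A * np.sin(w * t[i])
    v += acc * dt + np.sqrt(2 * D * gamma * dt) * rng.standard_normal()
    x += v * dt
    xs[i] = x

# Power spectrum of the output; SR shows up as a peak at the drive frequency.
spec = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
freqs = 2 * np.pi * np.fft.rfftfreq(n, dt)      # angular-frequency axis
k = np.argmin(np.abs(freqs - w))
background = np.median(spec[max(1, k - 20):k + 20])
print(f"output power at drive / local background ~ {spec[k] / background:.1f}")
```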
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
An approach to the drone fleet survivability assessment based on a stochastic continuous-time model
NASA Astrophysics Data System (ADS)
Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos
2017-09-01
An approach and algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rates, the limit deviation from the norms of the drone fleet recovery, the drone fleet operational availability coefficient, the probability of failure-free drone operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account the damaging factors of system accidents, are suggested. Dependencies of the drone fleet survivability on both drone stability and the number of drones are analysed.
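One concrete instance of such a stochastic continuous-time model is a birth-death Markov chain in which each of N drones fails at rate lambda and is restored at rate mu; the probability that enough drones remain operational at time t follows from the generator matrix. The structure and all numbers below are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.linalg import expm

N, lam, mu = 10, 0.05, 0.5      # drones, failure rate, restoration rate (1/h)
k_required, t = 7, 24.0         # mission needs >= 7 drones up, horizon 24 h

# Continuous-time Markov chain on states 0..N = number of operational drones.
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i > 0:                   # one of the i operational drones fails
        Q[i, i - 1] = i * lam
    if i < N:                   # one failed drone is restored
        Q[i, i + 1] = (N - i) * mu
    Q[i, i] = -Q[i].sum()       # generator rows sum to zero

p0 = np.zeros(N + 1)
p0[N] = 1.0                     # start with all drones operational
p_t = p0 @ expm(Q * t)          # state distribution at time t
print(f"P(at least {k_required} drones up at t={t} h) = {p_t[k_required:].sum():.4f}")
```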
NASA Astrophysics Data System (ADS)
Lovvorn, James R.; Jacob, Ute; North, Christopher A.; Kolts, Jason M.; Grebmeier, Jacqueline M.; Cooper, Lee W.; Cui, Xuehua
2015-03-01
Network models can help generate testable predictions and more accurate projections of food web responses to environmental change. Such models depend on predator-prey interactions throughout the network. When a predator currently consumes all of its prey's production, the prey's biomass may change substantially with loss of the predator or invasion by others. Conversely, if production of deposit-feeding prey is limited by organic matter inputs, system response may be predictable from models of primary production. For sea floor communities of shallow Arctic seas, increased temperature could lead to invasion or loss of predators, while reduced sea ice or change in wind-driven currents could alter organic matter inputs. Based on field data and models for three different sectors of the northern Bering Sea, we found a number of cases where all of a prey's production was consumed but the taxa involved varied among sectors. These differences appeared not to result from numerical responses of predators to abundance of preferred prey. Rather, they appeared driven by stochastic variations in relative biomass among taxa, due largely to abiotic conditions that affect colonization and early post-larval survival. Oscillatory tendencies of top-down versus bottom-up interactions may augment these variations. Required inputs of settling microalgae exceeded existing estimates of annual primary production by 50%; thus, assessing limits to bottom-up control depends on better corrections of satellite estimates to account for production throughout the water column. Our results suggest that in this Arctic system, stochastic abiotic conditions outweigh deterministic species interactions in food web responses to a varying environment.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
McNamara, C; Naddy, B; Rohan, D; Sexton, J
2003-10-01
The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project. It is accessible as a Web-based application service. The system allows and supports very significant complexity in the data sets used as the model input, but provides a simple, general purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions at each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels to apply brand information such as market share and loyalty to the calculation of food and chemical intake.
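The linear kernel described above (intake = sum over foods of amount eaten times chemical concentration), combined with bootstrapping on subjects and probabilistic inputs, can be sketched as follows. The foods, distributions, and percentile choice are invented for illustration and are not the system's actual data model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_iter = 200, 2000

# Fake intake diaries: daily amount of each food per subject (g/day).
diary = {"bread": rng.gamma(2.0, 50.0, n_subjects),
         "milk":  rng.gamma(1.5, 120.0, n_subjects)}

# Probabilistic chemical concentration in each food (mg/g).
conc_sampler = {"bread": lambda n: rng.lognormal(-7.0, 0.4, n),
                "milk":  lambda n: rng.lognormal(-8.0, 0.6, n)}

exposures = np.empty(n_iter)
for it in range(n_iter):
    idx = rng.integers(0, n_subjects, n_subjects)   # bootstrap on subjects
    intake = sum(diary[f][idx] * conc_sampler[f](n_subjects) for f in diary)
    exposures[it] = np.percentile(intake, 97.5)     # high-percentile consumer

print(f"97.5th-percentile exposure: {np.mean(exposures):.4f} mg/day "
      f"(95% CI {np.percentile(exposures, 2.5):.4f}-"
      f"{np.percentile(exposures, 97.5):.4f})")
```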
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
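For the special case of an integrate-and-fire model without leak, the membrane voltage is a Wiener process with drift and the first-passage (spike-time) density is the closed-form inverse Gaussian, which makes the likelihood idea easy to illustrate. The paper's integral-equation method covers the general leaky, time-varying case that this sketch does not; the grid search is a deliberately crude stand-in for gradient-based fitting.

```python
import numpy as np

def fpt_density(t, mu, sigma, a):
    """First-passage-time density of dV = mu dt + sigma dW from V=0 to the
    threshold a: the inverse Gaussian density (valid for mu, a > 0)."""
    return a / (sigma * np.sqrt(2 * np.pi * t**3)) * \
        np.exp(-(a - mu * t) ** 2 / (2 * sigma**2 * t))

def neg_log_lik(mu, isis, sigma=1.0, a=1.0):
    return -np.sum(np.log(fpt_density(isis, mu, sigma, a)))

# Synthetic interspike intervals: exact inverse-Gaussian samples
# (numpy's Wald distribution with mean a/mu and scale a^2/sigma^2).
rng = np.random.default_rng(7)
mu_true, a, sigma = 2.0, 1.0, 1.0
isis = rng.wald(a / mu_true, a**2 / sigma**2, size=500)

# Crude grid-search maximum likelihood for the drift (input current).
grid = np.linspace(0.5, 4.0, 200)
mu_hat = grid[np.argmin([neg_log_lik(m, isis) for m in grid])]
print(f"ML drift estimate ~ {mu_hat:.2f} (true {mu_true})")
```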
Effluent trading in river systems through stochastic decision-making process: a case study.
Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh
2017-09-01
The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two pessimistic and optimistic decision-making models to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of, and the errors in, the calculation of wastewater treatment costs. The results of a river water quality simulation model are used as inputs to the models. The proposed models are applied in a case study on the Zarjoub River in northern Iran to determine the best solution for the pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces the costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces the costs by US$ 218,852, but it is less acceptable to pollutant sources. Therefore, it appears that giving due attention to stability, or in other words the acceptability of pollution trading programs for all pollutant sources, is an essential element of their success.
Burkitt, A N
2006-08-01
The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems.
Zhao, Xudong; Wang, Xinyong; Zong, Guangdeng; Zheng, Xiaolong
2017-10-01
This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) high-order nonstrict-feedback system structure. The considered mathematical models can represent many practical systems in the actual engineering. By adopting the approximation ability of neural networks, common stochastic Lyapunov function method together with adding an improved power integrator technique, an adaptive state feedback controller with multiple adaptive laws is systematically designed for the systems. Subsequently, a controller with only two adaptive laws is proposed to solve the problem of over parameterization. Under the designed controllers, all the signals in the closed-loop system are bounded-input bounded-output stable in probability, and the system output can almost surely track the target trajectory within a specified bounded error. Finally, simulation results are presented to show the effectiveness of the proposed approaches.
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only expected to belong in certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in a poor channel environment. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) circumstances (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR declines significantly compared with the conventional model, falling by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
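The two fitted distributions translate directly into a schedule generator: a binomial draw for how many occupants stay late and an exponential draw for each one's overtime duration. The parameter values below are placeholders, not the fitted values from the measured building.

```python
import numpy as np

rng = np.random.default_rng(8)

n_occupants = 80          # building population
p_overtime = 0.15         # chance an occupant works overtime on a given day
mean_duration_h = 1.8     # mean overtime duration (hours past normal close)

def overtime_day():
    """One day's overtime schedule: (number staying late, each one's hours)."""
    n_stay = rng.binomial(n_occupants, p_overtime)
    durations = rng.exponential(mean_duration_h, n_stay)
    return n_stay, durations

# Occupant-hours of overtime per day over a simulated year of workdays.
yearly = [overtime_day()[1].sum() for _ in range(250)]
print(f"mean overtime load: {np.mean(yearly):.1f} occupant-hours/day")
```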
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis, using a spectral/perturbation approach, of steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability for three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
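The first ingredient of such simulations is generating the spatially correlated log k input process. Below is a one-dimensional sketch using Cholesky factorization of an exponential covariance model; the grid, correlation length, and moments are illustrative, and the coupled two-phase flow solve of the paper is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(9)

n, L, corr_len = 200, 100.0, 10.0        # grid points, domain length, corr. length
sigma2_logk, mean_logk = 1.0, -11.0      # variance and mean of log k

x = np.linspace(0.0, L, n)
# Exponential covariance C(h) = sigma^2 * exp(-|h| / corr_len).
C = sigma2_logk * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
Lchol = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for stability

logk = mean_logk + Lchol @ rng.standard_normal(n)   # one correlated realization
print("sample variance of log k:", logk.var())       # ~ sigma2_logk
```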
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
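The Monte Carlo baseline used for comparison in this work can be sketched for a single machine with an Ornstein-Uhlenbeck (exponentially correlated) power input. The swing-type dynamics and all parameters below are illustrative assumptions, not the generator model of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
n_paths, n_steps, dt = 20_000, 10_000, 1e-3
M, D = 2.0, 1.0                          # inertia and damping (illustrative)
p_sigma, p_tau = 0.2, 0.5                # OU input: std and correlation time

w = np.zeros(n_paths)                    # frequency deviation of the machine
p = np.zeros(n_paths)                    # power-input fluctuation about its mean
for _ in range(n_steps):
    # OU update with stationary std p_sigma: dp = -p/tau dt + sigma*sqrt(2/tau) dW.
    p += -p / p_tau * dt \
         + p_sigma * np.sqrt(2 * dt / p_tau) * rng.standard_normal(n_paths)
    # Swing-type dynamics in deviation form: M dw/dt = p - D*w.
    w += (p - D * w) / M * dt

pdf, edges = np.histogram(w, bins=60, density=True)  # MC estimate of the PDF
print(f"stationary std of frequency deviation ~ {w.std():.4f}")
```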
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Bauermeister, Christoph; Schwalger, Tilo; Russell, David F; Neiman, Alexander B; Lindner, Benjamin
2013-01-01
Stochastic signals with pronounced oscillatory components are frequently encountered in neural systems. Input currents to a neuron in the form of stochastic oscillations could be of exogenous origin, e.g. sensory input or synaptic input from a network rhythm. They shape spike firing statistics in a characteristic way, which we explore theoretically in this report. We consider a perfect integrate-and-fire neuron that is stimulated by a constant base current (to drive regular spontaneous firing), along with Gaussian narrow-band noise (a simple example of stochastic oscillations), and a broadband noise. We derive expressions for the nth-order interval distribution, its variance, and the serial correlation coefficients of the interspike intervals (ISIs) and confirm these analytical results by computer simulations. The theory is then applied to experimental data from electroreceptors of paddlefish, which have two distinct types of internal noisy oscillators, one forcing the other. The theory provides an analytical description of their afferent spiking statistics during spontaneous firing, and replicates a pronounced dependence of ISI serial correlation coefficients on the relative frequency of the driving oscillations, and furthermore allows extraction of certain parameters of the intrinsic oscillators embedded in these electroreceptors.
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and the subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with the stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and whose values can differ by orders of magnitude. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit.
Coupling induced logical stochastic resonance
NASA Astrophysics Data System (ADS)
Aravind, Manaoj; Murali, K.; Sinha, Sudeshna
2018-06-01
In this work we demonstrate the following result: when we have two coupled bistable sub-systems, each driven separately by an external logic input signal, the coupled system yields outputs that can be mapped to specific logic gate operations in a robust manner, in an optimal window of noise. So, though the individual systems receive only one logic input each, due to the interplay of coupling, nonlinearity and noise, they cooperatively respond to give a logic output that is a function of both inputs. Thus the emergent collective response of the system, due to the inherent coupling, in the presence of a noise floor, maps consistently to the logic output of the two inputs, a phenomenon we term coupling-induced Logical Stochastic Resonance. Lastly, we demonstrate our idea in proof-of-principle circuit experiments.
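The setup can be explored numerically with two overdamped bistable units, each receiving one logic input, coupled linearly and sharing a small bias that selects the gate. The sketch below scans the noise intensity and reports how often the collective state matches the OR truth table; all parameters are illustrative assumptions, and the location of the optimal noise window depends on tuning the coupling, bias, and logic levels.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, n_steps = 0.01, 5000
c, bias, lvl = 0.8, 0.1, 0.3      # coupling, asymmetry bias, logic input level

def or_gate_accuracy(D, trials=10):
    """Fraction of trials in which sign(x) reproduces OR(i1, i2)."""
    hits, total = 0, 0
    for i1 in (0, 1):
        for i2 in (0, 1):
            want = 1.0 if (i1 or i2) else -1.0
            for _ in range(trials):
                x = y = s = 0.0
                for k in range(n_steps):
                    # Each bistable unit sees its own logic input plus coupling.
                    fx = x - x**3 + c * (y - x) + bias + (lvl if i1 else -lvl)
                    fy = y - y**3 + c * (x - y) + bias + (lvl if i2 else -lvl)
                    x += fx * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
                    y += fy * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
                    if k >= n_steps // 2:
                        s += np.sign(x)   # majority vote over the late phase
                hits += (np.sign(s) == want)
                total += 1
    return hits / total

for D in (0.05, 0.15, 0.3, 0.6):
    print(f"D = {D:<4}: OR-gate accuracy {or_gate_accuracy(D):.2f}")
```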
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation γβ*, where γ is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(γβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(γβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(γβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis.
Global dynamics of a stochastic neuronal oscillator
NASA Astrophysics Data System (ADS)
Yamanobe, Takanobu
2013-11-01
Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
Wang, Yong; Zhang, Guang J.
2016-09-29
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to a significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to the observed -47.16 W/m². The improvement in SWCF over the tropics is due to the decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of the simulated climatology to uncertain parameters in the stochastic deep convection scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-03-01
vulnerable people will have access to this airdropped consumable aid (since nobody is necessarily coordinating the distribution on the ground... VBA) platforms (see Appendix B). In particular, we used GAMS v.23.9.3 with IBM ILOG CPLEX 12.4.0.1 to solve the stochastic, mixed-integer weighted...goal programming model, and we used Excel/VBA to create an automatic, user-friendly interface with the decision maker for model input and analysis of
Iraeus, Johan; Lindquist, Mats
2016-10-01
Frontal crashes still account for approximately half of all fatalities in passenger cars, despite several decades of crash-related research. For serious injuries in this crash mode, several authors have listed the thorax as the most important. Computer simulation provides an effective tool to study crashes and evaluate injury mechanisms, and using stochastic input data, whole populations of crashes can be studied. The aim of this study was to develop a generic buck model and to validate this model on a population of real-life frontal crashes in terms of the risk of rib fracture. The study was conducted in four phases. In the first phase, real-life validation data were derived by analyzing NASS/CDS data to find the relationship between injury risk and crash parameters. In addition, available statistical distributions for the parameters were collected. In the second phase, a generic parameterized finite element (FE) model of a vehicle interior was developed based on laser scans from the A2MAC1 database. In the third phase, model parameters that could not be found in the literature were estimated using reverse engineering based on NCAP tests. Finally, in the fourth phase, the stochastic FE model was used to simulate a population of real-life crashes, and the result was compared to the validation data from phase one. The stochastic FE simulation model overestimates the risk of rib fracture, more for young occupants and less for senior occupants. However, if the effect of underestimation of rib fractures in the NASS/CDS material is accounted for using statistical simulations, the risk of rib fracture based on the stochastic FE model matches the risk based on the NASS/CDS data for senior occupants. The current version of the stochastic model can be used to evaluate new safety measures using a population of frontal crashes for senior occupants. Copyright © 2016 Elsevier Ltd. All rights reserved.
InterSpread Plus: a spatial and stochastic simulation model of disease in animal populations.
Stevenson, M A; Sanson, R L; Stern, M W; O'Leary, B D; Sujau, M; Moles-Benfell, N; Morris, R S
2013-04-01
We describe the spatially explicit, stochastic simulation model of disease spread, InterSpread Plus, in terms of its epidemiological framework, operation, and mode of use. The input data required by the model, the method for simulating contact and infection spread, and methods for simulating disease control measures are described. Data and parameters that are essential for disease simulation modelling using InterSpread Plus are distinguished from those that are non-essential, and it is suggested that a rational approach to simulating disease epidemics using this tool is to start with core data and parameters, adding additional layers of complexity if and when the specific requirements of the simulation exercise require it. We recommend that simulation models of disease are best developed as part of epidemic contingency planning so decision makers are familiar with model outputs and assumptions and are well-positioned to evaluate their strengths and weaknesses to make informed decisions in times of crisis. Copyright © 2012 Elsevier B.V. All rights reserved.
Noise-enhanced coding in phasic neuron spike trains.
Ly, Cheng; Doiron, Brent
2017-01-01
The stochastic nature of neuronal response has led to conjectures about the impact of input fluctuations on neural coding. For the most part, low-pass membrane integration and spike threshold dynamics have been the primary features assumed in the transfer from synaptic input to output spiking. Phasic neurons are a common, but understudied, neuron class characterized by a subthreshold negative feedback that suppresses spike train responses to low-frequency signals. Past work has shown that when a low-frequency signal is accompanied by moderate-intensity broadband noise, phasic neuron spike trains are well locked to the signal. We extend these results with a simple, reduced model of phasic activity demonstrating that the non-Markovian spike train structure caused by the negative feedback produces noise-enhanced coding. Further, this enhancement is sensitive to the timescales, as opposed to the intensity, of the driving signal. Reduced hazard function models show that noise-enhanced phasic codes are novel and distinct from the classical stochastic resonance reported in non-phasic neurons. The general features of our theory suggest that noise-enhanced codes in excitable systems with subthreshold negative feedback are a particularly rich framework to study.
Identification of gene regulation models from single-cell data
NASA Astrophysics Data System (ADS)
Weber, Lisa; Raymond, William; Munsky, Brian
2018-09-01
In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
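The stochastic simulations such analyses rest on are typically exact-sampling (Gillespie) algorithms. A minimal sketch for a two-state (telegraph) gene model follows; the model structure and all rate constants are illustrative stand-ins, not the specific system studied in the paper.

```python
import numpy as np

def gillespie_telegraph(k_on=0.1, k_off=0.05, k_tx=5.0, k_deg=1.0,
                        t_end=100.0, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a telegraph gene model.

    States: promoter g in {0, 1}; mRNA count m.
    Reactions: g 0->1 (k_on), g 1->0 (k_off),
               m -> m+1 at rate k_tx*g, m -> m-1 at rate k_deg*m.
    """
    rng = np.random.default_rng(seed)
    t, g, m = 0.0, 0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = np.array([k_on * (1 - g), k_off * g, k_tx * g, k_deg * m])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)       # time to next reaction
        r = rng.choice(4, p=rates / total)      # which reaction fires
        if r == 0:   g = 1
        elif r == 1: g = 0
        elif r == 2: m += 1
        else:        m -= 1
        times.append(t)
        counts.append(m)
    return np.asarray(times), np.asarray(counts)

t, m = gillespie_telegraph()
print("final mRNA count:", m[-1], "  trajectory mean:", round(m.mean(), 2))
```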
Jump rates for surface diffusion of large molecules from first principles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shea, Patrick, E-mail: patrick.shea@dal.ca; Kreuzer, Hans Jürgen
2015-04-21
We apply a recently developed stochastic model for the surface diffusion of large molecules to calculate jump rates for 9,10-dithioanthracene on a Cu(111) surface. The necessary input parameters for the stochastic model are calculated from first principles using density functional theory (DFT). We find that the inclusion of van der Waals corrections to the DFT energies is critical to obtain good agreement with experimental results for the adsorption geometry and energy barrier for diffusion. The predictions for jump rates in our model are in excellent agreement with measured values and show a marked improvement over transition state theory (TST). We find that the jump rate prefactor is reduced by an order of magnitude from the TST estimate due to frictional damping resulting from energy exchange with surface phonons, as well as a rotational mode of the diffusing molecule.
Theory of Arachnid Prey Localization
NASA Astrophysics Data System (ADS)
Stürzl, W.; Kempter, R.; van Hemmen, J. L.
2000-06-01
Sand scorpions and many other arachnids locate their prey through highly sensitive slit sensilla at the tips (tarsi) of their eight legs. This sensor array responds to vibrations with stimulus-locked action potentials encoding the target direction. We present a neuronal model to account for stimulus angle determination using a population of second-order neurons, each receiving excitatory input from one tarsus and inhibition from a triad opposite to it. The input opens a time window whose width determines a neuron's firing probability. Stochastic optimization is realized through tuning the balance between excitation and inhibition. The agreement with experiments on the sand scorpion is excellent.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect the optimal model complexity. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N
2013-05-01
A whole-farm, stochastic and dynamic simulation model was developed to predict the biophysical and economic performance of grazing dairy systems. Several whole-farm models simulate grazing dairy systems, but most of them work at a herd level. This model, named e-Dairy, differs from the few models that work at an animal level, because it allows stochastic behaviour of the genetic merit of individual cows for several traits, namely, yields of milk, fat and protein, live weight (LW) and body condition score (BCS), within a whole-farm model. This model accounts for genetic differences between cows, is sensitive to genotype × environment interactions at an animal level and allows pasture growth and milk and supplement prices to behave stochastically. The model includes an energy-based animal module that predicts intake at grazing, mammary gland functioning and body lipid change. This whole-farm model simulates a 365-day period for individual cows within a herd, with cow parameters randomly generated on the basis of the mean parameter values, defined as input, and variances and covariances from experimental data sets. The main inputs of e-Dairy are farm area, use of land, type of pasture, type of crops, monthly pasture growth rate, supplements offered, nutritional quality of feeds, herd description including herd size, age structure, calving pattern, BCS and LW at calving, probabilities of pregnancy, average genetic merit and economic values for items of income and costs. The model allows management policies to be set that define cow dry-off (cessation of lactation), target pre- and post-grazing herbage mass, and feed supplementation. The main outputs are herbage dry matter intake, annual pasture utilisation, milk yield, changes in BCS and LW, economic farm profit and return on assets. The model showed satisfactory accuracy of prediction when validated against two data sets from farmlet system experiments. Relative prediction errors were <10% for all variables, and concordance correlation coefficients were over 0.80 for annual pasture utilisation and yields of milk and milk solids (MS; fat plus protein), and 0.69 and 0.48 for LW and BCS, respectively. A simulation of two contrasting dairy systems is presented to show the practical use of the model. The model can be used to explore the effects of feeding level and genetic merit and their interactions for grazing dairy systems, evaluating the trade-offs between profit and the associated risk.
Allore, H G; Schruben, L W; Erb, H N; Oltenacu, P A
1998-03-01
A dynamic stochastic simulation model for discrete events, SIMMAST, was developed to simulate the effect of mastitis on the composition of the bulk tank milk of dairy herds. Intramammary infections caused by Streptococcus agalactiae, Streptococcus spp. other than Strep. agalactiae, Staphylococcus aureus, and coagulase-negative staphylococci were modeled as were the milk, fat, and protein test day solutions for individual cows, which accounted for the fixed effects of days in milk, age at calving, season of calving, somatic cell count (SCC), and random effects of test day, cow yield differences from herdmates, and autocorrelated errors. Probabilities for the transitions among various states of udder health (uninfected or subclinically or clinically infected) were calculated to account for exposure, heifer infection, spontaneous recovery, lactation cure, infection or cure during the dry period, month of lactation, parity, within-herd yields, and the number of quarters with clinical intramammary infection in the previous and current lactations. The stochastic simulation model was constructed using estimates from the literature and also using data from 164 herds enrolled with Quality Milk Promotion Services that each had bulk tank SCC between 500,000 and 750,000/ml. Model parameters and outputs were validated against a separate data file of 69 herds from the Northeast Dairy Herd Improvement Association, each with a bulk tank SCC that was > or = 500,000/ml. Sensitivity analysis was performed on all input parameters for control herds. Using the validated stochastic simulation model, the control herds had a stable time average bulk tank SCC between 500,000 and 750,000/ml.
Robustness analysis of an air heating plant and control law by using polynomial chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colón, Diego; Ferreira, Murillo A. S.; Bueno, Átila M.
2014-12-10
This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan at the air input (which forces the air through the tube) and a mass flux sensor at the output. A heating resistance warms the air as it flows inside the tube, and a thermocouple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by system identification techniques. The mass flux sensor, which is nonlinear, is linearized, and the delays in the transfer functions are approximated by non-minimum-phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and on the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (that is the MPC). Statistical data for the system (such as expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated in the open-loop and closed-loop pole positions.
Indirect Identification of Linear Stochastic Systems with Known Feedback Dynamics
NASA Technical Reports Server (NTRS)
Huang, Jen-Kuang; Hsiao, Min-Hung; Cox, David E.
1996-01-01
An algorithm is presented for identifying a state-space model of linear stochastic systems operating under a known feedback controller. In this algorithm, only the reference input and output of closed-loop data are required; no feedback signal needs to be recorded. The overall closed-loop system dynamics is first identified. Then a recursive formulation is derived to compute the open-loop plant dynamics from the identified closed-loop system dynamics and the known feedback controller dynamics. The controller can be a dynamic or constant-gain full-state feedback controller. Numerical simulations and test data from a highly unstable large-gap magnetic suspension system are presented to demonstrate the feasibility of this indirect identification method.
Intrinsic Information Processing and Energy Dissipation in Stochastic Input-Output Dynamical Systems
2015-07-09
Crutchfield. Information Anatomy of Stochastic Equilibria, Entropy, (08 2014): 0. doi: 10.3390/e16094713. Virgil Griffith, Edwin Chong, Ryan James...Christopher Ellison, James Crutchfield. Intersection Information Based on Common Randomness, Entropy, (04 2014): 0. doi: 10.3390/e16041985. TOTAL: 5 Number...Learning Group Seminar, Complexity Sciences Center, UC Davis. Korana Burke and Greg Wimsatt (UCD), reviewed PRL "Measurement of Stochastic Entropy
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for both nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for the assessment of realistic behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for performance and sustainability assessment, based on advanced nonlinear computer analysis, of transport infrastructure structures such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.
Stochastic modelling of the hydrologic operation of rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Guo, Rui; Guo, Yiping
2018-07-01
Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
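The paper's contribution is the analytical solution of the stochastic mass balance; a crude daily-step Monte Carlo of the same tank process, of the kind such analytical results are checked against, can be sketched in a few lines. The function name, the daily time-stepping, and all parameter values below are invented for illustration.

```python
import numpy as np

def simulate_rwh(storage=5.0, demand=0.05, catch=100.0, runoff_coef=0.9,
                 event_rate=0.1, mean_depth=10.0, years=50, seed=1):
    """Monte Carlo sketch of a rainwater-harvesting tank.

    Rainfall is a marked Poisson process: event arrivals at `event_rate`
    per day with exponentially distributed depths (mm) of mean `mean_depth`.
    `storage` and daily `demand` are in m^3; `catch` is roof area in m^2.
    """
    rng = np.random.default_rng(seed)
    days = int(365 * years)
    s = 0.0                                    # current storage (m^3)
    supplied = spilled = inflow_total = 0.0
    for _ in range(days):
        n_events = rng.poisson(event_rate)
        depth_mm = rng.exponential(mean_depth, n_events).sum()
        inflow = runoff_coef * catch * depth_mm / 1000.0   # mm * m^2 -> m^3
        inflow_total += inflow
        s += inflow
        if s > storage:                        # tank overflows
            spilled += s - storage
            s = storage
        draw = min(s, demand)                  # supply what the tank holds
        supplied += draw
        s -= draw
    reliability = supplied / (demand * days)
    capture = 1.0 - spilled / inflow_total if inflow_total > 0 else float("nan")
    return reliability, capture

rel, cap = simulate_rwh()
print(f"water-supply reliability: {rel:.2f}, stormwater capture: {cap:.2f}")
```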
The role of predictive uncertainty in the operational management of reservoirs
NASA Astrophysics Data System (ADS)
Todini, E.
2014-09-01
The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
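The paper's central point admits a toy numerical illustration: under an asymmetric loss, the release decision minimizing expected loss over the full predictive density differs from the decision implied by a single model forecast. The lognormal predictive density, the cost ratio, and the `expected_loss` function below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical predictive density of inflow (m^3/s): lognormal forecast
inflow_samples = rng.lognormal(mean=3.0, sigma=0.5, size=20_000)
point_forecast = np.exp(3.0)          # the model's "best" (median) value

def expected_loss(release, inflows, flood_cost=10.0, shortage_cost=1.0):
    """Asymmetric loss: inflow exceeding the release capacity (flooding)
    is ten times costlier than over-releasing (illustrative costs)."""
    excess = np.maximum(inflows - release, 0.0)   # flooded volume
    unused = np.maximum(release - inflows, 0.0)   # water released in vain
    return (flood_cost * excess + shortage_cost * unused).mean()

releases = np.linspace(5, 80, 300)
losses = [expected_loss(r, inflow_samples) for r in releases]
best = releases[int(np.argmin(losses))]

print(f"decision from full predictive density: release {best:.1f}")
print(f"decision from point forecast alone:    release {point_forecast:.1f}")
```

Because flooding is penalized more heavily, the density-based decision hedges toward a high quantile of the predictive distribution rather than its central value, which is exactly why a point forecast is less robust here.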
Li, Yongming; Tong, Shaocheng
2017-12-01
In this paper, an adaptive fuzzy output-constrained control design approach is addressed for multi-input multi-output uncertain stochastic nonlinear systems in nonstrict-feedback form. The nonlinear systems addressed in this paper possess unstructured uncertainties, unknown gain functions and unknown stochastic disturbances. Fuzzy logic systems are utilized to tackle the problem of unknown nonlinear uncertainties. The barrier Lyapunov function technique is employed to solve the output-constrained problem. In the framework of backstepping design, an adaptive fuzzy control design scheme is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained to a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter to within a desired accuracy, given a target value for the performance function. Two different problems, the what-if and goal-seeking problems, are explained and defined using an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial from a single simulation run is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
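The goal-seeking iteration described here is, in its simplest form, a Robbins-Monro stochastic approximation: adjust the controllable input with decreasing gains until the noisy performance function matches the target. The sketch below shows that bare-bones form; the linear test response and all constants are illustrative, not the paper's specific algorithm.

```python
import numpy as np

def goal_seek(simulate, target, x0=1.0, n_iter=500, gain=0.5, seed=0):
    """Robbins-Monro stochastic approximation for the goal-seeking problem:
    find input x such that E[simulate(x)] = target, where `simulate`
    returns only a noisy observation of the performance function."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_iter + 1):
        y = simulate(x, rng)
        x -= (gain / n) * (y - target)   # decreasing step sizes a_n = gain/n
    return x

# Illustrative system: noisy response with E[y] = 2*x (unknown to the solver)
noisy_response = lambda x, rng: 2.0 * x + rng.normal(0.0, 0.2)
x_star = goal_seek(noisy_response, target=10.0)
print(f"estimated input: {x_star:.3f} (exact answer is 5.0)")
```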
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of the hydraulic conductivity (lnK) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single-collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in model-data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and produced an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.
NASA Astrophysics Data System (ADS)
Papoulakos, Konstantinos; Pollakis, Giorgos; Moustakis, Yiannis; Markopoulos, Apostolis; Iliopoulou, Theano; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2017-04-01
Small islands are regarded as promising areas for developing hybrid water-energy systems that combine multiple sources of renewable energy with pumped-storage facilities. An essential element of such systems is the water storage component (reservoir), which implements both flow and energy regulation. Evidently, the representation of the overall water-energy management problem requires the simulation of the operation of the reservoir system, which in turn requires a faithful estimation of water inflows and of the demands for water and energy. Yet, in small-scale reservoir systems, this task is far from straightforward, since both the availability and the accuracy of the associated information are generally very poor. In contrast to large-scale reservoir systems, for which it is quite easy to find systematic and reliable hydrological data, in small systems such data may be scarce or even totally missing. The stochastic approach is the only means to account for input data uncertainties within the combined water-energy management problem. Using as an example the Livadi reservoir, the pumped-storage component of the small Aegean island of Astypalaia, Greece, we provide a simulation framework comprising: (a) a stochastic model for generating synthetic rainfall and temperature time series; (b) a stochastic rainfall-runoff model, whose parameters cannot be inferred through calibration and are thus represented as correlated random variables; (c) a stochastic model for estimating water supply and irrigation demands, based on simulated temperature and soil moisture; and (d) a daily operation model of the reservoir system, providing stochastic forecasts of water and energy outflows. Acknowledgement: This research was conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Probabilistic switching circuits in DNA
Wilhelm, Daniel; Bruck, Jehoshua
2018-01-01
A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block. PMID:29339484
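The composition principle is easy to demonstrate in simulation if the strand-displacement mechanics are abstracted away entirely: each unbiased switch becomes a fair coin, and the presence/absence pattern of n control molecules becomes a bit vector encoding a dyadic probability. The sketch below shows one standard way that unbiased p = 1/2 elements compose into an arbitrary probability k/2^n; it is a schematic abstraction, not the authors' specific circuit layout.

```python
import numpy as np

def switch_circuit(control_bits, rng):
    """One pass of an input 'strand' through a chain of unbiased switches.

    `control_bits` (presence/absence of n control molecules) encodes the
    target probability p = 0.b1b2...bn in binary. At stage i a fair switch
    fires; comparing its outcome against bit b_i either decides the output
    or passes the strand on to the next stage.
    """
    for b in control_bits:
        flip = rng.integers(0, 2)
        if flip < b:        # flip=0 while b=1: input converted to output
            return 1
        if flip > b:        # flip=1 while b=0: input consumed, no output
            return 0
    return 0                # all stages passed: total probability exactly p

rng = np.random.default_rng(7)
bits = [1, 0, 1]            # p = 1/2 + 0/4 + 1/8 = 0.625
trials = 200_000
hits = sum(switch_circuit(bits, rng) for _ in range(trials))
print(f"empirical output probability: {hits / trials:.4f} (target 0.625)")
```

Each stage fires with probability 1/2 and is reached with probability 1/2 per preceding stage, so the output probability is the binary fraction encoded by the control bits.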
Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram
2017-04-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
Stochastic Modeling of Radioactive Material Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason; Pope, Chad
2015-09-01
Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines, or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user interface. SODA can be installed on both Windows and Mac computers and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The work was funded through a grant from the DOE Nuclear Safety Research and Development Program.
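To give a concrete sense of this style of calculation, here is a hedged Monte Carlo sketch built around the five-factor source-term formulation commonly used in DOE safety analysis (material at risk x damage ratio x airborne release fraction x respirable fraction x leak path factor). The abstract does not spell out SODA's internal equations, so the formulation, the distribution shapes, and every numerical value below are assumptions, not SODA's defaults.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative five-factor source-term inputs (values are assumptions)
MAR = rng.triangular(50, 100, 150, n)       # material at risk (g)
DR  = rng.uniform(0.1, 0.5, n)              # damage ratio
ARF = rng.lognormal(np.log(1e-3), 0.5, n)   # airborne release fraction
RF  = rng.uniform(0.3, 0.7, n)              # respirable fraction
LPF = 1.0                                   # leak path factor (point value)

chi_q = 1e-4    # atmospheric dispersion factor (s/m^3), point value
br    = 3.3e-4  # breathing rate (m^3/s)
dcf   = 1.0e2   # dose conversion factor (rem per g inhaled), illustrative

dose = MAR * DR * ARF * RF * LPF * chi_q * br * dcf   # rem, per realization

print(f"mean dose:        {dose.mean():.3e} rem")
print(f"95th percentile:  {np.percentile(dose, 95):.3e} rem")
```

The point of sampling rather than multiplying point values is visible in the output: the 95th percentile can sit well above the mean, which is precisely the "dose distribution" insight the abstract describes.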
Noise facilitates transcriptional control under dynamic inputs.
Kellogg, Ryan A; Tay, Savaş
2015-01-29
Cells must respond sensitively to time-varying inputs in complex signaling environments. To understand how signaling networks process dynamic inputs into gene expression outputs and the role of noise in cellular information processing, we studied the immune pathway NF-κB under periodic cytokine inputs using microfluidic single-cell measurements and stochastic modeling. We find that NF-κB dynamics in fibroblasts synchronize with oscillating TNF signal and become entrained, leading to significantly increased NF-κB oscillation amplitude and mRNA output compared to non-entrained response. Simulations show that intrinsic biochemical noise in individual cells improves NF-κB oscillation and entrainment, whereas cell-to-cell variability in NF-κB natural frequency creates population robustness, together enabling entrainment over a wider range of dynamic inputs. This wide range is confirmed by experiments where entrained cells were measured under all input periods. These results indicate that synergy between oscillation and noise allows cells to achieve efficient gene expression in dynamically changing signaling environments. Copyright © 2015 Elsevier Inc. All rights reserved.
Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers
NASA Astrophysics Data System (ADS)
Gao, Xiao; Grayden, David B.; McDonnell, Mark D.
2014-08-01
Cochlear implants, also called bionic ears, are implanted neural prostheses that can restore lost human hearing function by direct electrical stimulation of auditory nerve fibers. Previously, an information-theoretic framework for numerically estimating the optimal number of electrodes in cochlear implants has been devised. This approach relies on a model of stochastic action potential generation and a discrete memoryless channel model of the interface between the array of electrodes and the auditory nerve fibers. Using these models, the stochastic information transfer from cochlear implant electrodes to auditory nerve fibers is estimated from the mutual information between channel inputs (the locations of electrodes) and channel outputs (the set of electrode-activated nerve fibers). Here we describe a revised model of the channel output in the framework that avoids the side effects caused by an "ambiguity state" in the original model and also makes fewer assumptions about perceptual processing in the brain. A detailed comparison of how different assumptions on fibers and current spread modes impact on the information transfer in the original model and in the revised model is presented. We also mathematically derive an upper bound on the mutual information in the revised model, which becomes tighter as the number of electrodes increases. We found that the revised model leads to a significantly larger maximum mutual information and corresponding number of electrodes compared with the original model and conclude that the assumptions made in this part of the modeling framework are crucial to the model's overall utility.
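The core quantity in this framework, the mutual information between electrode locations and activated fiber groups, is straightforward to compute once a discrete memoryless channel matrix is specified. A small self-contained sketch follows; the 2x3 channel matrix is an invented illustration of overlapping current spread, not a model output from the paper.

```python
import numpy as np

def mutual_information(p_x, channel):
    """I(X;Y) in bits for a discrete memoryless channel.

    `p_x` is the input distribution (electrode locations) and
    `channel[i, j] = P(Y=j | X=i)` is the probability that stimulating
    electrode i activates nerve-fiber group j.
    """
    p_xy = p_x[:, None] * channel            # joint distribution
    p_y = p_xy.sum(axis=0)                   # output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_xy > 0, p_xy / (p_x[:, None] * p_y), 1.0)
        terms = np.where(p_xy > 0, p_xy * np.log2(ratio), 0.0)
    return terms.sum()

# Two electrodes whose current spread overlaps on three fiber groups
channel = np.array([[0.8, 0.2, 0.0],
                    [0.1, 0.3, 0.6]])
p_x = np.array([0.5, 0.5])
print(f"I(X;Y) = {mutual_information(p_x, channel):.3f} bits")
```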
Economic Risk of Bee Pollination in Maine Wild Blueberry, Vaccinium angustifolium.
Asare, Eric; Hoshide, Aaron K; Drummond, Francis A; Criner, George K; Chen, Xuan
2017-10-01
Recent pollinator declines highlight the importance of evaluating economic risk of agricultural systems heavily dependent on rented honey bees or native pollinators. Our study analyzed variability of native bees and honey bees, and the risks these pose to profitability of Maine's wild blueberry industry. We used cross-sectional data from organic, low-, medium-, and high-input wild blueberry producers in 1993, 1997-1998, 2005-2007, and from 2011 to 2015 (n = 162 fields). Data included native and honey bee densities (count/m2/min) and honey bee stocking densities (hives/ha). Blueberry fruit set, yield, and honey bee hive stocking density models were estimated. Fruit set is impacted about 1.6 times more by native bees than honey bees on a per bee basis. Fruit set significantly explained blueberry yield. Honey bee stocking density in fields predicted honey bee foraging densities. These three models were used in enterprise budgets for all four systems from on-farm surveys of 23 conventional and 12 organic producers (2012-2013). These budgets formed the basis of Monte Carlo simulations of production and profit. Stochastic dominance of net farm income (NFI) cumulative distribution functions revealed that if organic yields are high enough (2,345 kg/ha), organic systems are economically preferable to conventional systems. However, if organic yields are lower (724 kg/ha), it is riskier with higher variability of crop yield and NFI. Although medium-input systems are stochastically dominant with lower NFI variability compared with other conventional systems, the high-input system breaks even with the low-input system if honey bee hive rental prices triple in the future. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America.
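A compact sketch of the stochastic dominance tests underlying such comparisons: system A first-order dominates B when A's empirical CDF of net farm income lies at or below B's everywhere, and second-order dominates (the criterion relevant to risk-averse producers) when the integrated CDF difference stays nonpositive. The two normal NFI distributions below are invented and do not reproduce the study's budgets.

```python
import numpy as np

def ecdf_on(grid, sample):
    """Empirical CDF of `sample` evaluated at the points in `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def dominance(a, b, grid_size=2000):
    """First- and second-order stochastic dominance of sample a over b."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    diff = ecdf_on(grid, a) - ecdf_on(grid, b)
    first = bool(np.all(diff <= 0.0))
    second = bool(np.all(np.cumsum(diff) * (grid[1] - grid[0]) <= 1e-6))
    return first, second

rng = np.random.default_rng(3)
# Hypothetical net-farm-income draws ($/ha): the "medium-input" system has
# a higher mean and lower variability than "high-input" (values invented).
nfi_medium = rng.normal(1500, 200, 50_000)
nfi_high   = rng.normal(1400, 450, 50_000)

first, second = dominance(nfi_medium, nfi_high)
print("first-order dominance: ", first)    # fails: the CDFs cross
print("second-order dominance:", second)   # holds: preferred if risk-averse
```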
Wang, Huanqing; Chen, Bing; Liu, Xiaoping; Liu, Kefu; Lin, Chong
2013-12-01
This paper is concerned with the problem of adaptive fuzzy tracking control for a class of pure-feedback stochastic nonlinear systems with input saturation. To overcome the design difficulty arising from the nondifferentiable saturation nonlinearity, a smooth nonlinear function of the control input signal is first introduced to approximate the saturation function; then, an adaptive fuzzy tracking controller based on the mean-value theorem is constructed using the backstepping technique. The proposed adaptive fuzzy controller guarantees that all signals in the closed-loop system are bounded in probability and that the system output eventually converges to a small neighborhood of the desired reference signal in the sense of the mean quartic value. Simulation results further illustrate the effectiveness of the proposed control scheme.
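A common smooth surrogate for the hard saturation nonlinearity in this literature is a scaled hyperbolic tangent, whose approximation error is bounded and can therefore be absorbed as a bounded disturbance in the backstepping design. The snippet below sketches that idea; treating tanh as this paper's exact choice of smooth function is an assumption.

```python
import numpy as np

u_max = 2.0                                    # saturation limit

sat    = lambda v: np.clip(v, -u_max, u_max)   # hard, nondifferentiable
smooth = lambda v: u_max * np.tanh(v / u_max)  # smooth, bounded-error surrogate

# The residual sat(v) - smooth(v) is bounded, so the controller can treat
# it as a bounded disturbance rather than differentiate through the clip.
for vi in np.linspace(-6.0, 6.0, 7):
    print(f"v={vi:+.1f}  sat={sat(vi):+.3f}  smooth={smooth(vi):+.3f}")
```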
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nour, Ali, E-mail: ali.nour@polymtl.ca; Hydro Quebec, Montreal, Quebec, H2L 4P5; Massicotte, Bruno
This study is aimed at proposing a simple analytical model to investigate the post-cracking behaviour of FRC panels, using an arbitrary tension-softening (stress versus crack-opening) diagram as the input. A new relationship that links the crack opening to the panel deflection is proposed. Due to the stochastic nature of material properties, the random fibre distribution, and other uncertainties involved in the concrete mix, this relationship is developed from the analysis of beams having the same thickness using the Monte Carlo simulation (MCS) technique. The softening diagrams obtained from direct tensile tests are used as the input for the calculation, in a deterministic way, of the mean load-displacement response of round panels. A good agreement is found between the model predictions and the experimental results.
CARE 3 user-friendly interface user's guide
NASA Technical Reports Server (NTRS)
Martensen, A. L.
1987-01-01
CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
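The "tightest prediction covering the data" idea can be caricatured in a few lines: fit a polynomial mean model, then take the smallest spread for which every observation falls within a fixed number of standard deviations of the mean prediction. The real formulations optimize over the distribution of the model parameters and carry the reliability guarantees described above; this scalar-spread sketch, with invented data, only conveys the flavor.

```python
import numpy as np

def fit_rpm_sketch(x, y, degree=2, n_std=2.0):
    """Caricature of a random-predictor-style model: polynomial mean plus
    the smallest constant spread such that every observation lies within
    `n_std` standard deviations of the mean prediction."""
    coeffs = np.polyfit(x, y, degree)        # mean model, least squares
    resid = y - np.polyval(coeffs, x)
    sigma = np.abs(resid).max() / n_std      # tightest admissible spread
    return coeffs, sigma

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 60)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.1, x.size)

coeffs, sigma = fit_rpm_sketch(x, y)
print("mean-model coefficients:", np.round(coeffs, 3))
print(f"prediction band half-width (2 sigma): {2.0 * sigma:.3f}")
```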
Low-complexity stochastic modeling of wall-bounded shear flows
NASA Astrophysics Data System (ADS)
Zare, Armin
Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
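The consistency condition at the heart of such covariance-completion problems, in its simplest white-in-time form, is the algebraic Lyapunov equation linking the assumed linearized dynamics A, the forcing covariance BB^T, and the steady-state state covariance X: A X + X A^T + B B^T = 0. A minimal numerical check, using an invented two-state system, is sketched below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stochastically forced linear system: dx = A x dt + B dW.
# Steady-state covariance X solves  A X + X A^T + B B^T = 0,
# the consistency condition between assumed dynamics and statistics.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [0.5]])

X = solve_continuous_lyapunov(A, -B @ B.T)   # solves A X + X A^T = -B B^T

print("steady-state covariance:\n", X)
print("Lyapunov residual:", np.abs(A @ X + X @ A.T + B @ B.T).max())
```

The dissertation's point is that matching observed turbulent statistics requires going beyond this white-in-time case to colored-in-time forcing; the equation above is the baseline the completion problem generalizes.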
Chen, Zheng; Liu, Liu; Mu, Lin
2017-05-03
In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are given. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and the sAP property of the method are conducted.
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity relative to the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
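Forward-mode AD is simple enough to demonstrate from scratch with dual numbers, which is essentially what AD tools automate at scale: carrying a (value, derivative) pair through arithmetic yields exact first-order derivatives with no finite-difference step-size tuning. The toy ground-motion relation below, ln(Y) = a + b*M - c*ln(R), is an invented stand-in for a real GMPE, and all coefficients are illustrative.

```python
import math

class Dual:
    """Minimal forward-mode AD via dual numbers: (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def log(x):
    """Natural log lifted to dual numbers: d/dx log(x) = 1/x."""
    return Dual(math.log(x.val), x.dot / x.val) if isinstance(x, Dual) \
        else math.log(x)

# Toy ground-motion relation (illustrative, not a published GMPE)
def ln_y(M, R, a=-2.0, b=1.2, c=1.5):
    return a + b * M - c * log(R)

# Sensitivity of ln(Y) to magnitude M at (M=6, R=20): seed dM/dM = 1
out = ln_y(Dual(6.0, 1.0), Dual(20.0, 0.0))
print("d ln(Y) / dM =", out.dot)     # exact: equals b = 1.2
```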
Lafferty, Kevin D.; Dunne, Jennifer A.
2010-01-01
Stochastic ecological network occupancy (SENO) models predict the probability that species will occur in a sample of an ecological network. In this review, we introduce SENO models as a means to fill a gap in the theoretical toolkit of ecologists. As input, SENO models use a topological interaction network and rates of colonization and extinction (including consumer effects) for each species. A SENO model then simulates the ecological network over time, resulting in a series of sub-networks that can be used to identify commonly encountered community modules. The proportion of time a species is present in a patch gives its expected probability of occurrence, whose sum across species gives expected species richness. To illustrate their utility, we provide simple examples of how SENO models can be used to investigate how topological complexity, species interactions, species traits, and spatial scale affect communities in space and time. They can categorize species as biodiversity facilitators, contributors, or inhibitors, making this approach promising for ecosystem-based management of invasive, threatened, or exploited species.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates of the inverse of the filter input data covariance matrix, obtained by a recursive stochastic algorithm. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Identification of the structure parameters using short-time non-stationary stochastic excitation
NASA Astrophysics Data System (ADS)
Jarczewska, Kamila; Koszela, Piotr; Śniady, Paweł; Korzec, Aleksandra
2011-07-01
In this paper, we propose an approach to the flexural stiffness or eigenvalue frequency identification of a linear structure using a non-stationary stochastic excitation process. The idea of the proposed approach lies within time-domain input-output methods. The proposed method is based on transforming the dynamical problem into a static one by integrating the input and the output signals. The output signal is the structural response, i.e., displacements due to a short-time, irregular load of random type. Systems with single and multiple degrees of freedom, as well as continuous systems, are considered.
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that changes in a book of prices occur at the jump times of a Poisson process with random intensity, i.e., the moments of change follow a point process of Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. When the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept hypotheses of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
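A rough numerical illustration, with invented parameters rather than the authors' data: the intensity is modeled as a Brownian martingale, whose variance grows linearly in time as in the martingale case discussed above, and price-change epochs then form a Cox process driven by its cumulative integral.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, n_paths = 10.0, 0.01, 5000
n_steps = int(T / dt)
lam0, sigma = 5.0, 0.3

# Intensity as a Brownian martingale: lambda_t = lam0 + sigma * W_t
# (sigma is small relative to lam0, so paths rarely approach zero)
lam = lam0 + np.cumsum(rng.normal(0, sigma * np.sqrt(dt), (n_paths, n_steps)), axis=1)

# Empirical check: a martingale intensity has linearly growing variance
for k in (n_steps // 4, n_steps // 2, n_steps - 1):
    t = (k + 1) * dt
    print(f"t={t:4.1f}  Var[lambda_t]={lam[:, k].var():.3f}  sigma^2*t={sigma**2 * t:.3f}")

# Price-change epochs: a Cox process with this intensity; counts on [0, T]
Lam_T = np.clip(lam, 0, None).sum(axis=1) * dt      # cumulative intensity
n_changes = rng.poisson(Lam_T)
print("mean number of price changes:", n_changes.mean())
```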
NASA Astrophysics Data System (ADS)
Yu, Lianchun; Liu, Liwei
2014-03-01
The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.
A Model for Temperature Fluctuations in a Buoyant Plume
NASA Astrophysics Data System (ADS)
Bisignano, A.; Devenish, B. J.
2015-11-01
We present a hybrid Lagrangian stochastic model for buoyant plume rise from an isolated source that includes the effects of temperature fluctuations. The model is based on that of Webster and Thomson (Atmos Environ 36:5031-5042, 2002) in that it is a coupling of a classical plume model in a crossflow with stochastic differential equations for the vertical velocity and temperature (which are themselves coupled). The novelty lies in the addition of the latter stochastic differential equation. Parametrizations of the plume turbulence are presented that are used as inputs to the model. The root-mean-square temperature is assumed to be proportional to the difference between the centreline temperature of the plume and the ambient temperature. The constant of proportionality is tuned by comparison with equivalent statistics from large-eddy simulations (LES) of buoyant plumes in a uniform crossflow and linear stratification. We compare plume trajectories for a wide range of crossflow velocities and find that the model generally compares well with the equivalent LES results particularly when added mass is included in the model. The exception occurs when the crossflow velocity component becomes very small. Comparison of the scalar concentration, both in terms of the height of the maximum concentration and its vertical spread, shows similar behaviour. The model is extended to allow for realistic profiles of ambient wind and temperature and the results are compared with LES of the plume that emanated from the explosion and fire at the Buncefield oil depot in 2005.
Production and efficiency of large wildland fire suppression effort: A stochastic frontier analysis
Hari Katuwal; Dave Calkin; Michael S. Hand
2016-01-01
This study examines the production and efficiency of wildland fire suppression effort. We estimate the effectiveness of suppression resource inputs to produce controlled fire lines that contain large wildland fires using stochastic frontier analysis. Determinants of inefficiency are identified and the effects of these determinants on the daily production of...
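For readers unfamiliar with the method, a self-contained sketch of stochastic frontier estimation on simulated data, using the standard normal/half-normal log-likelihood of Aigner, Lovell and Schmidt (1977); the data-generating values are arbitrary and not related to the fire-suppression data of the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)

# Simulated production data: y = b0 + b1*x + v - u
n = 500
x = rng.uniform(0, 2, n)
u = np.abs(rng.normal(0, 0.4, n))          # one-sided inefficiency
v = rng.normal(0, 0.2, n)                  # symmetric noise
y = 1.0 + 0.7 * x + v - u

def neg_loglik(theta):
    b0, b1, ln_sv, ln_su = theta
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sig = np.hypot(sv, su)                 # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - b0 - b1 * x
    # Aigner-Lovell-Schmidt normal/half-normal log-likelihood
    ll = (np.log(2) - np.log(sig) + norm.logpdf(eps / sig)
          + norm.logcdf(-eps * lam / sig))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0, -1.0, -1.0], method="Nelder-Mead",
               options={"maxiter": 5000})
b0, b1, ln_sv, ln_su = res.x
print(f"frontier: {b0:.2f} + {b1:.2f}*x   "
      f"sigma_v={np.exp(ln_sv):.2f}  sigma_u={np.exp(ln_su):.2f}")
```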
Wang, Jianhui; Liu, Zhi; Chen, C L Philip; Zhang, Yun
2017-10-12
Hysteresis exists ubiquitously in physical actuators. Besides, actuator failures/faults may also occur in practice. Both effects would deteriorate the transient tracking performance, and even trigger instability. In this paper, we consider the problem of compensating for actuator failures and input hysteresis by proposing a fuzzy control scheme for stochastic nonlinear systems. Compared with the existing research on stochastic nonlinear uncertain systems, the question of how to guarantee a prescribed transient tracking performance while simultaneously accounting for actuator failures and hysteresis remains open. Our proposed control scheme is designed on the basis of the fuzzy logic system and backstepping techniques for this purpose. It is proven that all the signals remain bounded and the tracking error is ensured to be within a preestablished bound despite failures of the hysteretic actuator. Finally, simulations are provided to illustrate the effectiveness of the obtained theoretical results.
Gerstner, Wulfram
2017-01-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957
NASA Technical Reports Server (NTRS)
Mookerjee, P.; Molusis, J. A.; Bar-Shalom, Y.
1985-01-01
An investigation of the properties important for the design of stochastic adaptive controllers for the higher harmonic control of helicopter vibration is presented. Three different model types are considered for the transfer relationship between the helicopter higher harmonic control input and the vibration output: (1) nonlinear; (2) linear with slowly time-varying coefficients; and (3) linear with constant coefficients. The stochastic controller formulations and solutions are presented for a dual, cautious, and deterministic controller for both linear and nonlinear transfer models. Extensive simulations are performed with the various models and controllers. It is shown that the cautious adaptive controller can sometimes result in unacceptable vibration control. A new second-order dual controller is developed which modifies the cautious adaptive controller by adding numerator and denominator correction terms to the cautious control algorithm. The new dual controller is simulated on a simple single-control vibration example and is found to achieve excellent vibration reduction, improving significantly upon the cautious controller.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2016-01-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h^-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h^-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
Kim, Kyung Hyuk; Sauro, Herbert M
2015-01-01
This chapter introduces a computational analysis method for analyzing gene circuit dynamics in terms of modules while taking into account stochasticity, system nonlinearity, and retroactivity. (1) ANALOG ELECTRICAL CIRCUIT REPRESENTATION FOR GENE CIRCUITS: A connection between two gene circuit components is often mediated by a transcription factor (TF) and the connection signal is described by the TF concentration. The TF is sequestered to its specific binding site (promoter region) and regulates downstream transcription. This sequestration has been known to affect the dynamics of the TF by increasing its response time. The downstream effect, retroactivity, has been shown to be explicitly described in an electrical circuit representation as an input capacitance increase. We provide a brief review on this topic. (2) MODULAR DESCRIPTION OF NOISE PROPAGATION: Gene circuit signals are noisy due to the random nature of biological reactions. The noisy fluctuations in TF concentrations affect downstream regulation. Thus, noise can propagate throughout the connected system components. This can cause different circuit components to behave in a statistically dependent manner, hampering a modular analysis. Here, we show that the modular analysis is still possible at the linear noise approximation level. (3) NOISE EFFECT ON MODULE INPUT-OUTPUT RESPONSE: We investigate how to deal with a module input-output response and its noise dependency. Noise-induced phenotypes are described as an interplay between system nonlinearity and signal noise. Lastly, we provide a comprehensive approach incorporating the above three analysis methods, which we call "stochastic modular analysis." This method can provide an analysis framework for gene circuit dynamics when the nontrivial effects of retroactivity, stochasticity, and nonlinearity need to be taken into account.
Dynamical Characteristics Common to Neuronal Competition Models
Shpiro, Asya; Curtu, Rodica; Rinzel, John; Rubin, Nava
2009-01-01
Models implementing neuronal competition by reciprocally inhibitory populations are widely used to characterize bistable phenomena such as binocular rivalry. We find common dynamical behavior in several models of this general type, which differ in their architecture, in the form of their gain functions, and in how they implement the slow process that underlies alternating dominance. We focus on examining the effect of the input strength on the rate (and existence) of oscillations. In spite of their differences, all considered models possess similar qualitative features, some of which we report here for the first time. Experimentally, dominance durations have been reported to decrease monotonically with increasing stimulus strength (Levelt's "Proposition IV"). The models predict this behavior; however, they also predict that at a lower range of input strength dominance durations increase with increasing stimulus strength. The nonmonotonic dependency of duration on stimulus strength is common to both deterministic and stochastic models. We conclude that additional experimental tests of Levelt's Proposition IV are needed to reconcile models and perception. PMID:17065254
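A minimal sketch of this model class, two reciprocally inhibitory rate populations with a slow adaptation variable, integrated with Euler steps; the parameter values are generic choices for this family rather than those of the paper. Consistent with the abstract, alternations exist only over a range of input strengths, so some inputs may report no oscillation.

```python
import numpy as np

def S(x, k=10.0, th=0.2):              # sigmoid gain function
    return 1.0 / (1.0 + np.exp(-k * (x - th)))

def mean_dominance(I, beta=1.0, g=0.5, tau=0.01, tau_a=0.2, T=60.0, dt=1e-3):
    """Two reciprocally inhibitory populations with slow adaptation."""
    n = int(T / dt)
    u = np.array([0.6, 0.4])           # slight asymmetry to break the tie
    a = np.zeros(2)
    durations, last, who = [], 0.0, 0
    for i in range(n):
        inh = u[::-1]                  # each population inhibits the other
        u = u + dt / tau * (-u + S(I - beta * inh - g * a))
        a = a + dt / tau_a * (-a + u)
        dom = int(u[1] > u[0])
        if dom != who:                 # a dominance switch occurred
            durations.append(i * dt - last)
            last, who = i * dt, dom
    # discard the initial transient; inf signals no sustained alternation
    return np.mean(durations[2:]) if len(durations) > 3 else float("inf")

for I in (0.5, 0.7, 0.9):
    print(f"input strength {I:.1f}: mean dominance duration "
          f"{mean_dominance(I):.2f} s")
```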
Qorbani, Mostafa; Farzadfar, Farshad; Majdzadeh, Reza; Mohammad, Kazem; Motevalian, Abbas
2017-01-01
Our aim was to explore the technical efficiency (TE) of the Iranian rural primary healthcare (PHC) system for the diabetes treatment coverage rate using stochastic frontier analysis (SFA), as well as to examine the strength and significance of the effect of human resources density on diabetes treatment. In the SFA model, the diabetes treatment coverage rate, as an output, is a function of health system inputs (Behvarz worker density, physician density, and rural health center density) and non-health system inputs (urbanization rate, median age of population, and wealth index) as a set of covariates. Data about the rate of self-reported diabetes treatment coverage were obtained from the Non-Communicable Disease Surveillance Survey, data about health system inputs were collected from the health census database, and data about non-health system inputs were collected from the census data and household survey. In 2008, the rate of diabetes treatment coverage was 67% (95% CI: 63%-71%) nationally, and at the provincial level it varied from 44% to 81%. The TE score at the national level was 87.84%, with considerable variation across provinces (from 59.65% to 98.28%). Among health system and non-health system inputs, only the Behvarz density (per 1000 population) was significantly associated with diabetes treatment coverage (β (95%CI): 0.50 (0.29-0.70), p < 0.001). Our findings show that although the rural PHC system can be considered efficient in diabetes treatment at the national level, a wide variation exists in TE at the provincial level. Because the only significant predictor of TE is the Behvarz density, the PHC system may extend diabetes treatment coverage by relying on this group of health care workers.
Noise adaptation in integrate-and-fire neurons.
Rudd, M E; Brown, L G
1997-07-01
The statistical spiking response of an ensemble of identically prepared stochastic integrate-and-fire neurons to a rectangular input current plus gaussian white noise is analyzed. It is shown that, on average, integrate-and-fire neurons adapt to the root-mean-square noise level of their input. This phenomenon is referred to as noise adaptation. Noise adaptation is characterized by a decrease in the average neural firing rate and an accompanying decrease in the average value of the generator potential, both of which can be attributed to noise-induced resets of the generator potential mediated by the integrate-and-fire mechanism. A quantitative theory of noise adaptation in stochastic integrate-and-fire neurons is developed. It is shown that integrate-and-fire neurons, on average, produce transient spiking activity whenever there is an increase in the level of their input noise. This transient noise response is either reduced or eliminated over time, depending on the parameters of the model neuron. Analytical methods are used to prove that nonleaky integrate-and-fire neurons totally adapt to any constant input noise level, in the sense that their asymptotic spiking rates are independent of the magnitude of their input noise. For leaky integrate-and-fire neurons, the long-run noise adaptation is not total, but the response to noise is partially eliminated. Expressions for the probability density function of the generator potential and the first two moments of the potential distribution are derived for the particular case of a nonleaky neuron driven by gaussian white noise of mean zero and constant variance. The functional significance of noise adaptation for the performance of networks comprising integrate-and-fire neurons is discussed.
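A small ensemble simulation, with invented parameter values, that reproduces the qualitative effect: the input noise level steps up mid-run, and the population firing rate shows a transient burst before partially adapting.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of leaky integrate-and-fire neurons; noise std steps up at t = 1 s
N, dt, T = 5000, 1e-4, 2.0
tau, v_th, v_reset, mu = 0.02, 1.0, 0.0, 0.8    # subthreshold mean drive
v = rng.uniform(0, 0.8, N)
steps = int(T / dt)
bin_w = int(0.02 / dt)                          # 20 ms rate bins
rates, spike_count = [], 0
for i in range(steps):
    sigma = 0.05 if i * dt < 1.0 else 0.15      # noise step at t = 1 s
    v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.normal(size=N)
    fired = v >= v_th
    spike_count += fired.sum()
    v[fired] = v_reset                          # noise-induced resets
    if (i + 1) % bin_w == 0:
        rates.append(spike_count / (N * 0.02))  # population rate in Hz
        spike_count = 0

r = np.array(rates)
print("rate before step:", r[40:50].mean(), "Hz")
print("transient just after step:", r[50:52].max(), "Hz")
print("adapted rate:", r[90:100].mean(), "Hz")
```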
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun
2015-04-01
Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at a hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with soil moisture measurements made with TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year at a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic simulations via Monte Carlo, uncertainties in the soil parameters and input forcing are considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.
NASA Astrophysics Data System (ADS)
Yamakou, Marius E.; Jost, Jürgen
2017-10-01
In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
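A schematic Euler-Maruyama sketch of the parameter switch, not a reproduction of the paper's analysis: the sign of a - 1 sets whether the fixed point lies on the excitable or the oscillatory side of the fold, and weak noise then produces qualitatively different spike counts; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def fhn_spike_count(a, eps=0.01, sigma=0.03, T=2000.0, dt=0.01):
    """FitzHugh-Nagumo with weak noise on the voltage variable:
    dv = (v - v^3/3 - w) dt + sigma dW,   dw = eps (v + a) dt.
    The parameter a places the fixed point relative to the fold
    of the cubic nullcline (|a| > 1: excitable; |a| < 1: oscillatory)."""
    v, w = -a, -a + a**3 / 3.0         # start at the fixed point
    n, spikes, above = int(T / dt), 0, False
    sq = sigma * np.sqrt(dt)
    for _ in range(n):
        v += (v - v**3 / 3.0 - w) * dt + sq * rng.normal()
        w += eps * (v + a) * dt
        if v > 1.0 and not above:
            spikes += 1                # upward crossing counts as one spike
        above = v > 1.0
    return spikes

for a in (1.05, 0.95):                 # either side of the fold
    print(f"a = {a}: {fhn_spike_count(a)} spikes")
```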
Retkute, Renata; Townsend, Alexandra J; Murchie, Erik H; Jensen, Oliver E; Preston, Simon P
2018-05-25
Diurnal changes in solar position and intensity combined with the structural complexity of plant architecture result in highly variable and dynamic light patterns within the plant canopy. This affects productivity through the complex ways that photosynthesis responds to changes in light intensity. Current methods to characterize light dynamics, such as ray-tracing, are able to produce data with excellent spatio-temporal resolution but are computationally intensive and the resulting data are complex and high-dimensional. This necessitates development of more economical models for summarizing the data and for simulating realistic light patterns over the course of a day. High-resolution reconstructions of field-grown plants are assembled in various configurations to form canopies, and a forward ray-tracing algorithm is applied to the canopies to compute light dynamics at high (1 min) temporal resolution. From the ray-tracer output, the sunlit or shaded state for each patch on the plants is determined, and these data are used to develop a novel stochastic model for the sunlit-shaded patterns. The model is designed to be straightforward to fit to data using maximum likelihood estimation, and fast to simulate from. For a wide range of contrasting 3-D canopies, the stochastic model is able to summarize, and replicate in simulations, key features of the light dynamics. When light patterns simulated from the stochastic model are used as input to a model of photoinhibition, the predicted reduction in carbon gain is similar to that from calculations based on the (extremely costly) ray-tracer data. The model provides a way to summarize highly complex data in a small number of parameters, and a cost-effective way to simulate realistic light patterns. Simulations from the model will be particularly useful for feeding into larger-scale photosynthesis models for calculating how light dynamics affects the photosynthetic productivity of canopies.
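A minimal sketch of the fitting-and-simulating cycle for a stationary two-state (shaded/sunlit) Markov chain at 1-min resolution; the "ray-tracer output" is synthetic and the transition probabilities invented, and the paper's model is richer than this stationary chain.

```python
import numpy as np

rng = np.random.default_rng(11)

def fit_two_state(seq):
    """MLE of a two-state (shaded=0 / sunlit=1) Markov chain."""
    seq = np.asarray(seq)
    a, b = seq[:-1], seq[1:]
    p01 = ((a == 0) & (b == 1)).sum() / max((a == 0).sum(), 1)  # shaded -> sunlit
    p10 = ((a == 1) & (b == 0)).sum() / max((a == 1).sum(), 1)  # sunlit -> shaded
    return p01, p10

def simulate(p01, p10, n, s0=0):
    s, out = s0, []
    for _ in range(n):
        s = (rng.random() < p01) if s == 0 else (rng.random() >= p10)
        out.append(int(s))
    return np.array(out)

# Synthetic "ray-tracer output" for one leaf patch at 1-min resolution
truth = simulate(0.05, 0.10, 720)          # 12 h of sunlit/shaded states
p01, p10 = fit_two_state(truth)
print(f"fitted p01={p01:.3f}, p10={p10:.3f}")
print("sunlit fraction, data vs model: "
      f"{truth.mean():.3f} vs {p01 / (p01 + p10):.3f}")
```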
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model uniquely determines the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
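A toy version of the construction: node coordinates are drawn once at random, after which ripples spreading from an initial node deterministically fix the topology, mirroring points (i) and (ii); the radius parameter and coordinates are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def ripple_network(n=60, radius=0.25, seed_node=0):
    """Toy ripple-spreading construction: randomness enters only through
    the initial coordinates; given them, the topology is deterministic."""
    xy = rng.uniform(0, 1, (n, 2))
    dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    edges, queue, activated = set(), [seed_node], {seed_node}
    while queue:
        src = queue.pop(0)
        # the ripple from src reaches all nodes within its radius; reached
        # nodes are linked to src, and newly activated ones ripple in turn
        for j in np.flatnonzero(dist[src] <= radius):
            if j == src:
                continue
            edges.add((min(src, j), max(src, j)))
            if j not in activated:
                activated.add(j)
                queue.append(j)
    return edges

edges = ripple_network()
print(len(edges), "edges from the ripple-spreading process")
```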
Posadas-Domínguez, R R; Callejas-Juárez, N; Arriaga-Jordán, C M; Martínez-Castañeda, F E
2016-12-01
A Monte Carlo simulation model was used to assess the economic and financial viability of 130 small-scale dairy farms in central Mexico, through a Representative Small-Scale Dairy Farm. Net yields were calculated for a 9-year planning horizon by means of simulated values for the distribution of input and product prices, taking 2010 as the base year and considering four scenarios which were compared against the scenario of actual production. The other scenarios were (1) total hiring in of needed labour; (2) external purchase of 100 % of inputs and (3) withdrawal of subsidies to production. A stochastic modelling approach was followed to determine the scenario with the highest economic and financial viability. Results show a viable economic and financial situation for the actual production scenario, as well as for the scenarios of total hiring of labour and of withdrawal of subsidies, but the scenario in which 100 % of feed inputs for the herd are bought in was not viable.
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In this paper, a polynomial approximation is presented by which Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not otherwise be identified for the nonlinear FEM model.
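As a complement, a minimal Monte Carlo sketch of first-order Sobol indices using the classical pick-freeze estimator on the standard Ishigami test function; this illustrates the indices themselves rather than the paper's polynomial-approximation and Latin Hypercube procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def ishigami(x, a=7.0, b=0.1):              # standard nonlinear test function
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
            + b * x[:, 2]**4 * np.sin(x[:, 0]))

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA = ishigami(A)
var, mean = yA.var(), yA.mean()

# First-order Sobol indices via the pick-freeze estimator:
# S_i = Cov(f(A), f(B with column i taken from A)) / Var(f)
for i in range(d):
    C = B.copy(); C[:, i] = A[:, i]
    Si = (np.mean(yA * ishigami(C)) - mean**2) / var
    print(f"S_{i + 1} = {Si:.3f}")          # analytic: 0.314, 0.442, 0.000
```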
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
NASA Astrophysics Data System (ADS)
Murphy, B. P.; Czuba, J. A.; Belmont, P.; Budy, P.; Finch, C.
2017-12-01
Episodic events in steep landscapes, such as wildfire and mass wasting, contribute large pulses of sediment to rivers and can significantly alter the quality and connectivity of fish habitat. Understanding where these sediment inputs occur, how they are transported and processed through the watershed, and their geomorphic effect on the river network is critical to predicting the impact on ecological aquatic communities. The Tushar Mountains of southern Utah experienced a severe wildfire in 2010, resulting in numerous debris flows and the extirpation of trout populations. Following many years of habitat and ecological monitoring in the field, we have developed a modeling framework that links post-wildfire debris flows, fluvial sediment routing, and population ecology in order to evaluate the impact and response of trout to wildfire. First, using the Tushar topographic and wildfire parameters, as well as stochastic precipitation generation, we predict the post-wildfire debris flow probabilities and volumes of mainstem tributaries using the Cannon et al. [2010] model. This produces episodic hillslope sediment inputs, which are delivered to a fluvial sediment, river-network routing model (modified from Czuba et al. [2017]). In this updated model, sediment transport dynamics are driven by time-varying discharge associated with the stochastic precipitation generation, include multiple grain sizes (including gravel), use mixed-size transport equations (Wilcock & Crowe [2003]), and incorporate channel slope adjustments with aggradation and degradation. Finally, with the spatially explicit adjustments in channel bed elevation and grain size, we utilize a new population viability analysis (PVA) model to predict the impact and recovery of fish populations in response to these changes in habitat. Our model provides a generalizable framework for linking physical and ecological models and for evaluating the extirpation risk of isolated fish populations throughout the Intermountain West to the increasing threat of wildfire.
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There has been tremendous work in biotechnology, especially in the area of DNA molecules. The computer society is attempting to develop smaller computing devices through computational models which are based on the operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences of the input related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.
NASA Astrophysics Data System (ADS)
Baumann, Erwin W.; Williams, David L.
1993-08-01
Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
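A compact sketch of the reverse-correlation recipe on a leaky integrate-and-fire neuron: the linear filter is estimated as the (normalized) spike-triggered average and the static nonlinearity is read off by binning the generator signal against the observed rate. Parameters are illustrative, and the background-synaptic-activity setting of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

# 1) Simulate an LIF neuron driven by white-noise current
dt, T, tau, v_th = 1e-3, 2000.0, 0.02, 1.0
n = int(T / dt)
I = rng.normal(0.9, 1.0, n)                  # white-noise input signal
v, spikes = 0.0, np.zeros(n, dtype=bool)
for t in range(n):
    v += dt / tau * (-v + I[t])
    if v >= v_th:
        spikes[t], v = True, 0.0

# 2) Linear filter = spike-triggered average of the input (reverse correlation)
L = 100                                      # filter length, 100 ms
idx = np.flatnonzero(spikes); idx = idx[idx >= L]
sta = np.mean([I[i - L:i] for i in idx], axis=0) - I.mean()
sta /= np.linalg.norm(sta)                   # scale is absorbed by the nonlinearity

# 3) Static nonlinearity: bin the generator signal vs observed firing rate
g = np.convolve(I, sta[::-1], mode="full")[:n]
edges = np.quantile(g, np.linspace(0, 1, 21))
which = np.digitize(g, edges[1:-1])
rate = np.array([spikes[which == k].mean() / dt for k in range(20)])
print("nonlinearity (Hz per generator bin):", rate.round(1))
```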
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh
2017-01-01
Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.
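A minimal conjugate sketch of the power-prior mechanism for a binary endpoint: the virtual-patient likelihood enters the posterior raised to a discount alpha in [0, 1], here chosen by an invented similarity rule; the counts are made up, and the paper's discount functions are more elaborate.

```python
from scipy import stats

# Observed trial: 8 failures in 60 patients; virtual cohort: 40 in 400
n_obs, x_obs = 60, 8
n_virt, x_virt = 400, 40

def posterior(alpha, a0=1.0, b0=1.0):
    """Power-prior posterior for failure probability p, Beta(a0, b0) prior:
    p | data ~ Beta(a0 + x_obs + alpha*x_virt,
                    b0 + (n_obs - x_obs) + alpha*(n_virt - x_virt))."""
    a = a0 + x_obs + alpha * x_virt
    b = b0 + (n_obs - x_obs) + alpha * (n_virt - x_virt)
    return stats.beta(a, b)

# Discount driven by the similarity of observed and virtual failure rates
# (a simple stand-in for the discount functions studied in the paper)
p_obs, p_virt = x_obs / n_obs, x_virt / n_virt
alpha = max(0.0, 1.0 - 10 * abs(p_obs - p_virt))

post = posterior(alpha)
print(f"alpha={alpha:.2f}  posterior mean={post.mean():.3f}  "
      f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```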
Koracin, Darko; Vellore, Ramesh; Lowenthal, Douglas H; Watson, John G; Koracin, Julide; McCord, Travis; DuBois, David W; Chen, L W Antony; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil J M; Craig, Kenneth; Reid, Stephen
2011-06-01
The main objective of this study was to investigate the capabilities of the receptor-oriented inverse mode Lagrangian Stochastic Particle Dispersion Model (LSPDM) with the 12-km resolution Mesoscale Model 5 (MM5) wind field input for the assessment of source identification from seven regions impacting two receptors located in the eastern United States. The LSPDM analysis was compared with a standard version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) single-particle backward-trajectory analysis using inputs from MM5 and the Eta Data Assimilation System (EDAS) with horizontal grid resolutions of 12 and 80 km, respectively. The analysis included four 7-day summertime events in 2002; residence times in the modeling domain were computed from the inverse LSPDM runs and HYSPLIT-simulated backward trajectories started from receptor-source heights of 100, 500, 1000, 1500, and 3000 m. Statistics were derived using normalized values of LSPDM- and HYSPLIT-predicted residence times versus Community Multiscale Air Quality model-predicted sulfate concentrations used as baseline information. From 40 cases considered, the LSPDM identified first- and second-ranked emission region influences in 37 cases, whereas HYSPLIT-MM5 (HYSPLIT-EDAS) identified the sources in 21 (16) cases. The LSPDM produced a higher overall correlation coefficient (0.89) compared with HYSPLIT (0.55-0.62). The improvement of using the LSPDM is also seen in the overall normalized root mean square error values of 0.17 for LSPDM compared with 0.30-0.32 for HYSPLIT. The HYSPLIT backward trajectories generally tend to underestimate near-receptor sources because of a lack of stochastic dispersion of the backward trajectories and to overestimate distant sources because of a lack of treatment of dispersion. Additionally, the HYSPLIT backward trajectories showed a lack of consistency in the results obtained from different single vertical levels for starting the backward trajectories. To alleviate problems due to selection of a backward-trajectory starting level within a large complex set of 3-dimensional winds, turbulence, and dispersion, results were averaged from all heights, which yielded uniform improvement against all individual cases.
MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations, and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to the in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
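A minimal sketch of the well-mixed mass-balance model, integrated with forward Euler steps; the parameter values (cabin volume, air exchange, deposition, emission rate, cigarette duration) are round illustrative numbers, not the recommendations derived in the paper.

```python
import numpy as np

# Well-mixed mass balance for in-vehicle PM2.5 (illustrative values):
# dC/dt = E/V + ACH*C_out - (ACH + k_dep)*C
V = 3.0            # cabin volume, m^3
ACH = 20.0         # air exchange rate, 1/h
k_dep = 1.0        # deposition rate, 1/h
C_out = 15.0       # near-road outdoor PM2.5, ug/m^3
E = 10_000.0       # ETS emission rate while smoking, ug/h

dt, T = 1.0 / 3600.0, 0.5           # 1-s steps over half an hour
n = int(T / dt)
C = np.empty(n); C[0] = C_out
for i in range(1, n):
    smoking = (i * dt) < (7.0 / 60.0)        # one 7-minute cigarette
    src = E / V if smoking else 0.0
    C[i] = C[i - 1] + dt * (src + ACH * C_out - (ACH + k_dep) * C[i - 1])

print(f"peak {C.max():.0f} ug/m^3; after 30 min {C[-1]:.0f} ug/m^3")
```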
Stochastic Modeling of the Environmental Impacts of the Mingtang Tunneling Project
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Chen, Ziyang; Tan, Benjamin Zhi Wen; Sege, Jon; Wang, Changhong; Rubin, Yoram
2017-04-01
This paper investigates the environmental impacts of a major tunneling project in China. Of particular interest is the drawdown of the water table, due to its potential impacts on ecosystem health and on agricultural activity. Due to scarcity of data, the study pursues a Bayesian stochastic approach, which is built around a numerical model. We adopted the Bayesian approach with the goal of deriving the posterior distributions of the dependent variables conditional on local data. The choice of the Bayesian approach for this study is somewhat non-trivial because of the scarcity of in-situ measurements. The thought guiding this selection is that prior distributions for the model input variables are valuable tools even if all inputs were available: the Bayesian approach provides a good starting point for further updates as and if additional data become available. To construct effective priors, a systematic approach was developed and implemented for constructing informative priors based on other, well-documented sites which bear geological and hydrological similarity to the target site (the Mingtang tunneling project). The approach is built around two classes of similarity criteria: a physically-based set of criteria and an additional set covering epistemic criteria. The prior construction strategy was implemented for the hydraulic conductivity of various types of rocks at the site (Granite and Gneiss) and for modeling the geometry and conductivity of the fault zones. Additional elements of our strategy include (1) modeling the water table through bounding surfaces representing upper and lower limits, and (2) modeling the effective conductivity as a random variable (varying between realizations, not in space). The approach was tested successfully against its ability to predict the tunnel infiltration fluxes and against observations of drying soils.
Dervaux, B; Leleu, H; Valdmanis, V; Walker, D
2003-12-01
An aim of vaccination programs is near-complete coverage. One method for achieving this is for health facilities providing these services to operate frequently and for many hours during each session. However, if vaccine vials are not fully used, the remainder is often discarded, considered as waste. Without an active appointment schedule process, there is no way for facility staff to control the stochastic demand of potential patients, and hence reduce waste. And yet reducing the hours of operation or number of sessions per week could hinder access to vaccination services. In lieu of any formal system of controlling demand, we propose to model the optimal number of hours and sessions in order to maximize outputs, the number and type of vaccines provided given inputs, using Data Envelopment Analysis (DEA). Inputs are defined as the amount of vaccine wastage and the number of full-time equivalent staff, size of the facility, number of hours of operation and the number of sessions. Outputs are defined as the number and type of vaccines aimed at children and pregnant women. This analysis requires two models: one DEA model with possible reallocations between the number of hours and the number of sessions but with the total amount of time fixed and one model without this kind of reallocation in scheduling. Comparing these two scores we can identify the "gain" that would be possible were the scheduling of hours and sessions modified while controlling for all other types of inefficiency. By modeling an output-based model, we maintain the objective of increasing coverage while assisting decision-makers determining optimal operating processes.
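A self-contained sketch of an output-oriented, constant-returns DEA score posed as a linear program and solved with scipy's linprog, on invented clinic data; the reallocation of time between hours and sessions described above is omitted here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 clinics, inputs = (staff FTE, session-hours/week),
# outputs = (child vaccinations, maternal vaccinations) per week
X = np.array([[4, 20], [6, 30], [5, 18], [8, 40], [3, 15]], dtype=float)
Y = np.array([[120, 30], [200, 55], [110, 25], [230, 60], [100, 28]], dtype=float)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def dea_output_efficiency(k):
    """Output-oriented CRS DEA for unit k: max phi subject to
    sum_j lam_j * x_j <= x_k  and  sum_j lam_j * y_j >= phi * y_k."""
    c = np.zeros(1 + n); c[0] = -1.0             # minimize -phi
    A_in = np.hstack([np.zeros((m, 1)), X.T])    # input constraints
    A_out = np.hstack([Y[k][:, None], -Y.T])     # phi*y_k - sum lam*y <= 0
    A = np.vstack([A_in, A_out])
    b = np.concatenate([X[k], np.zeros(s)])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + n))
    return 1.0 / res.x[0]                        # efficiency score in (0, 1]

for k in range(n):
    print(f"clinic {k}: efficiency = {dea_output_efficiency(k):.3f}")
```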
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1997-05-01
A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade- off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models are deterministic approaches to simulating the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using only the prior information for the input data: the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
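A minimal sketch of the sample-until-convergence loop described here, with a cheap stand-in `model` function in place of a full Delft3D run and illustrative input distributions; none of the names or numbers below come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(offshore_height, bathy_scale, boundary_flux):
    # Stand-in for a full Delft3D run: any deterministic map from
    # uncertain inputs to an output of interest (a nearshore wave height).
    return 0.6 * offshore_height * bathy_scale + 0.1 * boundary_flux

samples, means = [], []
batch, max_batches, tol = 200, 100, 1e-3
for b in range(max_batches):
    for _ in range(batch):
        H = rng.lognormal(mean=0.5, sigma=0.2)   # offshore wave height
        s = rng.normal(1.0, 0.05)                # bathymetry scaling factor
        q = rng.normal(0.0, 0.3)                 # lateral boundary flux
        samples.append(model(H, s, q))
    means.append(np.mean(samples))
    # Stop once the running mean of the output has stabilized.
    if b > 0 and abs(means[-1] - means[-2]) < tol:
        break

print(f"mean output {means[-1]:.3f} after {len(samples)} model runs")
```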
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
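The GLUE weighting step can be sketched as follows. The `head_model` function, the observation values, and the behavioural threshold are hypothetical stand-ins for the Zimapan groundwater model and data.

```python
import numpy as np

rng = np.random.default_rng(1)

def head_model(logK):
    # Stand-in for the groundwater model: predicted heads at 3 wells
    # as a simple function of log10 hydraulic conductivity.
    return np.array([100.0, 98.0, 95.0]) + 4.0 * (logK + 6.5)

obs = np.array([99.2, 97.1, 94.6])          # observed heads (illustrative)

logK = rng.uniform(-8.0, -5.0, size=5000)   # sampled parameter sets
sse = np.array([np.sum((head_model(k) - obs) ** 2) for k in logK])

# Likelihood measure: inverse sum of squared errors; keep only
# "behavioural" runs above a threshold, as in GLUE.
L = 1.0 / sse
behavioural = L > np.quantile(L, 0.9)       # keep the best 10% of runs
w = L[behavioural] / L[behavioural].sum()   # GLUE weights

# Likelihood-weighted posterior estimate of the parameter.
print("GLUE estimate of log10 K:", np.sum(w * logK[behavioural]))
```

Predictions (here, the capture zone) would then be weighted by `w` rather than treated equally, which is what shrinks the GLUE-derived uncertainty relative to standard Monte Carlo.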
Lankheet, Martin J. M.; Klink, P. Christiaan; Borghuis, Bart G.; Noest, André J.
2012-01-01
Catfish detect and identify invisible prey by sensing their ultra-weak electric fields with electroreceptors. Any neuron that deals with small-amplitude input has to overcome sensitivity limitations arising from inherent threshold non-linearities in spike-generation mechanisms. Many sensory cells solve this issue with stochastic resonance, in which a moderate amount of intrinsic noise causes irregular spontaneous spiking activity with a probability that is modulated by the input signal. Here we show that catfish electroreceptors have adopted a fundamentally different strategy. Using a reverse correlation technique in which we take spike interval durations into account, we show that the electroreceptors generate a supra-threshold bias current that results in quasi-periodically produced spikes. In this regime stimuli modulate the interval between successive spikes rather than the instantaneous probability for a spike. This alternative for stochastic resonance combines threshold-free sensitivity for weak stimuli with similar sensitivity for excitations and inhibitions based on single interspike intervals. PMID:22403709
An alternate protocol to achieve stochastic and deterministic resonances
NASA Astrophysics Data System (ADS)
Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.
2017-10-01
Periodic and Aperiodic Stochastic Resonance (SR) and Deterministic Resonance (DR) are studied in this paper. To check for the ubiquitousness of the phenomena, two unrelated systems, namely, the FitzHugh-Nagumo model and a particle in a bistable potential well, are studied. Instead of the conventional scenario of noise amplitude (in the case of SR) or chaotic signal amplitude (in the case of DR) variation, a tunable system parameter ("a" in the case of the FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and the output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient (|CCC|). |CCC| as a function of the setpoint exhibits a unimodal variation which is characteristic of SR (or DR). Furthermore, |CCC| is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of |CCC| over this grid yields the presence of a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could possibly be used to explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce Coherence Resonance (CR). A possible, qualitative reasoning for this is provided.
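The normalised cross-correlation coefficient |CCC| between a sub-threshold input and a binarised output can be computed as in this sketch, which substitutes a toy threshold detector for the full FitzHugh-Nagumo or bistable-well dynamics; all amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 100, 10000)
sub = 0.3 * np.sin(2 * np.pi * 0.1 * t)   # sub-threshold input signal

def threshold_system(x, noise_amp, thresh=1.0):
    # Toy detector: output 1 whenever input plus noise crosses the
    # threshold; a stand-in for the full nonlinear dynamics.
    return ((x + noise_amp * rng.standard_normal(x.size)) > thresh).astype(float)

def ccc(u, v):
    # Normalised cross-correlation coefficient at zero lag.
    u, v = u - u.mean(), v - v.mean()
    return abs(np.sum(u * v) / np.sqrt(np.sum(u**2) * np.sum(v**2)))

for amp in [0.4, 0.8, 1.2, 2.0]:
    print(amp, round(ccc(sub, threshold_system(sub, amp)), 3))
# |CCC| rises and then falls with noise amplitude: the unimodal
# signature of stochastic resonance described in the abstract.
```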
Modeling heterogeneous responsiveness of intrinsic apoptosis pathway
2013-01-01
Background Apoptosis is a cell suicide mechanism that enables multicellular organisms to maintain homeostasis and to eliminate individual cells that threaten the organism’s survival. Depending on the type of stimulus, apoptosis can be propagated by the extrinsic pathway or the intrinsic pathway. The comprehensive understanding of the molecular mechanism of apoptotic signaling allows for development of mathematical models, aiming to elucidate dynamical and systems properties of apoptotic signaling networks. There have been extensive efforts in modeling the deterministic apoptosis network, accounting for the average behavior of a population of cells. Cellular networks, however, are inherently stochastic, and significant cell-to-cell variability in apoptosis response has been observed at the single-cell level. Results To address the inevitable randomness in the intrinsic apoptosis mechanism, we develop a theoretical and computational modeling framework of the intrinsic apoptosis pathway at the single-cell level, accounting for both deterministic and stochastic behavior. Our deterministic model, adapted from the well-accepted Fussenegger model, shows that an additional positive feedback between the executioner caspase and the initiator caspase plays a fundamental role in yielding the desired property of bistability. We then examine the impact of intrinsic fluctuations of biochemical reactions, viewed as intrinsic noise, and natural variation of protein concentrations, viewed as extrinsic noise, on the behavior of the intrinsic apoptosis network. Histograms of the steady-state output at varying input levels show that the intrinsic noise can elicit a wider region of bistability than that of the deterministic model. However, the system stochasticity due to intrinsic fluctuations, such as the noise of the steady-state response and the randomness of the response delay, shows that the intrinsic noise is in general insufficient to produce significant cell-to-cell variations at physiologically relevant levels of molecule numbers. Furthermore, the extrinsic noise, represented by random variations of two key apoptotic proteins, namely Cytochrome C and inhibitor of apoptosis proteins (IAP), is modeled separately or in combination with intrinsic noise. The resultant stochasticity in the timing of the intrinsic apoptosis response shows that the fluctuating protein variations can induce cell-to-cell stochastic variability at a quantitative level agreeing with experiments. Finally, simulations illustrate that the mean abundance of the fluctuating IAP protein is positively correlated with the degree of cellular stochasticity of the intrinsic apoptosis pathway. Conclusions Our theoretical and computational study shows that the pronounced non-genetic heterogeneity in intrinsic apoptosis responses among individual cells plausibly arises from an extrinsic rather than intrinsic origin of fluctuations. In addition, it predicts that the IAP protein could serve as a potential therapeutic target for suppression of the cell-to-cell variation in intrinsic apoptosis responsiveness. PMID:23875784
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
Holdo, Ricardo M.
2013-01-01
The two-layer hypothesis of tree-grass coexistence posits that trees and grasses differ in rooting depth, with grasses exploiting soil moisture in shallow layers while trees have exclusive access to deep water. The lack of clear differences in maximum rooting depth between these two functional groups, however, has caused this model to fall out of favor. The alternative model, the demographic bottleneck hypothesis, suggests that trees and grasses occupy overlapping rooting niches, and that stochastic events such as fires and droughts result in episodic tree mortality at various life stages, thus preventing trees from otherwise displacing grasses, at least in mesic savannas. Two potential problems with this view are: 1) we lack data on functional rooting profiles in trees and grasses, and these profiles are not necessarily reflected by differences in maximum or physical rooting depth, and 2) subtle, difficult-to-detect differences in rooting profiles between the two functional groups may be sufficient to result in coexistence in many situations. To tackle this question, I coupled a plant uptake model with a soil moisture dynamics model to explore the environmental conditions under which functional rooting profiles with equal rooting depth but different depth distributions (i.e., shapes) can coexist when competing for water. I show that, as long as rainfall inputs are stochastic, coexistence based on rooting differences is viable under a wide range of conditions, even when these differences are subtle. The results also indicate that coexistence mechanisms based on rooting niche differentiation are more viable under some climatic and edaphic conditions than others. This suggests that the two-layer model is both viable and stochastic in nature, and that a full understanding of tree-grass coexistence and dynamics may require incorporating fine-scale rooting differences between these functional groups and realistic stochastic climate drivers into future models. PMID:23950900
Correction to Verdonck and Tuerlinckx (2014).
2015-01-01
Reports an error in "The Ising Decision Maker: A binary stochastic network for choice response time" by Stijn Verdonck and Francis Tuerlinckx (Psychological Review, 2014[Jul], Vol 121[3], 422-462). An inaccurate assumption in Appendix B (provided in the erratum) led to an oversimplified result in Equation 18 (the diffusion equations associated with the microscopically defined dynamics). The authors sincerely thank Rani Moran for making them aware of the problem. Only the expression of the diffusion coefficient D is incorrect, and should be changed, as indicated in the erratum. Both the cause of the problem and the solution are also explained in the erratum. (The following abstract of the original article appeared in record 2014-31650-006.) The Ising Decision Maker (IDM) is a new formal model for speeded two-choice decision making derived from the stochastic Hopfield network or dynamic Ising model. On a microscopic level, it consists of 2 pools of binary stochastic neurons with pairwise interactions. Inside each pool, neurons excite each other, whereas between pools, neurons inhibit each other. The perceptual input is represented by an external excitatory field. Using methods from statistical mechanics, the high-dimensional network of neurons (microscopic level) is reduced to a two-dimensional stochastic process, describing the evolution of the mean neural activity per pool (macroscopic level). The IDM can be seen as an abstract, analytically tractable multiple attractor network model of information accumulation. In this article, the properties of the IDM are studied, the relations to existing models are discussed, and it is shown that the most important basic aspects of two-choice response time data can be reproduced. In addition, the IDM is shown to predict a variety of observed psychophysical relations such as Piéron's law, the van der Molen-Keuss effect, and Weber's law. Using Bayesian methods, the model is fitted to both simulated and real data, and its performance is compared to the Ratcliff diffusion model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Optimizing information flow in small genetic networks. IV. Spatial coupling
NASA Astrophysics Data System (ADS)
Sokolowski, Thomas R.; Tkačik, Gašper
2015-06-01
We typically think of cells as responding to external signals independently by regulating their gene expression levels, yet they often locally exchange information and coordinate. Can such spatial coupling be of benefit for conveying signals subject to gene regulatory noise? Here we extend our information-theoretic framework for gene regulation to spatially extended systems. As an example, we consider a lattice of nuclei responding to a concentration field of a transcriptional regulator (the input) by expressing a single diffusible target gene. When input concentrations are low, diffusive coupling markedly improves information transmission; optimal gene activation functions also systematically change. A qualitatively different regulatory strategy emerges where individual cells respond to the input in a nearly steplike fashion that is subsequently averaged out by strong diffusion. While motivated by early patterning events in the Drosophila embryo, our framework is generically applicable to spatially coupled stochastic gene expression models.
Stochastic Estimation of Arm Mechanical Impedance During Robotic Stroke Rehabilitation
Palazzolo, Jerome J.; Ferraro, Mark; Krebs, Hermano Igo; Lynch, Daniel; Volpe, Bruce T.; Hogan, Neville
2009-01-01
This paper presents a stochastic method to estimate the multijoint mechanical impedance of the human arm suitable for use in a clinical setting, e.g., with persons with stroke undergoing robotic rehabilitation for a paralyzed arm. In this context, special circumstances such as hypertonicity and tissue atrophy due to disuse of the hemiplegic limb must be considered. A low-impedance robot was used to bring the upper limb of a stroke patient to a test location, generate force perturbations, and measure the resulting motion. Methods were developed to compensate for input signal coupling at low frequencies apparently due to human–machine interaction dynamics. Data were analyzed by spectral procedures that make no assumption about model structure. The method was validated by measuring simple mechanical hardware, and results from a patient's hemiplegic arm are presented. PMID:17436881
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
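A minimal sketch of the Monte Carlo flavour of such a probabilistic response computation, using a textbook cantilever-deflection formula with invented input distributions; this is not the actual PSAM finite element code.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Random inputs: modulus of elasticity and applied load (illustrative).
E = rng.normal(200e9, 10e9, N)            # Pa, material-property scatter
P = rng.lognormal(np.log(1e4), 0.15, N)   # N, stochastic launch load
L, I = 2.0, 8e-6                          # beam length (m), second moment (m^4)

# Tip deflection of a cantilever: delta = P * L^3 / (3 * E * I)
delta = P * L**3 / (3 * E * I)

limit = 0.02                              # allowable deflection (m)
print("mean deflection:", delta.mean())
print("P(failure):", np.mean(delta > limit))   # probabilistic reliability
```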
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
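PCR stochasticity of the kind quantified here is naturally modeled as a branching process: each molecule duplicates with some probability per cycle, so random early duplication events are amplified exponentially. A minimal sketch, with an assumed per-cycle efficiency, follows; it is not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)

def pcr(n_molecules, cycles, efficiency):
    # Each cycle, every copy is duplicated with probability `efficiency`:
    # a Galton-Watson branching process per starting molecule.
    counts = np.full(n_molecules, 1, dtype=np.int64)
    for _ in range(cycles):
        counts += rng.binomial(counts, efficiency)
    return counts

# 1000 unique amplicons, each starting from a single template molecule.
final = pcr(1000, cycles=20, efficiency=0.9)
print("CV of final copy numbers:", final.std() / final.mean())
# Low-input libraries show large skew purely from this stochasticity,
# consistent with the paper's finding that it dominates PCR distortion.
```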
NASA Astrophysics Data System (ADS)
Kearsey, Tim; Williams, John; Finlayson, Andrew; Williamson, Paul; Dobbs, Marcus; Kingdon, Andrew; Campbell, Diarmad
2014-05-01
Geological maps and 3D models usually depict lithostratigraphic units, which can comprise many different types of sediment (lithologies). The lithostratigraphic units shown on maps and 3D models of glacial and post-glacial deposits in Glasgow are substantially defined by the method of formation and the age of the unit rather than by its lithological composition. Therefore, a simple assumption that the dominant lithology is the most common constituent of any stratigraphic unit is erroneous and is only 58% predictive of the actual sediment types seen in a borehole. This is problematic for non-geologists such as planners, regulators and engineers attempting to use these models to inform their decisions, and can lead to such users viewing maps and models as of limited use in such decision making. We explore the extent to which stochastic modelling can help to make geological models more predictive of lithology in heterolithic units. Stochastic modelling techniques are commonly used to model facies variations in oil field models. The techniques have been applied to an area containing >4000 coded boreholes to investigate the glacial and fluvial deposits in the centre of the city of Glasgow. We test the predictions from this method by deleting percentages of the control data and re-running the simulations to determine how predictability varies with data density. We also explore the best way of displaying such stochastic models to users, and suggest that displaying the data as probability maps rather than as a single definitive answer better illustrates the uncertainties inherent in the input data. Finally, we address whether it is truly possible to predict lithology in such geological facies. The innovative Accessing Subsurface Knowledge (ASK) network was recently established in the Glasgow area by the British Geological Survey and Glasgow City Council to deliver and exchange subsurface data and knowledge. This provides an ideal opportunity to communicate and test a range of models and to assess their usefulness and impact on a vibrant community of public and private sector partners and decision makers.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan
2017-01-01
Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.
Development of a Sediment Transport Component for DHSVM
NASA Astrophysics Data System (ADS)
Doten, C. O.; Bowling, L. C.; Maurer, E. P.; Voisin, N.; Lettenmaier, D. P.
2003-12-01
The effect of forest management and disturbance on aquatic resources is a problem of considerable contemporary scientific and public concern in the West. Sediment generation is one of the factors linking land surface conditions with aquatic systems, with implications for fisheries protection and enhancement. Better predictive techniques that allow assessment of the effects of fire and logging, in particular, on sediment transport could help to provide a more scientific basis for the management of forests in the West. We describe the development of a sediment transport component for the Distributed Hydrology Soil Vegetation Model (DHSVM), a spatially distributed hydrologic model that was developed specifically for assessment of the hydrologic consequences of forest management. The sediment transport module extends the hydrologic dynamics of DHSVM to predict sediment generation in response to dynamic meteorological inputs and hydrologic conditions via mass wasting and surface erosion from forest roads and hillslopes. The mass wasting component builds on existing stochastic slope stability models by incorporating distributed basin hydrology (from DHSVM) and post-failure, rule-based redistribution of sediment downslope. The stochastic nature of the mass wasting component allows specification of probability distributions that describe the spatial variability of soil and vegetation characteristics used in the infinite slope model. The forest roads and hillslope surface erosion algorithms account for erosion from raindrop impact and overland erosion. A simple routing scheme is used to transport eroded sediment from mass wasting and forest road surface erosion that reaches the channel system to the basin outlet. A sensitivity analysis of the model input parameters and forest cover conditions is described for the Little Wenatchee River basin in the northeastern Washington Cascades.
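The stochastic infinite-slope component can be sketched as a Monte Carlo evaluation of the factor of safety with soil properties drawn from probability distributions. All distributions and site parameters below are illustrative, not DHSVM's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000

# Infinite-slope factor of safety with random soil properties:
# FS = [c + (gamma*z*cos^2(b) - u) * tan(phi)] / (gamma*z*sin(b)*cos(b))
c     = rng.lognormal(np.log(2000.0), 0.3, N)   # cohesion, Pa
phi   = np.radians(rng.normal(33.0, 3.0, N))    # friction angle
z     = 1.5                                      # soil depth, m
beta  = np.radians(35.0)                         # slope angle
gamma = 18000.0                                  # unit weight, N/m^3
u     = rng.uniform(0.0, 6000.0, N)              # pore pressure (wetness)

FS = (c + (gamma * z * np.cos(beta)**2 - u) * np.tan(phi)) \
     / (gamma * z * np.sin(beta) * np.cos(beta))
print("P(failure) =", np.mean(FS < 1.0))   # fraction of realizations failing
```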
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
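For a handful of candidate trials, the budget-constrained expected-NPV selection that the integer program formalizes can be illustrated by brute-force enumeration. All names, values and costs below are invented, and a real portfolio would use an IP solver rather than enumeration.

```python
import itertools

# Hypothetical phase-3 candidates: (name, expected NPV in $M, cost in $M).
trials = [("A", 120, 80), ("B", 90, 60), ("C", 200, 150),
          ("D", 60, 30), ("E", 150, 110)]
budget = 250

best_value, best_set = 0, ()
for r in range(len(trials) + 1):
    for combo in itertools.combinations(trials, r):
        cost = sum(t[2] for t in combo)
        value = sum(t[1] for t in combo)
        if cost <= budget and value > best_value:
            best_value, best_set = value, tuple(t[0] for t in combo)

print(best_set, best_value)   # optimal portfolio under the budget
```

In the paper's setting, the decision variables would also cover sample sizes and trial schedules, and the stochastic extension would re-solve this program as phase 2 outcomes and budgets evolve.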
Spike timing precision of neuronal circuits.
Kilinc, Deniz; Demir, Alper
2018-06-01
Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
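The synaptic-integration result quoted above, that the output phase has the variance of the sample average of the input phases, can be checked with a few lines of simulation; Gaussian phase jitter is assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_trials = 20, 100_000

# Spike phases of the inputs: common mean, independent jitter.
phases = rng.normal(0.0, 1.0, size=(n_trials, n_inputs))

# If the integrator's output phase behaves like the sample average of
# its input phases, its variance drops by a factor of n_inputs.
output_phase = phases.mean(axis=1)
print("input phase variance :", phases.var())
print("output phase variance:", output_phase.var())   # ~ 1/20 of the input
```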
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach of the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
Efficiency and Productivity Analysis of Multidivisional Firms
NASA Astrophysics Data System (ADS)
Gong, Binlei
Multidivisional firms are those that have footprints in multiple segments and hence use multiple technologies to convert inputs to outputs, which makes it difficult to estimate the resource allocations, aggregated production functions, and technical efficiencies of this type of company. This dissertation aims to explore and reveal such unobserved information through several parametric and semiparametric stochastic frontier analyses and some other structural models. In the empirical study, this dissertation analyzes the productivity and efficiency of firms in the global oilfield market.
Stochastic Simulations of Long-Range Forecasting Models for Less Developed Regions
1975-06-01
Descriptors: national alignment; less developed regions of Africa; forecasting for planning; strategic importance. The report describes the regions and their strategic importance over the long range. The forecasts that have been produced so far have been direct inputs into the Joint Long-Range Strategic Study (JLRSS).
NASA Astrophysics Data System (ADS)
Gao, Feng-Yin; Kang, Yan-Mei; Chen, Xi; Chen, Guanrong
2018-05-01
This paper reveals the effect of fractional Gaussian noise (fGn) with Hurst exponent H ∈ (1/2, 1) on the information capacity of a general nonlinear neuron model with binary signal input. The fGn and its corresponding fractional Brownian motion exhibit long-range, strongly dependent increments. It extends standard Brownian motion to many types of fractional processes found in nature, such as the synaptic noise. In the paper, for the subthreshold binary signal, sufficient conditions are given based on the "forbidden interval" theorem to guarantee the occurrence of stochastic resonance, while for the suprathreshold binary signal, the simulated results show that additive fGn with Hurst exponent H ∈ (1/2, 1) could increase the mutual information or bit count. The investigation indicates that synaptic noise with the characteristics of long-range dependence and self-similarity might be the driving factor for the efficient encoding and decoding of the nervous system.
Use of behavioural stochastic resonance by paddle fish for feeding
NASA Astrophysics Data System (ADS)
Russell, David F.; Wilkens, Lon A.; Moss, Frank
1999-11-01
Stochastic resonance is the phenomenon whereby the addition of an optimal level of noise to a weak information-carrying input to certain nonlinear systems can enhance the information content at their outputs. Computer analysis of spike trains has been needed to reveal stochastic resonance in the responses of sensory receptors except for one study on human psychophysics. But is an animal aware of, and can it make use of, the enhanced sensory information from stochastic resonance? Here, we show that stochastic resonance enhances the normal feeding behaviour of paddlefish (Polyodon spathula), which use passive electroreceptors to detect electrical signals from planktonic prey. We demonstrate significant broadening of the spatial range for the detection of plankton when a noisy electric field of optimal amplitude is applied in the water. We also show that swarms of Daphnia plankton are a natural source of electrical noise. Our demonstration of stochastic resonance at the level of a vital animal behaviour, feeding, which has probably evolved for functional success, provides evidence that stochastic resonance in sensory nervous systems is an evolutionary adaptation.
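The core effect can be reproduced with a toy threshold detector: output power at the prey-signal frequency first rises and then falls as noise is added. The threshold model and all amplitudes below are illustrative, not a model of the actual paddlefish electroreceptor.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, f0, T = 1000, 5.0, 20.0                # sample rate, signal freq (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
weak = 0.2 * np.sin(2 * np.pi * f0 * t)    # sub-threshold "prey" signal

def output_power_at_f0(noise_amp, thresh=1.0):
    # Spike output: 1 whenever signal plus noise exceeds the threshold.
    spikes = (weak + noise_amp * rng.standard_normal(t.size)) > thresh
    spec = np.fft.rfft(spikes - spikes.mean())
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    return np.abs(spec[np.argmin(np.abs(freqs - f0))]) ** 2

for amp in [0.3, 0.6, 0.9, 1.5, 3.0]:
    print(amp, f"{output_power_at_f0(amp):.1f}")
# Power at the signal frequency peaks at an intermediate noise level:
# the stochastic-resonance effect exploited in the feeding behaviour.
```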
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.
2015-10-01
The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusely-reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form of the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1993-01-01
This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise
NASA Astrophysics Data System (ADS)
Mankin, Romi; Rekker, Astrid
2016-12-01
The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
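A minimal simulation of such a perfect integrate-and-fire neuron driven by dichotomous (telegraph) noise plus a periodic stimulus, estimating the lag-1 serial correlation of the interspike intervals; all parameter values are arbitrary illustrations, and the paper's exact analytical expressions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps = 1e-3, 500_000
mu, vth = 1.0, 1.0              # base drive, firing threshold
A, f = 0.3, 0.7                 # periodic stimulus amplitude, frequency (Hz)
a, nu = 0.5, 2.0                # dichotomous noise amplitude, switching rate

v, eta, t = 0.0, a, 0.0
isis, last_spike = [], 0.0
for _ in range(n_steps):
    # Symmetric dichotomous noise: flip state with rate nu.
    if rng.random() < nu * dt:
        eta = -eta
    v += (mu + eta + A * np.sin(2 * np.pi * f * t)) * dt
    t += dt
    if v >= vth:                # perfect IF: fire and reset to zero
        isis.append(t - last_spike)
        last_spike, v = t, 0.0

isis = np.array(isis)
x, y = isis[:-1] - isis.mean(), isis[1:] - isis.mean()
rho1 = np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2))
print("mean ISI:", isis.mean(), "lag-1 serial correlation:", rho1)
```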
Information-geometric measures as robust estimators of connection strengths and external inputs.
Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi
2009-08-01
Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
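For the two-neuron log-linear model discussed here, the first- and second-order information-geometric measures reduce to log-ratios of the joint spike probabilities. A sketch with simulated correlated spike trains follows; the shared-drive construction is purely illustrative.

```python
import numpy as np

def ig_measures(spikes):
    # spikes: (T, 2) binary array of simultaneous spikes for 2 neurons.
    x1, x2 = spikes[:, 0], spikes[:, 1]
    p11 = np.mean((x1 == 1) & (x2 == 1))
    p10 = np.mean((x1 == 1) & (x2 == 0))
    p01 = np.mean((x1 == 0) & (x2 == 1))
    p00 = np.mean((x1 == 0) & (x2 == 0))
    # Log-linear model: p(x1,x2) = exp(th1*x1 + th2*x2 + th12*x1*x2 - psi)
    th1 = np.log(p10 / p00)                   # first order: input to neuron 1
    th2 = np.log(p01 / p00)                   # first order: input to neuron 2
    th12 = np.log(p11 * p00 / (p10 * p01))    # second order: coupling
    return th1, th2, th12

rng = np.random.default_rng(8)
# Illustrative binary spike trains correlated through a shared drive z.
z = rng.random(100_000)
spikes = np.column_stack([(z + 0.2 * rng.random(z.size)) > 0.8,
                          (z + 0.2 * rng.random(z.size)) > 0.8]).astype(int)
print(ig_measures(spikes))   # positive th12 reflects the shared drive
```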
Stochastic Models for Laser Propagation in Atmospheric Turbulence.
NASA Astrophysics Data System (ADS)
Leland, Robert Patton
In this dissertation, stochastic models for laser propagation in atmospheric turbulence are considered. A review of the existing literature on laser propagation in the atmosphere and white noise theory is presented, with a view toward relating the white noise integral and Ito integral approaches. The laser beam intensity is considered as the solution to a random Schroedinger equation, or forward scattering equation. This model is formulated in a Hilbert space context as an abstract bilinear system with a multiplicative white noise input, as in the literature. The model is also modeled in the Banach space of Fresnel class functions to allow the plane wave case and the application of path integrals. Approximate solutions to the Schroedinger equation of the Trotter-Kato product form are shown to converge for each white noise sample path. The product forms are shown to be physical random variables, allowing an Ito integral representation. The corresponding Ito integrals are shown to converge in mean square, providing a white noise basis for the Stratonovich correction term associated with this equation. Product form solutions for Ornstein -Uhlenbeck process inputs were shown to converge in mean square as the input bandwidth was expanded. A digital simulation of laser propagation in strong turbulence was used to study properties of the beam. Empirical distributions for the irradiance function were estimated from simulated data, and the log-normal and Rice-Nakagami distributions predicted by the classical perturbation methods were seen to be inadequate. A gamma distribution fit the simulated irradiance distribution well in the vicinity of the boresight. Statistics of the beam were seen to converge rapidly as the bandwidth of an Ornstein-Uhlenbeck process was expanded to its white noise limit. Individual trajectories of the beam were presented to illustrate the distortion and bending of the beam due to turbulence. Feynman path integrals were used to calculate an approximate expression for the mean of the beam intensity without using the Markov, or white noise, assumption, and to relate local variations in the turbulence field to the behavior of the beam by means of two approximations.
NASA Astrophysics Data System (ADS)
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
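The linear-quadratic mean-variance relation the model implies can be illustrated with negative-binomial counts, for which Var = mu + phi*mu^2 holds exactly. The simulation below estimates the overdispersion phi from replicate counts; all values are invented, and this is a toy check rather than the paper's GAMLSS fit.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated replicate counts for 200 small RNAs with NB-like noise,
# parameterized so that Var = mu + phi * mu^2.
true_phi = 0.05
mu_true = rng.lognormal(4.0, 1.0, 200)
counts = rng.negative_binomial(n=1 / true_phi,
                               p=1 / (1 + true_phi * mu_true[:, None]),
                               size=(200, 8))

mu_hat = counts.mean(axis=1)
var_hat = counts.var(axis=1, ddof=1)

# Least-squares estimate of phi from (Var - mu) = phi * mu^2.
phi_hat = np.sum((var_hat - mu_hat) * mu_hat**2) / np.sum(mu_hat**4)
print("estimated overdispersion phi:", phi_hat)
```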
NASA Astrophysics Data System (ADS)
Fang, Wei; Huang, Shengzhi; Huang, Qiang; Huang, Guohe; Meng, Erhao; Luan, Jinkai
2018-06-01
In this study, reference evapotranspiration (ET0) forecasting models are developed for the least economically developed regions subject to meteorological data scarcity. Firstly, the partial mutual information (PMI), capable of capturing both linear and nonlinear dependence, is investigated regarding its utility to identify relevant predictors and exclude those that are redundant, through comparison with partial linear correlation. An efficient input selection technique is crucial for decreasing model data requirements. Then, the interconnection between global climate indices and regional ET0 is identified. Relevant climatic indices are introduced as additional predictors to supply information regarding ET0 that would otherwise have to be provided by meteorological data that are unavailable. The case study in the Jing River and Beiluo River basins, China, reveals that PMI outperforms partial linear correlation in excluding redundant information, favouring the selection of smaller predictor sets. The teleconnection analysis identifies a correlation between Nino 1+2 and regional ET0, indicating influences of ENSO events on the evapotranspiration process in the study area. Furthermore, introducing Nino 1+2 as a predictor helps to yield more accurate ET0 forecasts. A model performance comparison also shows that non-linear stochastic models (SVR or RF with input selection through PMI) do not always outperform linear models (MLR with inputs screened by linear correlation). However, the former can offer quite comparable performance while depending on smaller predictor sets. Therefore, efforts such as screening model inputs through PMI and incorporating global climatic indices interconnected with ET0 can benefit the development of ET0 forecasting models suitable for data-scarce regions.
A compound memristive synapse model for statistical learning through STDP in spiking neural networks
Bill, Johannes; Legenstein, Robert
2014-01-01
Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures. PMID:25565943
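A sketch of the compound-synapse idea, assuming each binary memristor switches independently with a fixed probability per plasticity pulse; the class name and probabilities are hypothetical, not the paper's calibrated stochastic-filament model.

```python
import numpy as np

rng = np.random.default_rng(10)

class CompoundSynapse:
    # M bistable memristors in parallel; the synaptic weight is the
    # fraction of devices currently in the conductive state.
    def __init__(self, M=50):
        self.state = np.zeros(M, dtype=bool)

    @property
    def weight(self):
        return self.state.mean()

    def potentiate(self, p_on=0.1):   # e.g. pre-before-post pulse pair
        off = ~self.state
        self.state[off] = rng.random(off.sum()) < p_on

    def depress(self, p_off=0.1):     # e.g. post-before-pre pulse pair
        on = self.state
        self.state[on] = ~(rng.random(on.sum()) < p_off)

syn = CompoundSynapse()
for _ in range(30):
    syn.potentiate()
print("weight after potentiation:", syn.weight)
# Only the still-off devices can switch on, so the expected weight change
# shrinks as the weight grows: the stabilizing weight dependence of the
# compound-synapse STDP described in the abstract.
```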
The use of Meteonorm weather generator for climate change studies
NASA Astrophysics Data System (ADS)
Remund, J.; Müller, S. C.; Schilter, C.; Rihm, B.
2010-09-01
The global climatological database Meteonorm (www.meteonorm.com) is widely used as meteorological input for the simulation of solar applications and buildings. It is a combination of a climate database, a spatial interpolation tool and a stochastic weather generator, so that typical years with hourly or minute time resolution can be calculated for any site. The input of Meteonorm for global radiation is the Global Energy Balance Archive (GEBA, http://proto-geba.ethz.ch). All other meteorological parameters are taken from databases of WMO and NCDC (periods 1961-90 and 1996-2005). The stochastic generation of global radiation is based on a Markov chain model for daily values and an autoregressive model for hourly and minute values (Aguiar and Collares-Pereira, 1988 and 1992). The generation of temperature is based on global radiation and the measured distribution of daily temperature values at approx. 5000 sites. Meteonorm also generates additional parameters such as precipitation and wind speed, as well as radiation parameters such as diffuse and direct normal irradiance. Meteonorm can also be used for climate change studies. Instead of climate values, the results of the IPCC AR4 models are used as input. An average of all 18 public models has been computed at a resolution of 1°. The anomalies of the parameters temperature, precipitation and global radiation for the three scenarios B1, A1B and A2 have been included. By combining Meteonorm's current database 1961-90, the interpolation algorithms and the stochastic generation, typical years can be calculated for any site, for different scenarios and for any period between 2010 and 2200. From the analysis of year-to-year and month-to-month variations of temperature, precipitation and global radiation over the past ten years, as well as of climate model forecasts (from the project PRUDENCE, http://prudence.dmi.dk), a simple autoregressive model has been formed which is used to generate realistic monthly time series of future periods. Meteonorm can therefore be used as a relatively simple method to enhance the spatial and temporal resolution, instead of using complicated and time-consuming downscaling methods based on regional climate models. The combination of Meteonorm, gridded historical data (based on the work of Luterbach et al.) and IPCC results has been used for studies of vegetation simulation between 1660 and 2600 (publication of a first version based on the IS92a scenario and the limited time period 1950-2100: http://www.pbl.nl/images/H5_Part2_van%20CCE_opmaak%28def%29_tcm61-46625.pdf). It is also applicable to other adaptation studies, e.g. for road surfaces or building simulation. In Meteonorm 6.1, one scenario (IS92a) and one climate model (Hadley CM3) were included. In the new Meteonorm 7 (coming spring 2011), the model averages of the three above-mentioned IPCC AR4 scenarios will be included.
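The daily part of the radiation generator can be pictured with the following sketch: a first-order Markov chain over discretized daily clearness-index states, with a continuous jitter inside each state band. The three-state transition matrix and the constant extraterrestrial radiation are made-up placeholders, not Meteonorm's fitted values.

    import numpy as np

    rng = np.random.default_rng(2)

    kt_states = np.array([0.25, 0.45, 0.65])  # overcast, mixed, clear
    P = np.array([[0.6, 0.3, 0.1],            # row i: transitions from state i
                  [0.2, 0.5, 0.3],
                  [0.1, 0.3, 0.6]])

    def generate_kt(n_days, start_state=1):
        s, out = start_state, []
        for _ in range(n_days):
            s = rng.choice(3, p=P[s])
            # jitter within the state's band so Kt varies continuously
            out.append(kt_states[s] + rng.uniform(-0.05, 0.05))
        return np.array(out)

    kt = generate_kt(365)
    # H0 actually varies with day of year and latitude; a constant is used here
    H0 = 30.0  # placeholder daily extraterrestrial radiation (MJ/m^2)
    print(f"mean simulated daily radiation: {(kt * H0).mean():.1f} MJ/m^2")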
Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning.
Costa, Vincent D; Dal Monte, Olga; Lucas, Daniel R; Murray, Elisabeth A; Averbeck, Bruno B
2016-10-19
Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL, we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with an RL model, amygdala lesions, relative to controls, caused general decreases in learning from positive feedback and in choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys' choice reaction times, which emphasized a speed-accuracy trade-off that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL.
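The kind of RL model used for this characterization can be sketched as a two-arm bandit Q-learner with separate learning rates for positive and negative feedback and a softmax choice rule, so that lesion effects map onto changes in the feedback-specific rates and the inverse temperature. All parameter values below are illustrative; a reversal would simply swap the reward probabilities mid-session.

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(alpha_pos=0.4, alpha_neg=0.2, beta=5.0,
                 p_reward=(0.8, 0.2), n_trials=200):
        q = np.zeros(2)
        choices = []
        for _ in range(n_trials):
            p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice rule
            c = rng.choice(2, p=p)
            r = float(rng.random() < p_reward[c])
            alpha = alpha_pos if r > 0 else alpha_neg  # feedback-specific rate
            q[c] += alpha * (r - q[c])                 # prediction-error update
            choices.append(c)
        return np.array(choices)

    choices = simulate()
    print(f"fraction of better-arm choices: {(choices == 0).mean():.2f}")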
NASA Astrophysics Data System (ADS)
Liao, Q.; Tchelepi, H.; Zhang, D.
2015-12-01
Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method is a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to their nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
Inverse Stochastic Resonance in Cerebellar Purkinje Cells
Häusser, Michael; Gutkin, Boris S.; Roth, Arnd
2016-01-01
Purkinje neurons play an important role in cerebellar computation since their axons are the only projection from the cerebellar cortex to deeper cerebellar structures. They have complex internal dynamics, which allow them to fire spontaneously, display bistability, and also to be involved in network phenomena such as high frequency oscillations and travelling waves. Purkinje cells exhibit type II excitability, which can be revealed by a discontinuity in their f-I curves. We show that this excitability mechanism allows Purkinje cells to be efficiently inhibited by noise of a particular variance, a phenomenon known as inverse stochastic resonance (ISR). While ISR has been described in theoretical models of single neurons, here we provide the first experimental evidence for this effect. We find that an adaptive exponential integrate-and-fire model fitted to the basic Purkinje cell characteristics using a modified dynamic IV method displays ISR and bistability between the resting state and a repetitive activity limit cycle. ISR allows the Purkinje cell to operate in different functional regimes: the all-or-none toggle or the linear filter mode, depending on the variance of the synaptic input. We propose that synaptic noise allows Purkinje cells to quickly switch between these functional regimes. Using mutual information analysis, we demonstrate that ISR can lead to a locally optimal information transfer between the input and output spike train of the Purkinje cell. These results provide the first experimental evidence for ISR and suggest a functional role for ISR in cerebellar information processing. PMID:27541958
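A minimal way to probe ISR numerically, in the spirit of the study, is to drive an adaptive exponential integrate-and-fire neuron with a noisy current and sweep the noise amplitude while recording the firing rate. The Euler-Maruyama sketch below uses generic AdEx parameters, not the fitted Purkinje-cell set, so a rate minimum at intermediate noise appears only if the parameters place the model in the bistable regime.

    import numpy as np

    def adex_rate(i_mean, sigma, t_max=5.0, dt=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        C, gL, EL = 281e-12, 30e-9, -70.6e-3
        VT, DT, tauw = -50.4e-3, 2e-3, 144e-3
        a, b, Vr = 4e-9, 0.0805e-9, -70.6e-3
        v, w, spikes = EL, 0.0, 0
        for _ in range(int(t_max / dt)):
            I = i_mean + sigma * rng.normal() / np.sqrt(dt)  # white-noise drive
            dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w + I) / C
            dw = (a * (v - EL) - w) / tauw
            v += dv * dt
            w += dw * dt
            if v > 0.0:  # spike: reset membrane and increment adaptation
                v, w, spikes = Vr, w + b, spikes + 1
        return spikes / t_max

    for sigma in [0.0, 20e-12, 50e-12, 100e-12]:
        print(f"noise sigma = {sigma:.1e}: rate = {adex_rate(0.5e-9, sigma):.1f} Hz")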
On the selection of user-defined parameters in data-driven stochastic subspace identification
NASA Astrophysics Data System (ADS)
Priori, C.; De Angelis, M.; Betti, R.
2018-02-01
The paper focuses on the time-domain output-only technique called Data-Driven Stochastic Subspace Identification (DD-SSI); in order to identify modal models (frequencies, damping ratios and mode shapes), the role of its user-defined parameters is studied, and rules to determine their minimum values are proposed. Such investigation is carried out using, first, the time histories of structural responses to stationary excitations, with a large number of samples, satisfying the hypothesis on the input imposed by DD-SSI. Then, the case of non-stationary seismic excitations with a reduced number of samples is considered. In this paper, partitions of the data matrix different from the one proposed in the SSI literature are investigated, together with the influence of different choices of the weighting matrices. The study is carried out considering two different applications: (1) data obtained from vibration tests on a scaled structure and (2) in-situ tests on a reinforced concrete building. Referring to the former, the identification of a steel frame structure tested on a shaking table is performed using its responses, in terms of absolute accelerations, to a stationary (white noise) base excitation and to non-stationary seismic excitations of low intensity. Black-box and modal models are identified in both cases and the results are compared with those from an input-output subspace technique. With regard to the latter, the identification of a complex hospital building is conducted using data obtained from ambient vibration tests.
A stochastic simulator of a blood product donation environment with demand spikes and supply shocks.
An, Ming-Wen; Reich, Nicholas G; Crawford, Stephen O; Brookmeyer, Ron; Louis, Thomas A; Nelson, Kenrad E
2011-01-01
The availability of an adequate blood supply is a critical public health need. An influenza epidemic or another crisis affecting population mobility could create a critical donor shortage, which could profoundly impact blood availability. We developed a simulation model for the blood supply environment in the United States to assess the likely impact on blood availability of factors such as an epidemic. We developed a simulator of a multi-state model with transitions among states. Weekly numbers of blood units donated and needed were generated by negative binomial stochastic processes. The simulator allows exploration of the blood system under certain conditions of supply and demand rates, and can be used for planning purposes to prepare for sudden changes in the public's health. The simulator incorporates three donor groups (first-time, sporadic, and regular), immigration and emigration, deferral period, and adjustment factors for recruitment. We illustrate possible uses of the simulator by specifying input values for an 8-week flu epidemic, resulting in a moderate supply shock and demand spike (for example, from postponed elective surgeries), and different recruitment strategies. The input values are based in part on data from a regional blood center of the American Red Cross during 1996-2005. Our results from these scenarios suggest that the key to alleviating deficit effects of a system shock may be appropriate timing and duration of recruitment efforts, in turn depending critically on anticipating shocks and rapidly implementing recruitment efforts.
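The simulator's stochastic core can be sketched as follows: weekly donated and needed units drawn from negative binomial distributions, with an 8-week epidemic window that multiplies the supply mean down and the demand mean up. The means, dispersion and shock multipliers are illustrative placeholders, not the values calibrated to the American Red Cross data.

    import numpy as np

    rng = np.random.default_rng(4)

    def nb(mean, disp):
        """Negative binomial draw parameterized by mean and dispersion."""
        return rng.negative_binomial(disp, disp / (disp + mean))

    def simulate(weeks=52, epidemic=range(20, 28), supply_shock=0.7,
                 demand_spike=1.3, mean_supply=1000, mean_demand=950, disp=50):
        inventory, history = 0, []
        for t in range(weeks):
            shock = t in epidemic
            donated = nb(mean_supply * (supply_shock if shock else 1.0), disp)
            needed = nb(mean_demand * (demand_spike if shock else 1.0), disp)
            inventory = max(0, inventory + donated - needed)  # shortfall unmet
            history.append(inventory)
        return np.array(history)

    inv = simulate()
    print(f"weeks with empty inventory: {(inv == 0).sum()} of {len(inv)}")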
Refractory pulse counting processes in stochastic neural computers.
McNeill, Dean K; Card, Howard C
2005-03-01
This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, during which the system reaches equilibrium, that increases with both the dead time and the Bernoulli probability of the dead-time-free system. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
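The effect is easy to reproduce with a short simulation: a Bernoulli pulse train in discrete slots, where each recorded pulse blinds the detector for the next d slots. With p = 0.2 and d = 3 (illustrative values), both the mean and the variance of the recorded counts fall well below those of the dead-time-free process.

    import numpy as np

    rng = np.random.default_rng(5)

    def recorded_counts(p=0.2, d=3, n_slots=1000, n_runs=2000):
        counts = np.empty(n_runs)
        for r in range(n_runs):
            pulses = rng.random(n_slots) < p
            count, dead = 0, 0
            for hit in pulses:
                if dead > 0:
                    dead -= 1  # refractory: any pulse here is missed
                elif hit:
                    count += 1
                    dead = d   # dead time starts after a recorded pulse
            counts[r] = count
        return counts

    c = recorded_counts()
    print(f"dead-time free: mean {1000 * 0.2:.0f}, var {1000 * 0.2 * 0.8:.0f}")
    print(f"with dead time: mean {c.mean():.0f}, var {c.var():.0f}")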
Factors leading to different viability predictions for a grizzly bear data set
Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.
1996-01-01
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models that represent noise in microarray expression profiles, which has a profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of the stochastic models and the parameters of an error model describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity, and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method for developing stochastic models from experimental information.
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
NASA Astrophysics Data System (ADS)
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project comprised 277 full-scale drop tests at three different quarries in Austria, recording key parameters of the rock fall trajectories. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for the calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. Selecting two parameters, advanced calibration techniques including Markov chain Monte Carlo, maximum likelihood and root mean square error (RMSE) minimization are utilized to minimize the error. Validation of the model, based on the cross-validation technique, reveals that in general reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and requires practically no calibration, resulting in a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing one to relax the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture conditions). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods, using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
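The core of the procedure can be sketched in a few lines: draw storm depths by numerically inverting the TCEV distribution function F(x) = exp(-L1 e^{-x/theta1} - L2 e^{-x/theta2}), draw a curve number according to antecedent moisture conditions, and convert rainfall to runoff with the SCS-CN formula. All parameter values below are illustrative, not the regional estimates for Sicily, and the IUH routing step is omitted.

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(6)

    L1, T1, L2, T2 = 10.0, 15.0, 0.5, 40.0  # TCEV parameters (depth in mm)

    def tcev_ppf(u):
        """Invert F(x) = exp(-L1*exp(-x/T1) - L2*exp(-x/T2)) numerically."""
        f = lambda x: np.exp(-L1 * np.exp(-x / T1) - L2 * np.exp(-x / T2)) - u
        return brentq(f, 1e-6, 2000.0)

    def scs_runoff(p_mm, cn):
        s = 25400.0 / cn - 254.0  # potential maximum retention (mm)
        ia = 0.2 * s              # initial abstraction
        return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

    n = 5000
    rain = np.array([tcev_ppf(u) for u in rng.uniform(1e-4, 1.0, size=n)])
    cn = rng.choice([60.0, 75.0, 88.0], size=n, p=[0.3, 0.5, 0.2])  # AMC I-III
    runoff = np.array([scs_runoff(p, c) for p, c in zip(rain, cn)])

    # empirical flood frequency curve: runoff quantile versus return period
    for T in (10, 50, 100):
        print(f"T = {T:>3} yr: runoff depth = {np.quantile(runoff, 1 - 1/T):.1f} mm")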
Quasi-continuous stochastic simulation framework for flood modelling
NASA Astrophysics Data System (ADS)
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practice is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the neglect of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modelling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of the potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of the synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing the design variables to be expressed in statistical terms and the flood risk to be properly evaluated.
A theory of vibrational prey localization in two dimensions: the sand scorpion
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
2000-03-01
Sand scorpions, and many other arachnids, find their prey at night by localizing the source of mechanical waves produced by prey movements. Substrate vibrations propagating through sand evoke stimulus-locked action potentials from slit sensilla on the scorpion's eight `feet' (tarsi). We present a neuronal model to account for stimulus angle determination in a two-dimensional plane using a population of second-order neurons, each receiving excitatory input from one tarsus and inhibition from a triad opposite to it. This input opens a time window whose width determines the firing probability of each of the second-order neurons. The population then `votes' for the direction. Stochastic resonance is realized through tuning the balance between excitation and inhibition. The agreement with behavioral experiments on sand scorpions is excellent.
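A toy rendering of the described circuit: eight second-order neurons, each excited by its own tarsus and inhibited by the triad of tarsi opposite to it, with the excitation-to-inhibition time window setting the firing probability and the population voting through a vector average. The geometry, wave speed and gain below are assumptions, not the paper's fitted values.

    import numpy as np

    rng = np.random.default_rng(7)

    angles = np.arange(8) * np.pi / 4  # tarsus directions, 45 degrees apart
    radius, speed = 0.025, 50.0        # leg span (m) and wave speed (m/s)

    def localize(source_angle, n_spikes=100, gain=400.0):
        # plane-wave arrival time at each tarsus (earliest facing the source)
        t = -radius * np.cos(angles - source_angle) / speed
        votes = np.zeros(8)
        for k in range(8):
            triad = [(k + 3) % 8, (k + 4) % 8, (k + 5) % 8]  # opposite triad
            window = min(t[j] for j in triad) - t[k]  # excitation-inhibition gap
            p = np.clip(gain * window, 0.0, 1.0)      # wider window, more spikes
            votes[k] = rng.binomial(n_spikes, p)
        vec = votes @ np.exp(1j * angles)             # population-vector readout
        return np.angle(vec)

    true = np.deg2rad(130.0)
    est = np.rad2deg(localize(true)) % 360
    print(f"true direction 130 deg, population estimate {est:.0f} deg")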
Line-of-sight pointing accuracy/stability analysis and computer simulation for small spacecraft
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1996-06-01
This paper presents a case study where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. The simulation is implemented using XMATH/SystemBuild software from Integrated Systems, Inc. The paper is written in a tutorial manner and models for major system components are described. Among them are the spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are desired attitude angles and rate setpoints. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enable them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno
2008-01-01
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
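The variance decomposition step can be sketched with a Saltelli-style estimator of first-order Sobol indices on quasi-random input samples; a cheap analytic function stands in for the ABM here, and the sample size and input scaling are arbitrary choices.

    import numpy as np
    from scipy.stats import qmc

    def f(x):
        """Stand-in model: input 2 dominates, input 0 matters, input 1 barely."""
        return np.sin(x[:, 0]) + 0.1 * x[:, 1] + 5.0 * x[:, 2] ** 2

    d, n = 3, 2 ** 12
    sampler = qmc.Sobol(d=2 * d, scramble=True, seed=8)
    ab = sampler.random(n) * 2 * np.pi  # quasi-random inputs on [0, 2*pi]
    A, B = ab[:, :d], ab[:, d:]

    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # resample column i from B
        S1 = np.mean(fB * (f(ABi) - fA)) / var  # Saltelli (2010) estimator
        print(f"first-order index S1[{i}] = {S1:.2f}")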
D'Elia, M.; Edwards, H. C.; Hu, J.; ...
2018-01-18
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
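The grouping heuristic suggested by this result can be sketched directly: compute the anisotropy surrogate for each sample of the Karhunen-Loève coefficients, sort, and chunk the sorted samples into fixed-size ensembles so that samples with similar expected solver cost are propagated together. The surrogate below is a stand-in, not the paper's exact anisotropy measure.

    import numpy as np

    rng = np.random.default_rng(9)

    def total_anisotropy(kle_coeffs):
        """Stand-in surrogate: weight of the non-leading KL modes."""
        return np.abs(kle_coeffs[1:]).sum()

    samples = rng.normal(size=(64, 8))  # 64 samples of 8 KL coefficients
    surrogate = np.array([total_anisotropy(s) for s in samples])

    ensemble_size = 8
    order = np.argsort(surrogate)       # similar expected cost, same ensemble
    ensembles = order.reshape(-1, ensemble_size)
    print("mean cost surrogate per ensemble:",
          np.round(surrogate[ensembles].mean(axis=1), 2))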
Extended H2 synthesis for multiple degree-of-freedom controllers
NASA Technical Reports Server (NTRS)
Hampton, R. David; Knospe, Carl R.
1992-01-01
H2 synthesis techniques are developed for a general multiple-input multiple-output (MIMO) system subject to both stochastic and deterministic disturbances. The H2 synthesis is extended by incorporating power-spectral-density information about anticipated disturbances into the controller-design process, as well as by frequency weightings of generalized coordinates and control inputs. The methodology is applied to a simple single-input multiple-output (SIMO) problem, analogous to the type of vibration isolation problem anticipated in microgravity research experiments.
NASA Astrophysics Data System (ADS)
Sun, Lianming; Sano, Akira
An output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output in the over-sampling approach are analyzed; they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of the plant model order, the asymptotic variance of the estimated parameters and the asymptotic variance of the frequency response of the estimated model are also explored. It is shown that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.
Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M
2018-06-01
This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.
Integrated assessment of future land use in Brazil under increasing demand for bioenergy
NASA Astrophysics Data System (ADS)
Verstegen, Judith; van der Hilst, Floor; Karssenberg, Derek; Faaij, André
2014-05-01
Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, four model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) an economic model to project the quantity of change of all land uses, 3) a spatially explicit land use model that determines the location of change of all land uses, and 4) various analyses to determine the impacts of these changes on water, socio-economics, and biodiversity. All four model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics both in Brazil itself and in other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from the literature. Using a Bayesian data assimilation technique and census data from 2007 to 2012 as observational data, the model is identified, meaning that the final selection and the optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map it is especially the shapes of the sugar cane and other land use patches that are uncertain, not so much their locations. From the economic model we can derive that dynamics in the livestock sector play a major role in the land use development of Brazil; the effect of this uncertainty on the model output is large. If the intensity of the livestock sector is not increased, future projections show a large loss of natural vegetation. Impacts on water are not that large, except when irrigation is applied on the expanded cropland.
Analysis of the performance of a wireless optical multi-input to multi-output communication system.
Bushuev, Denis; Arnon, Shlomi
2006-07-01
We investigate robust optical wireless communication in a highly scattering propagation medium using multielement optical detector arrays. The communication setup consists of synchronized multiple transmitters that send information to a receiver array, and an atmospheric propagation channel. The mathematical model that best describes this scenario is multi-input to multi-output communication through stochastic, slowly changing channels. In this model, signals from m transmitters are received by n receiver-detectors. The channel transfer function matrix is G, and its size is n × m. G(i,j) is the transfer function from transmitter i to detector j, and m ≥ n. We adopt a quasi-stationary approach in which the channel time variation has a negligible effect on communication performance over a burst. The G matrix is calculated on the basis of the optical transfer function of the atmospheric channel (composed of aerosol and turbulence elements) and the receiver's optics. In this work we derive a performance model using environmental data, such as documented turbulence and aerosol models and noise statistics. We also present the results of simulations conducted for the proposed detection algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Qing; Dong, Daoyi; Petersen, Ian R.
The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail, for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to the most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
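The toy model itself is easy to simulate; a full HMC inference is beyond a short sketch, but the Euler-Maruyama forward model below reproduces the described setup: a linear reservoir whose white-noise amplitude scales linearly with the stored volume. Parameter values and the runoff coefficient are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(10)

    def simulate(r=1.0, k=10.0, sigma=0.3, v0=10.0, t_max=500.0, dt=0.01):
        """Euler-Maruyama for dV = (r - V/k) dt + sigma * V dW, V kept >= 0."""
        n = int(t_max / dt)
        v = np.empty(n)
        v[0] = v0
        for i in range(1, n):
            noise = sigma * v[i - 1] * np.sqrt(dt) * rng.normal()
            v[i] = max(v[i - 1] + (r - v[i - 1] / k) * dt + noise, 0.0)
        return v

    v = simulate()
    q = v / 10.0  # runoff taken proportional to storage (illustrative)
    print(f"mean runoff {q.mean():.2f}, 99.9th percentile {np.quantile(q, 0.999):.2f}")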
1999-11-26
basic goal of the analysis. In other respects, however, the two approaches differ. Harper and Labianca began by modeling the input stochastic processes...contribution. To facilitate the analysis, however, he placed the receivers at a common depth and was, thus, unable to examine the vertical aspects of... [remainder of the excerpt is an unrecoverable OCR fragment of an equation from Section 4-6.5, Bragg-Only Constraint]
NASA Astrophysics Data System (ADS)
Moulds, S.; Djordjevic, S.; Savic, D.
2017-12-01
The Global Change Assessment Model (GCAM), an integrated assessment model, provides insight into the interactions and feedbacks between physical and human systems. The land system component of GCAM, which simulates land use activities and the production of major crops, produces output at the subregional level, which must be spatially downscaled for use with gridded impact assessment models. However, existing downscaling routines typically treat cropland as a homogeneous class and do not provide information about land use intensity or specific management practices such as irrigation and multiple cropping. This paper presents a spatial allocation procedure to downscale crop production data from GCAM to a spatial grid, producing a time series of maps which show the spatial distribution of specific crops (e.g. rice, wheat, maize) at four input levels (subsistence, low-input rainfed, high-input rainfed and high-input irrigated). The model algorithm is constrained by the available cropland at each time point and therefore implicitly balances extensification and intensification processes in order to meet global food demand. It utilises a stochastic approach such that an increase in production of a particular crop is more likely to occur in grid cells with high biophysical suitability and neighbourhood influence, while a fall in production will occur more often in cells with lower suitability. User-supplied rules define the order in which specific crops are downscaled as well as the allowable transitions. A regional case study demonstrates the ability of the model to reproduce historical trends in India by comparing the model output with district-level agricultural inventory data. Lastly, the model is used to predict the spatial distribution of crops globally under various GCAM scenarios.
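The stochastic allocation step can be sketched as weighted sampling: each unit of a crop's production increase is assigned to a grid cell with probability proportional to biophysical suitability times a neighbourhood factor, subject to the cell's remaining cropland. The inputs and the weighting below are illustrative assumptions, not the model's calibrated rules.

    import numpy as np

    rng = np.random.default_rng(11)

    n_cells = 100
    suitability = rng.uniform(size=n_cells)      # biophysical suitability
    neighbourhood = rng.uniform(size=n_cells)    # share of crop in neighbourhood
    capacity = rng.integers(0, 5, size=n_cells)  # remaining cropland units

    def allocate(demand_units):
        alloc = np.zeros(n_cells, dtype=int)
        for _ in range(demand_units):
            free = capacity - alloc > 0
            w = suitability * (0.5 + neighbourhood) * free
            if w.sum() == 0:  # no cropland left anywhere
                break
            cell = rng.choice(n_cells, p=w / w.sum())
            alloc[cell] += 1
        return alloc

    alloc = allocate(120)
    print(f"allocated {alloc.sum()} units over {(alloc > 0).sum()} cells")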
Leveraging human decision making through the optimal management of centralized resources
NASA Astrophysics Data System (ADS)
Hyden, Paul; McGrath, Richard G.
2016-05-01
Combining results from mixed integer optimization, stochastic modeling and queuing theory, we advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision-maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum marginal utility gaps in capabilities.
NASA Astrophysics Data System (ADS)
Couasnon, Anaïs; Sebastian, Antonia; Morales-Nápoles, Oswaldo
2017-04-01
Recent research has highlighted the increased risk of compound flooding in the U.S. In coastal catchments, an elevated downstream water level, resulting from high tide and/or storm surge, impedes drainage, creating a backwater effect that may exacerbate flooding in the riverine environment. Catchments exposed to tropical cyclone activity along the Gulf of Mexico and Atlantic coasts are particularly vulnerable. However, conventional flood hazard models focus mainly on precipitation-induced flooding, and few studies accurately represent the hazard associated with the interaction between discharge and elevated downstream water levels. This study presents a method to derive stochastic boundary conditions for a coastal watershed. Mean daily discharge and maximum daily residual water levels are used to build a non-parametric Bayesian network (BN) based on copulas. Stochastic boundary conditions for the watershed are extracted from the BN and input into a 1-D process-based hydraulic model to obtain water surface elevations in the main channel of the catchment. The method is applied to a section of the Houston Ship Channel (Buffalo Bayou) in Southeast Texas. Data at six stream gages and two tidal stations are used to build the BN, and 100-year joint return period events are modeled. We find that the dependence between the daily residual water level and the mean daily discharge in the catchment can be represented by a Gumbel copula (Spearman's rank correlation coefficient of 0.31), and that modeling the two jointly results in higher water levels in the mid- to upstream reaches of the watershed than when they are modeled independently. This indicates that conventional (deterministic) methods may underestimate the flood hazard associated with compound flooding in the riverine environment, and that such interactions should not be neglected in future coastal flood hazard studies.
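A sketch of how such stochastic boundary conditions can be drawn, assuming statsmodels (0.13 or later) for the Gumbel copula: sample dependent uniform pairs, then push them through marginal distributions for discharge and residual water level. The copula parameter is set from a Kendall's tau chosen to roughly echo the reported Spearman correlation, and the marginals are placeholders.

    import numpy as np
    from scipy import stats
    from statsmodels.distributions.copula.api import GumbelCopula

    tau = 0.21                         # Kendall's tau, roughly matching a
    theta = 1.0 / (1.0 - tau)          # Spearman's rho of about 0.31
    cop = GumbelCopula(theta=theta)

    u = cop.rvs(nobs=10000, random_state=12)  # dependent uniforms, shape (n, 2)
    discharge = stats.gamma(a=2.0, scale=30.0).ppf(u[:, 0])     # m^3/s
    water_level = stats.norm(loc=0.3, scale=0.25).ppf(u[:, 1])  # m, residual

    # e.g. pick joint exceedances to drive the hydraulic model's boundaries
    extreme = ((discharge > np.quantile(discharge, 0.99))
               & (water_level > np.quantile(water_level, 0.99)))
    print(f"joint 1%-exceedance pairs: {extreme.sum()} of {len(u)}")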
Uncertainty assessment of future land use in Brazil under increasing demand for bioenergy
NASA Astrophysics Data System (ADS)
van der Hilst, F.; Verstegen, J. A.; Karssenberg, D.; Faaij, A.
2013-12-01
Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, three model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) a model to project the quantity of change of all land uses, and 3) a spatially explicit land use model that determines the location of change of all land uses. All three model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics both in Brazil itself and in other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from the literature. Using a Bayesian data assimilation technique and census data from 2007 to 2011 as observational data, the model is identified, meaning that the final selection and the optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map it is especially the locations of pastures that are uncertain. Since the dynamics in the livestock sector play a major role in the land use development of Brazil, the effect of this uncertainty on the model output is large. Results of the data assimilation indicate that the drivers of location of the land uses vary over time (variations of up to 50% in the importance of the drivers), making it difficult to find a solid stationary system representation. Overall, we conclude that projection up to 2030 is only of use for quantifying impacts that act at a larger aggregation level, because at the local level the uncertainty is too large.
Mixed H2/H∞ pitch control of wind turbine with a Markovian jump model
NASA Astrophysics Data System (ADS)
Lin, Zhongwei; Liu, Jizhen; Wu, Qiuwei; Niu, Yuguang
2018-01-01
This paper proposes a Markovian jump model and the corresponding H2/H∞ control strategy for a wind turbine driven by stochastically switching wind speed, which can be used to regulate the generator speed in order to harvest the rated power while reducing the fatigue loads on the mechanical side of the wind turbine. By sampling the low-frequency wind speed data into separate intervals, the stochastic characteristics of the steady wind speed can be represented as a Markov process, while the high-frequency wind speed within each interval is regarded as the disturbance input. The traditional operating points of the wind turbine can then be divided into corresponding subregions, within each of which the model parameters and the control mode are fixed. The mixed H2/H∞ control problem is then discussed for this class of Markovian jump wind turbine working above the rated wind speed, to guarantee both the disturbance rejection and the mechanical load objectives, which can efficiently reduce the power volatility and the generator torque fluctuation of the whole transmission mechanism. Simulation results for a 2 MW wind turbine show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, an augmented error system containing the integrator vector, control input, reference signal, error vector and system state is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into the optimal output tracking problem for the augmented error system. With the method of dynamic programming from stochastic control theory, the optimal controller of the augmented error system with previewable signals, which equals the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
NASA Astrophysics Data System (ADS)
Mathias, Jean-Denis; Rougé, Charles; Deffuant, Guillaume
2013-04-01
We present a simple stochastic model of lake eutrophication to demonstrate how the mathematical framework of viability theory fosters operational definitions of resilience, vulnerability and adaptive capacity, and then helps understand which response one should bring to environmental changes. The model represents the phosphorus dynamics, given that high concentrations trigger a regime change from oligotrophic to eutrophic and cause not only ecological but also economic losses, for instance from tourism. Phosphorus comes from agricultural inputs upstream of the lake, and we consider a stochastic input. We consider the system made of both the lake and its upstream region, and explore how to maintain the desirable ecological and economic properties of this system. In the viability framework, we translate these desirable properties into state constraints, then examine how, given the dynamics of the model and the available policy options, the properties can be kept. The set of states for which there exists a policy that keeps the properties is called the viability kernel. We extend this framework to both major perturbations and long-term environmental changes. In our model, since the phosphorus inputs and outputs of the lake depend on rainfall, we focus on extreme rainfall events and long-term changes in the rainfall regime. These can be described as changes in the state of the system, and may displace it outside the viability kernel. The system's response can then be described using the concepts of resilience, vulnerability and adaptive capacity. Resilience is the capacity to recover by getting back to the viability kernel, where the dynamics keep the system safe, and in this work we assume it to be the first objective of management. Computed for a given trajectory, vulnerability is a measure of the consequences of violating a property. We propose a family of functions from which cost functions and other vulnerability indicators can be derived for any trajectory. There can be several vulnerability functions, representing for instance social, economic or ecological vulnerability, each representing the violation of the associated property, but these functions need to be ultimately aggregated into a single indicator. Due to the stochastic nature of the system, there is a range of possible trajectories. Statistics can be derived from the probability distribution of the vulnerability of these trajectories. Dynamic programming methods can then yield the policies which, among the available policies, minimize a given vulnerability statistic. Thus, this viability framework gives indications both of the possible consequences of a hazard or an environmental change, and of the policies that can mitigate or avert it. It also makes it possible to assess the benefits of extending the set of available policy options, and we define adaptive capacity as the reduction in a given vulnerability statistic due to the introduction of new policy options.
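The sketch below is a toy grid approximation of a viability kernel for a Carpenter-style lake phosphorus model with a controllable input and a few rainfall scenarios. All dynamics, parameters and bounds are assumptions for illustration, not the authors' model; the kernel is the set of phosphorus levels from which some admissible input keeps the lake within the constraint under every scenario.

```python
import numpy as np

# Discrete-time lake model (Carpenter-type recycling term), values assumed:
# P' = P + w*u - s*P + r*P**q / (m**q + P**q), u = controllable P input,
# w = rainfall scenario factor; constraint: stay oligotrophic, P <= P_MAX.
s, r, q, m = 0.7, 1.0, 8.0, 1.0
P_MAX, U_MIN, U_MAX = 1.4, 0.05, 0.4     # U_MIN > 0: farming needs some input

def step(P, u, w):
    return P + w * u - s * P + r * P**q / (m**q + P**q)

P_grid = np.linspace(0.0, P_MAX, 400)
u_grid = np.linspace(U_MIN, U_MAX, 30)
w_scen = np.array([0.8, 1.0, 1.3])       # dry / normal / wet years

viable = P_grid <= P_MAX                 # K0: the constraint set itself
for _ in range(100):                     # backward iteration to a fixed point
    nxt = step(P_grid[:, None, None], u_grid[None, :, None], w_scen)
    ok = (nxt >= 0) & (nxt <= P_MAX) & (
        np.interp(nxt, P_grid, viable.astype(float)) > 0.5)
    # viable if SOME admissible input keeps ALL rainfall scenarios inside
    new_viable = ok.all(axis=2).any(axis=1)
    if np.array_equal(new_viable, viable):
        break
    viable = new_viable
print("viability kernel approx. [0, %.3f]" % P_grid[viable].max())
```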
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Straube, Arthur V.; Grima, Ramon
2011-11-01
It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation.
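A hedged sketch of the comparison via Gillespie simulation: the full Michaelis-Menten network with substrate input versus the reduced model from the stochastic quasi-steady-state approximation, comparing time-averaged substrate variances. Rate constants are illustrative and the size of the discrepancy depends on them; the paper's 30% figure comes from its linear-noise-approximation formula, not from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Full network:  0 --k_in--> S;  S+E --k1--> C;  C --km1--> S+E;  C --k2--> E+P
k_in, k1, km1, k2, E_T = 40.0, 10.0, 5.0, 5.0, 10   # illustrative constants
KM = (km1 + k2) / k1

def ssa_var(full, t_end=500.0, t_burn=50.0):
    """Time-averaged variance of the substrate copy number from one SSA run."""
    S, C, t = 0, 0, 0.0
    sw = s2w = tw = 0.0
    while t < t_end:
        if full:
            a = np.array([k_in, k1 * S * (E_T - C), km1 * C, k2 * C])
            updates = [(1, 0), (-1, 1), (1, -1), (0, -1)]
        else:   # reduced propensity: Michaelis-Menten degradation of S
            a = np.array([k_in, k2 * E_T * S / (KM + S)])
            updates = [(1, 0), (-1, 0)]
        a0 = a.sum()
        dt = rng.exponential(1.0 / a0)
        if t > t_burn:                      # accumulate time-weighted moments
            sw += S * dt; s2w += S * S * dt; tw += dt
        dS, dC = updates[rng.choice(len(a), p=a / a0)]
        S += dS; C += dC
        t += dt
    mean = sw / tw
    return s2w / tw - mean * mean

print("reduced/full substrate variance ratio: %.2f"
      % (ssa_var(False) / ssa_var(True)))
```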
NASA Astrophysics Data System (ADS)
Maerker, Michael; Sommer, Christian; Zakerinejad, Reza; Cama, Elena
2017-04-01
Soil erosion by water is a significant problem in arid and semi-arid areas of large parts of Iran. Water erosion is one of the most important processes leading to decreasing soil productivity and to pollution of water resources. Especially in semi-arid areas like the Mazayjan watershed in the southwestern Fars province, as well as in the Mkomazi catchment in KwaZulu-Natal, South Africa, gully erosion contributes significantly to the sediment dynamics. Consequently, the intention of this research is to identify the different types of soil erosion processes acting in the area with a stochastic approach and to assess the process dynamics in an integrative way. Therefore, we applied GIS and satellite image analysis techniques to derive input information for the numeric models. For sheet and rill erosion the Unit Stream Power-based Erosion Deposition model (USPED) was utilized. The spatial distribution of gully erosion was assessed using a statistical approach with three variables (stream power index, slope, and flow accumulation) to predict the spatial distribution of gullies in the study area. The eroded gully volumes were estimated over a multi-year period from fieldwork and high-resolution Google Earth images, as well as with a structure-from-motion algorithm. Finally, the gully retreat rates were integrated into the USPED model. The results show that the integration of the SPI approach to quantify gully erosion with the USPED model is a suitable method to qualitatively and quantitatively assess water erosion processes in data-scarce areas. The application of GIS and stochastic model approaches to spatialize the USPED model inputs yields valuable results for the prediction of soil erosion in the test areas. The results of this research help to develop an appropriate management of soil and water resources in the study areas.
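A minimal numpy sketch of the USPED idea on a synthetic DEM: transport capacity T = A^m (sin β)^n (with the R, K, C and P factors set to 1) and net erosion/deposition as the divergence of T along the steepest-descent direction. The tiny D8 accumulation routine and all parameters are illustrative stand-ins for a proper GIS workflow.

```python
import numpy as np

def d8_accumulation(dem):
    """Tiny D8 flow accumulation: each cell, visited from high to low,
    passes its accumulated area to its lowest neighbour."""
    ny, nx = dem.shape
    acc = np.ones_like(dem)
    for idx in np.argsort(dem, axis=None)[::-1]:
        i, j = divmod(idx, nx)
        best, bi, bj = dem[i, j], -1, -1
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ii, jj = i + di, j + dj
                if (di or dj) and 0 <= ii < ny and 0 <= jj < nx \
                        and dem[ii, jj] < best:
                    best, bi, bj = dem[ii, jj], ii, jj
        if bi >= 0:
            acc[bi, bj] += acc[i, j]
    return acc

def usped(dem, cell=30.0, m=1.6, n=1.3):
    """USPED-style net erosion/deposition: divergence of transport capacity
    T = A**m * sin(beta)**n along the steepest-descent direction."""
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.hypot(dzdx, dzdy)
    T = (d8_accumulation(dem) * cell) ** m * np.sin(np.arctan(slope)) ** n
    with np.errstate(invalid="ignore", divide="ignore"):
        fx = np.nan_to_num(-dzdx / slope)      # unit flow direction
        fy = np.nan_to_num(-dzdy / slope)
    return (np.gradient(T * fx, cell, axis=1)
            + np.gradient(T * fy, cell, axis=0))   # <0 erosion, >0 deposition

rng = np.random.default_rng(10)
yy, xx = np.mgrid[0:60, 0:60]
dem = 100.0 - 0.5 * xx - 0.2 * yy + rng.normal(0, 0.1, (60, 60))
ed = usped(dem)
```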
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason P.; Pope, Chad; Toston, Mary
2016-12-01
Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines, or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
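A sketch of the kind of Monte Carlo calculation SODA performs, using the standard five-factor source-term form (ST = MAR × DR × ARF × RF × LPF) combined with dispersion, breathing rate and a dose conversion factor. Every distribution and constant below is an illustrative placeholder, not a SODA default.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Five-factor source term, ST = MAR*DR*ARF*RF*LPF (DOE-HDBK-3010 style).
MAR  = rng.triangular(50.0, 100.0, 200.0, N)   # material at risk (g)
DR   = rng.uniform(0.1, 1.0, N)                # damage ratio
ARF  = rng.lognormal(np.log(1e-3), 0.5, N)     # airborne release fraction
RF   = rng.uniform(0.2, 1.0, N)                # respirable fraction
LPF  = np.full(N, 0.1)                         # leak path factor (point value)
CHIQ = rng.lognormal(np.log(1e-4), 0.7, N)     # dispersion chi/Q (s/m^3)
BR   = 3.3e-4                                  # breathing rate (m^3/s)
DCF  = 1.0e8                                   # dose conversion (rem/g), hypothetical

dose = MAR * DR * ARF * RF * LPF * CHIQ * BR * DCF
for p in (50, 95, 99):
    print("P%02d dose: %.3g rem" % (p, np.percentile(dose, p)))
```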
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; such models express our understanding of geometries at depth. For that reason, 3-D geological models have a wide range of practical applications including, but not restricted to, civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator parameterization) and the inherent lack of knowledge in areas where there are no observations, combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that they provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative to dip vector sampling for planar features, and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the identified issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
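As an illustration of the pole-vector sampling idea, the sketch below draws disturbed orientation data from a von Mises-Fisher distribution on the sphere. The mean pole and concentration are invented, and MCUE itself involves much more (interface disturbance, model recomputation, probabilistic merging); only the sampling step is shown.

```python
import numpy as np

def sample_vmf(mu, kappa, size, rng):
    """Draw unit vectors from a von Mises-Fisher distribution on S^2,
    a common disturbance distribution for orientation (pole) data."""
    mu = np.asarray(mu, float) / np.linalg.norm(mu)
    u = np.clip(rng.random(size), 1e-12, 1.0)
    # inverse-CDF sample of the cosine of the angular deviation (p = 3)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.uniform(0.0, 2.0 * np.pi, size)
    sinang = np.sqrt(np.clip(1.0 - w**2, 0.0, None))
    xyz = np.column_stack([sinang * np.cos(phi), sinang * np.sin(phi), w])
    # rotate the z-axis onto mu (Rodrigues' rotation formula)
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(z, mu), z @ mu
    if np.allclose(v, 0.0):
        return xyz if c > 0 else -xyz
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return xyz @ R.T

rng = np.random.default_rng(3)
# mean pole of a planar measurement (30-degree tilt from vertical), invented
pole = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
draws = sample_vmf(pole, kappa=100.0, size=500, rng=rng)
```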
Distributed multisensory integration in a recurrent network model through supervised learning
NASA Astrophysics Data System (ADS)
Wang, He; Wong, K. Y. Michael
Sensory integration between different modalities has been extensively studied. It is suggested that the brain integrates signals from different modalities in a Bayesian optimal way. However, how the Bayesian rule is implemented in a neural network remains under debate. In this work we propose a biologically plausible recurrent network model, which can perform Bayesian multisensory integration after being trained by supervised learning. Our model is composed of two modules, one for each modality. We assume that each module is a recurrent network whose activity represents the posterior distribution of each stimulus. The feedforward input to each module is the likelihood of each modality. The two modules are integrated through cross-links, which are feedforward connections from the other modality, and reciprocal connections, which are recurrent connections between the modules. By stochastic gradient descent, we successfully trained the feedforward and recurrent coupling matrices simultaneously, both of which resemble a Mexican hat. We also find that there is more than one set of coupling matrices that approximates Bayes' theorem well. Specifically, reciprocal connections and cross-links compensate for each other if one of them is removed. Even though trained with two inputs, the network's performance with only one input is in good accordance with what is predicted by Bayes' theorem.
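The Bayesian benchmark such a network is compared against is ordinary Gaussian cue combination: precisions add, and the fused estimate is the precision-weighted mean of the single cues. A minimal sketch (cue values and variances invented):

```python
# Bayes-optimal fusion of two independent Gaussian cues.
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2          # precisions
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)   # precision-weighted mean
    return mean, 1.0 / (w1 + w2)             # combined variance is smaller

# e.g. a visual and a vestibular heading estimate (invented numbers)
mean, var = fuse(x1=10.0, var1=4.0, x2=12.0, var2=1.0)
print(mean, var)   # 11.6, 0.8 -- pulled toward the more reliable cue
```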
Signal bi-amplification in networks of unidirectionally coupled MEMS
NASA Astrophysics Data System (ADS)
Tchakui, Murielle Vanessa; Woafo, Paul; Colet, Pere
2016-01-01
The purpose of this paper is to analyze the propagation and amplification of an input signal in networks of unidirectionally coupled micro-electro-mechanical systems (MEMS). Two types of external excitation are considered: sinusoidal and stochastic signals. We show that sinusoidal signals are amplified up to a saturation level which depends on the transmission rate, and that despite the MEMS being nonlinear, the sinusoidal shape is well preserved if the number of MEMS is not too large. However, on increasing the number of MEMS, there is an instability that leads to chaotic behavior, triggered by the amplification of the harmonics generated by the nonlinearities. We also show that for stochastic input signals the MEMS array acts as a band-pass filter, and after just a few elements the signal has a narrow power spectrum.
Analytic Regularity and Polynomial Approximation of Parametric and Stochastic Elliptic PDEs
2010-05-31
Todor : Finite elements for elliptic problems with stochastic coefficients Comp. Meth. Appl. Mech. Engg. 194 (2005) 205-228. [14] R. Ghanem and P. Spanos...for elliptic partial differential equations with random input data SIAM J. Num. Anal. 46(2008), 2411–2442. [20] R. Todor , Robust eigenvalue computation...for smoothing operators, SIAM J. Num. Anal. 44(2006), 865– 878. [21] Ch. Schwab and R.A. Todor , Karhúnen-Loève Approximation of Random Fields by
NASA Astrophysics Data System (ADS)
Verzichelli, Gianluca
2016-08-01
An Availability Stochastic Model for the E-ELT has been developed in GeNIE, a Graphical User Interface (GUI) for the Structural Modeling, Inference, and Learning Engine (SMILE), originally distributed by the Decision Systems Laboratory of the University of Pittsburgh and now a product of Bayes Fusion, LLC. The E-ELT will be the largest optical/near-infrared telescope in the world. Its design comprises an Alt-Azimuth mount reflecting telescope with a 39-metre-diameter segmented primary mirror, a 4-metre-diameter secondary mirror, a 3.75-metre-diameter tertiary mirror, adaptive optics and multiple instruments. This paper highlights how the model has been developed for an early assessment of the telescope availability. It also describes the modular structure and the underlying assumptions adopted in developing the model, and demonstrates the integration of FMEA, influence diagram and Bayesian network elements. These have been considered for a better characterization of the model inputs and outputs and to take into account degradation-based reliability (DBR). Lastly, it provides an overview of how the information and knowledge captured in the model may be used for an early definition of the Failure, Detection, Isolation and Recovery (FDIR) Control Strategy and the Telescope Minimum Master Equipment List (T-MMEL).
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of the LMS algorithm on their robustness and misalignment.
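A small sketch of the ASI setup for two of the compared algorithms, LMS and NLMS, identifying a random FIR system from noisy input/output data and tracking misalignment. Filter length, step sizes and noise level are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 5000
h_true = rng.normal(size=M)                  # unknown FIR system to identify
x = rng.normal(size=N)                       # random input signal
X = np.array([x[n - M + 1:n + 1][::-1] for n in range(M - 1, N)])
d = X @ h_true + 0.01 * rng.normal(size=len(X))   # measured output + noise

def lms(X, d, mu, normalized=False, eps=1e-6):
    w = np.zeros(X.shape[1])
    mis = []                                 # misalignment ||w - h||^2 / ||h||^2
    for u, dn in zip(X, d):
        e = dn - u @ w
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u                    # stochastic-gradient update
        mis.append(np.sum((w - h_true) ** 2) / np.sum(h_true ** 2))
    return w, np.array(mis)

w_lms, mis_lms = lms(X, d, mu=0.01)
w_nlms, mis_nlms = lms(X, d, mu=0.5, normalized=True)
print(f"final misalignment  LMS: {mis_lms[-1]:.2e}  NLMS: {mis_nlms[-1]:.2e}")
```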
Markets, Herding and Response to External Information.
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2015-01-01
We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as time-varying advertising, public perception or rumor, in favor of or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German indicator of economic sentiment as information input and compare our results with Germany's leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information.
Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind
2016-05-23
Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.
Risk management of a fund for natural disasters
NASA Astrophysics Data System (ADS)
Flores, C.
2003-04-01
Mexico is a country that has to deal with several natural disaster risks: earthquakes, droughts, volcanic eruptions, floods, landslides, wildfires, extreme temperatures, etc. In order to reduce the country's vulnerability to the impact of these natural disasters and to support rapid recovery when they occur, the government established Mexico's Fund for Natural Disasters (FONDEN) in 1996. Since its creation, its resources have been insufficient to meet all government obligations. The aim of this project is the development of a dynamic strategy to optimise the management of a fund for natural disasters, starting from the example of FONDEN. The problem of budgetary planning is considered for the modelling. We control the level of the fund's cash (R_t)_{0≤t}
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example demonstrates the use of the algorithm, and encouraging results have been obtained.
Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias
2015-10-01
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP.
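The inverted-U signature can be reproduced with a generic threshold-detector toy model: a subthreshold sinusoid plus Gaussian noise, with the response power at the signal frequency peaking at intermediate noise. This is a standard stochastic-resonance demo, not the authors' VEP pipeline; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 10_000)
sig = 0.8 * np.sin(2 * np.pi * 1.0 * t)      # subthreshold 1 Hz input
threshold = 1.0

def output_power(noise_sd, n_trials=50):
    """Detector output power at the signal frequency, averaged over trials."""
    k = np.argmin(np.abs(np.fft.rfftfreq(t.size, t[1] - t[0]) - 1.0))
    p = 0.0
    for _ in range(n_trials):
        y = (sig + rng.normal(0, noise_sd, t.size) > threshold).astype(float)
        p += np.abs(np.fft.rfft(y - y.mean())[k]) ** 2
    return p / n_trials

for sd in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(f"noise sd {sd:4.2f}: output power at 1 Hz = {output_power(sd):.1f}")
# Intermediate noise maximizes the response -- the inverted-U signature.
```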
The transfer functions of cardiac tissue during stochastic pacing.
de Lange, Enno; Kucera, Jan P
2009-01-01
The restitution properties of cardiac action potential duration (APD) and conduction velocity (CV) are important factors in arrhythmogenesis. They determine alternans, wavebreak, and the patterns of reentrant arrhythmias. We developed a novel approach to characterize restitution using transfer functions. Transfer functions relate an input and an output quantity in terms of gain and phase shift in the complex frequency domain. We derived an analytical expression for the transfer function of interbeat intervals (IBIs) during conduction from one site (input) to another site downstream (output). Transfer functions can be efficiently obtained using a stochastic pacing protocol. Using simulations of conduction and extracellular mapping of strands of neonatal rat ventricular myocytes, we show that transfer functions permit the quantification of APD and CV restitution slopes when it is difficult to measure APD directly. We find that the normally positive CV restitution slope attenuates IBI variations. In contrast, a negative CV restitution slope (induced by decreasing extracellular [K+]) amplifies IBI variations with a maximum at the frequency of alternans. Hence, it potentiates alternans and renders conduction unstable, even in the absence of APD restitution. Thus, stochastic pacing and transfer function analysis represent a powerful strategy to evaluate restitution and the stability of conduction.
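In practice, such transfer functions can be estimated from stochastic-pacing input/output series with the cross-spectral (H1) estimator, H(f) = Pxy(f)/Pxx(f). The sketch below uses a made-up low-pass "tissue" as the system; only the estimation recipe is the point, not the cardiac model.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs, n = 1.0, 4096                 # one sample per beat (IBI series)
x = rng.normal(0, 5, n)           # stochastically paced input IBI fluctuations (ms)
# toy "tissue": output IBIs are a low-pass filtered, delayed copy of the input
b, a = signal.butter(2, 0.2)
y = signal.lfilter(b, a, np.roll(x, 3)) + rng.normal(0, 0.5, n)

f, Pxx = signal.welch(x, fs=fs, nperseg=512)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=512)
H = Pxy / Pxx                     # H1 transfer-function estimator
gain, phase = np.abs(H), np.unwrap(np.angle(H))
print(gain[:5].round(2), phase[:5].round(2))
```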
Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces
NASA Astrophysics Data System (ADS)
Rinker, Jennifer M.
2016-09-01
This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of the error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variances caused by the Kaimal length scale and the nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
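A compact sketch of the workflow: fit a quadratic response surface to input/output samples, then compute Sobol total indices on the cheap surrogate with a pick-freeze (Jansen) estimator. The "true" load function, parameter ranges and sample counts are invented stand-ins for the aeroelastic simulations.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(11)
lo = np.array([5.0, 0.05, 100.0, 0.0])       # U_mean, TI, L_Kaimal, nonstat
hi = np.array([25.0, 0.25, 500.0, 1.0])      # ranges are invented
sample = lambda n: rng.uniform(lo, hi, size=(n, 4))

def true_load(X):                            # stand-in for simulation output
    U, TI, L, ns = X.T
    return 0.05 * U**2 * (1 + 3 * TI) + 1e-4 * L + 0.02 * ns

def features(X):                             # full quadratic response surface
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

X_train = sample(500)
beta, *_ = np.linalg.lstsq(features(X_train), true_load(X_train), rcond=None)
surrogate = lambda X: features(X) @ beta

A, B = sample(20_000), sample(20_000)
fA = surrogate(A)
V = fA.var()
for i, name in enumerate(["U_mean", "TI", "L_Kaimal", "nonstat"]):
    ABi = A.copy(); ABi[:, i] = B[:, i]      # freeze all inputs except i
    S_T = 0.5 * np.mean((fA - surrogate(ABi)) ** 2) / V
    print(f"{name:9s} total Sobol index = {S_T:.3f}")
```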
Pan-European stochastic flood event set
NASA Astrophysics Data System (ADS)
Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav
2017-04-01
Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of probabilistic catastrophe flood models for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss, USD 3.4bn, and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated the development of a pan-European flood event set, which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain, followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with the Karlsruhe Institute of Technology (KIT) for the precipitation estimates and with the University of East Anglia (UEA) for the rainfall-runoff modelling. KIT's main objective is to provide high-resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM-simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present-day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in magnitudes, changes in flood characteristics such as time base and time to peak, and spatial correlations of large flood events. The pan-European stochastic flood event set will permit a better view of flood risk for market applications.
Forecasting monthly inflow discharge of the Iffezheim reservoir using data-driven models
NASA Astrophysics Data System (ADS)
Zhang, Qing; Aljoumani, Basem; Hillebrand, Gudrun; Hoffmann, Thomas; Hinkelmann, Reinhard
2017-04-01
River stream flow is an essential element in hydrology, especially for reservoir management, since it defines the input into reservoirs. Forecasting this stream flow plays an important role in short- and long-term planning and management of reservoirs, e.g. optimized reservoir and hydroelectric operation or agricultural irrigation. Highly accurate flow forecasting can significantly reduce economic losses and is always pursued by reservoir operators. Therefore, hydrologic time series forecasting has received tremendous attention from researchers, and many models have been proposed to improve it. Since most natural phenomena occurring in environmental systems appear to behave in random or probabilistic ways, different cases may need different forecasting methods, and even a unique treatment, to improve the forecast accuracy. The purpose of this study is to determine an appropriate model for forecasting monthly inflow to the Iffezheim reservoir in Germany, which is the last of the barrages on the Upper Rhine. Monthly time series of discharges, measured from 1946 to 2001 at the Plittersdorf station, located 6 km downstream of the Iffezheim reservoir, were applied. The accuracies of the stochastic models used - the Fiering model and Auto-Regressive Integrated Moving Average (ARIMA) models - are compared with those of Artificial Intelligence (AI) models - a single Artificial Neural Network (ANN) and Wavelet-ANN (WANN) models. The Fiering model is a linear stochastic model used for generating synthetic monthly data. The basic idea in modeling time series with ARIMA is to identify a simple model, with as few parameters as possible, that provides a good statistical fit to the data. To identify and fit the ARIMA models, a four-phase approach was used: identification, parameter estimation, diagnostic checking, and forecasting. An automatic selection criterion, such as the Akaike information criterion, is utilized to enhance this flexible approach to setting up the model. In contrast to the stochastic models, the ANN and its conjunction method, the Wavelet-ANN (WANN) model, are effective at handling non-linear systems; they were developed with antecedent flows as inputs to forecast up to 12 months of lead time for the Iffezheim reservoir. In the ANN and WANN models, the Feed Forward Back Propagation method (FFBP) is applied, with sigmoid activation functions and several different numbers of neurons for the hidden layers and a linear function for the output layer. To compare the accuracy of the different models and identify the most suitable model for reliable forecasting, three standard statistical performance measures are employed: the root mean square error (RMSE), the mean absolute error (MAE) and the determination coefficient (DC). The results reveal that ARIMA(2,1,2) performs better than the Fiering, ANN and WANN models. Further, the WANN model is found to be slightly better than the ANN model for forecasting monthly inflow of the Iffezheim reservoir. With the ARIMA model, the predicted and observed values agree reasonably well.
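A minimal statsmodels sketch of the ARIMA(2,1,2) step on a synthetic monthly series standing in for the Plittersdorf record (the real data are not reproduced here), with a 12-month holdout scored by RMSE, MAE and DC:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly discharge series (m^3/s) with a seasonal cycle
rng = np.random.default_rng(8)
t = np.arange(672)                                    # 56 years of months
q = 1200 + 400 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 150, t.size)
series = pd.Series(q, index=pd.date_range("1946-01", periods=t.size, freq="MS"))

train, test = series[:-12], series[-12:]              # hold out the last year
model = ARIMA(train, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=12)

rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
mae = np.mean(np.abs(forecast.values - test.values))
dc = np.corrcoef(forecast.values, test.values)[0, 1] ** 2
print(f"RMSE={rmse:.1f}  MAE={mae:.1f}  DC={dc:.2f}")
```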
Modelling and performance analysis of clinical pathways using the stochastic process algebra PEPA.
Yang, Xian; Han, Rui; Guo, Yike; Bradley, Jeremy; Cox, Benita; Dickinson, Robert; Kitney, Richard
2012-01-01
Hospitals nowadays have to serve numerous patients with limited medical staff and equipment while maintaining healthcare quality. Clinical pathway informatics is regarded as an efficient way to address a series of hospital challenges. To date, conventional research has lacked a mathematical model to describe clinical pathways. Existing vague descriptions cannot fully capture the complexities in clinical pathways accurately and hinder their effective management and further optimization. Given this motivation, this paper presents a clinical pathway management platform, the Imperial Clinical Pathway Analyzer (ICPA). By extending the stochastic process algebra PEPA (Performance Evaluation Process Algebra), ICPA introduces a clinical-pathway-specific model: clinical pathway PEPA (CPP). ICPA can simulate stochastic behaviours of a clinical pathway by extracting information from public clinical databases and other related documents using CPP. Thus, the performance of the clinical pathway, including its throughput, resource utilisation and passage time, can be quantitatively analysed. A typical clinical pathway for stroke, extracted from a UK hospital, is used to illustrate the effectiveness of ICPA. Three application scenarios are tested using ICPA: 1) redundant resources are identified and removed, so the number of patients served is maintained at lower cost; 2) the patient passage time is estimated, providing the likelihood that patients can leave hospital within a specific period; 3) the maximum number of input patients is found, helping hospitals decide whether they can serve more patients with the existing resource allocation. ICPA is an effective platform for clinical pathway management: 1) ICPA can describe a variety of components (state, activity, resource and constraints) in a clinical pathway, thus facilitating a proper understanding of the complexities involved; 2) ICPA supports the performance analysis of clinical pathways, thereby assisting hospitals in effectively managing time and resources along the pathway.
A Neuronal Network Model for Pitch Selectivity and Representation
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing pattern of the slope-detector neurons are similar for sounds sharing the same pitch despite the distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. The search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
Wilson, R; Abbott, J H
2018-04-01
To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system.
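A stripped-down sketch of a discrete-time state-transition microsimulation of the NZ-MOA type: individuals progress through radiographic grades and accumulate utility decrements. All transition probabilities and decrements below are invented placeholders, not NZ-MOA parameters.

```python
import numpy as np

rng = np.random.default_rng(13)

# States: radiographic grades 0-4; a flat annual mortality removes individuals.
P_PROGRESS = np.array([0.02, 0.03, 0.04, 0.05, 0.0])      # per-grade, assumed
P_DEATH = 0.01                                            # assumed
UTIL_DECREMENT = np.array([0.0, 0.02, 0.05, 0.10, 0.18])  # per grade, assumed

def simulate(n=100_000, years=40):
    grade = np.zeros(n, dtype=int)
    alive = np.ones(n, dtype=bool)
    qaly_loss = np.zeros(n)
    for _ in range(years):
        progress = alive & (rng.random(n) < P_PROGRESS[grade])
        grade[progress] += 1                  # stochastic structural progression
        alive &= rng.random(n) >= P_DEATH
        qaly_loss += np.where(alive, UTIL_DECREMENT[grade], 0.0)
    return qaly_loss

print("mean lifetime QALY loss per person: %.3f" % simulate().mean())
```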
NASA Astrophysics Data System (ADS)
Lee, Taesam
2018-05-01
Multisite stochastic simulation of daily precipitation has been widely employed in hydrologic analyses for climate change assessment and for agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of this copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we propose an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, it was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, providing cross-correlations around 0.2 closer to the observed values than the direct method and around 0.1 closer than the indirect method. The three models were applied to the stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
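The attenuation problem and the indirect fix can be shown in a few lines: a Gaussian copula with gamma marginals yields an original-domain correlation below the normal-domain one, so the normal-domain value is adjusted by numerically inverting the relation. Marginal parameters and the target correlation are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
shape, scale = 0.6, 8.0          # gamma marginal for wet-day rainfall, assumed

def gamma_corr(rho_n, n=100_000):
    """Pearson correlation in the gamma domain induced by a Gaussian copula
    with normal-domain correlation rho_n."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_n], [rho_n, 1.0]], n)
    x = stats.gamma.ppf(stats.norm.cdf(z), a=shape, scale=scale)
    return np.corrcoef(x.T)[0, 1]

target = 0.6                     # observed cross-correlation between two sites
print("direct use of rho_n = target gives", round(gamma_corr(target), 3))

# Indirect method: invert the rho_n -> rho_gamma relation by bisection
lo, hi = target, 0.999
for _ in range(18):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gamma_corr(mid) < target else (lo, mid)
print("corrected normal-domain correlation:", round(0.5 * (lo + hi), 3))
```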
Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson
2017-02-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
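A heavily simplified caricature of the fixed-point analysis: in a mean-field reduction of a nonlinear Hawkes PP-GLM with an exponential link, stationary rates solve r = exp(b + J r), with J the integrated spike-history filter, and a fixed point r* is stable for the associated iteration when |J r*| < 1. This only illustrates the self-consistency idea, not the paper's quasi-renewal machinery.

```python
import numpy as np
from scipy.optimize import brentq

b = -1.0                                   # baseline log-rate (assumed)

def fixed_points(J, rmax=50.0, n=4000):
    """Roots of r = exp(b + J*r) on [0, rmax], with a naive stability flag."""
    g = lambda r: np.exp(b + J * r) - r
    rs = np.linspace(0.0, rmax, n)
    roots = [brentq(g, lo, hi) for lo, hi in zip(rs[:-1], rs[1:])
             if g(lo) * g(hi) < 0]
    return [(round(r, 3), abs(J * r) < 1) for r in roots]   # (rate, stable?)

for J in [-0.5, 0.3, 0.8]:
    print(f"J = {J:4.1f}: fixed points (rate, stable) = {fixed_points(J)}")
# Weak feedback: one stable point. Stronger positive J: a stable low-rate point
# coexists with an unstable one -- the "fragile" regime in which fluctuations
# can push the rate past the unstable point toward divergence.
```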
Critical Fluctuations in Cortical Models Near Instability
Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael
2012-01-01
Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations.
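The growth of autocorrelation length near instability can be illustrated on the simplest linear stochastic system, an Ornstein-Uhlenbeck process dx = a x dt + σ dW, whose autocorrelation time is -1/a: as a approaches 0 (the bifurcation), the 1/e crossing time of the autocorrelation grows. This is a generic critical-slowing-down demo, not the Jansen-Rit model.

```python
import numpy as np

rng = np.random.default_rng(9)

def ac_time(a, sigma=0.1, dt=0.01, n=200_000, max_lag=2000):
    """Empirical 1/e autocorrelation time of an Euler-Maruyama OU path."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] * (1 + a * dt) + sigma * np.sqrt(dt) * rng.normal()
    x -= x.mean()
    var = x @ x / n
    for k in range(1, max_lag):
        if (x[:-k] @ x[k:]) / (n - k) / var < np.exp(-1):
            return k * dt
    return np.nan

for a in [-4.0, -2.0, -1.0, -0.5]:
    print(f"a = {a:5.1f}: autocorr time ~ {ac_time(a):.2f}  (theory {-1/a:.2f})")
```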
SHEDS-HT: An Integrated Probabilistic Exposure Model for ...
United States Environmental Protection Agency (USEPA) researchers are developing a strategy for highthroughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologically relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. Based on probabilistic methods and algorithms developed for The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals (SHEDS-MM), a new mechanistic modeling approach has been developed to accommodate high-throughput (HT) assessment of exposure potential. In this SHEDS-HT model, the residential and dietary modules of SHEDS-MM have been operationally modified to reduce the user burden, input data demands, and run times of the higher-tier model, while maintaining critical features and inputs that influence exposure. The model has been implemented in R; the modeling framework links chemicals to consumer product categories or food groups (and thus exposure scenarios) to predict HT exposures and intake doses. Initially, SHEDS-HT has been applied to 2507 organic chemicals associated with consumer products and agricultural pesticides. These evaluations employ data from recent USEPA efforts to characterize usage (prevalence, frequency, and magnitude), chemical composition, and exposure scenarios for a wide range of consumer products. In modeling indirec
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable or as able to deal with case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-use tool that can be applied in multi-disciplinary research and model-based decision support.
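The Monte Carlo workflow described in the abstracts above is straightforward to sketch outside the package itself. The following minimal Python sketch is not spup's actual R API; the toy model, distributions, and parameter values are illustrative assumptions only. It shows the core propagation loop: draw joint realizations of uncertain inputs, run the model once per realization, and summarize the output distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(rainfall, runoff_coeff):
    """Toy environmental model: annual discharge as rainfall times a runoff coefficient."""
    return rainfall * runoff_coeff

# Uncertainty models for the inputs (both assumed, for illustration only)
n = 10_000
rainfall = rng.normal(loc=800.0, scale=100.0, size=n)   # mm/yr, Gaussian
runoff_coeff = rng.beta(a=4.0, b=6.0, size=n)           # dimensionless, Beta

# Monte Carlo propagation: run the model once per joint input realization
discharge = model(rainfall, runoff_coeff)

# Summarize the output uncertainty
print(f"mean = {discharge.mean():.1f} mm/yr")
print(f"95% interval = {np.percentile(discharge, [2.5, 97.5]).round(1)}")
```

Stratified or Latin hypercube sampling, as mentioned in the abstract, would replace the plain random draws to reach a given precision with fewer model runs.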
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
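The locally linear core of such methods can be sketched in a few lines. The Python sketch below is plain locally weighted linear regression, not the full LWPR algorithm (no incremental PLS updates, no kernel adaptation, no stochastic cross validation); the bandwidth and the toy data are illustrative assumptions.

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.5):
    """Predict at x_query with a locally weighted linear model (Gaussian kernel)."""
    d = X - x_query                                   # offsets from the query point
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / bandwidth**2)
    A = np.column_stack([np.ones(len(X)), d])         # local linear basis around the query
    sw = np.sqrt(w)                                   # weighted least squares via sqrt-weights
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0]                                    # intercept = prediction at the query

# Toy 1-D example
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwr_predict(np.array([1.5]), X, y))             # close to sin(1.5) ~ 0.997
```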
Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations
NASA Astrophysics Data System (ADS)
Savran, William Harvey
High-frequency (~10 Hz) deterministic ground motion simulations are challenged by our understanding of the small-scale structure of the earth's crust and the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0-0.2, vertical correlation lengths, az, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations and combined it with leading seismic velocity models to perform a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50%, at certain stations, compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults. We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events, and provide source spectra comparable with an ω-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully-deterministic ground motion simulations, as compared to NGA West2 GMPE relationships, up to 0.2 seconds.
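A stochastic velocity model of this type is usually built by spectral synthesis: filter white noise with the square root of a von Karman power spectrum and rescale to the target standard deviation. The 1-D Python sketch below is illustrative only (the spectrum normalization and the 3-D implementation in the actual study will differ); the parameter values come from the ranges quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters, within the ranges reported for the borehole data
nu, a_z, sigma = 0.1, 50.0, 0.05   # Hurst exponent, correlation length (m), std dev
n, dz = 4096, 1.0                  # number of samples and spacing (m)

k = np.fft.rfftfreq(n, d=dz) * 2 * np.pi          # angular wavenumber
psd = a_z / (1.0 + (k * a_z) ** 2) ** (nu + 0.5)  # 1-D von Karman spectrum (unnormalized)

# Filter white Gaussian noise in the wavenumber domain, then transform back
white = np.fft.rfft(rng.standard_normal(n))
field = np.fft.irfft(white * np.sqrt(psd), n)
field *= sigma / field.std()                      # rescale to the target std dev

# 'field' is a fractional perturbation, e.g. v = v0 * (1 + field)
print(field.std(), field.min(), field.max())
```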
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
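The simplest physics option mentioned above (mono-energetic particles with isotropic scattering) admits a compact Monte Carlo sketch. The Python below is a generic slab-transport toy, not ecode's implementation; the cross-section values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def transmit(n_particles, thickness, sigma_t, c_scatter):
    """Fraction of particles leaking through a 1-D slab.
    sigma_t: total cross section (1/cm); c_scatter: scattering probability per collision."""
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                               # position and direction cosine
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to next collision
            if x >= thickness:                         # leaked out the far face
                transmitted += 1
                break
            if x < 0:                                  # leaked back out the entry face
                break
            if rng.random() > c_scatter:               # absorbed
                break
            mu = rng.uniform(-1.0, 1.0)                # isotropic scatter in slab geometry
    return transmitted / n_particles

print(transmit(20_000, thickness=2.0, sigma_t=1.0, c_scatter=0.8))
```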
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy-flight swarm intelligence, referred to as artificial bee colony Lévy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design for improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
Damage detection of structures identified with deterministic-stochastic models using seismic data.
Huang, Ming-Chih; Wang, Yen-Po; Chang, Ming-Lian
2014-01-01
A deterministic-stochastic subspace identification method is adopted and experimentally verified in this study to identify the equivalent single-input-multiple-output system parameters of the discrete-time state equation. The method of damage locating vectors (DLV) is then considered for damage detection. A series of shaking table tests using a five-storey steel frame has been conducted. Both single and multiple damage conditions at various locations have been considered. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged storeys can be identified from the global responses of the structure to earthquakes, provided these are sufficiently observed. In addition to detecting damage with respect to the intact structure, identification of new or extended damage of the as-damaged counterpart has also been studied. This study gives further insights into the scheme in terms of effectiveness, robustness, and limitations for damage localization of frame systems.
A Cobb Douglas Stochastic Frontier Model on Measuring Domestic Bank Efficiency in Malaysia
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create an obstacle in the process of development of any economy. This study examines the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) market over the period 2005-2010. A parametric approach, the Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that sample banks have wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986 and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency has increased during the period of study, and that the technical efficiency effect has fluctuated considerably over time.
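For reference, a Cobb-Douglas stochastic frontier of this kind typically takes the composed-error form below (the Aigner-Lovell-Schmidt specification); the abstract does not list the paper's exact output and input variables, so the indices here are generic.

```latex
% Cobb-Douglas stochastic frontier for bank i in year t (generic form)
\ln y_{it} = \beta_0 + \sum_{k=1}^{K} \beta_k \ln x_{k,it} + v_{it} - u_{it},
\qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0
% Technical efficiency is recovered as TE_{it} = \exp(-u_{it}) \in (0, 1]
```

The symmetric term v captures statistical noise, while the one-sided term u measures the shortfall from the efficient frontier; efficiency scores such as the 0.986 and 0.918 quoted above are estimates of exp(-u).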
Hua, Changchun; Zhang, Liuliu; Guan, Xinping
2017-01-01
This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. Then, we design the controllers based on the backstepping method and the dynamic surface control technique. It is strictly proved, based on Lyapunov stability theory, that the resulting closed-loop system is stable in probability in the sense of semiglobally uniform ultimate boundedness and that the tracking errors between the leader and the followers converge to a small residual set. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.
Effect of the Potential Shape on the Stochastic Resonance Processes
NASA Astrophysics Data System (ADS)
Kenmoé, G. Djuidjé; Ngouongo, Y. J. Wadop; Kofané, T. C.
2015-10-01
The stochastic resonance (SR) induced by a periodic signal and white noise in a periodic nonsinusoidal potential is investigated. This phenomenon is studied as a function of the friction coefficient as well as the shape of the potential. It is done through an investigation of the hysteresis loop area, which is equivalent to the input energy lost by the system to the environment per period of the external force. SR is evident in some range of the shape parameter of the potential, but cannot be observed in the other range. In particular, variation of the potential shape significantly and nontrivially affects the height of the potential barrier in the Kramers rate as well as the occurrence of SR. The findings show a crucial dependence of the temperature at which SR occurs on the shape of the potential. It is noted that the maximum of the input energy generally decreases when the friction coefficient is increased.
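The input-energy diagnostic used above is easy to reproduce for a generic bistable system. The Python sketch below simulates an underdamped Langevin particle in a quartic double well (a stand-in for the paper's nonsinusoidal periodic potential) and accumulates the work done by the periodic drive; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Underdamped Langevin dynamics in a double well V(x) = x^4/4 - x^2/2,
# driven by A*cos(w*t); 'work' accumulates the input energy from the drive.
gamma, T, A, w = 0.5, 0.12, 0.15, 0.05   # friction, noise temperature, drive amplitude/frequency
dt, n_steps = 0.01, 500_000

x, v, work = 1.0, 0.0, 0.0
for i in range(n_steps):
    t = i * dt
    force = -(x**3 - x) + A * np.cos(w * t)          # -V'(x) plus the periodic drive
    v += dt * (force - gamma * v) + np.sqrt(2 * gamma * T * dt) * rng.standard_normal()
    x += v * dt
    work += A * np.cos(w * t) * v * dt               # instantaneous drive power times dt

periods = n_steps * dt * w / (2 * np.pi)
print(f"mean input energy per drive period: {work / periods:.4f}")
```

Sweeping the noise temperature T and locating the maximum of the input energy per period is the standard way to exhibit the resonance.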
Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.
Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto
2008-08-01
Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable, but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is taken as the input for level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm).
Observer-based state tracking control of uncertain stochastic systems via repetitive controller
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Susana Ramya, L.; Selvaraj, P.
2017-08-01
This paper develops a repetitive control scheme for state tracking control of uncertain stochastic time-varying delay systems via the equivalent-input-disturbance approach. The main purpose of this work is to design a repetitive controller that guarantees the tracking performance under the effects of unknown disturbances with bounded frequency and parameter variations. Specifically, a new set of linear matrix inequality (LMI)-based conditions is derived, based on a suitable Lyapunov-Krasovskii functional, for designing a repetitive controller that guarantees stability and the desired tracking performance. More precisely, an equivalent-input-disturbance estimator is incorporated into the control design to reduce the effect of external disturbances. Simulation results are provided to demonstrate the stability of the control system and its tracking performance. A practical stream water quality preserving system is also presented to show the effectiveness and advantages of the proposed approach.
NASA Astrophysics Data System (ADS)
Morales, Y.; Olivares, M. A.; Vargas, X.
2015-12-01
This research aims to improve the representation of stochastic water inflows to hydropower plants used in a grid-wide, power production scheduling model in central Chile. The model prescribes the operation of every plant in the system, including hydropower plants located in several basins, and uses stochastic dual dynamic programming (SDDP) with possible inflow scenarios defined from historical records. Each year of record is treated as a sample of weekly inflows to power plants, assuming this intrinsically incorporates spatial and temporal correlations, without any further autocorrelation analysis of the hydrological time series. However, standard good practice suggests the use of synthetic flows instead of raw historical records. The proposed approach generates synthetic inflow scenarios based on hydrological modeling of a few basins in the system and transposition of flows to other basins within so-called homogeneous zones. The hydrologic models use precipitation and temperature as inputs, and therefore this approach requires producing samples of those variables. Development and calibration of these models imply a greater demand of time compared to the purely statistical approach to synthetic flows. The approach requires consideration of the main water uses in the basins: agriculture and hydroelectricity. Moreover, a geostatistical analysis of the area is performed to generate a map that identifies the relationship between the points where the hydrological information is generated and other points of interest within the power system. Consideration of homogeneous zones involves a decrease in the effort required for generation of information compared with hydrological modeling of every point of interest. It is important to emphasize that future scenarios are derived through a probabilistic approach that incorporates the features of the hydrological year type (dry, normal or wet), covering the different possibilities in terms of availability of water resources. We present results for the Maule basin in Chile's Central Interconnected System (SIC).
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays (ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.
Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; ...
2018-04-12
Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.
Guidance and Control strategies for aerospace vehicles
NASA Technical Reports Server (NTRS)
Hibey, J. L.; Naidu, D. S.; Charalambous, C. D.
1989-01-01
A neighboring optimal guidance scheme was devised for a nonlinear dynamic system with stochastic inputs and perfect measurements as applicable to fuel optimal control of an aeroassisted orbital transfer vehicle. For the deterministic nonlinear dynamic system describing the atmospheric maneuver, a nominal trajectory was determined. Then, a neighboring, optimal guidance scheme was obtained for open loop and closed loop control configurations. Taking modelling uncertainties into account, a linear, stochastic, neighboring optimal guidance scheme was devised. Finally, the optimal trajectory was approximated as the sum of the deterministic nominal trajectory and the stochastic neighboring optimal solution. Numerical results are presented for a typical vehicle. A fuel-optimal control problem in aeroassisted noncoplanar orbital transfer is also addressed. The equations of motion for the atmospheric maneuver are nonlinear and the optimal (nominal) trajectory and control are obtained. In order to follow the nominal trajectory under actual conditions, a neighboring optimum guidance scheme is designed using linear quadratic regulator theory for onboard real-time implementation. One of the state variables is used as the independent variable in reference to the time. The weighting matrices in the performance index are chosen by a combination of a heuristic method and an optimal modal approach. The necessary feedback control law is obtained in order to minimize the deviations from the nominal conditions.
Daleo, Pedro; Alberti, Juan; Jumpponen, Ari; Veach, Allison; Ialonardi, Florencia; Iribarne, Oscar; Silliman, Brian
2018-06-01
Microbial community assembly is affected by a combination of forces that act simultaneously, but the mechanisms underpinning their relative influences remain elusive. This gap strongly limits our ability to predict human impacts on microbial communities and the processes they regulate. Here, we experimentally demonstrate that increased salinity stress, food web alteration and nutrient loading interact to drive outcomes in salt marsh fungal leaf communities. Both salinity stress and food web alterations drove communities to deterministically diverge, resulting in distinct fungal communities. Increased nutrient loads, nevertheless, partially suppressed the influence of other factors as determinants of fungal assembly. Using a null model approach, we found that increased nutrient loads enhanced the relative importance of stochastic over deterministic divergent processes; without increased nutrient loads, samples from different treatments showed a relatively (deterministic) divergent community assembly whereas increased nutrient loads drove the system to more stochastic assemblies, suppressing the effect of other treatments. These results demonstrate that common anthropogenic modifications can interact to control fungal community assembly. Furthermore, our results suggest that when the environmental conditions are spatially heterogeneous (as in our case, caused by specific combinations of experimental treatments), increased stochasticity caused by greater nutrient inputs can reduce the importance of deterministic filters that otherwise caused divergence, thus driving microbial community homogenization.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is enormously expensive computationally. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrix exponentials. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
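The matrix-product step at the heart of the method can be checked directly. The Python sketch below compares exp(t(A+B)) against its first- and second-order Zassenhaus truncations for random matrices; it illustrates the formula itself, not the paper's tensor representation or its error bounds.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) * 0.5
B = rng.standard_normal((4, 4)) * 0.5
t = 0.1

exact = expm(t * (A + B))

# Zassenhaus expansion: exp(t(A+B)) = exp(tA) exp(tB) exp(-(t^2/2)[A,B]) ...
comm = A @ B - B @ A
order1 = expm(t * A) @ expm(t * B)
order2 = order1 @ expm(-0.5 * t**2 * comm)

print(np.linalg.norm(exact - order1))   # error of order t^2
print(np.linalg.norm(exact - order2))   # error of order t^3, noticeably smaller
```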
Extracting scene feature vectors through modeling, volume 3
NASA Technical Reports Server (NTRS)
Berry, J. K.; Smith, J. A.
1976-01-01
The remote estimation of the leaf area index of winter wheat at Finney County, Kansas was studied. The procedure developed consists of three activities: (1) field measurements; (2) model simulations; and (3) response classifications. The first activity is designed to identify model input parameters and develop a model evaluation data set. A stochastic plant canopy reflectance model is employed to simulate reflectance in the LANDSAT bands as a function of leaf area index for two phenological stages. An atmospheric model is used to translate these surface reflectances into simulated satellite radiance. A divergence classifier determines the relative similarity between model derived spectral responses and those of areas with unknown leaf area index. The unknown areas are assigned the index associated with the closest model response. This research demonstrated that the SRVC canopy reflectance model is appropriate for wheat scenes and that broad categories of leaf area index can be inferred from the procedure developed.
Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions.
Muy, S; Kundu, A; Lacoste, D
2013-09-28
We show how to extract an estimate of the entropy production from a sufficiently long time series of stationary fluctuations of chemical reactions. This method, which is based on recent work on fluctuation theorems, is direct, non-invasive, does not require any knowledge about the underlying dynamics and is applicable even when only partial information is available. We apply it to simple stochastic models of chemical reactions involving a finite number of states, and for this case, we study how the estimate of dissipation is affected by the degree of coarse-graining present in the input data.
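For a discrete-state Markov trajectory, an estimator of this kind reduces to a plug-in formula on transition counts, sigma_hat = (1/T) * sum_ij n_ij * ln(n_ij / n_ji), which is non-invasive in exactly the sense described: it uses only the observed time series. The sketch below is a generic illustration with an assumed three-state driven chain standing in for the chemical network; it is not the paper's fluctuation-theorem estimator.

```python
import numpy as np

rng = np.random.default_rng(11)

# A 3-state Markov chain that violates detailed balance (a driven cycle)
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
n_steps = 100_000
states = np.empty(n_steps, dtype=int)
states[0] = 0
for t in range(1, n_steps):
    states[t] = rng.choice(3, p=P[states[t - 1]])

# Plug-in estimate of the entropy production rate from transition counts
counts = np.zeros((3, 3))
np.add.at(counts, (states[:-1], states[1:]), 1)
mask = (counts > 0) & (counts.T > 0)            # skip never-observed reverse transitions
sigma_hat = np.sum(counts[mask] * np.log(counts[mask] / counts.T[mask])) / (n_steps - 1)
print(f"estimated entropy production per step: {sigma_hat:.3f}")
```

For this chain the estimate converges to roughly 1.46 per step; at equilibrium (detailed balance) it would converge to zero.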
Optimizing Use of Water Management Systems during Changes of Hydrological Conditions
NASA Astrophysics Data System (ADS)
Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter
2017-10-01
When designing water management systems and their components, detailed knowledge is needed of the hydrological conditions of the river basin whose runoff is the main source of water for the reservoir. Over the lifetime of a water management system, the hydrological time series that served as input for the design of the system components is never repeated in exactly the same form. The design assumes the observed time series to be representative for the period of the system's use. This is, however, a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore face undersized or oversized capacity designs, or possibly wrong specification of the management rules, which may lead to non-optimal use of the system. It is therefore necessary to establish a comprehensive approach to simulating the fluctuations in interannual runoff (taking into account the current dry and wet periods) by means of stochastic modelling techniques in water management practice. The paper presents a methodological procedure for modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by applying the Wilson-Hilferty transformation to the independent random numbers. This transformation is typically applied when the observed time series exhibits significant skewness. The procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows have the same statistical properties as the time series observed in the past. The compiled model was able to take into account the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows that could occur in the future. The results of the stochastic modelling, in the form of synthetic time series of mean monthly flows that take into account the seasonal fluctuations of runoff within the year, are applicable in engineering hydrology (e.g. for optimum use of an existing water management system in connection with reassessment of the economic risks of the system).
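A minimal sketch of the Thomas-Fiering recursion with an optional Wilson-Hilferty transform follows. The seeding of the recursion and the synthetic "observed" record are illustrative assumptions, not the Parná Stream data, and negative flows are possible with Gaussian residuals (real applications truncate or transform).

```python
import numpy as np

rng = np.random.default_rng(17)

def thomas_fiering(q_hist, n_years, skew=None):
    """Synthetic monthly flows via the Thomas-Fiering recursion.
    q_hist: observed flows, shape (n_obs_years, 12). skew: optional per-month
    skewness coefficients; if given, residuals get the Wilson-Hilferty transform."""
    mean = q_hist.mean(axis=0)
    std = q_hist.std(axis=0, ddof=1)
    # lag-1 correlation between month j and the following month (within a year)
    nxt = np.roll(q_hist, -1, axis=1)
    r = np.array([np.corrcoef(q_hist[:, j], nxt[:, j])[0, 1] for j in range(12)])

    q = np.empty((n_years, 12))
    prev = mean[11]                          # seed the recursion with December climatology
    for y in range(n_years):
        for m in range(12):
            jp = (m - 1) % 12                # index of the preceding month
            eps = rng.standard_normal()
            if skew is not None:             # Wilson-Hilferty transform of the residual
                g = skew[m]
                eps = (2.0 / g) * (1.0 + g * eps / 6.0 - g**2 / 36.0) ** 3 - 2.0 / g
            b = r[jp] * std[m] / std[jp]     # month-to-month regression slope
            q[y, m] = (mean[m] + b * (prev - mean[jp])
                       + std[m] * np.sqrt(1.0 - r[jp] ** 2) * eps)
            prev = q[y, m]
    return q

# Stand-in "observed" record (32 years); real use would read the gauged series
obs = rng.gamma(shape=4.0, scale=2.5, size=(32, 12))
syn = thomas_fiering(obs, n_years=100)
print(obs.mean(axis=0).round(2))
print(syn.mean(axis=0).round(2))             # monthly means should roughly match
```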
NASA Astrophysics Data System (ADS)
Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.
2016-12-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we demonstrate that the 'spup' package is an effective and easy-to-use tool even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with spatial variability of the model driving forces such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of emissions of N2O and CO2 for a German low-mountainous, agriculturally developed catchment. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as guidance for developing best management practices and model improvement strategies.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infectives disappear and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
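The stochastic extension described above can be illustrated with a reduced, single-group SIR model: white noise is added to the transmission term and the system is integrated by Euler-Maruyama. The multi-group MSIR structure is simplified away here, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(23)

# Euler-Maruyama integration of an SIR model with demography and noise on transmission:
#   dS = (Lambda - beta*S*I - mu*S) dt - sigma*S*I dW
#   dI = (beta*S*I - (mu + gamma)*I) dt + sigma*S*I dW
Lambda, mu, beta, gamma, sigma = 0.02, 0.02, 0.5, 0.1, 0.05
dt, n_steps = 0.01, 100_000

S, I = 0.9, 0.1
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    inc = beta * S * I                       # incidence (new infections per unit time)
    S += (Lambda - inc - mu * S) * dt - sigma * S * I * dW
    I += (inc - (mu + gamma) * I) * dt + sigma * S * I * dW

R0 = beta * (Lambda / mu) / (mu + gamma)     # basic reproduction number, here > 1
print(f"R0 = {R0:.2f}; endpoint S = {S:.3f}, I = {I:.3f}  (disease persists)")
```

With R0 above 1, trajectories fluctuate around the endemic equilibrium rather than dying out, which is the behavior the paper's stability conditions formalize.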
Wood, Julie; Oravecz, Zita; Vogel, Nina; Benson, Lizbeth; Chow, Sy-Miin; Cole, Pamela; Conroy, David E; Pincus, Aaron L; Ram, Nilam
2017-12-15
Life-span theories of aging suggest improvements and decrements in individuals' ability to regulate affect. Dynamic process models, with intensive longitudinal data, provide new opportunities to articulate specific theories about individual differences in intraindividual dynamics. This paper illustrates a method for operationalizing affect dynamics using a multilevel stochastic differential equation (SDE) model, and examines how those dynamics differ with age and trait-level tendencies to deploy emotion regulation strategies (reappraisal and suppression). Univariate multilevel SDE models, estimated in a Bayesian framework, were fit to 21 days of ecological momentary assessments of affect valence and arousal (average 6.93/day, SD = 1.89) obtained from 150 adults (ages 18-89 years), specifically capturing the temporal dynamics of individuals' core affect in terms of attractor point, reactivity to biopsychosocial (BPS) inputs, and attractor strength. Older age was associated with a higher arousal attractor point and less BPS-related reactivity. Greater use of reappraisal was associated with a lower valence attractor point. Intraindividual variability in regulation strategy use was associated with greater BPS-related reactivity and attractor strength, but in different ways for valence and arousal. The results highlight the utility of SDE models for studying affect dynamics and informing theoretical predictions about how intraindividual dynamics change over the life course.
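The univariate model described maps naturally onto an Ornstein-Uhlenbeck SDE, with the attractor point, attractor strength, and input reactivity as the three parameters. The sketch below simulates a single subject under that correspondence; it is not the paper's multilevel Bayesian estimation, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(29)

# Ornstein-Uhlenbeck sketch of core affect:  dx = beta * (mu - x) dt + sigma dW
# mu    ~ attractor point (home base of affect)
# beta  ~ attractor strength (pull back toward home base)
# sigma ~ reactivity to biopsychosocial inputs
mu, beta, sigma = 2.0, 0.8, 1.2
dt, n = 0.1, 2_000

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = x[t - 1] + beta * (mu - x[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# The stationary variance of an OU process is sigma^2 / (2 * beta)
print(x.mean(), x.var(), sigma**2 / (2 * beta))
```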
A Network Flow Approach to the Initial Skills Training Scheduling Problem
2007-12-01
include (but are not limited to) queuing theory, stochastic analysis and simulation. After the demand schedule has been estimated, it can be ...software package has already been purchased and is in use by AFPC, AFPC has requested that the new algorithm be programmed in this language as well ...the discussed outputs from those schedules. Required Inputs A single input file details the students to be scheduled as well as the courses
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study, using a path-integral formulation in structured linear demographic models, that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking account of the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of stochastic control theory in the optimal life schedule problem. There is, however, no such theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems to unify the control theory of internal stochasticity into linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in one case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models.
Modeling of transport phenomena in tokamak plasmas with neural networks
Meneghini, Orso; Luna, Christopher J.; Smith, Sterling P.; ...
2014-06-23
A new transport model that uses neural networks (NNs) to yield electron and ion heat flux profiles has been developed. Given a set of local dimensionless plasma parameters similar to the ones that the highest-fidelity models use, the NN model is able to efficiently and accurately predict the ion and electron heat transport profiles. As a benchmark, a NN was built, trained, and tested on data from the 2012 and 2013 DIII-D experimental campaigns. It is found that the NN can capture the experimental behavior over the majority of the plasma radius and across a broad range of plasma regimes. Although each radial location is calculated independently from the others, the heat flux profiles are smooth, suggesting that the solution found by the NN is a smooth function of the local input parameters. This result supports the evidence of a well-defined, non-stochastic relationship between the input parameters and the experimentally measured transport fluxes. Finally, the numerical efficiency of this method, requiring only a few CPU-μs per data point, makes it ideal for scenario development simulations and real-time plasma control.
Supplier Short Term Load Forecasting Using Support Vector Regression and Exogenous Input
NASA Astrophysics Data System (ADS)
Matijaš, Marin; Vukićević, Milan; Krajcar, Slavko
2011-09-01
In power systems, the task of load forecasting is important for keeping equilibrium between production and consumption. With the liberalization of electricity markets, the task of load forecasting changed because each market participant has to forecast its own load. The consumption of end-consumers is stochastic in nature. Due to competition, suppliers are not in a position to transfer their costs to end-consumers; it is therefore essential to keep the forecasting error as low as possible. Numerous papers investigate load forecasting from the perspective of the grid or production planning. We research forecasting models from the perspective of a supplier. In this paper, we investigate different combinations of exogenous inputs on simulated supplier loads and show that using points of delivery as a feature for Support Vector Regression leads to lower forecasting error, while adding the number of customers in different datasets does the opposite.
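A minimal sketch of the modelling setup follows, using scikit-learn's SVR with lagged load plus exogenous inputs. The synthetic data, feature choices, and hyperparameters are illustrative assumptions standing in for the paper's simulated supplier loads.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(31)

# Synthetic hourly supplier load driven by temperature and points of delivery
n = 2000
temp = 10 + 8 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.standard_normal(n)
pods = rng.integers(800, 1200, size=n)       # points of delivery served each hour
load = 50 + 0.04 * pods + 1.5 * np.maximum(18 - temp, 0) + 2 * rng.standard_normal(n)

# Feature matrix: lag-24 load plus the two exogenous inputs
X = np.column_stack([load[:-24], temp[24:], pods[24:]])
y = load[24:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])
mape = np.mean(np.abs(model.predict(X[split:]) - y[split:]) / y[split:]) * 100
print(f"test MAPE: {mape:.2f}%")
```

Dropping the pods column from X and refitting is the kind of comparison the paper makes when judging whether an exogenous input helps or hurts.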
Onset of η-nuclear binding in a pionless EFT approach
NASA Astrophysics Data System (ADS)
Barnea, N.; Bazak, B.; Friedman, E.; Gal, A.
2017-08-01
ηNNN and ηNNNN bound states are explored in stochastic variational method (SVM) calculations within a pionless effective field theory (EFT) approach at leading order. The theoretical input consists of regulated NN and NNN contact terms, and a regulated energy-dependent ηN contact term derived from coupled-channel models of the N*(1535) nucleon resonance. A self-consistency procedure is applied to deal with the energy dependence of the ηN subthreshold input, resulting in a weak dependence of the calculated η-nuclear binding energies on the EFT regulator. It is found, in terms of the ηN scattering length aηN, that the onset of binding η 3He requires a minimal value of Re aηN close to 1 fm, yielding then a few MeV η binding in η 4He. The onset of binding η 4He requires a lower value of Re aηN, but exceeding 0.7 fm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture-influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article aims to present a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool which will allow the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and which can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison to assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
Folguera-Blasco, Núria; Cuyàs, Elisabet; Menéndez, Javier A; Alarcón, Tomás
2018-03-01
Understanding the control of epigenetic regulation is key to explain and modify the aging process. Because histone-modifying enzymes are sensitive to shifts in availability of cofactors (e.g. metabolites), cellular epigenetic states may be tied to changing conditions associated with cofactor variability. The aim of this study is to analyse the relationships between cofactor fluctuations, epigenetic landscapes, and cell state transitions. Using Approximate Bayesian Computation, we generate an ensemble of epigenetic regulation (ER) systems whose heterogeneity reflects variability in cofactor pools used by histone modifiers. The heterogeneity of epigenetic metabolites, which operates as regulator of the kinetic parameters promoting/preventing histone modifications, stochastically drives phenotypic variability. The ensemble of ER configurations reveals the occurrence of distinct epi-states within the ensemble. Whereas resilient states maintain large epigenetic barriers refractory to reprogramming cellular identity, plastic states lower these barriers, and increase the sensitivity to reprogramming. Moreover, fine-tuning of cofactor levels redirects plastic epigenetic states to re-enter epigenetic resilience, and vice versa. Our ensemble model agrees with a model of metabolism-responsive loss of epigenetic resilience as a cellular aging mechanism. Our findings support the notion that cellular aging, and its reversal, might result from stochastic translation of metabolic inputs into resilient/plastic cell states via ER systems.
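The Approximate Bayesian Computation step used above can be illustrated with plain rejection sampling: draw parameters from the prior, simulate, and keep draws whose summary statistic lands within a tolerance of the observation. Everything in the sketch below (the toy model, prior, and tolerance) is an illustrative assumption, not the paper's epigenetic regulation system.

```python
import numpy as np

rng = np.random.default_rng(37)

def simulate(k, n=200):
    """Toy model: fraction of modified histones after unit time at rate k."""
    return rng.binomial(n, 1 - np.exp(-k)) / n

obs = simulate(k=0.7)                  # pretend this is the measured summary statistic

n_draws, tol = 50_000, 0.02
k_prior = rng.uniform(0.0, 3.0, size=n_draws)      # uniform prior on the rate
accepted = np.array([k for k in k_prior if abs(simulate(k) - obs) < tol])

print(f"accepted {len(accepted)} draws; posterior mean k = {accepted.mean():.3f}")
```

The accepted draws approximate the posterior over the rate; in the paper's setting, the ensemble of accepted parameter configurations is itself the object of study.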
PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine
NASA Astrophysics Data System (ADS)
D'Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.
2012-04-01
PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The domain of simulation is discretized using a square regular grid with a cell size of 20x20 meters. The model uses high-resolution information such as elevation and type of vegetation on the ground. Input parameters are wind direction, wind speed and the ignition point of the fire. The simulation of fire propagation is done via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using the vegetation types of the two adjacent cells, and modified by taking into account the slope between them, wind direction and wind speed. The simulation is synchronous, and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process. Each of them provides a map of the area burned at each simulation time step. PROPAGATOR simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability of each cell of the domain to be affected by the fire at each time step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. PROPAGATOR is available as a module in the OWIS (Opera Web Interfaces) system. The model runs on a dedicated server and is remotely controlled from the client program, NAZCA. Ignition points of the simulation can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output of the simulation is shown in real time during the simulation, and is also available off-line and on the DEWETRA system, a Web GIS-based system for environmental risk assessment, developed according to OGC-INSPIRE standards. The model execution is very fast, providing a full forecast for the scenario in a few minutes, and can be useful for real-time active fire management and suppression.
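The propagation mechanism lends itself to a compact sketch: a stochastic cellular automaton in which each burning cell ignites its eight neighbours with a probability modulated by wind alignment, and per-cell burn probabilities are taken as relative frequencies over independent realizations. The Python below is a simplified stand-in for PROPAGATOR (no vegetation map, slope, or cell-crossing times; all parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(41)

def burn_probability(p_base=0.25, size=101, n_steps=80, n_runs=10,
                     wind=(0, 1), wind_gain=0.3):
    """Stochastic CA fire spread from a central ignition point.
    Returns per-cell burn probability as a relative frequency over n_runs."""
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    freq = np.zeros((size, size))
    for _ in range(n_runs):
        burning = np.zeros((size, size), dtype=bool)
        burnt = np.zeros((size, size), dtype=bool)
        burning[size // 2, size // 2] = True
        for _ in range(n_steps):
            if not burning.any():
                break                          # self-extinction of this realization
            new = np.zeros((size, size), dtype=bool)
            for i, j in zip(*np.nonzero(burning)):
                for di, dj in offsets:
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < size and 0 <= nj < size):
                        continue
                    if burnt[ni, nj] or burning[ni, nj]:
                        continue
                    # base probability boosted or damped by alignment with the wind
                    p = p_base * (1.0 + wind_gain * (di * wind[0] + dj * wind[1]))
                    if rng.random() < p:
                        new[ni, nj] = True
            burnt |= burning
            burning = new
        freq += burnt
    return freq / n_runs

prob = burn_probability()
print(prob.max(), (prob > 0.5).sum())          # the ignition cell has probability 1
```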
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution, and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds important for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
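The compound random process at the core of such a parameterization is straightforward to sketch: draw a Poisson number of clouds per grid box and sum per-cloud mass fluxes drawn from an exponential distribution (the simple Boltzmann case; the generalized distribution found for shallow cumuli would replace it). Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(43)

def gridbox_mass_flux(mean_n_clouds, mean_flux=0.01, n_samples=20_000):
    """Total cloud-base mass flux per grid box: Poisson number of clouds, each
    with an exponentially distributed mass flux (Craig-Cohen Boltzmann case)."""
    n = rng.poisson(mean_n_clouds, size=n_samples)
    return np.array([rng.exponential(mean_flux, size=k).sum() for k in n])

# Shrinking the grid box lowers the expected cloud count and broadens the
# normalized sub-grid distribution: relative std = sqrt(2 / <N>)
for n_bar in (100, 10, 1):
    M = gridbox_mass_flux(n_bar)
    print(f"<N> = {n_bar:3d}: mean = {M.mean():.4f}, rel. std = {M.std() / M.mean():.2f}")
```

The widening spread as the mean cloud count drops is exactly the scale-adaptivity property noted at the end of the abstract.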
Observers Exploit Stochastic Models of Sensory Change to Help Judge the Passage of Time
Ahrens, Misha B.; Sahani, Maneesh
2011-01-01
Sensory stimulation can systematically bias the perceived passage of time [1–5], but why and how this happens is mysterious. In this report, we provide evidence that such biases may ultimately derive from an innate and adaptive use of stochastically evolving dynamic stimuli to help refine estimates derived from internal timekeeping mechanisms [6–15]. A simplified statistical model based on probabilistic expectations of stimulus change derived from the second-order temporal statistics of the natural environment [16, 17] makes three predictions. First, random noise-like stimuli whose statistics violate natural expectations should induce timing bias. Second, a previously unexplored obverse of this effect is that similar noise stimuli with natural statistics should reduce the variability of timing estimates. Finally, this reduction in variability should scale with the interval being timed, so as to preserve the overall Weber law of interval timing. All three predictions are borne out experimentally. Thus, in the context of our novel theoretical framework, these results suggest that observers routinely rely on sensory input to augment their sense of the passage of time, through a process of Bayesian inference based on expectations of change in the natural environment. PMID:21256018
Stochastic Lanchester Air-to-Air Campaign Model: Model Description and Users Guides
2009-01-01
STOCHASTIC LANCHESTER AIR-TO-AIR CAMPAIGN MODEL: MODEL DESCRIPTION AND USERS GUIDES—2009. Report PA702T1, Robert V. Hemm Jr. and David A. Lee, LMI, January 2009. Executive Summary: This report documents the latest version of the Stochastic Lanchester Air-to-Air Campaign Model (SLAACM), developed by LMI for
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and the efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and the run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA; the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) or approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods used to improve the precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency when using compiled languages are best addressed via thorough documentation and model validation.
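The antithetic-variates idea used in the study is easy to demonstrate; the sketch below uses a deliberately toy outcome function in place of the UKPDS patient-level model, so the numbers are illustrative rather than a reproduction of the paper's results.

```python
import numpy as np

rng = np.random.default_rng(2)

def qaly(u):
    """Toy stand-in for one patient-level simulation: maps a uniform draw
    to a discounted QALY total (monotone in u, like many model outcomes)."""
    return 10.0 * np.sqrt(u)

n = 10_000
u = rng.random(n)
plain = qaly(u)                          # ordinary Monte Carlo draws
anti = 0.5 * (qaly(u) + qaly(1.0 - u))   # antithetic pairs reuse the draws

# For monotone outcomes the pair members are negatively correlated,
# so the estimator of the mean has a smaller variance.
print("plain Monte Carlo variance :", plain.var() / n)
print("antithetic pair variance   :", anti.var() / n)
```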
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models while maintaining their computational tractability. Comparative case studies against two deterministic approaches, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts, are conducted in low and high wind penetration scenarios to highlight the advantages of the proposed methodology. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
Processing in (linear) systems with stochastic input
NASA Astrophysics Data System (ADS)
Nutu, Catalin Silviu; Axinte, Tiberiu
2016-12-01
The paper provides a different approach to real-world systems, such as the micro and macro systems of our real life, where humans have little or no influence on the system, either not knowing the rules of the respective system or not knowing its input, and thus remaining mainly a spectator of the system's output. In such a system, the input and the laws ruling the system could only be "guessed", based on intuition or previous knowledge of the analyzer of the respective system. But, as we show in the paper, there is also another, more theoretical and hence more scientific way to approach real-world systems, based mostly on the theory related to Schrödinger's equation and its associated wave function, as well as on quantum mechanics. The main results of the paper concern the use of Schrödinger's equation and related theory, together with quantum mechanics, in modeling real-life and real-world systems.
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor
2018-02-01
Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
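The surrogate-accelerated workflow can be summarized with a generic sketch (Python, scikit-learn): a cheap bagged-tree emulator stands in for the BMARS model of the paper, the `forward` function is a placeholder for a MODFLOW run, and all priors, observations and step sizes are hypothetical.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

def forward(theta):
    """Placeholder for the expensive model (e.g. a MODFLOW head at a well)."""
    return np.sin(theta[0]) + 0.5 * theta[1] ** 2

# 1. Train a fast bagged-tree emulator on a modest design of forward runs
#    (standing in for the BMARS surrogate of the paper).
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.array([forward(t) for t in X])
surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)

# 2. Random-walk Metropolis on the surrogate likelihood (flat prior on the box).
obs, noise_sd = 0.7, 0.1
def log_post(theta):
    pred = surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * ((pred - obs) / noise_sd) ** 2

theta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal(2)
    if np.all(np.abs(prop) <= 2.0):
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
    chain.append(theta.copy())   # posterior samples of the parameters
```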
Li, Yongming; Sui, Shuai; Tong, Shaocheng
2017-02-01
This paper deals with the problem of adaptive fuzzy output feedback control for a class of stochastic nonlinear switched systems. The controlled system in this paper possesses unmeasured states, completely unknown nonlinear system functions, unmodeled dynamics, and arbitrary switchings. A state observer which does not depend on the switching signal is constructed to tackle the unmeasured states. Fuzzy logic systems are employed to identify the completely unknown nonlinear system functions. Based on the common Lyapunov stability theory and the stochastic small-gain theorem, a new robust adaptive fuzzy backstepping stabilization control strategy is developed. The closed-loop system is proved to be input-to-state practically stable in probability. Simulation results are given to verify the effectiveness of the proposed fuzzy adaptive control scheme.
Stochastic modelling of microstructure formation in solidification processes
NASA Astrophysics Data System (ADS)
Nastac, Laurentiu; Stefanescu, Doru M.
1997-07-01
To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the evolution of the fraction of solid, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of the stochastic approach is that it gives a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-01-01
In this paper, we examine the effect of changing temperature points on MEMS-based inertial sensor random error. We collect static data at different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to estimate the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models developed at different temperature points in the filtering stage, using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
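For readers unfamiliar with Gauss-Markov error models, a minimal sketch follows; the first-order recursion below is a common simplification of the AR-based GM models in the paper, and the correlation time, standard deviation and sampling interval are placeholders rather than the identified ADIS16364 values.

```python
import numpy as np

rng = np.random.default_rng(4)

def gauss_markov(n, dt, tau, sigma):
    """First-order Gauss-Markov sequence with correlation time tau and
    steady-state standard deviation sigma (a common sensor-bias model)."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi ** 2)   # driving noise keeps it stationary
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.standard_normal()
    return x

# Temperature dependence enters through (tau, sigma), e.g. values
# identified at each thermal-chamber set point (placeholders here).
bias_20C = gauss_markov(n=60_000, dt=0.01, tau=150.0, sigma=2e-3)
```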
Wang, Qi; Xie, Zhiyi; Li, Fangbai
2015-11-01
This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs in agricultural soils on the local scale, using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF). The heavy metal pollution sources were quantitatively assessed, and the results illustrate the suitability of the ensemble models for this assessment. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region, and that SGB performed better than RF.
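A minimal scikit-learn sketch of the two ensemble learners is given below; the feature matrix is synthetic stand-in data, not the survey data of the study, and the `subsample` argument is what makes gradient boosting "stochastic".

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Hypothetical predictors (parent material, pH, distance to roads, inputs...)
# and a response (topsoil Pb); a real study would use the survey data.
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=300)

# subsample < 1 turns gradient boosting into *stochastic* gradient boosting.
sgb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                subsample=0.5, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)

for name, model in (("SGB", sgb), ("RF", rf)):
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))

# Relative variable importances are the starting point for apportioning
# natural versus anthropogenic contributions.
print(sgb.fit(X, y).feature_importances_)
```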
NASA Astrophysics Data System (ADS)
Naseri Kouzehgarani, Asal
2009-12-01
Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts and the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms, that is, situations that would not develop into an actual conflict or would resolve naturally in the appropriate time horizon, thereby introducing a measure of probabilistic uncertainty into any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions; in other words, an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generate the sequence of observations. The problem of self-separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the simultaneous enabling of several actions. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data are evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probability distribution over the execution traces of the hybrid hidden Markov model, which in turn enables the prediction of aircraft states based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft move along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
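The mode-detection core of such an approach can be illustrated with a plain (non-hybrid) HMM filter; the transition and observation matrices below are invented for the example and are not the tuned values of the work described above.

```python
import numpy as np

# Minimal HMM filter for flight-mode detection: hidden modes
# {level, climb/descend, turn}; the observation is a discretized
# feature of the track. All matrices are illustrative, not tuned.
A = np.array([[0.95, 0.03, 0.02],     # mode transition probabilities
              [0.05, 0.90, 0.05],
              [0.04, 0.06, 0.90]])
B = np.array([[0.80, 0.15, 0.05],     # P(observation | mode)
              [0.10, 0.80, 0.10],
              [0.10, 0.20, 0.70]])
pi = np.array([0.8, 0.1, 0.1])        # initial mode probabilities

def forward_filter(obs):
    """Forward algorithm: posterior P(mode_t | obs_1..t), normalized
    at each step to avoid numerical underflow."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by evidence
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

track = [0, 0, 1, 1, 1, 2, 2, 0]      # a toy observation sequence
print(forward_filter(track)[-1])      # current mode probabilities
```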
NASA Astrophysics Data System (ADS)
Chowdhury, Debashish
2013-08-01
A molecular motor is made of either a single macromolecule or a macromolecular complex. Just like their macroscopic counterparts, molecular motors “transduce” input energy into mechanical work. All the nano-motors considered here operate under isothermal conditions far from equilibrium. Moreover, one of the possible mechanisms of energy transduction, called the Brownian ratchet, does not even have any macroscopic counterpart. But a molecular motor is not synonymous with a Brownian ratchet; a large number of molecular motors execute a noisy power stroke rather than operating as Brownian ratchets. We review not only the structural design and stochastic kinetics of individual single motors, but also their coordination, cooperation and competition, as well as the assembly of multi-module motors in various intracellular kinetic processes. Although all the motors considered here execute mechanical movements, efficiency and power output are not necessarily good measures of the performance of some motors. Among the intracellular nano-motors, we consider the porters, sliders and rowers, pistons and hooks, exporters, importers, packers and movers, as well as those that also synthesize, manipulate and degrade “macromolecules of life”. We review mostly the quantitative models for the kinetics of these motors. We also describe several of those motor-driven intracellular stochastic processes for which quantitative models are yet to be developed. In part I, we discuss mainly the methodology and the generic models of various important classes of molecular motors. In part II, we review many specific examples emphasizing the unity of the basic mechanisms as well as the diversity of operations arising from the differences in their detailed structure and kinetics. Multi-disciplinary research is presented here from the perspective of physicists.
Stochastic effects in a seasonally forced epidemic model
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.
2010-10-01
The interplay of seasonality, the system’s nonlinearities and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects, such as switching between coexisting attractors induced by stochasticity, often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases, are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient for understanding the observed incidence patterns.
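A stochastic simulation of this kind of model is straightforward to sketch; the Python fragment below runs a Gillespie-style SEIR simulation with sinusoidally forced transmission, holding rates constant between events (a common approximation when the forcing is slow), with simplified demography and an implicit recovered class. Parameter values are illustrative, not the measles parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def seir_forced(N=100_000, beta0=1.5, eps=0.1, sigma=1/8, gamma=1/5,
                mu=1/(50*365), t_end=730.0):
    """Event-driven SEIR with sinusoidally forced transmission beta(t).
    Rates are held fixed between events; R is implicit (R = N-S-E-I)."""
    S, E, I = N // 10, 100, 100
    t, series = 0.0, []
    while t < t_end and (E + I) > 0:
        beta = beta0 * (1.0 + eps * np.cos(2 * np.pi * t / 365.0))
        rates = np.array([beta * S * I / N,   # infection  S -> E
                          sigma * E,          # incubation E -> I
                          gamma * I,          # recovery   I -> R
                          mu * N, mu * S])    # birth -> S, death of an S
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # time to next event
        event = rng.choice(5, p=rates / total)
        if   event == 0: S -= 1; E += 1
        elif event == 1: E -= 1; I += 1
        elif event == 2: I -= 1
        elif event == 3: S += 1
        else:            S -= 1
        series.append((t, I))
    return series

trace = seir_forced()   # (time, infectives) pairs for one realization
```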
NASA Astrophysics Data System (ADS)
Palán, Ladislav; Punčochář, Petr
2017-04-01
Viewed from a worldwide perspective, flooding has caused over 460,000 fatalities and serious material damage in the last 50 years. The combined economic loss from the ten costliest flood events of the same period exceeds 300bn USD in present value. Locally, in Brazil, flood is the most damaging natural peril, with an alarming increase in event frequencies, as 5 of the 10 biggest flood losses ever recorded have occurred after 2009. The amount of economic and insured losses caused by various flood types was the key driver of the local probabilistic flood model development. Considering the area of Brazil (the 5th biggest country in the world) and the scattered distribution of insured exposure, the domain covered by the model was limited to the entire state of São Paulo and 53 additional regions. The model quantifies losses on approx. 90% of the exposure (for regular property lines) of key insurers. Based on detailed exposure analysis, Impact Forecasting has developed this tool using long-term local hydrological data series (Agencia Nacional de Aguas) from riverine gauge stations and a digital elevation model (Instituto Brasileiro de Geografia e Estatística). To provide the most accurate representation of the local hydrological behaviour needed for the probabilistic simulation, the hydrological data processing focused on frequency analyses of seasonal peak flows, done by fitting an appropriate extreme-value statistical distribution, and on the generation of a stochastic event set consisting of synthetically derived flood events respecting the realistic spatial and frequency patterns visible in the entire period of hydrological observation. Data were tested for homogeneity, consistency and the occurrence of any significant breakpoint in the time series, so that either the entire observation record or only subparts of it were used for further analysis. The realistic spatial patterns of stochastic events are reproduced through the innovative use of a d-vine copula scheme to generate the probabilistic flood event set. The derived design flows for selected rivers inside the model domain were used as input for 2-dimensional hydrodynamic inundation modelling (using the tool TUFLOW by BMT WBM) on a mesh size of 30 x 30 metres. Outputs from the inundation modelling and the stochastic event set were implemented in Aon Benfield's platform ELEMENTS, developed and managed internally by Impact Forecasting, Aon Benfield's internal catastrophe model development centre. The model was designed to evaluate the potential financial impact of fluvial flooding on portfolios of insurance and/or reinsurance companies. The structure of the presented model follows the typical scheme of a financial loss catastrophe model and combines hazard with exposure and vulnerability to produce potential financial loss, expressed in the form of a loss exceedance probability curve and many other insured perspectives, such as average annual loss and event or quantile loss tables. The model can take financial inputs as well as provide a split of results for an exactly specified location or related higher administrative units: municipalities and 5-digit postal codes.
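The frequency-analysis step can be sketched with scipy: fit an extreme-value distribution to a gauge's seasonal peak-flow series and read off design flows at the return periods of interest. The peak series below is synthetic; real input would come from the ANA gauge records.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Hypothetical annual peak flows for one gauge (m^3/s).
peaks = rng.gumbel(loc=800.0, scale=250.0, size=60)

# Fit a generalized extreme value (GEV) distribution to the peaks.
shape, loc, scale = genextreme.fit(peaks)

# Design flows for the return periods driving the stochastic event set.
for T in (10, 50, 100, 500):
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:4d}-year flow: {q:8.1f} m^3/s")
```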
An autonomous molecular computer for logical control of gene expression.
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2004-05-27
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems. Recently, simple molecular-scale autonomous programmable computers were demonstrated allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug.
Hu, Yuanan; Cheng, Hefa
2016-07-01
Quantification of the contributions from anthropogenic sources to soil heavy metal loadings on regional scales is challenging because of the heterogeneity of soil parent materials and the high variability of anthropogenic inputs, especially for species that are primarily of lithogenic origin. To this end, we developed a novel method for apportioning the contributions of natural and anthropogenic sources by combining sequential extraction and stochastic modeling, and applied it to investigate the heavy metal pollution in the surface soils of the Pearl River Delta (PRD) in southern China. On average, 45-86% of Zn, Cu, Pb, and Cd were present in the acid-soluble, reducible, and oxidizable fractions of the surface soils, while only 12-24% of Ni, Cr, and As were partitioned in these fractions. The anthropogenic contributions to the heavy metals in the non-residual fractions, even for the species dominated by natural sources, could be identified and quantified by conditional inference trees. Combination of sequential extraction, Kriging interpolation, and stochastic modeling reveals that approximately 10, 39, 6.2, 28, 7.1, 15, and 46% of the As, Cd, Cr, Cu, Ni, Pb, and Zn, respectively, in the surface soils of the PRD were contributed by anthropogenic sources. These results were in general agreement with those obtained through subtraction of the regional soil metal background from total loadings, and with the soil metal inputs through atmospheric deposition. In the non-residual fractions of the surface soils, the anthropogenic contributions to As, Cd, Cr, Cu, Ni, Pb, and Zn were 48, 42, 50, 51, 49, 24, and 70%, respectively.
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
NASA Astrophysics Data System (ADS)
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate the change in behavior of the very recently developed stochastic model of Hazra et al.
NASA Astrophysics Data System (ADS)
Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar
2015-04-01
The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often use uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation model approaches, one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), and one downscaling approach from a regional climate model, are provided for three catchments with different sewer system characteristics in different climate regions of Germany: Hamburg (northern Germany), with a maritime climate, mean annual rainfall of 770 mm, a combined sewer system of 1,729 km length (city centre of Hamburg) and a storm water sewer system of 168 km length (Hamburg-Harburg); Brunswick (Lower Saxony, northern Germany), with a transitional climate from maritime to continental, mean annual rainfall of 618 mm, sewer system length of 278 km, connected impervious area of 379 ha and a height difference of 27 m; and Freiburg im Breisgau (southern Germany), with a Central European transitional climate, mean annual rainfall of 908 mm, sewer system length of 794 km, connected impervious area of 1,546 ha and a height difference of 284 m. Hydrodynamic models are set up for each catchment to simulate rainfall-runoff processes in the sewer systems. Long-term event time series are extracted from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment and from observed gauge rainfall (reference rainfall), according to national hydraulic design standards. The synthetic and reference long-term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable for generating high-resolution rainfall series, and do they produce, in combination with numerical rainfall-runoff models, valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results, depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. A stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
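For context, a minimal simulation of the underlying Kirman herding model (without the variable event time scale that is the paper's contribution) might look as follows; the `eps` and `h` values are illustrative idiosyncratic-switching and herding rates.

```python
import numpy as np

rng = np.random.default_rng(8)

def kirman(N=100, eps=0.002, h=0.01, n_steps=200_000):
    """Kirman's herding model: an agent switches opinion at a small
    idiosyncratic rate eps plus a herding term proportional to the
    number of agents currently holding the opposite opinion."""
    k = N // 2                                    # agents in state 1
    path = np.empty(n_steps)
    for t in range(n_steps):
        p_up = (N - k) * (eps + h * k) / N        # one 0 -> 1 switch
        p_dn = k * (eps + h * (N - k)) / N        # one 1 -> 0 switch
        u = rng.random()
        if u < p_up:
            k += 1
        elif u < p_up + p_dn:
            k -= 1
        path[t] = k / N
    return path

x = kirman()   # fraction in state 1; bimodal when eps/h is small
```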
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-01
Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the "five Rs" (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider "stem-like cancer cells" (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. In sample calculations with linear quadratic parameters α = 0.3 per Gy, α∕β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are the ones appropriate for individualized treatment protocols, with the smaller values relevant only to protocols for a heterogeneous patient population. On that assumption, boosting is predicted to be highly effective. Front boosting, apart from practical advantages and a possible advantage as regards iatrogenic second cancers, also probably gives a slightly higher TCP than back boosting. If the total number of SLCC at the start of treatment can be measured even roughly, it will provide a highly sensitive way of discriminating between various models and parameter choices. Updated mathematical methods for calculating repopulation allow credible generalizations of earlier results.
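The Poissonian TCP backbone of such models is compact enough to show inline; the sketch below uses the abstract's α = 0.3 per Gy and α/β = 10 Gy but deliberately omits repopulation and the intraniche SLCC-SLCC interactions that are the paper's subject, and the initial cell number is hypothetical.

```python
import numpy as np

def tcp(n0, d, n_frac, alpha=0.3, beta=0.03):
    """Poissonian tumour-control probability: n0 stem-like cells, n_frac
    fractions of dose d, linear-quadratic kill, no repopulation and no
    intraniche interactions (both of which the paper adds)."""
    surviving_fraction = np.exp(-alpha * d - beta * d * d)   # per fraction
    expected_survivors = n0 * surviving_fraction ** n_frac
    return np.exp(-expected_survivors)

# alpha/beta = 10 Gy as in the sample calculations; n0 is hypothetical.
print(tcp(1e6, 2.0, 30))   # 60 Gy in 2 Gy fractions
print(tcp(1e6, 2.0, 32))   # 64 Gy "boost" protocol
```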
Liu, Meng; Wang, Ke
2010-12-07
This is a continuation of our paper [Liu, M., Wang, K., 2010. Persistence and extinction of a stochastic single-species model under regime switching in a polluted environment, J. Theor. Biol. 264, 934-944]. Taking both white noise and colored noise into account, a stochastic single-species model under regime switching in a polluted environment is studied. Sufficient conditions for extinction, stochastic non-persistence in the mean, stochastic weak persistence and stochastic permanence are established. The threshold between stochastic weak persistence and extinction is obtained. The results show that different types of noise have different effects on the survival results.
Hybrid approaches for multiple-species stochastic reaction–diffusion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spill, Fabian, E-mail: fspill@bu.edu; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Guerrero, Pilar
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean-field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface between the two domains occupies exactly one lattice site and is chosen such that the mean-field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction that are not observable in the mean-field description, and is significantly faster to simulate on a computer than the pure stochastic model. Highlights: a novel hybrid stochastic/deterministic reaction–diffusion simulation method is given; it can massively speed up stochastic simulations while preserving stochastic effects; it can handle multiple reacting species; and it can handle moving boundaries.
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model
NASA Astrophysics Data System (ADS)
Mankin, Romi; Paekivi, Sander
2018-01-01
The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent αc≈0.402 , which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model at internal and external noises are also discussed.
NASA Astrophysics Data System (ADS)
Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina
2017-09-01
A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood method. The numerical method of Euler-Maruyama is employed to solve the model numerically. The efficiency of the stochastic model is measured by comparing the simulated result with the experimental data.
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to that of a model employing the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings based on graphical comparison revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates the proposed model, using a mixed exponential distribution, is the best choice for the generation of synthetic data for ungauged sites or for sites with insufficient data within the limits of the fitted region.
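The mixed exponential proposal is simple to state in code: rain-cell intensities are drawn from a two-component exponential mixture, whose heavier tail (relative to a single exponential or a Weibull with the same mean) is what improves the fit. All parameter values below are illustrative, not the fitted Damansara values.

```python
import numpy as np

rng = np.random.default_rng(9)

def mixed_exponential(n, w=0.7, mu1=1.2, mu2=8.0):
    """Rain-cell intensity draws from a two-component mixed exponential:
    with probability w a 'light' cell of mean mu1 (mm/h), otherwise a
    'heavy' cell of mean mu2.  Parameter values are illustrative only."""
    heavy = rng.random(n) >= w
    x = rng.exponential(mu1, size=n)
    x[heavy] = rng.exponential(mu2, size=heavy.sum())
    return x

cells = mixed_exponential(100_000)
# Compare the bulk with the extreme upper tail of the simulated cells.
print(cells.mean(), np.quantile(cells, 0.999))
```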
NASA Astrophysics Data System (ADS)
Neuhausler, R.; Robinson, M.; Bruna, M.
2017-12-01
Over the last 60 years, we have seen an increased number of ecological regime shifts in tropical coastal zones, from coral reefs to macroalgae-dominated states, as a result of natural and anthropogenic stresses. However, these shifts are not always immediate: macroalgae are generally present in coral reefs, with their distribution regulated by herbivorous fish. This is especially true in Moorea, French Polynesia, where macroalgae are shown to flourish in spaces that provide refuge from roaming herbivores. While there are currently modeling efforts to project ecological regime shifts in Moorea, temporal deterministic models have been utilized, which fail to capture metastability between multiple steady states and can have issues when dealing with very small populations. To address these concerns, we build on these models to account for spatial variations and individual organisms, as well as stochasticity. Our model can project the percent cover of coral, macroalgae, and algal turf as a function of herbivorous grazers, water quality, and coral demographics. Grazers, included as individual fish (particles), evolve according to a kinetic model and interact with neighbouring benthic assemblages, represented as nodes. Water quality and coral demographics are input parameters that can vary over time, allowing our model to be run for temporally changing scenarios and to be adjusted for different reefs. We plan to engage with previous Moorea reef resilience models through a comparative analysis of our model's outcomes and existing Moorea data. Coupling projective models with available data is useful for informing environmental policy and advancing the modeling field.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. The series are then compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are in principle suitable, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
Welch, M C; Kwan, P W; Sajeev, A S M
2014-10-01
Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas, including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of the Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of the massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPM) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
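A minimal instance of the idea, under strong simplifying assumptions (a scalar input, an affine interval model, enclosure of all observations, and no outlier elimination), can be posed as a linear program; the data below are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(11)

# Synthetic observations: a noisy affine trend (illustrative only).
x = rng.uniform(0.0, 1.0, 80)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(80)

# Affine interval model [a_l + b_l*x, a_u + b_u*x]: minimize the average
# spread subject to every observation lying inside the interval.
# Decision vector z = (a_l, b_l, a_u, b_u).
ones, zeros = np.ones_like(x), np.zeros_like(x)
c = np.array([-1.0, -x.mean(), 1.0, x.mean()])      # mean spread
A = np.vstack([np.column_stack([ones, x, zeros, zeros]),     # lower <= y
               np.column_stack([zeros, zeros, -ones, -x])])  # upper >= y
b = np.concatenate([y, -y])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 4)

a_l, b_l, a_u, b_u = res.x
print("lower bound:", a_l, b_l, "  upper bound:", a_u, b_u)
```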
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed
2017-03-01
In this paper, we develop a mathematical model for a tuberculosis model with constant recruitment and varying total population size by incorporating stochastic perturbations. By constructing suitable stochastic Lyapunov functions, we establish sufficient conditions for the existence of an ergodic stationary distribution as well as extinction of the disease to the stochastic system.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
Gompertzian stochastic model with delay effect to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result with the clinical data of cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate a good fit.
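For illustration, a Milstein step for an undelayed Gompertz SDE is shown below; the drift parameterization dX = (a - b ln X) X dt + sigma X dW and all parameter values are assumptions for the sketch, with the time delay of the paper omitted.

```python
import numpy as np

rng = np.random.default_rng(10)

def gompertz_milstein(x0, a, b, sigma, dt, n):
    """Milstein scheme for the (undelayed) Gompertz SDE
        dX = (a - b*ln X) * X dt + sigma * X dW,
    a simplified stand-in for the delayed model of the paper."""
    x = np.empty(n + 1)
    x[0] = x0                      # x0 > 0; small dt keeps the path positive
    for k in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        drift = (a - b * np.log(x[k])) * x[k]
        # Milstein correction 0.5*g*g'*(dW^2 - dt) with g(x) = sigma*x.
        x[k + 1] = (x[k] + drift * dt + sigma * x[k] * dw
                    + 0.5 * sigma**2 * x[k] * (dw * dw - dt))
    return x

# Illustrative parameters; in the paper they are fitted to clinical data
# by Levenberg-Marquardt nonlinear least squares.
path = gompertz_milstein(x0=1.0, a=0.6, b=0.2, sigma=0.1, dt=0.01, n=5000)
```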
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
The Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated into the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, under the condition of high random dimensionality in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we have built the surrogate system, we can evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy with considering the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) through using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the missed physical processes of the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
Markets, Herding and Response to External Information
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2015-01-01
We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as a time-varying advertising, public perception or rumor, in favor or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German Indicator of Economic Sentiment as information input and compare our results with Germany’s leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information. PMID:26204451
Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
2016-12-01
Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
Economo, Michael N.; White, John A.
2012-01-01
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through 'stochastic synchrony' capture the variability observed in the spike output of single cells while preserving network-level organization. We extend these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs. PMID:22275859
Effects of stochastic sodium channels on extracellular excitation of myelinated nerve fibers.
Mino, Hiroyuki; Grill, Warren M
2002-06-01
The effects of the stochastic gating properties of sodium channels on the extracellular excitation properties of mammalian nerve fibers were determined by computer simulation. To reduce computation time, a hybrid multicompartment cable model was developed, including five central nodes of Ranvier containing stochastic sodium channels and 16 flanking nodes containing deterministic membrane dynamics. The excitation properties of the hybrid cable model were comparable with those of a full stochastic cable model including 21 nodes of Ranvier containing stochastic sodium channels, indicating the validity of the hybrid cable model. The hybrid cable model was used to investigate whether the excitation properties of extracellularly activated fibers were influenced by the stochastic gating of sodium channels, including spike latencies, strength-duration (SD), current-distance (IX), and recruitment properties. The stochastic properties of the sodium channels in the hybrid cable model had the greatest impact on the temporal dynamics of nerve fibers, i.e., a large variability in latencies, while they did not influence the SD, IX, or recruitment properties as compared with those of the conventional deterministic cable model. These findings suggest that inclusion of stochastic nodes is not important for model-based design of stimulus waveforms for activation of motor nerve fibers. However, in cases where temporal fine structure is important, for example in sensory neural prostheses in the auditory and visual systems, the stochastic properties of the sodium channels may play a key role in the design of stimulus waveforms.
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity-in-nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity-in-functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available in our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
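A toy illustration of the SIN scheme described above: a deterministic Boolean update followed by an independent flip of each gene with a predefined probability. The three-gene network and flip probability are invented for illustration; the SIF alternative would instead perturb the update functions themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def sin_update(state, rules, p_flip=0.01):
    """One step of a Boolean GRN under the stochasticity-in-nodes (SIN)
    model: deterministic Boolean update, then each gene's value flips
    with probability p_flip (toy network; rules are hypothetical)."""
    nxt = np.array([rule(state) for rule in rules], dtype=bool)
    flips = rng.random(nxt.size) < p_flip
    return nxt ^ flips                       # XOR applies the random flips

rules = [lambda s: s[2] and not s[1],        # gene 0: activated by 2, repressed by 1
         lambda s: s[0],                     # gene 1: activated by 0
         lambda s: not s[1]]                 # gene 2: repressed by 1
state = np.array([True, False, True])
for _ in range(5):
    state = sin_update(state, rules)
print(state)
```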
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
Analysis of a novel stochastic SIRS epidemic model with two different saturated incidence rates
NASA Astrophysics Data System (ADS)
Chang, Zhengbo; Meng, Xinzhu; Lu, Xiao
2017-04-01
This paper presents a stochastic SIRS epidemic model with two different nonlinear incidence rates under a double-epidemic asymmetrical hypothesis, and we develop a mathematical method to obtain the threshold of the stochastic epidemic model. We first investigate the boundedness and extinction of the stochastic system. Furthermore, we use Itô's formula, the comparison theorem, and some new inequality techniques for stochastic differential systems to discuss persistence in mean of the two diseases in three cases. The results indicate that stochastic fluctuations can suppress the disease outbreak. Finally, numerical simulations for different noise disturbance coefficients are carried out to illustrate the theoretical results.
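As a rough illustration of the kind of system analyzed here, the sketch below integrates a two-strain SIRS-type model with saturated incidence terms by the Euler-Maruyama method. The equations and all parameter values are illustrative guesses, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sirs(T=100.0, dt=0.01):
    """Euler-Maruyama for a stochastic SIRS with two saturated incidence
    terms beta_i*S*I_i/(1 + a_i*I_i); all parameters are illustrative."""
    Lam, mu, d, g1, g2, delta = 1.0, 0.1, 0.05, 0.2, 0.2, 0.05
    b1, b2, a1, a2 = 0.4, 0.3, 0.5, 0.5        # incidence parameters
    s1, s2, s3, s4 = 0.05, 0.05, 0.05, 0.05    # noise intensities
    S, I1, I2, R = 5.0, 1.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), 4)   # independent Wiener increments
        f1 = b1 * S * I1 / (1 + a1 * I1)       # saturated incidence, strain 1
        f2 = b2 * S * I2 / (1 + a2 * I2)       # saturated incidence, strain 2
        S  += (Lam - f1 - f2 - mu * S + delta * R) * dt + s1 * S * dW[0]
        I1 += (f1 - (mu + d + g1) * I1) * dt + s2 * I1 * dW[1]
        I2 += (f2 - (mu + d + g2) * I2) * dt + s3 * I2 * dW[2]
        R  += (g1 * I1 + g2 * I2 - (mu + delta) * R) * dt + s4 * R * dW[3]
    return S, I1, I2, R

print(simulate_sirs())
```

Sweeping the noise intensities in such a simulation is the numerical counterpart of the paper's comparison of different disturbance coefficients.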
Stochastic memory: Memory enhancement due to noise
NASA Astrophysics Data System (ADS)
Stotland, Alexander; di Ventra, Massimiliano
2012-01-01
There are certain classes of resistors, capacitors, and inductors that, when subject to a periodic input of appropriate frequency, develop hysteresis loops in their characteristic response. Here we show that the hysteresis of such memory elements can also be induced by white noise of appropriate intensity, even at very low frequencies of the external driving field. We illustrate this phenomenon using a physical model of a memory resistor realized by TiO2 thin films sandwiched between metallic electrodes and discuss under which conditions this effect can be observed experimentally. We also discuss its implications for existing memory systems described in the literature and the role of colored noise.
Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus
2017-06-01
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations, using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator; the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. We have therefore made available open-source implementations that allow one to numerically integrate the low-dimensional spike rate models, as well as the Fokker-Planck partial differential equation, in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
Internal additive noise effects in stochastic resonance using organic field effect transistor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Yoshiharu; Asakawa, Naoki; Matsubara, Kiyohiko
The stochastic resonance phenomenon, which enhances signal transmission through the application of noise, was observed in an organic field effect transistor using poly(3-hexylthiophene). The enhancement of the correlation coefficient between the input and output signals was low, and the variation of the correlation coefficient with the intensity of external noise was not remarkable, owing to internal additive noise following the nonlinear threshold response. In other words, internal additive noise plays a positive role in maintaining approximately constant signal transmission regardless of noise intensity, which can be described as "homeostatic" behavior or "noise robustness" against external noise. Furthermore, internal additive noise causes the stochastic resonance effect to emerge even in a threshold unit for which, in the absence of internal additive noise, the correlation coefficient usually decreases monotonically.
Sliding mode control-based linear functional observers for discrete-time stochastic systems
NASA Astrophysics Data System (ADS)
Singh, Satnesh; Janardhanan, Sivaramakrishnan
2017-11-01
Sliding mode control (SMC) is one of the most popular techniques for stabilising linear discrete-time stochastic systems. However, application of SMC becomes difficult when the system states are not available for feedback. This paper presents a new approach to designing an SMC-based functional observer for discrete-time stochastic systems. The functional observer is based on the Kronecker product approach. Existence conditions and a stability analysis of the proposed observer are given. The control input is estimated by a novel linear functional observer. This approach leads to a non-switching type of control, thereby eliminating the fundamental cause of chatter. Furthermore, the functional observer is designed in such a way that the effect of process and measurement noise is minimised. A simulation example is given to illustrate and validate the proposed design method.
The alpha-motoneuron pool as transmitter of rhythmicities in cortical motor drive.
Stegeman, Dick F; van de Ven, Wendy J M; van Elswijk, Gijs A; Oostenveld, Robert; Kleine, Bert U
2010-10-01
To investigate the effectiveness and frequency dependence of central-drive transmission via the alpha-motoneuron pool to the muscle, we describe a model for the simulation of alpha-motoneuron firing and of the EMG signal as a response to central drive input. The transfer in the frequency domain is investigated, and the coherence between stochastic central input and the EMG is evaluated. The transmission of central rhythmicities to the EMG signal relates to the spectral content of the latter. Coherence between central input to the alpha-motoneuron pool and the EMG signal is significant, with a coupling strength that hardly depends on frequency in the range from 1 to 100 Hz. Common central input to pairs of alpha-motoneurons strongly increases the coherence levels. The often-used rectification of the EMG signal introduces a clear frequency dependence. Oscillatory phenomena are strongly transmitted via the alpha-motoneuron pool. The motoneuron firing frequencies do play a role in the transmission gain, but do not influence the coherence levels. Rectification of the EMG signal enhances the transmission gain, but lowers coherence and introduces a strong frequency dependency; we think it should be avoided. Our findings show that rhythmicities are translated into alpha-motoneuron activity without strong non-linearities. Copyright 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Improving Project Management with Simulation and Completion Distribution Functions
NASA Technical Reports Server (NTRS)
Cates, Grant R.
2004-01-01
Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, to which most, if not all, projects are inherently subject. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation-based methodology for conducting quantitative project completion-time risk analysis, called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e. the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the ongoing NASA project to assemble the International Space Station. Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
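A minimal sketch of the core idea: simulate a toy four-activity network with stochastic duration growth and a random unplanned event, then read risk off the empirical completion distribution. The network, distributions, and probabilities are invented for illustration and are not PAST itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def completion_times(n_runs=10_000):
    """Monte Carlo completion-time distribution for a toy network
    (A and B in parallel, then C, then D); durations in days."""
    out = np.empty(n_runs)
    for i in range(n_runs):
        a = rng.triangular(8, 10, 15)    # planned 10 days, may grow to 15
        b = rng.triangular(5, 6, 12)
        c = rng.triangular(4, 5, 9)
        d = rng.triangular(2, 3, 6)
        # 10% chance of an unplanned event adding a fixed delay
        delay = 5.0 if rng.random() < 0.1 else 0.0
        out[i] = max(a, b) + c + d + delay
    return out

t = completion_times()
print(f"P(finish within 25 days) = {(t <= 25).mean():.2f}")
print(f"90th percentile = {np.percentile(t, 90):.1f} days")
```

Reading percentiles off the empirical distribution is exactly the kind of risk statement (likelihood and magnitude of lateness) the abstract attributes to the completion distribution function.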
Precursor processes of human self-initiated action.
Khalighinejad, Nima; Schurger, Aaron; Desantis, Andrea; Zmigrod, Leor; Haggard, Patrick
2018-01-15
A gradual buildup of electrical potential over motor areas precedes self-initiated movements. Recently, such "readiness potentials" (RPs) were attributed to stochastic fluctuations in neural activity. We developed a new experimental paradigm that operationalized self-initiated actions as endogenous 'skip' responses while waiting for target stimuli in a perceptual decision task. We compared these to a block of trials where participants could not choose when to skip, but were instead instructed to skip. Frequency and timing of motor action were therefore balanced across blocks, so that conditions differed only in how the timing of skip decisions was generated. We reasoned that across-trial variability of EEG could carry as much information about the source of skip decisions as the mean RP. EEG variability decreased more markedly prior to self-initiated compared to externally-triggered skip actions. This convergence suggests a consistent preparatory process prior to self-initiated action. A leaky stochastic accumulator model could reproduce this convergence given the additional assumption of a systematic decrease in input noise prior to self-initiated actions. Our results may provide a novel neurophysiological perspective on the topical debate regarding whether self-initiated actions arise from a deterministic neurocognitive process, or from neural stochasticity. We suggest that the key precursor of self-initiated action may manifest as a reduction in neural noise. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
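A sketch of the modeling idea: a leaky stochastic accumulator whose input noise amplitude decays over the trial, so threshold crossings ('skip' decisions) follow a consistent noise-reduction precursor. All parameter values are illustrative, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def skip_times(n_trials=500, dt=1e-3, k=2.0, drift=0.5,
               threshold=1.0, t_max=10.0):
    """Leaky stochastic accumulator with input noise that decays in time
    (the assumed precursor of self-initiated action)."""
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while t < t_max:
            sigma = 0.8 * np.exp(-0.3 * t)   # assumed systematic noise decay
            x += (drift - k * x) * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
            if x >= threshold:               # fluctuation crosses: 'skip' now
                times.append(t)
                break
    return np.array(times)

print(skip_times()[:5])
```

With the steady state of the drift term below threshold, crossings are noise-driven, matching the view that self-initiated action timing arises from stochastic fluctuations shaped by a systematic noise reduction.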
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memory, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the slower-evolving components (similar to approaches in climate prediction). In the face of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach to classify hydrological system components with respect to their predictability, define predictable/unpredictable patterns, extend lead time, and improve reliability of hydrological predictions based on the predictable patterns. Representation of hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system as limited by its sensitivity to both initial and boundary conditions. Predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g. variance) of the system state to a stable value. The time interval of the convergence, that is, the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess the predictability of soil moisture dynamics at the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes processes of coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow. The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity) whose statistical properties are similar to those of the corresponding series of actual data measured at the station. Beginning from the initial conditions and being forced by Monte-Carlo generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined through the time of stabilization of the variance of that characteristic between the trajectories, as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, and to determine the effects of soil hydraulic properties and soil freezing processes on predictability. In particular, it was found that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of soil under both unfrozen and frozen conditions. Even if the initial conditions are "well established", the assessed predictability of water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity to utilize the autumn water content of soil as a predictor for spring snowmelt runoff in the region under consideration.
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields, including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing, in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and modeling DST as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results show that the relative error in estimating DST by the proposed methodology was about 6%, while with MLP and ANFIS it was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance compared to the proposed methodology, two hybrid models were implemented: the weights of the MLP and the membership functions of the ANFIS were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform (Wavelet-MLP and Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting DST time series shows that it attains the lowest Akaike Information Criterion (AIC) value, which considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to the complex nonlinear methods that are normally employed to examine DST.
Green roofs' retention performances in different climates
NASA Astrophysics Data System (ADS)
Viola, Francesco; Hellies, Matteo; Deidda, Roberto
2017-04-01
The ongoing process of global urbanization contributes to increasing stormwater runoff from impervious surfaces, also threatening water quality. Green roofs have proved to be an innovative stormwater management tool for partially restoring the natural state, enhancing interception, infiltration and evapotranspiration fluxes. The amount of water retained within green roofs depends mainly on both soil properties and climate. The evaluation of the retained water is not trivial, since it depends on the stochastic soil moisture dynamics. The aim of this work is to explore the performance of green roofs, in terms of water retention, as a function of their depth, considering different climate regimes. The role of climate in driving water retention is represented mainly by rainfall and potential evapotranspiration dynamics, which are simulated by a simple conceptual weather generator at the daily time scale. The model is able to describe seasonal (in-phase and counter-phase) and stationary behaviors of climatic forcings. Model parameters have been estimated on more than 20,000 historical time series retrieved worldwide. Exemplifying cases are discussed for five different climate scenarios, changing the amplitude and/or the phase of the daily mean rainfall and evapotranspiration forcings. The first scenario represents stationary climates; in two other cases the daily mean rainfall or the potential evapotranspiration evolves sinusoidally. In the latter two cases, we simulated in-phase or counter-phase conditions. The stochastic forcings have then been used as input to a simple conceptual hydrological model which simulates soil moisture dynamics, evapotranspiration fluxes, runoff and leakage from the soil pack at the daily time scale. For several combinations of annual rainfall and potential evapotranspiration, the analysis allowed assessing green roofs' retaining capabilities at the annual time scale. The provided abacus allows a first approximation of the possible hydrological benefits deriving from the implementation of intensive or extensive green roofs in different world areas, i.e. less input to sewer systems.
NASA Astrophysics Data System (ADS)
Kahraman, Gokalp
We examine the performance of optical communication systems using erbium-doped fiber amplifiers (OFAs) and avalanche photodiodes (APDs) including nonlinear and transient effects in the former and transient effects in the latter. Transient effects become important as these amplifiers are operated at very high data rates. Nonlinear effects are important for high gain amplifiers. In most studies of noise in these devices, the temporal and nonlinear effects have been ignored. We present a quantum theory of noise in OFAs including the saturation of the atomic population inversion and the pump depletion. We study the quantum-statistical properties of pulse amplification. The generating function of the output photon number distribution (PND) is determined as a function of time during the course of the pulse with an arbitrary input PND assumed. Under stationary conditions, we determine the Kolmogorov equation obeyed by the PND. The PND at the output is determined for arbitrary input distributions. The effect of the counting time and the filter bandwidth used by the detection circuit is determined. We determine the gain, the noise figure, and the sensitivity of receivers using OFAs as preamplifiers, including the effect of backward amplified spontaneous emission (ASE). Backward ASE degrades the noise figure and the sensitivity by depleting the population inversion at the input side of the fiber and thus increasing the noise during signal amplification. We show that the sensitivity improves with the bit rate at low rates but degrades at high rates. We provide a stochastic model that describes the time dynamics in a double-carrier multiplication (DCM) APD. A discrete stochastic model for the electron/hole motion and multiplication is defined on a spatio-temporal lattice and used to derive recursive equations for the mean, the variance, and the autocorrelation of the impulse response as functions of time. The power spectral density of the photocurrent produced in response to a Poisson-distributed stream of photons of uniform rate is evaluated. A method is also developed for solving the coupled transport equations that describe the electron and hole currents in a DCM-APD of arbitrary structure.
2012-05-01
noise (AGN) [1] and [11]. We focus on threshold communication systems due to the underwater environment, noncoherent communication techniques are...the threshold level. In the context of the underwater communications, where noncoherent communication techniques are affected both by noise and
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet the literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead, it is consistent with power-law multiplicative noise with positive fractional powers. We therefore consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters that determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
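The contrast between GBM and power-law multiplicative noise can be simulated directly. The sketch below integrates dX = aX dt + bX^alpha dW by Euler-Maruyama for alpha = 1 (GBM) and a positive fractional power alpha = 1/2; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow(alpha, n_paths=2000, n_steps=400, dt=0.01, a=1.0, b=0.3):
    """Euler-Maruyama for dX = a*X dt + b*X**alpha dW; alpha = 1 is GBM,
    0 < alpha < 1 gives power-law multiplicative noise."""
    x = np.ones(n_paths)
    for _ in range(n_steps):
        x += a * x * dt + b * x**alpha * rng.normal(0, np.sqrt(dt), n_paths)
        x = np.maximum(x, 1e-12)        # keep paths in the positive domain
    return x

for alpha in (1.0, 0.5):
    r = grow(alpha)
    r = r / r.mean()                    # mean-rescaled sizes
    print(f"alpha={alpha}: spread of mean-rescaled distribution = {r.std():.3f}")
```

Running this for increasing n_steps illustrates the abstract's point: the mean-rescaled spread keeps widening under GBM but settles toward a stationary shape for the fractional power.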
Dynamics of a stochastic multi-strain SIS epidemic model driven by Lévy noise
NASA Astrophysics Data System (ADS)
Chen, Can; Kang, Yanmei
2017-01-01
A stochastic multi-strain SIS epidemic model is formulated by introducing Lévy noise into the disease transmission rate of each strain. First, we prove that the stochastic model admits a unique global positive solution, and, by the comparison theorem, we show that the solution remains within a positively invariant set almost surely. Next we investigate stochastic stability of the disease-free equilibrium, including stability in probability and pth moment asymptotic stability. Then sufficient conditions for persistence in the mean of the disease are established. Finally, based on an Euler scheme for Lévy-driven stochastic differential equations, numerical simulations for a stochastic two-strain model are carried out to verify the theoretical results. Moreover, numerical comparison results of the stochastic two-strain model and the deterministic version are also given. Lévy noise can cause the two strains to become extinct almost surely, even though there is a dominant strain that persists in the deterministic model. It can be concluded that the introduction of Lévy noise reduces the disease extinction threshold, which indicates that Lévy noise may suppress the disease outbreak.
Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process.
Jahn, Patrick; Berg, Rune W; Hounsgaard, Jørn; Ditlevsen, Susanne
2011-11-01
Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. Especially the Ornstein-Uhlenbeck process is popular to describe the stochastic fluctuations in the membrane potential of a neuron, but also other models like the square-root model or models with a non-linear drift are sometimes applied. Data that can be described by such models have to be stationary and thus, the simple models can only be applied over short time windows. However, experimental data show varying time constants, state dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein-Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity, which the neuron is exposed to, can be reasonably estimated to be a threshold version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives of water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting genetic algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model and the chance constraints had been converted to their deterministic equivalents. The proposed model was applied to identify Pareto-optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto-front results reflect the tradeoff between system benefit (αSB) and the Gini coefficient (αG) under different significance levels (i.e., q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework can help obtain Pareto-optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
Stochastic dynamics of melt ponds and sea ice-albedo climate feedback
NASA Astrophysics Data System (ADS)
Sudakov, Ivan
Evolution of melt ponds on the Arctic sea surface is a complicated stochastic process. We suggest a low-order model with ice-albedo feedback which describes the stochastic dynamics of melt ponds' geometrical characteristics. The model is a stochastic dynamical-system model of energy balance in the climate system. We describe the equilibria in this model and conclude that the transition in the fractal dimension of melt ponds affects the shape of the sea-ice albedo curve.
Effects of Stochastic Traffic Flow Model on Expected System Performance
2012-12-01
NSWC-PCD has made considerable improvements to their pedestrian flow modeling. In addition to the linear paths, the 2011 version now includes...using stochastic paths. 2.2 Linear Paths vs. Stochastic Paths. 2.2.1 Linear Paths and Direct Maximum Pd Calculation. Modeling pedestrian traffic flow...as a stochastic process begins with the linear path model. Let the detection area be R x C voxels. This creates C^2 total linear paths, path(Cs
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions covering the likely fluctuations from the quasi-steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
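A minimal example of the kind of heuristic reduction at issue: a Gillespie simulation of a birth-death process whose production propensity is a non-elementary Hill function, as would result from applying the deterministic QSSA and reusing the reduced rate as a propensity. The model and rate constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_qssa(t_end=500.0, k_max=2.0, K=20.0, n=2, gamma=0.05, x0=10):
    """SSA for a reduced birth-death model whose production uses a
    non-elementary Hill propensity k_max*K^n/(K^n + x^n), as in heuristic
    stochastic QSSA reductions."""
    x, t, path = x0, 0.0, [(0.0, x0)]
    while t < t_end:
        a_birth = k_max * K**n / (K**n + x**n)   # Hill-type propensity
        a_death = gamma * x
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)      # time to next reaction
        x += 1 if rng.random() < a_birth / a_total else -1
        path.append((t, x))
    return path

print(gillespie_qssa()[-1])
```

The paper's point, in these terms, is that trusting such a Hill propensity requires checking the deterministic QSSA not just at one initial condition but across the range of states the stochastic fluctuations actually visit.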
Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering
Havlicek, Martin; Friston, Karl J.; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.
2011-01-01
This paper presents a new approach to inverting (fitting) models of coupled dynamical systems based on state-of-the-art (cubature) Kalman filtering. Crucially, this inversion furnishes posterior estimates of both the hidden states and parameters of a system, including any unknown exogenous input. Because the underlying generative model is formulated in continuous time (with a discrete observation process) it can be applied to a wide variety of models specified with either ordinary or stochastic differential equations. These are an important class of models that are particularly appropriate for biological time-series, where the underlying system is specified in terms of kinetics or dynamics (i.e., dynamic causal models). We provide comparative evaluations with generalized Bayesian filtering (dynamic expectation maximization) and demonstrate marked improvements in accuracy and computational efficiency. We compare the schemes using a series of difficult (nonlinear) toy examples and conclude with a special focus on hemodynamic models of evoked brain responses in fMRI. Our scheme promises to provide a significant advance in characterizing the functional architectures of distributed neuronal systems, even in the absence of known exogenous (experimental) input; e.g., resting state fMRI studies and spontaneous fluctuations in electrophysiological studies. Importantly, unlike current Bayesian filters (e.g. DEM), our scheme provides estimates of time-varying parameters, which we will exploit in future work on the adaptation and enabling of connections in the brain. PMID:21396454
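For orientation, the spherical-radial cubature rule at the heart of the cubature Kalman filter propagates 2n equally weighted points through the model. A minimal sketch of their construction (the standard CKF formula, with arbitrary example numbers):

```python
import numpy as np

def cubature_points(mean, cov):
    """2n cubature points of the third-degree spherical-radial rule:
    m +/- sqrt(n) * (columns of the Cholesky factor of P)."""
    n = mean.size
    L = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([L, -L])   # shape (n, 2n)
    return mean[:, None] + offsets              # each column is one point

m = np.array([1.0, -0.5])
P = np.array([[0.2, 0.05], [0.05, 0.1]])
pts = cubature_points(m, P)
print(pts.mean(axis=1))   # recovers the mean; all weights are 1/(2n)
```

Pushing these points through the (possibly nonlinear, stochastic) state and observation equations and re-averaging is what replaces the analytic moment propagation of the classical Kalman filter.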
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained using a feedback LQG controller designed in the previous cycle. The identified open-loop model is then used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
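A sketch of the redesign half of one cycle, under the assumption that closed-loop identification has already returned an open-loop model (A, B, C) and a steady-state Kalman gain. The numerical model and gain below are invented placeholders, and scipy's discrete-time Riccati solver stands in for whatever design tool was actually used.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def redesign_lqg(A, B, C, K_f, Q, R):
    """One controller-redesign step: given an identified open-loop model
    (A, B, C) and steady-state Kalman gain K_f, compute the LQR feedback
    gain and the closed-form compensator dynamics (predictor-form
    estimator: xhat+ = A xhat + B u + K_f (y - C xhat), u = -G xhat)."""
    P = solve_discrete_are(A, B, Q, R)
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    Ac = A - B @ G - K_f @ C                            # compensator matrix
    return G, Ac

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative identified model
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K_f = np.array([[0.3], [0.2]])           # identified Kalman gain (assumed)
G, Ac = redesign_lqg(A, B, C, K_f, np.eye(2), np.eye(1))
print(np.abs(np.linalg.eigvals(Ac)))     # check compensator stability
```

In the iterative scheme, the resulting controller is then placed back in the loop to generate fresh closed-loop data for the next identification pass.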
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and of a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Spiking computation and stochastic amplification in a neuron-like semiconductor microstructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samardak, A. S.; Laboratory of Thin Film Technologies, Far Eastern Federal University, Vladivostok 690950; Nogaret, A.
2011-05-15
We have demonstrated the proof of principle of a semiconductor neuron, which has dendrites, an axon, and a soma and computes information encoded in electrical pulses in the same way as biological neurons. Electrical impulses applied to dendrites diffuse along microwires to the soma. The soma is the active part of the neuron, which regenerates input pulses above a voltage threshold and transmits them into the axon. Our concept of the neuron is a major step forward because its spatial structure controls the timing of pulses arriving at the soma. Dendrites and axon act as transmission delay lines, which modify the information coded in the timing of pulses. We have finally shown that noise enhances the detection sensitivity of the neuron by helping the transmission of weak periodic signals. A maximum enhancement of signal transmission was observed at an optimum noise level known as stochastic resonance. The experimental results are in excellent agreement with simulations of the FitzHugh-Nagumo model. Our neuron is therefore extremely well suited to providing feedback on the various mathematical approximations of neurons and to building functional networks.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effects in cellular systems have been an important topic in systems biology, and stochastic modeling and simulation methods are important tools for studying them. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of the hybrid method with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented, and a detailed discussion of the performances of the three ODE solvers is given.
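One generic way to couple the two subsystems, sketched below with invented rates: the ODE for a protein level is integrated together with the running integral of the stochastic propensity, and a terminal event fires the next stochastic reaction when that integral reaches an Exp(1) draw. This is only a standard hybrid ODE/SSA pattern, not the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def hybrid_gene_model(t_end=200.0):
    """Hybrid ODE/SSA sketch: protein level x follows an ODE while a
    two-state gene g switches stochastically; the switch fires when the
    integrated propensity hits an Exp(1) threshold (Gillespie-style),
    located with a terminal solve_ivp event.  Rates are illustrative."""
    k_on, k_off, k_s, k_d = 0.02, 0.05, 1.0, 0.1
    x, g, t = 0.0, 0, 0.0
    while t < t_end:
        tau = rng.exponential(1.0)                  # propensity budget
        prop = k_off if g else k_on                 # could also depend on x
        rhs = lambda t, y: [k_s * g - k_d * y[0],   # ODE subsystem
                            prop]                   # integrated propensity
        hit = lambda t, y: y[1] - tau               # event: budget exhausted
        hit.terminal = True
        sol = solve_ivp(rhs, (t, t_end), [x, 0.0], events=hit, max_step=1.0)
        x, t = sol.y[0, -1], sol.t[-1]
        if sol.status == 1:                         # event fired: switch gene
            g = 1 - g
    return x, g

print(hybrid_gene_model())
```

The event-location step is exactly where a traditional ODE solver is repeatedly interrupted by the stochastic subsystem, which is the efficiency bottleneck the abstract refers to.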
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden-variables model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of the relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variables description. Responses of macro-apparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of the p-adic stochastics of the microworld and the ordinary stochastics of macro-apparatuses. In this model the probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.
NASA Astrophysics Data System (ADS)
Sinner, K.; Teasley, R. L.
2016-12-01
Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models have tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method for modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates the desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to determine stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.
Study on individual stochastic model of GNSS observations for precise kinematic applications
NASA Astrophysics Data System (ADS)
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which consists of functional and stochastic models, is a prerequisite for obtaining optimal estimates of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with equal variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
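A common concrete form of such an elevation-dependent stochastic model assigns each observation the variance a^2 + b^2/sin^2(elev) and optionally a cross-correlation between satellites. The sketch below uses this standard weighting form with invented coefficients; it is not the paper's calibrated model.

```python
import numpy as np

def obs_covariance(elev_deg, a=0.003, b=0.003, rho=0.0):
    """Elevation-dependent variance-covariance matrix for GNSS
    observations, sigma_i^2 = a^2 + b^2 / sin^2(elev_i), with an optional
    common cross-correlation rho between satellites (a, b in metres)."""
    e = np.radians(np.asarray(elev_deg, dtype=float))
    sig = np.sqrt(a**2 + (b / np.sin(e))**2)
    C = rho * np.outer(sig, sig)        # off-diagonal cross-correlation
    np.fill_diagonal(C, sig**2)         # elevation-dependent variances
    return C

print(obs_covariance([15, 35, 70], rho=0.2))
```

The inverse of this matrix then replaces the identity weight matrix in the least-squares or Kalman-filter positioning step, down-weighting low-elevation satellites.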
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Measuring hospital efficiency--comparing four European countries.
Mateus, Céu; Joaquim, Inês; Nunes, Carla
2015-02-01
Performing international comparisons of efficiency usually faces two main drawbacks: the lack of comparability of data from different countries, and the appropriateness and adequacy of the data selected for efficiency measurement. With inpatient discharges for four countries, some of the data comparability problems usually found in international comparisons were mitigated. The objectives are to assess and compare hospital efficiency levels within and between countries, using stochastic frontier analysis with both cross-sectional and panel data. Data from English (2005-2008), Portuguese (2002-2009), Spanish (2003-2009) and Slovenian (2005-2009) hospital discharges and characteristics are used. Weighted hospital discharges were considered as outputs, while the numbers of employees, physicians, nurses and beds were selected as inputs of the production function. Stochastic frontier analysis using both cross-sectional and panel data was performed, as well as ordinary least squares (OLS) analysis. The adequacy of the data was assessed with Kolmogorov-Smirnov and Breusch-Pagan/Cook-Weisberg tests. With the available data, stochastic frontier analysis proved redundant for efficiency measurement on cross-sectional data: the likelihood ratio test reveals that, with cross-sectional data, stochastic frontier analysis (SFA) is not statistically different from OLS for the Portuguese data, while SFA and OLS estimates are statistically different for the Spanish, Slovenian and English data. In the panel data, the inefficiency term is statistically different from 0 in all four countries analysed, though for Portugal it is still close to 0. Panel data are preferred over cross-sectional analysis because the results are more robust. For all countries except Slovenia, beds and employees are relevant inputs to the production process. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Some Stochastic-Duel Models of Combat.
1983-03-01
Master's thesis by Jum Soo Choe, Naval Postgraduate School, Monterey, California, March 1983 (DTIC accession AD-A127 879).
Trichotomous noise controlled signal amplification in a generalized Verhulst model
NASA Astrophysics Data System (ADS)
Mankin, Romi; Soika, Erkki; Lumi, Neeme
2014-10-01
The long-time limit of the probability distribution and statistical moments for a population size are studied by means of a stochastic growth model with generalized Verhulst self-regulation. The effect of variable environment on the carrying capacity of a population is modeled by a multiplicative three-level Markovian noise and by a time periodic deterministic component. Exact expressions for the moments of the population size have been calculated. It is shown that an interplay of a small periodic forcing and colored noise can cause large oscillations of the mean population size. The conditions for the appearance of such a phenomenon are found and illustrated by graphs. Implications of the results on models of symbiotic metapopulations are also discussed. Particularly, it is demonstrated that the effect of noise-generated amplification of an input signal gets more pronounced as the intensity of symbiotic interaction increases.
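As a minimal illustration of the mechanism described above, the sketch below simulates a Verhulst-type growth equation whose carrying capacity is driven by a three-level ("trichotomous") Markovian noise plus a weak periodic component. All parameter values, and the simple Euler time-stepping, are illustrative assumptions rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not taken from the paper)
r, K0 = 1.0, 100.0        # growth rate and baseline carrying capacity
a, nu = 30.0, 2.0         # noise amplitude and switching rate of the Markov noise
A, omega = 5.0, 0.5       # weak periodic forcing of the carrying capacity
dt, T = 1e-3, 200.0

levels = np.array([-a, 0.0, a])  # the three noise levels (trichotomous noise)
state, N = 1, 10.0               # start in the middle level, small population
trace = []

for k in range(int(T / dt)):
    t = k * dt
    # Markovian switching: leave the current level with rate nu, jumping
    # to one of the other two levels with equal probability
    if rng.random() < nu * dt:
        state = rng.choice([s for s in (0, 1, 2) if s != state])
    K = K0 + levels[state] + A * np.sin(omega * t)  # fluctuating carrying capacity
    N += r * N * (1.0 - N / K) * dt                 # Verhulst self-regulation
    trace.append(N)

print("long-time mean population size:", np.mean(trace[len(trace) // 2:]))
```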
NASA Astrophysics Data System (ADS)
Drummond, J. D.; Boano, F.; Atwill, E. R.; Li, X.; Harter, T.; Packman, A. I.
2016-12-01
Rivers are a means of rapid and long-distance transmission of pathogenic microorganisms from upstream terrestrial sources. Thus, significant fluxes of pathogen loads from agricultural lands can occur due to transport in surface waters. Pathogens enter streams and rivers in a variety of processes, notably overland flow, shallow groundwater discharge, and direct inputs from host populations such as humans and other vertebrate species. Viruses, bacteria, and parasites can enter a stream and persist in the environment for varying amounts of time. Of particular concern is the protozoal parasite, Cryptosporidium parvum, which can remain infective for weeks to months under cool and moist conditions, with the infectious state (oocysts) largely resistant to chlorination. In order to manage water-borne diseases more effectively we need to better predict how microbes behave in freshwater systems, particularly how they are transported downstream in rivers and in the process interact with the streambed and other solid surfaces. Microbes continuously immobilize and resuspend during downstream transport due to a variety of processes, such as gravitational settling, attachment to in-stream structures such as submerged macrophytes, and hyporheic exchange and filtration within underlying sediments. These various interactions result in a wide range of microbial residence times in the streambed and therefore influence the persistence of pathogenic microbes in the stream environment. We developed a stochastic mobile-immobile model to describe these microbial transport and retention processes in streams and rivers that also accounts for microbial inactivation. We used the model to assess the transport, retention, and inactivation of C. parvum within stream environments, specifically under representative flow conditions of California streams where C. parvum exposure can be at higher risk due to agricultural nonpoint sources. The results demonstrate that the combination of stream reach-scale analysis and multi-scale stochastic modeling improves assessment of C. parvum transport and retention in streams in order to predict downstream exposure to human communities, wildlife, and livestock.
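To make the mobile-immobile picture concrete, here is a toy particle-tracking sketch: particles alternate between downstream motion and immobilization in the streambed, with first-order inactivation acting throughout. The velocity and rates are placeholders, not the calibrated C. parvum model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (illustrative only, not calibrated to C. parvum)
v = 0.1          # downstream velocity while mobile (m/s)
lam_imm = 1e-3   # immobilization rate (1/s)
lam_res = 1e-4   # resuspension rate (1/s)
mu = 1e-6        # first-order inactivation rate (1/s)
dt, T, n = 10.0, 2e5, 2000

x = np.zeros(n)                  # downstream position of each particle
mobile = np.ones(n, dtype=bool)  # mobile (water column) vs immobile (bed)
alive = np.ones(n, dtype=bool)   # still infective (not inactivated)

for _ in range(int(T / dt)):
    u = rng.random(n)
    # phase transitions: mobile -> immobile with rate lam_imm,
    # immobile -> mobile with rate lam_res
    mobile = np.where(mobile, u > lam_imm * dt, u < lam_res * dt)
    x[mobile] += v * dt                      # advection only while mobile
    alive &= rng.random(n) > mu * dt         # inactivation in both phases

print(f"fraction still infective: {alive.mean():.3f}")
print(f"mean downstream distance of infective particles: {x[alive].mean():.0f} m")
```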
Stochastic Geometric Models with Non-stationary Spatial Correlations in Lagrangian Fluid Flows
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Holm, Darryl D.
2018-01-01
Inspired by spatiotemporal observations from satellites of the trajectories of objects drifting near the surface of the ocean in the National Oceanic and Atmospheric Administration's "Global Drifter Program", this paper develops data-driven stochastic models of geophysical fluid dynamics (GFD) with non-stationary spatial correlations representing the dynamical behaviour of oceanic currents. Three models are considered. Model 1 from Holm (Proc R Soc A 471:20140963, 2015) is reviewed, in which the spatial correlations are time independent. Two new models, called Model 2 and Model 3, introduce two different symmetry breaking mechanisms by which the spatial correlations may be advected by the flow. These models are derived using reduction by symmetry of stochastic variational principles, leading to stochastic Hamiltonian systems, whose momentum maps, conservation laws and Lie-Poisson bracket structures are used in developing the new stochastic Hamiltonian models of GFD.
Dung Tuan Nguyen
2012-01-01
Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...
A spatial stochastic programming model for timber and core area management under risk of fires
Yu Wei; Michael Bevers; Dung Nguyen; Erin Belval
2014-01-01
Previous stochastic models in harvest scheduling seldom address explicit spatial management concerns under the influence of natural disturbances. We employ multistage stochastic programming models to explore the challenges and advantages of building spatial optimization models that account for the influences of random stand-replacing fires. Our exploratory test models...
NASA Astrophysics Data System (ADS)
O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.
2016-10-01
The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.
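The exchange-velocity result above can be read through a steady-state box model: emissions per unit street area are removed through the roof opening at rate w_e (C - C_ext), so the canyon-average excess concentration scales as 1/w_e. The numbers below are invented for illustration.

```python
# Steady-state box model of a street canyon: an emission flux E (per unit
# street area) is removed through the roof opening at rate w_e * (C - C_ext).
# At steady state: C = C_ext + E / w_e.  All numbers are illustrative.

E = 1.0e-3                   # emission flux (g m^-2 s^-1), assumed
C_ext = 0.0                  # background concentration above roof level
w_e_base = 0.010             # exchange velocity, dissipative SGS model (m/s)
w_e_backscatter = 1.15 * w_e_base  # ~15% larger with the backscatter SGS model

C_base = C_ext + E / w_e_base
C_back = C_ext + E / w_e_backscatter
print(f"relative concentration change: {(C_back - C_base) / C_base:+.1%}")
# A ~15% larger exchange velocity lowers the steady-state excess
# concentration by a comparable amount (1 - 1/1.15, about 13%) in this
# linear box model.
```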
Heitz, Richard P; Schall, Jeffrey D
2013-10-19
The stochastic accumulation framework provides a mechanistic, quantitative account of perceptual decision-making and how task performance changes with experimental manipulations. Importantly, it provides an elegant account of the speed-accuracy trade-off (SAT), which has long been the litmus test for decision models, and also mimics the activity of single neurons in several key respects. Recently, we developed a paradigm whereby macaque monkeys trade speed for accuracy on cue during a visual search task. Single-unit activity in the frontal eye field (FEF) was not homomorphic with the architecture of the models, demonstrating that stochastic accumulators are an incomplete description of neural activity under SAT. This paper summarizes and extends this work, further demonstrating that the SAT leads to extensive, widespread changes in brain activity never before predicted. We begin by reviewing our recently published work that establishes how spiking activity in FEF accomplishes SAT. Next, we provide two important extensions of this work. First, we report a new chronometric analysis suggesting that increases in perceptual gain with speed stress are evident in FEF synaptic input, implicating afferent sensory-processing sources. Second, we report a new analysis demonstrating a selective influence of SAT on frequency coupling between FEF neurons and local field potentials. None of these observations correspond to the mechanics of current accumulator models.
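A minimal drift-diffusion sketch of the stochastic accumulation account of SAT: lowering the decision threshold under speed stress shortens reaction times and lowers accuracy. The parameters are generic textbook values, not fits to the FEF data.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift=1.0, sigma=1.0, threshold=1.0, dt=1e-3, t_max=5.0):
    """One drift-diffusion trial; returns (reaction time, correct?)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= threshold   # the upper bound is the correct response

for label, b in [("accuracy stress (high threshold)", 1.5),
                 ("speed stress (low threshold)", 0.6)]:
    trials = [ddm_trial(threshold=b) for _ in range(2000)]
    rts = np.array([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    print(f"{label}: mean RT = {rts.mean():.3f} s, accuracy = {acc:.2f}")
```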
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
Modeling Common-Sense Decisions in Artificial Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. This methodology is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other extreme. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the three most recent and relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72; and Complexity for Survival of Living Systems (NPO-43302), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 62. The methodology involves the concepts reported previously, albeit viewed from a different perspective. One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Models of motor dynamics are used to simulate the observable behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. Autonomy is imparted to the decision-making process by feedback from mental to motor dynamics. This feedback replaces unavailable external information by information stored in the internal knowledge base. Representation of the dynamical models in a parameterized form reduces the task of common-sense-based decision making to a solution of the following hetero-associative-memory problem: store a set of m predetermined stochastic processes given by their probability distributions in such a way that when presented with an unexpected change in the form of an input out of the set of M inputs, the coupled motor-mental dynamics converges to the corresponding one of the m pre-assigned stochastic processes, and a sample of this process represents the decision.
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostropic approximations. PMID:27547083
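For reference, the scalar prototype of the Stratonovich-to-Itô conversion behind the quadratic-covariation drift term mentioned above (the paper's version is its geometric generalization):

```latex
% Scalar prototype of the Stratonovich-to-Ito conversion:
\begin{align*}
  \mathrm{d}X_t &= f(X_t)\,\mathrm{d}t + g(X_t)\circ \mathrm{d}W_t
  && \text{(Stratonovich)} \\
  \mathrm{d}X_t &= \Big(f(X_t) + \tfrac{1}{2}\,g(X_t)\,g'(X_t)\Big)\,\mathrm{d}t
                 + g(X_t)\,\mathrm{d}W_t
  && \text{(It\^o)}
\end{align*}
% The extra (1/2) g g' term is the quadratic-covariation drift correction,
% which disguises the Stratonovich circulation properties in the Ito form.
```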
Stochastic simulation of karst conduit networks
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José
2012-01-01
Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system, and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the rose diagram of conduit directions, the Z-histogram, and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when implemented in a hydraulic inverse modeling procedure. Several synthetic examples are given to illustrate the methodology, and real conduit network data are used to generate simulated networks that mimic real geometries and topology.
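As a toy illustration of the aggregation step, the sketch below grows a plain on-lattice diffusion-limited aggregation (DLA) cluster, the textbook process that the authors modify for conduit topology; it is not their resampling/modified-DLA method.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 61                        # lattice size (odd, so there is a centre cell)
grid = np.zeros((L, L), bool)
grid[L // 2, L // 2] = True   # seed of the aggregate
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def neighbour_occupied(x, y):
    return (grid[(x + 1) % L, y] or grid[(x - 1) % L, y]
            or grid[x, (y + 1) % L] or grid[x, (y - 1) % L])

for _ in range(200):          # release 200 random walkers, one at a time
    ang = rng.uniform(0, 2 * np.pi)
    x = int(L // 2 + 0.45 * L * np.cos(ang))
    y = int(L // 2 + 0.45 * L * np.sin(ang))
    while True:
        if neighbour_occupied(x, y):   # stick next to the aggregate
            grid[x, y] = True
            break
        dx, dy = steps[rng.integers(4)]
        x, y = (x + dx) % L, (y + dy) % L   # periodic random walk

print("aggregate size:", int(grid.sum()))
```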
NASA Astrophysics Data System (ADS)
Yu, Junliang; Froning, Dieter; Reimer, Uwe; Lehnert, Werner
2018-06-01
The lattice Boltzmann method is adopted to simulate the three-dimensional dynamic process of liquid water breaking through the gas diffusion layer (GDL) in a polymer electrolyte membrane fuel cell. 22 micro-structures of Toray GDL are built based on a stochastic geometry model. It is found that more than one breakthrough location forms randomly on the GDL surface. Breakthrough location distances (BLD) are analyzed statistically in two ways, and their distribution is evaluated with the Lilliefors test. It is concluded that the BLD can be described by a normal distribution with certain statistical characteristics. Information on the shortest neighbor breakthrough location distance can serve as input for modeling setups in cell-scale fuel cell simulations.
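A small sketch of the statistical check described above: computing shortest-neighbour breakthrough-location distances and testing them against a normal distribution with the Lilliefors test (via statsmodels; the 22 locations below are synthetic placeholders, not the simulated GDL data).

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(4)

# Synthetic stand-in for breakthrough (x, y) locations on the GDL surface
pts = rng.uniform(0, 1000, size=(22, 2))   # 22 locations, microns (assumed)

# Shortest neighbour distance for each breakthrough location
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)

# Lilliefors test: H0 = the distances come from a normal distribution
# with mean and variance estimated from the sample
stat, pval = lilliefors(nn, dist='norm')
print(f"Lilliefors statistic = {stat:.3f}, p-value = {pval:.3f}")
print("reject normality" if pval < 0.05 else "consistent with a normal distribution")
```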
A Reduced Form Model for Ozone Based on Two Decades of ...
A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling scenarios without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the continental United States provided the basis for the RFM developed in this study. Predictors included the principal component scores (PCS) of emissions and meteorological variables, while the predictand was the monthly mean of daily maximum 8-hour CMAQ ozone for the ozone season at each model grid. The PCS form an orthogonal basis for RFM inputs. A few PCS incorporate most of the variability of emissions and meteorology, thereby reducing the dimensionality of the source-receptor problem. Stochastic kriging was used to estimate the model. The RFM was used to separate the effects of emissions and meteorology on ozone concentrations by running the RFM with either constant emissions (ozone dependent on meteorology) or constant meteorology (ozone dependent on emissions). Years with ozone-conducive meteorology were identified, as were the meteorological variables best explaining meteorology-dependent ozone. Meteorology accounted for 19% to 55% of ozone variability in the eastern US, and 39% to 92% in the western US. Temporal trends estimated for the original CMAQ ozone data and emission-dependent ozone were mostly negative, but the confidence intervals for emission-dependent ozone are much
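Schematically, the RFM recipe above can be reproduced with off-the-shelf tools: compress the inputs with PCA, then fit a Gaussian-process surrogate (standing in here for stochastic kriging) from the scores to the ozone metric. The data below are random placeholders, not CMAQ output.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Placeholder "runs": rows = model runs, columns = emission/meteorology inputs
X = rng.normal(size=(120, 40))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=120)  # fake ozone metric

# Step 1: principal component scores as a low-dimensional orthogonal basis
pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)

# Step 2: Gaussian-process surrogate (a stand-in for stochastic kriging)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                              normalize_y=True).fit(scores, y)

# The cheap surrogate can now emulate new scenarios, e.g. perturbed emissions
X_new = X.copy()
X_new[:, 0] *= 0.8                     # hypothetical 20% cut in one input
y_new = gp.predict(pca.transform(X_new))
print("mean predicted change in the ozone metric:", (y_new - y).mean())
```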
DEEP MOTIF DASHBOARD: VISUALIZING AND UNDERSTANDING GENOMIC SEQUENCES USING DEEP NEURAL NETWORKS.
Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun
2017-01-01
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence's saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them.
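A minimal PyTorch sketch of the first visualization strategy: a saliency map from first-order derivatives of the prediction with respect to a one-hot DNA input. The tiny untrained convolutional net is a placeholder for the trained TFBS models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder TFBS classifier: 4-channel one-hot DNA -> binding score
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(16, 1))

# One-hot encode a test sequence (rows = A, C, G, T; length-100 sequence)
seq = torch.randint(0, 4, (100,))
x = torch.zeros(1, 4, 100)
x[0, seq, torch.arange(100)] = 1.0
x.requires_grad_(True)

score = model(x).squeeze()
score.backward()                  # first-order derivatives w.r.t. the input

# Saliency: gradient at the one-hot positions, one value per nucleotide
saliency = (x.grad * x).sum(dim=1).squeeze().abs()
top = torch.topk(saliency, 5).indices.sort().values
print("most influential sequence positions:", top.tolist())
```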
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Pyrotechnic shock design and test requirements for space systems are most often specified as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated from the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is a Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
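A compact sketch of the synthesis step described above: a candidate input time history assembled as a sum of delayed, damped sinusoids, one per band centre frequency. The frequencies, amplitudes, decay rates and phase lags below are arbitrary illustrations, not the UPSS statistics.

```python
import numpy as np

# Band centre frequencies (illustrative one-third-octave subset)
f_c  = np.array([100., 200., 400., 800., 1600.])   # Hz
A    = np.array([50., 120., 300., 500., 800.])     # peak accelerations (g), assumed
zeta = np.array([0.05, 0.04, 0.03, 0.03, 0.02])    # decay rates, assumed
tau  = np.array([0.0, 1e-3, 2e-3, 2.5e-3, 3e-3])   # phase-lag delays (s), assumed

fs, T = 50_000, 0.05                               # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

# Candidate input time history: sum of delayed, damped sinusoids
x = np.zeros_like(t)
for fk, ak, zk, tk in zip(f_c, A, zeta, tau):
    td = np.clip(t - tk, 0.0, None)                # each component starts at its lag
    x += ak * np.exp(-zk * 2 * np.pi * fk * td) * np.sin(2 * np.pi * fk * td)

print(f"temporal peak acceleration: {np.abs(x).max():.0f} g")
# In the paper's procedure, the amplitudes and decays would be tuned (e.g. by
# Monte Carlo over the PR and ER statistics) until the synthesized history
# meets the SRS requirement at every band centre frequency.
```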
Selvaraj, P; Sakthivel, R; Kwon, O M
2018-06-07
This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delays, and actuator saturation. In addition, the coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem for an error system. By choosing a suitable mode-dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of an anti-windup control scheme, the risks of actuator saturation can be mitigated. Moreover, the derived conditions help to optimize the estimate of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of the proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
Liu, Meng; Wang, Ke
2010-06-07
A new single-species model disturbed by both white noise and colored noise in a polluted environment is developed and analyzed. Sufficient criteria for extinction, stochastic nonpersistence in the mean, stochastic weak persistence in the mean, stochastic strong persistence in the mean and stochastic permanence of the species are established. The threshold between stochastic weak persistence in the mean and extinction is obtained. The results show that both white and colored environmental noise have a significant effect on the survival of the species. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
An autonomous molecular computer for logical control of gene expression
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2013-01-01
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems [1-7]. Recently, simple molecular-scale autonomous programmable computers were demonstrated [8-15], allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton [12-17]; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes [18-22] associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug. PMID:15116117
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Statistical Analysis of the LMS and Modified Stochastic Gradient Algorithms
1989-05-14
Analysis of novel stochastic switched SILI epidemic models with continuous and impulsive control
NASA Astrophysics Data System (ADS)
Gao, Shujing; Zhong, Deming; Zhang, Yan
2018-04-01
In this paper, we establish two new stochastic switched epidemic models with continuous and impulsive control. Stochastic perturbations of the natural death rate are considered in each equation of the models. Firstly, a stochastic switched SILI model with continuous control schemes is investigated. Using the Lyapunov-Razumikhin method, sufficient conditions for extinction in the mean are established. Our result shows that the disease could theoretically die out if the threshold value R is less than one, regardless of whether the disease-free solutions of the corresponding subsystems are stable or unstable. Then, a stochastic switched SILI model with continuous control schemes and pulse vaccination is studied. The threshold value R is derived, and the global attractivity of the model is also obtained. Finally, numerical simulations are carried out to support our results.
Steeneveld, Wilma; Swinkels, Jantijn; Hogeveen, Henk
2007-11-01
Chronic subclinical mastitis is usually not treated during lactation. However, some veterinarians regard treatment of some types of subclinical mastitis as effective. The goal of this research was to develop a stochastic Monte Carlo simulation model to support decisions around treatment of chronic subclinical mastitis caused by Streptococcus uberis. Factors in the model included the probability of cure after treatment, the probability of the cow becoming clinically diseased, transmission of infection to other cows, and physiological effects of the infection. Using basic input parameters for Dutch circumstances, the average economic cost per cow of an untreated chronic subclinical mastitis case caused by Str. uberis in a single quarter, from day of diagnosis onwards, was €109. With treatment, the average cost was higher (€120). Thus, for the average cow, treatment was not economically efficient. However, the risk of high costs was much greater when cows with chronic subclinical mastitis were not treated. A sensitivity analysis showed that the profitability of treating chronic subclinical Str. uberis mastitis depended on farm-specific factors (such as the economic value of discarded milk) and cow-specific factors (such as day of diagnosis, duration of infection, amount of transmission to other cows and cure rate). Therefore, herd-level protocols are not sufficient and decision support should be cow specific. Given the importance of cow-specific factors, information from the current model could be applied in automatic decision support systems.
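A toy Monte Carlo of the decision logic described above, comparing mean cost against the risk of high costs for treated and untreated cases. The cost distributions are loudly hypothetical, chosen only to reproduce the qualitative pattern reported (treatment: slightly higher mean cost, thinner upper tail).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def simulate(treated: bool) -> np.ndarray:
    """Cost per case (EUR); every parameter here is a hypothetical placeholder."""
    base = 60.0 + (70.0 if treated else 0.0)    # drugs, labour, discarded milk
    cure_p = 0.80 if treated else 0.35          # probability the quarter cures
    cured = rng.random(n) < cure_p
    cost = np.full(n, base)
    # uncured cases accrue production losses, and some flare up clinically
    # and transmit to herd mates, adding a heavy-tailed extra cost
    cost += np.where(cured, 0.0, rng.gamma(2.0, 40.0, n))
    flare = (~cured) & (rng.random(n) < 0.30)
    cost += np.where(flare, rng.gamma(2.0, 100.0, n), 0.0)
    return cost

for label, treated in [("no treatment", False), ("treatment", True)]:
    c = simulate(treated)
    print(f"{label:12s}: mean = {c.mean():5.0f} EUR, "
          f"P(cost > 300 EUR) = {(c > 300).mean():.3f}")
```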
Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator
Gopal, Atul
2015-01-01
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether these correlations represent an interaction between otherwise independent eye and hand systems (interactive model); or whether these correlations arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has been previously used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture could best explain the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances, despite significant difference in the means of the eye and hand reaction time (RT) distributions, which we tested. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar, despite much larger hand RTs (∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:26084906
NASA Technical Reports Server (NTRS)
Mulavara, Ajitkumar; Fiedler, Matthew; DeDios,Yiri E.; Galvan, Raquel; Bloomberg, Jacob; Wood, Scott
2011-01-01
Astronauts experience disturbances in sensorimotor function after spaceflight during the initial introduction to a gravitational environment, especially after long-duration missions. Stochastic resonance (SR) is a mechanism by which noise can assist and enhance the response of neural systems to relevant, imperceptible sensory signals. We have previously shown that imperceptible electrical stimulation of the vestibular system enhances balance performance while standing on an unstable surface. The goal of our present study is to develop a countermeasure based on vestibular SR that could improve central interpretation of vestibular input and improve motor task responses to mitigate associated risks.
Stochastic and deterministic models for agricultural production networks.
Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D
2007-07-01
An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
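A generic sketch of the stochastic/deterministic pairing described above, on a toy single-node "production" process: an exact Gillespie simulation alongside the deterministic equation for its mean. The rates are arbitrary, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy production node: arrivals at rate b, removals at rate d * n (arbitrary)
b, d, n0, T = 20.0, 0.1, 0, 100.0

def gillespie():
    """Exact stochastic simulation of the birth-death process up to time T."""
    t, n = 0.0, n0
    while t < T:
        rates = np.array([b, d * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # time to the next event
        n += 1 if rng.random() < rates[0] / total else -1
    return n

# Deterministic mean-field counterpart: dn/dt = b - d*n  ->  n* = b/d
finals = [gillespie() for _ in range(200)]
print(f"stochastic mean at T:        {np.mean(finals):.1f} +/- {np.std(finals):.1f}")
print(f"deterministic equilibrium b/d: {b / d:.1f}")
```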
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
One-Week Module on Stochastic Groundwater Modeling
ERIC Educational Resources Information Center
Mays, David C.
2010-01-01
This article describes a one-week introduction to stochastic groundwater modeling, intended for the end of a first course on groundwater hydrology, or the beginning of a second course on stochastic hydrogeology or groundwater modeling. The motivation for this work is to strengthen groundwater education, which has been identified among the factors…
A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.
Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S
2017-09-01
We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
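A minimal numerical version of the branching-process calculation mentioned above: for a single-type Galton-Watson process, the extinction probability is the smallest fixed point of the offspring probability generating function. The Poisson offspring law here is an assumed example, not the paper's multitype model.

```python
import numpy as np

# Single-type Galton-Watson process with Poisson(m) offspring (assumed law).
# The extinction probability q solves q = G(q) = exp(m * (q - 1)); iterating
# from q = 0 converges to the smallest fixed point.
def extinction_prob(m: float, iters: int = 200) -> float:
    q = 0.0
    for _ in range(iters):
        q = np.exp(m * (q - 1.0))
    return q

for m in [0.8, 1.5, 3.0]:   # mean secondary infections per case
    q = extinction_prob(m)
    print(f"m = {m}: P(extinction) = {q:.3f}, P(major outbreak) = {1 - q:.3f}")
# Starting from k initial infections, P(extinction) = q**k, so an introduction
# by several infected hosts makes a major outbreak correspondingly more likely.
```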
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
Stochastic growth logistic model with aftereffect for batch fermentation process
NASA Astrophysics Data System (ADS)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md
2014-06-01
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data on microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
Tutu, Hiroki
2011-06-01
Stochastic resonance (SR) enhanced by time-delayed feedback control is studied. The system in the absence of control is described by a Langevin equation for a bistable system, and possesses a usual SR response. The control with the feedback loop, the delay time of which equals one-half of the period (2π/Ω) of the input signal, gives rise to a noise-induced oscillatory switching cycle between two states in the output time series, while its average frequency is slightly smaller than Ω in a small-noise regime. As the noise intensity D approaches an appropriate level, the noise constructively works to adapt the frequency of the switching cycle to Ω, and this changes the dynamics into a state wherein the phase of the output signal is entrained to that of the input signal from its phase-slipped state. The behavior is characterized by the power loss of the external signal, or response function. This paper deals with the response function based on a dichotomic model. A method of delay-coordinate series expansion, which reduces a non-Markovian transition probability flux to a series of memory fluxes on a discrete delay-coordinate system, is proposed. Its primitive implementation suggests that the method can be a potential tool for a systematic analysis of the SR phenomenon with a delayed feedback loop. We show that a D-dependent behavior of the poles of a finite Laplace transform of the response function qualitatively characterizes the structure of the power loss, and we also show analytical results for the correlation function and the power spectral density.
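An Euler-Maruyama sketch of the setup described above: an overdamped bistable Langevin system with a weak periodic signal and a feedback term delayed by half the drive period. The coupling, noise and drive values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative parameters (not from the paper)
A, Omega = 0.15, 0.1          # weak periodic input
k = 0.2                       # strength of the delayed feedback
D = 0.12                      # noise intensity
dt = 1e-2
tau = np.pi / Omega           # delay = half the drive period 2*pi/Omega
lag = int(round(tau / dt))
steps = 200_000

x = np.zeros(steps)
for i in range(1, steps):
    xd = x[i - 1 - lag] if i - 1 >= lag else 0.0   # delayed state x(t - tau)
    drift = (x[i-1] - x[i-1]**3 + k * xd
             + A * np.sin(Omega * (i - 1) * dt))   # bistable + feedback + drive
    x[i] = x[i-1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

# Crude estimate of the well-to-well switching cycle frequency
# (zero-crossing jitter near x = 0 inflates the raw count somewhat)
switches = np.count_nonzero(np.diff(np.sign(x)) != 0)
T_total = steps * dt
print(f"switching cycle frequency: {switches / (2 * T_total):.4f}")
print(f"drive frequency Omega/2pi: {Omega / (2 * np.pi):.4f}")
```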
Multi-element stochastic spectral projection for high quantile estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin
2013-06-15
We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model's exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered as essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values where the extreme events may occur. By increasing the approximation accuracy of the metamodel, we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed, combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of the local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimation for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
Akam, Thomas E.; Kullmann, Dimitri M.
2012-01-01
The ‘communication through coherence’ (CTC) hypothesis proposes that selective communication among neural networks is achieved by coherence between firing rate oscillation in a sending region and gain modulation in a receiving region. Although this hypothesis has stimulated extensive work, it remains unclear whether the mechanism can in principle allow reliable and selective information transfer. Here we use a simple mathematical model to investigate how accurately coherent gain modulation can filter a population-coded target signal from task-irrelevant distracting inputs. We show that selective communication can indeed be achieved, although the structure of oscillatory activity in the target and distracting networks must satisfy certain previously unrecognized constraints. Firstly, the target input must be differentiated from distractors by the amplitude, phase or frequency of its oscillatory modulation. When distracting inputs oscillate incoherently in the same frequency band as the target, communication accuracy is severely degraded because of varying overlap between the firing rate oscillations of distracting inputs and the gain modulation in the receiving region. Secondly, the oscillatory modulation of the target input must be strong in order to achieve a high signal-to-noise ratio relative to stochastic spiking of individual neurons. Thus, whilst providing a quantitative demonstration of the power of coherent oscillatory gain modulation to flexibly control information flow, our results identify constraints imposed by the need to avoid interference between signals, and reveal a likely organizing principle for the structure of neural oscillations in the brain. PMID:23144603
Deterministic and stochastic CTMC models from Zika disease transmission
NASA Astrophysics Data System (ADS)
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
Hybrid approaches for multiple-species stochastic reaction-diffusion models
NASA Astrophysics Data System (ADS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
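For concreteness, a zero-dimensional caricature of the SPPT scheme evaluated above: the total parametrised tendency is multiplied by (1 + r), with r evolving as an AR(1) process in time (the smooth spatial pattern is omitted). The decorrelation time, amplitude, and tendency value are assumed, not the ECMWF settings.

```python
import numpy as np

rng = np.random.default_rng(9)

# SPPT-style multiplicative perturbation, single-column caricature.
dt, tau, sigma = 900.0, 6 * 3600.0, 0.5   # time step (s), 6 h decorrelation, amplitude
phi = np.exp(-dt / tau)                    # AR(1) autocorrelation per step
r, nsteps = 0.0, 96

for step in range(nsteps):
    # evolve the random pattern: AR(1) with stationary standard deviation sigma
    r = phi * r + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    r_clip = np.clip(r, -1.0, 1.0)         # keep the factor (1 + r) non-negative
    tendency = -2.0e-5                      # parametrised tendency (assumed, K/s)
    perturbed = (1.0 + r_clip) * tendency   # SPPT: multiplicative noise
    if step % 24 == 0:
        print(f"t = {step * dt / 3600:5.1f} h  r = {r_clip:+.2f}  "
              f"tendency = {perturbed:+.2e} K/s")
```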
Sato, Tatsuhiko; Furusawa, Yoshiya
2012-10-01
Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly because it neglects the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
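A toy Monte Carlo in the spirit of the double-stochastic calculation is sketched below: it samples Poisson event numbers per domain, gamma-distributed specific energies, and Poisson lethal-lesion counts, with survival defined as zero lethal lesions in the nucleus. The coefficients, the exponential single-event spectrum and the per-domain lesion yield A*z + B*z^2 are illustrative assumptions, not the PHITS-based implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

A, B = 0.0015, 0.0005   # per-domain lethal-lesion coefficients (Gy^-1, Gy^-2), invented
n_dom = 100             # domains per cell nucleus
zbar1 = 0.3             # mean specific energy per event in a domain (Gy)

def survival_fraction(dose, n_cells=2000):
    lam = dose / zbar1                             # mean number of events per domain
    k = rng.poisson(lam, (n_cells, n_dom))         # stochastic event numbers
    z = rng.gamma(np.maximum(k, 1e-12), zbar1)     # sum of k Exp(zbar1) event sizes
    z[k == 0] = 0.0                                # no events -> zero specific energy
    lesions = rng.poisson(A * z + B * z**2).sum(axis=1)   # lethal lesions per nucleus
    return (lesions == 0).mean()                   # a cell survives with no lethal lesions

for dose in (1.0, 2.0, 4.0):
    print(dose, "Gy ->", survival_fraction(dose))
```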
Closed-loop control of a fragile network: application to seizure-like dynamics of an epilepsy model
Ehrens, Daniel; Sritharan, Duluxan; Sarma, Sridevi V.
2015-01-01
It has recently been proposed that the epileptic cortex is fragile in the sense that seizures manifest through small perturbations in the synaptic connections that render the entire cortical network unstable. Closed-loop therapy could therefore entail detecting when the network goes unstable, and then stimulating with an exogenous current to stabilize the network. In this study, a non-linear stochastic model of a neuronal network was used to simulate both seizure and non-seizure activity. In particular, synaptic weights between neurons were chosen such that the network's fixed point is stable during non-seizure periods, and a subset of these connections (the most fragile) were perturbed to make the same fixed point unstable to model seizure events; the model randomly transitions between these two modes. The goal of this study was to measure spike train observations from this epileptic network and then apply a feedback controller that (i) detects when the network goes unstable, and then (ii) applies a state-feedback gain control input to the network to stabilize it. The stability detector is based on a 2-state (stable, unstable) hidden Markov model (HMM) of the network, and detects the transition from the stable mode to the unstable mode using the firing rate of the most fragile node in the network (which is the output of the HMM). When the unstable mode is detected, a state-feedback gain is applied to generate a control input to the fragile node, bringing the network back to the stable mode. Finally, when the network is detected as stable again, the feedback control input is switched off. High performance was achieved for the stability detector, and feedback control suppressed seizures within 2 s after onset. PMID:25784851
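A schematic of the detect-then-stabilise loop is sketched below, assuming Poisson spike counts with a low rate in the stable mode and a high rate in the unstable mode; the forward recursion of a 2-state HMM tracks P(unstable), and a feedback input switches on when that probability crosses a threshold. The rates, transition matrix, threshold and simple on/off control are illustrative simplifications of the paper's state-feedback design.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)

rates = np.array([5.0, 25.0])        # Poisson spikes/bin: [stable, unstable], invented
T = np.array([[0.99, 0.01],          # mode transition probabilities (uncontrolled)
              [0.02, 0.98]])
thresh = 0.9                         # detection threshold on P(unstable)

def forward_step(belief, count):
    """One HMM filter step: predict through T, update with the Poisson likelihood."""
    pred = belief @ T
    like = np.exp(-rates) * rates**count / factorial(count)
    post = pred * like
    return post / post.sum()

mode, belief, u, on_bins = 0, np.array([1.0, 0.0]), 0.0, 0
for t in range(300):
    # plant: the control input drives the unstable mode back to the stable one
    if mode == 0:
        mode = int(rng.random() < T[0, 1])
    else:
        mode = 0 if (u > 0 or rng.random() < T[1, 0]) else 1
    count = rng.poisson(rates[mode])         # observed spike count of the fragile node
    belief = forward_step(belief, count)
    u = 1.0 if belief[1] > thresh else 0.0   # switch the feedback input on/off
    on_bins += int(u > 0)
print("fraction of bins with control on:", on_bins / 300)
```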
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
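The structure of particle marginal Metropolis-Hastings (PMMH) can be conveyed with a toy model: a bootstrap particle filter supplies an unbiased estimate of the marginal likelihood, which drives an otherwise standard Metropolis-Hastings chain. The sketch below uses an invented one-parameter nonlinear state-space model with a flat prior, not the paper's Lotka-Volterra or auto-regulatory network applications.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(a, T=100):
    """Generate data from the toy model x_t = a*x_{t-1} + sin(x_{t-1}) + N(0, 0.25)."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + np.sin(x[t - 1]) + rng.normal(0.0, 0.5)
    return x + rng.normal(0.0, 1.0, T)          # observations y_t ~ N(x_t, 1)

def log_marginal(a, y, N=200):
    """Bootstrap particle filter estimate of log p(y | a)."""
    x = rng.normal(0.0, 1.0, N)                 # initial particle cloud
    ll = 0.0
    for t in range(1, len(y)):
        x = a * x + np.sin(x) + rng.normal(0.0, 0.5, N)      # propagate particles
        logw = -0.5 * (y[t] - x) ** 2                        # N(x_t, 1) log-likelihood
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                           # incremental evidence
        x = x[rng.choice(N, N, p=w / w.sum())]               # multinomial resampling
    return ll

y = simulate(a=0.7)
a, ll, chain = 0.5, log_marginal(0.5, y), []
for _ in range(500):                                         # PMMH outer chain
    a_new = a + rng.normal(0.0, 0.05)                        # symmetric RW proposal
    ll_new = log_marginal(a_new, y)
    if np.log(rng.random()) < ll_new - ll:                   # flat prior on a assumed
        a, ll = a_new, ll_new
    chain.append(a)
print("posterior mean of a ≈", np.mean(chain[100:]))
```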
Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas
2012-06-01
In this study, a 3D urban groundwater model is presented which serves to calculate multispecies contaminant transport in the subsurface on the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved using the finite element method. A sensitivity analysis is carried out for parameter identification of the flow, transport and reaction processes. Building on the latter, stochastic variation of the flow, transport and reaction input parameters combined with Monte Carlo simulation is used to calculate probabilities of pollutant occurrence in the domain. These probabilities can help identify future contamination hotspots and quantify their potential damage. Application and validation are demonstrated for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
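The probabilistic step can be illustrated independently of the FEM model. The sketch below samples flow, dispersion and decay parameters from assumed lognormal distributions, pushes each draw through a closed-form 1D steady-state advection-dispersion-decay solution, and estimates the probability that a receptor concentration exceeds a limit; all distributions, distances and the analytical surrogate are illustrative, not the Braunschweig model.

```python
import numpy as np

rng = np.random.default_rng(6)

x, C0, limit = 200.0, 1.0, 0.01            # receptor distance (m), source conc., limit
n = 10000
v = rng.lognormal(np.log(0.1), 0.5, n)     # seepage velocity (m/d), illustrative
D = rng.lognormal(np.log(5.0), 0.5, n)     # longitudinal dispersion (m^2/d)
k = rng.lognormal(np.log(0.005), 0.7, n)   # first-order decay rate (1/d)

# steady-state solution of D*C'' - v*C' - k*C = 0 with C(0) = C0, C(inf) = 0
C = C0 * np.exp(x * (v - np.sqrt(v**2 + 4.0 * k * D)) / (2.0 * D))
print("P(C > limit at receptor) ≈", (C > limit).mean())
```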
Tests of oceanic stochastic parameterisation in a seasonal forecast system.
NASA Astrophysics Data System (ADS)
Cooper, Fenwick; Andrejczuk, Miroslaw; Juricke, Stephan; Zanna, Laure; Palmer, Tim
2015-04-01
Our aim is to compare the relative impact of ocean initial-condition and model uncertainty on ocean forecast skill and reliability over seasonal timescales. We compare four oceanic stochastic parameterisation schemes applied in a 1x1 degree ocean model (NEMO) with a fully coupled T159 atmosphere (ECMWF IFS). The relative impacts upon the ocean of the resulting eddy-induced activity, wind forcing and typical initial-condition perturbations are quantified. Following the historical success of stochastic parameterisation in the atmosphere, two of the parameterisations tested were multiplicative in nature: a stochastic variation of the Gent-McWilliams scheme and a stochastic diffusion scheme. We also consider a surface flux parameterisation (similar to that introduced by Williams, 2012), and stochastic perturbation of the equation of state (similar to that introduced by Brankart, 2013). The amplitude of the stochastic term in the Williams (2012) scheme was set to the physically reasonable amplitude considered in that paper. The amplitude of the stochastic term in each of the other schemes was increased to the limits of model stability. As expected, variability was increased. Up to 1 month after initialisation, the ensemble spread induced by stochastic parameterisation is greater than that induced by the atmosphere, whilst being smaller than the initial-condition perturbations currently used at ECMWF. After 1 month, the wind forcing becomes the dominant source of model ocean variability, even at depth.
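A schematic of a multiplicative ocean perturbation is sketched below, assuming a 1D tracer diffusion step whose diffusivity is scaled by (1 + r) with r an AR(1) pattern, capped at the explicit scheme's stability limit in the spirit of increasing amplitude "to the limits of model stability". The grid, coefficients and clipping are illustrative; this is not the NEMO implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

nx, dt, dx, kap0 = 100, 0.1, 1.0, 1.0     # grid, time step, spacing, base diffusivity
phi, sigma = 0.95, 0.3                    # AR(1) memory and pattern amplitude
T = np.exp(-0.01 * (np.arange(nx) - nx / 2) ** 2)   # stand-in tracer field
r = np.zeros(nx)
for _ in range(500):
    r = phi * r + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal(nx)
    kap = kap0 * np.clip(1.0 + r, 0.1, None)        # multiplicative, kept positive
    kap = np.minimum(kap, 0.5 * dx**2 / dt)         # cap at the explicit stability limit
    flux = -0.5 * (kap[1:] + kap[:-1]) * np.diff(T) / dx   # interface fluxes
    T[1:-1] -= dt * np.diff(flux) / dx              # conservative interior update
print("tracer mass:", T.sum())
```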
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
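The matching step can be pictured as a one-parameter inverse problem. In the sketch below, a hypothetical monotone placeholder stands in for the stochastic radiative transfer forward model, and the admissible range of cloud aspect ratios is the set whose predicted direct flux agrees with the measurement within a tolerance; the placeholder function, measurement value and tolerance are all invented for illustration.

```python
import numpy as np

def direct_flux(gamma):
    """Hypothetical placeholder forward model: direct transmittance vs. aspect ratio."""
    return 0.6 + 0.3 * np.tanh(1.5 * (gamma - 1.0))

measured, tol = 0.75, 0.02                 # invented measurement and tolerance
grid = np.linspace(0.2, 3.0, 561)          # candidate cloud aspect ratios
ok = np.abs(direct_flux(grid) - measured) <= tol
print("admissible aspect ratios:", grid[ok].min(), "to", grid[ok].max())
```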
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rates and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by incorporating the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the model parameters are permitted to switch according to a continuous-time, observable Markov chain. This hybrid model can be used to describe certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
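As a numerical cross-check of what such formulas compute, the sketch below prices a discretely-sampled variance swap by Monte Carlo under a Heston variance process with a CIR short rate, using full-truncation Euler steps; the fair strike solves E[D_T (RV - K)] = 0, i.e. K = E[D_T RV] / E[D_T]. All parameter values are illustrative, and the regime-switching (Markov-modulated) layer is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(9)

Tmat, N, paths = 1.0, 252, 20000
kv, thv, sv, v0, rho = 2.0, 0.04, 0.3, 0.04, -0.7    # Heston: kappa, theta, sigma, v0, corr
kr, thr, sr, r0 = 1.2, 0.03, 0.1, 0.03               # CIR rate: kappa, theta, sigma, r0
dt = Tmat / N

S = np.full(paths, 100.0)
v = np.full(paths, v0)
r = np.full(paths, r0)
rv = np.zeros(paths)        # running sum of squared log-returns
ir = np.zeros(paths)        # running integral of the short rate
for _ in range(N):
    z1 = rng.standard_normal(paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(paths)
    z3 = rng.standard_normal(paths)
    vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)  # full truncation
    ret = (rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
    S *= np.exp(ret)
    rv += ret**2
    ir += rp * dt
    v += kv * (thv - vp) * dt + sv * np.sqrt(vp * dt) * z2
    r += kr * (thr - rp) * dt + sr * np.sqrt(rp * dt) * z3
D = np.exp(-ir)                                      # stochastic discount factor
K = (D * rv).mean() / D.mean() / Tmat * 100**2       # fair strike, variance points
print("fair variance strike ≈", K)
```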
Risley, John C.; Granato, Gregory E.
2014-01-01
An analysis of the use of grab sampling and nonstochastic upstream modeling methods was conducted to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the magnitude of the water-quality criterion for the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used, so careful selection of representative values is important.
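The dilution argument can be made concrete with a simplified stochastic mass balance of the kind SELDM automates (this is not SELDM itself): upstream and runoff flows and concentrations are drawn from assumed lognormal distributions, mixed, and compared against a criterion. All distribution parameters and the criterion value below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

n = 50000
Qu = rng.lognormal(np.log(500.0), 0.8, n)   # upstream flow (L/s), illustrative
Qh = rng.lognormal(np.log(20.0), 0.9, n)    # highway-runoff flow (L/s)
Cu = rng.lognormal(np.log(5.0), 0.6, n)     # upstream concentration (ug/L)
Ch = rng.lognormal(np.log(40.0), 0.8, n)    # runoff concentration (ug/L)

Cd = (Cu * Qu + Ch * Qh) / (Qu + Qh)        # mixed downstream concentration
criterion = 12.0                            # water-quality criterion (ug/L), illustrative
print("P(exceed criterion) ≈", (Cd > criterion).mean())
```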
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents, resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients, modelling lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed, and a stochastic model is developed.
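The Markov-modeling idea can be illustrated with a toy discrete-time chain: normal operation, a recoverable transient-error state, and an absorbing upset state, iterated over successive lightning exposures to yield an upset probability. The states and transition probabilities below are invented for illustration; the report's model was fitted to the 8080 upset test data.

```python
import numpy as np

# States: 0 = normal, 1 = recoverable transient error, 2 = upset (absorbing).
P = np.array([[0.990, 0.009, 0.001],        # invented transition probabilities
              [0.700, 0.200, 0.100],
              [0.000, 0.000, 1.000]])

p = np.array([1.0, 0.0, 0.0])               # start in normal operation
for _ in range(100):                        # 100 successive exposure intervals
    p = p @ P
print("P(upset within 100 exposures) ≈", p[2])
```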
Haas, Jessica R.; Thompson, Matthew P.; Tillery, Anne C.; Scott, Joe H.
2017-01-01
Wildfires can increase the frequency and magnitude of catastrophic debris flows. Integrated, proactive natural hazard assessment would therefore characterize landscapes based on the potential for the occurrence and interactions of wildfires and postwildfire debris flows. This chapter presents a new modeling effort that can quantify the variability surrounding a key input to postwildfire debris-flow modeling, the amount of watershed burned at moderate to high severity, in a prewildfire context. The use of stochastic wildfire simulation captures variability surrounding the timing and location of ignitions, fire weather patterns, and ultimately the spatial patterns of watershed area burned. Model results provide for enhanced estimates of postwildfire debris-flow hazard in a prewildfire context, and multiple hazard metrics are generated to characterize and contrast hazards across watersheds. Results can guide mitigation efforts by allowing planners to identify which factors may be contributing the most to the hazard rankings of watersheds.
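A highly simplified sketch of the stochastic step appears below: each simulated fire season draws an ignition count, per-fire burned areas and a moderate-to-high-severity fraction, yielding a distribution of watershed area burned at moderate or high severity, the key debris-flow input the chapter quantifies. Every distribution and parameter is an illustrative assumption, not output of a fire-spread model.

```python
import numpy as np

rng = np.random.default_rng(12)

watershed_area = 5000.0                     # ha, illustrative
seasons = 10000
burned_mh = np.zeros(seasons)
for s in range(seasons):
    n_fires = rng.poisson(0.3)              # ignitions affecting the watershed
    area = rng.lognormal(np.log(200.0), 1.0, n_fires)   # per-fire burned area (ha)
    sev = rng.beta(2.0, 3.0, n_fires)       # fraction burned at moderate/high severity
    burned_mh[s] = min((area * sev).sum(), watershed_area)
print("P(>10% of watershed at mod/high severity) ≈",
      (burned_mh > 0.1 * watershed_area).mean())
```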