Stochastic Turing patterns: analysis of compartment-based approaches.
Cao, Yang; Erban, Radek
2014-12-01
Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes for which stochastic Turing patterns are observed can be larger than those predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simple systems) that a modeler should not choose compartment sizes that are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns in which each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form differ both from the regions obtained by classical deterministic PDE-based models and from the results obtained for stochastic reaction-diffusion models that use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.
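The compartment-based (lattice-based) description mentioned in this abstract can be illustrated with a minimal Gillespie simulation of molecules hopping between neighbouring compartments at rate d = D/h². This is a sketch of the general technique only, not the authors' code; it handles a single species and pure diffusion, and all parameter values are illustrative:

```python
import random

def simulate_diffusion(n0, d, t_end, seed=0):
    """Gillespie simulation of pure diffusion on a 1-D chain of compartments.

    n0   : list with the initial molecule count in each compartment
    d    : jump rate between neighbouring compartments (D / h^2)
    t_end: simulated time horizon
    """
    rng = random.Random(seed)
    n = list(n0)
    k = len(n)
    t = 0.0
    while True:
        # propensity of each possible jump out of each compartment
        props = []
        for i in range(k):
            if i > 0:
                props.append((n[i] * d, i, i - 1))
            if i < k - 1:
                props.append((n[i] * d, i, i + 1))
        total = sum(p for p, _, _ in props)
        if total == 0:
            break
        t += rng.expovariate(total)
        if t > t_end:
            break
        # pick one jump proportionally to its propensity and execute it
        r = rng.uniform(0, total)
        acc = 0.0
        for p, src, dst in props:
            acc += p
            if r <= acc:
                n[src] -= 1
                n[dst] += 1
                break
    return n

final = simulate_diffusion([100, 0, 0, 0], d=1.0, t_end=5.0)
```

Reactions would be added as extra propensity entries alongside the diffusive jumps; the paper's point is that the choice of k (equivalently, the compartment size h) is itself a modelling decision.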
Jiménez-Hernández, Hugo; González-Barbosa, Jose-Joel; Garcia-Ramírez, Teresa
2010-01-01
This investigation demonstrates an unsupervised approach for modeling traffic flow and detecting abnormal vehicle behaviors at intersections. In the first stage, the approach reveals and records the different states of the system. These states are the result of coding and grouping the historical motion of vehicles as long binary strings. In the second stage, using sequences of the recorded states, a stochastic graph model based on a Markovian approach is built. A behavior is labeled abnormal when the current motion pattern cannot be recognized as any state of the system or when a particular sequence of states cannot be parsed with the stochastic model. The approach is tested with several sequences of images acquired from a vehicular intersection where the traffic flow and the traffic-light timing change continuously throughout the day. Finally, the low complexity and the flexibility of the approach make it reliable for use in real-time systems. PMID:22163616
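The two-part anomaly criterion described here — an unknown state, or a state sequence the Markov model cannot parse — can be sketched as follows. The state labels, history, and probability threshold are illustrative stand-ins for the binary-string states of the paper:

```python
from collections import defaultdict

def fit_markov(sequences):
    """Estimate transition probabilities from observed state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

def is_abnormal(seq, probs, known_states, threshold=1e-3):
    """Abnormal if the sequence visits an unknown state or contains a
    transition whose estimated probability falls below the threshold."""
    for s in seq:
        if s not in known_states:
            return True
    for a, b in zip(seq, seq[1:]):
        if probs.get(a, {}).get(b, 0.0) < threshold:
            return True
    return False

history = [["A", "B", "C", "A"], ["A", "B", "C", "A"], ["A", "C", "A"]]
model = fit_markov(history)
states = {s for seq in history for s in seq}
```

Both checks are O(length of the sequence), consistent with the low-complexity, real-time claim of the abstract.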
NASA Astrophysics Data System (ADS)
Xu, Mingdong; Wu, Fan; Leung, Henry
2009-09-01
Based on the stochastic delay differential equation (SDDE) modeling of neural networks, we propose an effective signal transmission approach along the neurons in such a network. Utilizing the linear relationship between the delay time and the variance of the SDDE system output, the transmitting side encodes a message as a modulation of the delay time and the receiving end decodes the message by tracking the delay time, which is equivalent to estimating the variance of the received signal. This signal transmission approach turns out to follow the principle of the spread spectrum technique used in wireless and wireline wideband communications but in the analog domain rather than digital. We hope the proposed method might help to explain some activities in biological systems. The idea can further be extended to engineering applications. The error performance of the communication scheme is also evaluated here.
Data-based stochastic subgrid-scale parametrization: an approach using cluster-weighted modelling.
Kwasniok, Frank
2012-03-13
A new approach for data-based stochastic parametrization of unresolved scales and processes in numerical weather and climate prediction models is introduced. The subgrid-scale model is conditional on the state of the resolved scales, consisting of a collection of local models. A clustering algorithm in the space of the resolved variables is combined with statistical modelling of the impact of the unresolved variables. The clusters and the parameters of the associated subgrid models are estimated simultaneously from data. The method is implemented and explored in the framework of the Lorenz '96 model using discrete Markov processes as local statistical models. Performance of the cluster-weighted Markov chain scheme is investigated for long-term simulations as well as ensemble prediction. It clearly outperforms simple parametrization schemes and compares favourably with another recently proposed subgrid modelling scheme also based on conditional Markov chains.
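The core idea — a subgrid model conditional on the resolved state, realized as a collection of local Markov chains — can be sketched with crude quantile bins standing in for the clustering algorithm and discretized subgrid states for the local models. Everything below (data, bin counts, function names) is an illustrative assumption, not the scheme of the paper:

```python
from collections import defaultdict

def fit_conditional_markov(resolved, subgrid, n_clusters=2, n_bins=2):
    """Fit one Markov chain over discretized subgrid states per cluster
    of the resolved variable (clusters here are simple value bins)."""
    lo, hi = min(resolved), max(resolved)

    def cluster(x):  # crude stand-in for a clustering algorithm
        return min(n_clusters - 1, int((x - lo) / (hi - lo + 1e-12) * n_clusters))

    s_lo, s_hi = min(subgrid), max(subgrid)

    def bin_s(s):  # discretize the subgrid tendency
        return min(n_bins - 1, int((s - s_lo) / (s_hi - s_lo + 1e-12) * n_bins))

    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for t in range(len(subgrid) - 1):
        counts[cluster(resolved[t])][bin_s(subgrid[t])][bin_s(subgrid[t + 1])] += 1

    model = {}
    for c, rows in counts.items():
        model[c] = {}
        for a, row in rows.items():
            total = sum(row.values())
            model[c][a] = {b: n / total for b, n in row.items()}
    return model

resolved = [0, 0, 0, 9, 9, 9, 0, 0, 9, 9]   # toy resolved-scale series
subgrid = [0, 1, 1, 1, 0, 0, 1, 1, 0, 1]    # toy subgrid-scale series
model = fit_conditional_markov(resolved, subgrid)
```

In the paper the clusters and the Markov parameters are estimated jointly; here they are fit sequentially for brevity.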
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach, RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which are then tracked by the RBPF scheme.
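The particle-filter belief update at the heart of this architecture can be illustrated with a plain bootstrap filter on a toy scalar random-walk model. This is not the Rao-Blackwellized look-ahead variant of the paper, just the baseline propagate/weight/resample cycle it builds on; model and parameters are assumptions:

```python
import math
import random

def particle_filter(observations, n_particles=500, q=1.0, r=1.0, seed=1):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q),
    y_t = x_t + N(0, r). Returns the filtered mean at each step."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # propagate each particle through the stochastic dynamics
        particles = [x + rng.gauss(0, math.sqrt(q)) for x in particles]
        # weight by the observation likelihood
        weights = [math.exp(-0.5 * (y - x) ** 2 / r) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

est = particle_filter([0.1, 0.5, 1.0, 1.5, 2.0])
```

Rao-Blackwellization would replace the continuous particles with per-mode Kalman filters, sampling only the discrete mode — the source of the computational savings the paper exploits.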
Holistic irrigation water management approach based on stochastic soil water dynamics
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Recognizing the gap between fundamental unsaturated-zone transport processes and soil and water management, owing to the limited effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects of irrigation management by optimizing the irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the soil unsaturated and saturated zones is treated semi-analytically in two steps. At the first step, analytical expressions are derived for the expected values of crop yield, total water requirement, and soil water balance components, assuming a fixed shallow water table level; at the second step, a numerical Newton-Raphson procedure is employed to update the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can help bridge the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing-season time scale, one can use the developed model in large-scale systems such as irrigation districts and agricultural catchments.
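The PSO component used to solve the non-convex program can be sketched in a few lines. This is a generic, minimal PSO on a toy quadratic objective, with standard (assumed) coefficient values, not the coupled optimization-simulation model of the study:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization: each particle tracks its own
    best position (pbest) and is attracted to the swarm best (gbest)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, dim=2)
```

In the study, `objective` would be the (expensive) eco-hydrological simulation evaluating an irrigation policy, which is why a derivative-free method like PSO is attractive for the non-convex program.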
Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach
Kaya, Yavuz; Safak, Erdal
2008-07-08
Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.
Stochastic approach to equilibrium and nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Tomé, Tânia; de Oliveira, Mário J.
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
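The non-negative entropy production rate for a discrete master equation can be written in the bilinear form the abstract refers to, vanishing exactly when detailed balance holds. The sketch below is a generic textbook illustration of that definition (with assumed two- and three-state rate matrices), not the authors' code:

```python
import math

def entropy_production(p, W):
    """Entropy production rate for a continuous-time master equation.

    p : probability distribution over states
    W : rate matrix with W[i][j] the transition rate j -> i
    Bilinear form: 0.5 * sum_{i,j} (J_ij - J_ji) * ln(J_ij / J_ji),
    where J_ij = W[i][j] p[j]. Non-negative; zero iff detailed balance.
    """
    n = len(p)
    s = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and W[i][j] > 0 and W[j][i] > 0:
                flux = W[i][j] * p[j]
                rev = W[j][i] * p[i]
                s += 0.5 * (flux - rev) * math.log(flux / rev)
    return s

# Two-state system at its detailed-balance stationary state: zero production.
W_eq = [[0.0, 1.0], [2.0, 0.0]]
sigma_eq = entropy_production([1 / 3, 2 / 3], W_eq)

# Three-state cycle with asymmetric rates: a nonequilibrium steady state.
W_cycle = [[0.0, 1.0, 2.0], [2.0, 0.0, 1.0], [1.0, 2.0, 0.0]]
sigma_neq = entropy_production([1 / 3, 1 / 3, 1 / 3], W_cycle)
```

Each term of the sum is a product of a thermodynamic flux (J_ij − J_ji) and its conjugate force ln(J_ij/J_ji), which is the bilinear structure underlying the Onsager coefficients discussed in the paper.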
NASA Astrophysics Data System (ADS)
Wang, Y. Y.; Huang, G. H.; Wang, S.; Li, W.; Guan, P. B.
2016-08-01
In this study, a risk-based interactive multi-stage stochastic programming (RIMSP) approach is proposed through incorporating the fractile criterion method and chance-constrained programming within a multi-stage decision-making framework. RIMSP is able to deal with dual uncertainties expressed as random boundary intervals that exist in the objective function and constraints. Moreover, RIMSP is capable of reflecting dynamics of uncertainties, as well as the trade-off between the total net benefit and the associated risk. A water allocation problem is used to illustrate applicability of the proposed methodology. A set of decision alternatives with different combinations of risk levels applied to the objective function and constraints can be generated for planning the water resources allocation system. The results can help decision makers examine potential interactions between risks related to the stochastic objective function and constraints. Furthermore, a number of solutions can be obtained under different water policy scenarios, which are useful for decision makers to formulate an appropriate policy under uncertainty. The performance of RIMSP is analyzed and compared with an inexact multi-stage stochastic programming (IMSP) method. Results of comparison experiment indicate that RIMSP is able to provide more robust water management alternatives with less system risks in comparison with IMSP.
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2015-01-01
Earthquake ground motion recordings are scarce in the central and eastern U.S. (CEUS) for large magnitude events and at close distances. We use two different simulation approaches, a deterministic physics-based model and a stochastic model, to simulate recordings from the 2011 Mineral, Virginia, magnitude 5.8 earthquake in the CEUS. We then use the 2001 Bhuj, India, magnitude 7.6 earthquake as a tectonic analog for a large CEUS earthquake and modify our simulations to develop models for the generation of large magnitude earthquakes in the CEUS. Both models show a good fit to the observations from 0.1 to 10 Hz, and show a faster fall-off of the acceleration spectra at distances beyond 500 km compared with ground motion prediction equations (GMPEs) for a magnitude 7.6 event.
Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system
NASA Astrophysics Data System (ADS)
Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng
2016-03-01
Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction.
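The Gillespie algorithm that serves as the accuracy benchmark in this paper can be sketched for the simplest case: N independent two-state ion channels. The rate constants and channel count below are illustrative, and real Hodgkin-Huxley gating involves multi-state voltage-dependent schemes rather than this fixed-rate toy:

```python
import random

def gillespie_channel(n_channels, k_open, k_close, t_end, seed=0):
    """Exact (Gillespie) simulation of N independent two-state channels:
    closed --k_open--> open,  open --k_close--> closed.
    Returns the trajectory [(time, number_open), ...]."""
    rng = random.Random(seed)
    n_open = 0
    t = 0.0
    traj = [(0.0, 0)]
    while True:
        a_open = k_open * (n_channels - n_open)   # total opening propensity
        a_close = k_close * n_open                # total closing propensity
        total = a_open + a_close
        if total == 0:
            break
        t += rng.expovariate(total)               # exponential waiting time
        if t > t_end:
            break
        if rng.uniform(0, total) < a_open:
            n_open += 1
        else:
            n_open -= 1
        traj.append((t, n_open))
    return traj

traj = gillespie_channel(100, k_open=1.0, k_close=1.0, t_end=50.0)
tail_avg = sum(n for t, n in traj if t > 25.0) / len([1 for t, n in traj if t > 25.0])
```

Because the cost per event grows with channel number, exact simulation becomes expensive for whole axons — the motivation for the generating-function accelerations proposed in the paper.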
NASA Astrophysics Data System (ADS)
Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.
2012-04-01
In arid and semi-arid regions where water is scarce, farmers rely heavily on irrigation in order to grow crops and to produce agricultural commodities. The variable and often severely limited water supply poses a serious challenge for farmers and demands sophisticated irrigation strategies that allow an efficient management of the available water resources. The general aim is to increase water productivity (WP), and one strategy to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is by defining soil-water-status-specific threshold values (either in soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance, and yet thresholds are often chosen by trial and error and are thus unreliable. Hence, for CDI to be effective, systematic investigations for deriving reliable threshold values that account for different CDI strategies are needed. In this contribution, a method is presented that uses a simulation-based stochastic approach for estimating threshold values with high reliability. The approach consists of a weather generator offering statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under limited water supply, and a crop model for simulating plant growth and water consumption. The study focuses on threshold values of soil tension for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based values lies in their universal and soil-type-independent applicability. The investigated CDI strategies comprised schedules of constant threshold values, crop-development-stage-dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as reference. The obtained threshold values were then tested in field
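Threshold-triggered irrigation of the kind investigated here can be sketched with a simple soil-water bucket model. Note the simplifications: this uses a moisture (storage) threshold rather than the soil-tension thresholds the study favours, the weather series is deterministic rather than generated stochastically, and all parameter values are illustrative:

```python
def irrigate_season(rain, et, threshold, capacity=100.0, depth=30.0):
    """Soil-water bucket model with threshold-triggered irrigation.

    rain, et  : daily rainfall and evapotranspiration depths (mm)
    threshold : storage level (mm) below which an irrigation is triggered
    capacity  : bucket (root-zone) storage capacity (mm)
    depth     : fixed irrigation application depth (mm)
    Returns (final storage, total irrigation applied).
    """
    storage = capacity
    applied = 0.0
    for r, e in zip(rain, et):
        storage = max(min(capacity, storage + r) - e, 0.0)
        if storage < threshold:          # trigger an irrigation cycle
            storage = min(capacity, storage + depth)
            applied += depth
    return storage, applied

# A rainless 10-day spell with constant crop water use.
storage, applied = irrigate_season([0.0] * 10, [10.0] * 10, threshold=40.0)
```

In the study, an optimizer searches over such thresholds (per crop development stage) against stochastically generated weather, scoring each candidate with a crop model — the bucket above merely shows where the threshold enters the control loop.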
Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography
2013-01-01
Background The conventional strain-based algorithm has been widely utilized in clinical practice. However, it can only provide relative information about tissue stiffness, whereas exact tissue stiffness values would be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as the process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimation of the kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are computed simultaneously through an extended Kalman filter (EKF). Results In the following experiments the accuracy and robustness of this filtering framework are first evaluated on synthetic data in controlled conditions, and the performance of the framework is then evaluated on real data collected from an elastography phantom and from patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential of providing the distribution of elasticity under a proper biomechanical model constraint. We address the model
A stochastic approach to open quantum systems.
Biele, R; D'Agosta, R
2012-07-11
Stochastic methods are ubiquitous in a variety of fields, ranging from physics to economics and mathematics. In many cases, in the investigation of natural processes, stochasticity arises every time one considers the dynamics of a system in contact with a somewhat bigger system, an environment with which it is considered to be in thermal equilibrium. Any small fluctuation of the environment has some random effect on the system. In physics, stochastic methods have been applied to the investigation of phase transitions, thermal and electrical noise, thermal relaxation, quantum information, Brownian motion and so on. In this review, we will focus on the so-called stochastic Schrödinger equation. This is useful as a starting point to investigate the dynamics of open quantum systems capable of exchanging energy and momentum with an external environment. We discuss in some detail the general derivation of a stochastic Schrödinger equation and some of its recent applications to spin thermal transport, thermal relaxation, and Bose-Einstein condensation. We thoroughly discuss the advantages of this formalism with respect to the more common approach in terms of the reduced density matrix. The applications discussed here constitute only a few examples of a much wider range of applicability.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a technique based on the chemical master equation with the least squares approach for parameter estimation in models of ordinary differential equations (ODEs). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameters and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death model and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
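The exact distribution computed by the paper's factoring algorithm can be shown on a toy network by the complete-enumeration baseline it is designed to beat: enumerate every joint outcome of the (independent, discrete) arc lengths, solve a deterministic shortest-path problem for each, and accumulate probabilities. Network and distributions below are illustrative:

```python
from collections import defaultdict
from itertools import product

def dijkstra(nodes, lengths, source, sink):
    """Deterministic shortest path for one realization of arc lengths."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0.0
    unvisited = set(nodes)
    while unvisited:
        u = min(unvisited, key=lambda n: dist[n])
        unvisited.discard(u)
        for (a, b), w in lengths.items():
            if a == u and dist[u] + w < dist[b]:
                dist[b] = dist[u] + w
    return dist[sink]

def shortest_path_distribution(nodes, arcs, source, sink):
    """arcs: {(u, v): [(length, prob), ...]} with independent arc lengths.
    Returns {path_length: probability} by complete enumeration."""
    names = list(arcs)
    result = defaultdict(float)
    for outcome in product(*(arcs[a] for a in names)):
        prob = 1.0
        lengths = {}
        for a, (w, p) in zip(names, outcome):
            prob *= p
            lengths[a] = w
        result[dijkstra(nodes, lengths, source, sink)] += prob
    return dict(result)

arcs = {
    ("s", "a"): [(1, 0.5), (2, 0.5)],
    ("a", "t"): [(1, 1.0)],
    ("s", "t"): [(4, 1.0)],
}
dist = shortest_path_distribution(["s", "a", "t"], arcs, "s", "t")
```

Enumeration is exponential in the number of stochastic arcs; conditional factoring decomposes the network into smaller subnetworks so that far fewer deterministic subproblems need to be solved.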
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled similarly using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
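The two risk measures named in the abstract can be estimated directly from simulated execution costs: the expected cost is the sample mean, and CVaR at level α is (in the common sample estimator) the mean of the worst (1 − α) fraction of outcomes. The cost distribution below is an illustrative assumption:

```python
import random
import statistics

def expected_cost_and_cvar(cost_samples, alpha=0.95):
    """Sample estimates of expected cost and CVaR_alpha, the mean of the
    worst (1 - alpha) fraction of simulated costs."""
    xs = sorted(cost_samples)
    k = int(len(xs) * alpha)
    return statistics.mean(xs), statistics.mean(xs[k:])

# Simulated execution costs for one candidate (parametric) strategy.
rng = random.Random(42)
costs = [rng.gauss(100, 10) for _ in range(10000)]
mean_cost, cvar = expected_cost_and_cvar(costs, alpha=0.95)
```

In the parametric approach, an outer optimizer searches over strategy coefficients, re-evaluating such Monte Carlo estimates (plus penalty terms for constraints) for each candidate.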
Stochastic Lagrangian Particle Approach to Fractal Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Zhang, Xicheng
2012-04-01
In this article we study the fractal Navier-Stokes equations by using the stochastic Lagrangian particle path approach in Constantin and Iyer (Comm Pure Appl Math LXI:330-345, 2008). More precisely, a stochastic representation for the fractal Navier-Stokes equations is given in terms of stochastic differential equations driven by Lévy processes. Based on this representation, a self-contained proof for the existence of a local unique solution for the fractal Navier-Stokes equation with initial data in the Sobolev space W^{1,p} is provided, and in the case of two dimensions or large viscosity, the existence of global solutions is also obtained. In order to obtain the global existence in any dimensions for large viscosity, the gradient estimates for Lévy processes with time dependent and discontinuous drifts are proved.
Bieda, Bogusław
2014-05-15
The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the Crystal Ball® (CB) software, an add-in for Microsoft® Excel spreadsheet models, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM), and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied across the steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management.
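The core of such an MC uncertainty analysis — sample each normally distributed inventory input, aggregate, and summarize over many trials — can be sketched without any spreadsheet add-in. The product names, means, and standard deviations below are illustrative assumptions, not MSP data:

```python
import random
import statistics

def monte_carlo_inventory(inputs, n_trials=10000, seed=7):
    """Propagate normally distributed input uncertainty through a simple
    additive inventory total.

    inputs: {product_name: (mean, standard_deviation)}
    Returns the mean and standard deviation of the simulated totals.
    """
    rng = random.Random(seed)
    totals = [
        sum(rng.gauss(m, s) for m, s in inputs.values())
        for _ in range(n_trials)
    ]
    return statistics.mean(totals), statistics.stdev(totals)

# Hypothetical inventory inputs (kt/year): (mean, sd) per product.
inputs = {"steel": (1000.0, 50.0), "coke": (400.0, 30.0), "sinter": (700.0, 40.0)}
mean_total, sd_total = monte_carlo_inventory(inputs)
```

For independent normal inputs the analytic answer is known (mean 2100, sd √(50² + 30² + 40²) ≈ 70.7 here), which makes a useful sanity check before moving to correlated or non-normal inventory data.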
Optimality of collective choices: a stochastic approach.
Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L
2003-09-01
Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different food sources is developed to account for the fluctuations inherent in the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is maximal, whatever the difference between the two sources might be. In terms of colony size, large colonies focus their activity on one source more easily. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a larger trail amount. These properties, due to the stochasticity of the recruitment process, can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251
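The amplification mechanism behind such collective choices is commonly modelled with a nonlinear choice function of the pheromone amounts on the two trails. The sketch below uses the classical two-source choice function with assumed values of k and n and a quality-proportional reinforcement rule; it illustrates the positive-feedback selection effect, not the specific model of this paper:

```python
import random

def choice_probability(x, y, k=6.0, n=2.0):
    """Probability of choosing source 1 given pheromone amounts x and y
    (classical nonlinear choice function; k, n are assumed values)."""
    return (k + x) ** n / ((k + x) ** n + (k + y) ** n)

def simulate_recruitment(q1, q2, n_ants=1000, seed=3):
    """Each ant picks a source stochastically and reinforces it with a
    pheromone quantity proportional to the source quality (q1, q2).
    Returns the fraction of ants that chose source 1."""
    rng = random.Random(seed)
    x = y = 0.0
    picks1 = 0
    for _ in range(n_ants):
        if rng.random() < choice_probability(x, y):
            x += q1
            picks1 += 1
        else:
            y += q2
    return picks1 / n_ants

# Strongly asymmetric qualities: positive feedback locks onto the rich source.
share_rich = simulate_recruitment(10.0, 0.1)
```

With nearly equal qualities the same dynamics can lock onto either source — the fluctuation-driven symmetry breaking that motivates the stochastic treatment in the paper.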
Stochastic approach to flat direction during inflation
Kawasaki, Masahiro; Takesako, Tomohiro
2012-08-01
We revisit the time evolution of a system of a flat and a non-flat direction during inflation. In order to take quantum noise into account in the analysis, we rely on the stochastic formalism and solve the coupled Langevin equations numerically. We focus on a class of models in which a tree-level Hubble-induced mass is not generated. Although the non-flat directions can in principle block the growth of the flat direction's variance, the blocking effects are suppressed by the effective masses of the non-flat directions. We find that the fate of the flat direction during inflation is determined by one-loop radiative corrections and non-renormalizable terms, as usually considered, if we remove the zero-point fluctuation from the noise terms.
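Coupled Langevin equations of this kind are typically integrated with the Euler-Maruyama scheme. The sketch below is a generic integrator for a vector process dX = f(X) dt + σ dW with additive noise (the inflationary application would supply its own drift and an e-fold time variable); all names and parameters are illustrative:

```python
import math
import random

def euler_maruyama(drift, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama integration of dX = drift(X) dt + sigma dW for a
    vector process X with independent additive noise on each component."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(n_steps):
        f = drift(x)
        # Wiener increments have standard deviation sqrt(dt)
        x = [xi + f[i] * dt + sigma * rng.gauss(0, math.sqrt(dt))
             for i, xi in enumerate(x)]
    return x

# Noise switched off: two decoupled relaxing modes decay deterministically.
x_det = euler_maruyama(lambda x: [-xi for xi in x], sigma=0.0,
                       x0=[1.0, 2.0], dt=0.01, n_steps=1000)

# With noise: the same system fluctuates around zero.
x_sto = euler_maruyama(lambda x: [-xi for xi in x], sigma=0.5,
                       x0=[1.0, 2.0], dt=0.01, n_steps=1000)
```

Quantities such as the flat direction's variance are then estimated by averaging many independent realizations (different seeds) of such trajectories.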
NASA Astrophysics Data System (ADS)
Oprisan, Sorinel Adrian
2001-11-01
There has been increasing theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We prove, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of the aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also describe the intellectual heritage that guided our research, as well as possible future developments.
Symmetries of stochastic differential equations: A geometric approach
NASA Astrophysics Data System (ADS)
De Vecchi, Francesco C.; Morando, Paola; Ugolini, Stefania
2016-06-01
A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two-dimensional symmetric ordinary differential equation and to the case of two-dimensional Brownian motion.
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. Particles of the same class cooperate among themselves, while particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system, and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity in comparison with other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
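The walk-on-a-graph idea can be illustrated with a much simpler cousin of the proposed model: classify each unlabeled node by which labeled node its random walk is absorbed at first, estimated over repeated walks. This sketch deliberately omits the preferential movement and the cooperative-competitive particle dynamics that are the paper's actual contribution; the graph, labels, and walk caps are assumptions.

```python
import random

def label_propagation(edges, labels, max_steps=2000, n_walks=200, seed=8):
    """Random-walk label spreading on an undirected graph.

    edges: list of (u, v) pairs; labels: dict node -> class label.
    Each unlabeled node launches n_walks uniform random walks; the node is
    assigned the class of the labeled node its walks hit first, by majority.
    """
    random.seed(seed)
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    out = dict(labels)
    for node in adj:
        if node in labels:
            continue
        votes = {}
        for _ in range(n_walks):
            cur = node
            for _ in range(max_steps):
                cur = random.choice(adj[cur])
                if cur in labels:  # walk absorbed at a labeled node
                    votes[labels[cur]] = votes.get(labels[cur], 0) + 1
                    break
        out[node] = max(votes, key=votes.get)
    return out
```

On two triangles joined by a bridge edge, nodes inherit the label of their own cluster's seed, the basic behaviour the particle model refines.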
Two Different Approaches to Nonzero-Sum Stochastic Differential Games
Rainer, Catherine
2007-06-15
We make the link between two approaches to Nash equilibria for nonzero-sum stochastic differential games: the first one using backward stochastic differential equations and the second one using strategies with delay. We prove that, when both exist, the two notions of Nash equilibria coincide.
Stochastic modelling of evaporation based on copulas
NASA Astrophysics Data System (ADS)
Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko
2015-04-01
Evapotranspiration is an important process in the water cycle that represents a considerable amount of moisture lost through evaporation from the soil surface and transpiration from plants in a watershed. Therefore, an accurate estimate of evapotranspiration rates is necessary, along with precipitation data, for running hydrological models. Often, daily reference evapotranspiration is modelled based on the Penman, Priestley-Taylor or Hargreaves equation. However, each of these models requires extensive input data, such as daily mean temperature, wind speed, relative humidity and solar radiation. Yet, in design studies, such data are unavailable when stochastically generated time series of precipitation are used to force a hydrological model. In the latter case, an alternative modelling approach is needed that allows for generating evapotranspiration data consistent with the accompanying precipitation data. This contribution presents such an approach, in which the statistical dependence between evapotranspiration, temperature and precipitation is described by three- and four-dimensional vine copulas. Based on a case study of 72 years of evapotranspiration, temperature and precipitation data observed in Uccle, Belgium, it was found that canonical vine copulas (C-vines) in which bivariate Frank copulas are employed perform very well in preserving the dependencies between variables. While four-dimensional C-vine copulas performed best in simulating time series of evapotranspiration, a three-dimensional C-vine copula (relating evapotranspiration, daily precipitation depth and temperature) still allows for modelling evapotranspiration, though with larger error statistics.
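As a sketch of the bivariate building block named above, the following samples from a Frank copula by conditional inversion. The actual study chains such pairs into three- and four-dimensional C-vines fitted to data; the θ value and sample size here are arbitrary.

```python
import math
import random

def frank_sample(theta, n, seed=42):
    """Draw n pairs (u, v) from a bivariate Frank copula.

    Conditional inversion: draw u, w uniform, then solve C_u(v | u) = w
    for v, giving v = -(1/theta) ln(1 + w (e^-theta - 1) / (w + (1-w) e^{-theta u})).
    theta > 0 yields positive dependence between the uniform marginals.
    """
    random.seed(seed)
    pairs = []
    for _ in range(n):
        u, w = random.random(), random.random()
        num = w * (math.exp(-theta) - 1.0)
        den = w + (1.0 - w) * math.exp(-theta * u)
        v = -math.log(1.0 + num / den) / theta
        pairs.append((u, v))
    return pairs
```

Transforming u and v through the fitted marginal distributions of, say, precipitation depth and evapotranspiration would give dependent synthetic series, the mechanism the vine construction generalizes to higher dimensions.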
Stochastic approach to efficient design of belt conveyor networks
Sevim, H.
1985-07-01
Currently, the design of belt conveyor networks is based either on deterministic production assumptions or on simulation models. In this research project, the stochastic process at the coal face is formulated using a semi-Markovian technique, and the result is used as input to a computerized heuristic design model. The author has previously used a semi-Markovian process to analyze longwall and room-and-pillar production operations. Results indicated that coal flow in the section belt of a room-and-pillar operation would be expected only 20% of the time in a steady-state operation mode. Similarly, longwall face operations indicated 35 to 40% coal flow under steady-state conditions. In the present study, similar data from several production sections are used to compute the probabilities of different quantities of coal flowing on the belt in the submain entries at any given time during a shift. Depending upon the probabilities of coal flows on belts in sections and in submain and main entries, the appropriate haulage components, such as belt width, motor horsepower, idlers, and belt speed, are selected by a computerized model. After the development of this algorithm, now in progress, results of a case study undertaken at an existing coal mine in the Illinois Coal Basin using this stochastic approach will be compared with those obtained using an existing belt haulage system design approach.
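The "coal flows only 20% of the time" figure can be reproduced in spirit with a two-state semi-Markov sketch: alternating flow and idle sojourns with assumed exponential holding times (the study estimates such statistics from production data; the distributions and means below are illustrative).

```python
import random

def flow_fraction(mean_flow=4.0, mean_idle=16.0, n_cycles=20000, seed=7):
    """Long-run fraction of time coal is on the section belt.

    A two-state semi-Markov process alternates 'flowing' and 'idle'
    sojourns with exponential holding times (assumed). By renewal theory
    the fraction tends to mean_flow / (mean_flow + mean_idle) = 0.2 here.
    """
    random.seed(seed)
    t_flow = sum(random.expovariate(1.0 / mean_flow) for _ in range(n_cycles))
    t_idle = sum(random.expovariate(1.0 / mean_idle) for _ in range(n_cycles))
    return t_flow / (t_flow + t_idle)
```

Superposing several such section processes gives the probability distribution of total coal on a submain belt at any instant, which is what drives the belt width and motor sizing step.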
NASA Astrophysics Data System (ADS)
Coppola, Antonio; Comegna, Alessandro; Dragonetti, Giovanna; Lamaddalena, Nicola; Zdruli, Pandi
2013-04-01
modelling approaches have been developed at small space scales. Their extension to the applicative macroscale of the regional model is not a simple task, mainly because of the heterogeneity of vadose zone properties, as well as the non-linearity of hydrological processes. Besides, one of the problems when applying distributed models is that the spatial and temporal scales of the input data vary over a wide range and are not always consistent with the model structure. Under these conditions, a strictly deterministic response to questions about the fate of a pollutant in the soil is impossible. At best, one may answer "this is the average behaviour within this uncertainty band". Consequently, the extension of these equations to account for regional-scale processes requires that the uncertainties of the outputs be taken into account if the pollution vulnerability maps that may be drawn are to be used as agricultural management tools. A map generated without a corresponding map of associated uncertainties has no real utility. The stochastic stream-tube approach is frequently used to model water flux and solute transport through the vadose zone at applicative scales. This approach considers the field soil as an ensemble of parallel and statistically independent tubes, assuming only vertical flow. The stream-tube approach is generally used in a probabilistic framework. Each stream tube defines local flow properties that are assumed to vary randomly between the different stream tubes. Thus, the approach allows average water and solute behaviour to be described, along with the associated uncertainty bands. These stream tubes are usually considered to have parameters that are vertically homogeneous, which would be justified by the large difference between the horizontal and vertical extent of the spatial applicative scale. Vertical variability is generally overlooked. Obviously, all the model outputs are conditioned by this assumption.
The latter, in turn, is more dictated by
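The stream-tube idea described in this record can be illustrated as a Monte Carlo over independent vertical tubes. The lognormal velocity distribution and all parameter values below are assumptions for illustration only; a real application would derive the local flow properties from measured soil hydraulic data.

```python
import math
import random
import statistics

def stream_tube_arrival(depth=1.0, n_tubes=5000, v_mean=0.05, v_cv=0.5, seed=3):
    """Ensemble of parallel, statistically independent stream tubes.

    Each tube gets an independent random pore-water velocity (lognormal,
    mean v_mean, coefficient of variation v_cv; assumed) and moves a solute
    pulse purely vertically. Returns the mean arrival time at `depth` and a
    simple 90% uncertainty band, i.e. 'average behaviour within this
    uncertainty band'.
    """
    random.seed(seed)
    sigma2 = math.log(1.0 + v_cv ** 2)
    mu = math.log(v_mean) - sigma2 / 2.0
    times = sorted(depth / random.lognormvariate(mu, math.sqrt(sigma2))
                   for _ in range(n_tubes))
    lo, hi = times[int(0.05 * n_tubes)], times[int(0.95 * n_tubes)]
    return statistics.mean(times), (lo, hi)
```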
Absolute phase image reconstruction: a stochastic nonlinear filtering approach.
Leitão, J N; Figueiredo, M A
1998-01-01
This paper formulates and proposes solutions to the problem of estimating/reconstructing the absolute (not simply modulo-2pi) phase of a complex random field from noisy observations of its real and imaginary parts. This problem is representative of a class of important imaging techniques such as interferometric synthetic aperture radar, optical interferometry, magnetic resonance imaging, and diffraction tomography. We follow a Bayesian approach; thus, not only a probabilistic model of the observation mechanism but also prior knowledge concerning the (phase) image to be reconstructed is needed. We take as prior a nonsymmetrical half-plane autoregressive (NSHP AR) Gauss-Markov random field (GMRF). Based on a reduced-order state-space formulation of the (linear) NSHP AR model and on the (nonlinear) observation mechanism, a recursive stochastic nonlinear filter is derived. The corresponding estimates are compared with those obtained by the extended Kalman-Bucy filter, a classical linearizing approach to the same problem. A set of examples illustrates the effectiveness of the proposed approach. PMID:18276299
Stochastic approach to the molecular counting problem in superresolution microscopy
Rollins, Geoffrey C.; Shin, Jae Yen; Bustamante, Carlos; Pressé, Steve
2015-01-01
Superresolution imaging methods, now widely used to characterize biological structures below the diffraction limit, are poised to reveal in quantitative detail the stoichiometry of protein complexes in living cells. In practice, the photophysical properties of the fluorophores used as tags in superresolution methods have posed a severe theoretical challenge toward achieving this goal. Here we develop a stochastic approach to enumerating fluorophores in a diffraction-limited area measured by superresolution microscopy. The method is a generalization of the aggregated Markov methods developed in the ion-channel literature for studying gating dynamics. We show that the method accurately and precisely enumerates fluorophores in simulated data while simultaneously determining the kinetic rates that govern their stochastic photophysics, which improves the prediction's accuracy. This stochastic method overcomes several critical limitations of temporal thresholding methods. PMID:25535361
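A toy version of the counting problem can make the setting concrete. Each fluorophore below is a three-state Markov chain (on, dark, bleached) with assumed per-frame transition probabilities; dividing the total number of observed on-periods by their expected number per fluorophore gives a crude moment-style estimate, whereas the paper uses a full aggregated-Markov likelihood that also infers the rates.

```python
import random

def estimate_fluorophores(n_true=20, p_dark=0.3, p_return=0.2,
                          p_bleach=0.05, n_frames=2000, seed=5):
    """Toy blinking model and moment-style count estimate.

    Per frame, an 'on' fluorophore bleaches with prob p_bleach or goes dark
    with prob p_dark; a 'dark' one returns to 'on' with prob p_return
    (all probabilities assumed). The number of on-periods per fluorophore
    is geometric with mean (p_bleach + p_dark) / p_bleach, so dividing the
    total on-period count by that mean estimates the true number.
    """
    random.seed(seed)
    total_on_periods = 0
    for _ in range(n_true):
        state, periods = "on", 1  # each fluorophore starts in an on-period
        for _ in range(n_frames):
            if state == "on":
                r = random.random()
                if r < p_bleach:
                    break  # photobleached for good
                if r < p_bleach + p_dark:
                    state = "dark"
            elif random.random() < p_return:
                state, periods = "on", periods + 1
        total_on_periods += periods
    expected_per_fluor = (p_bleach + p_dark) / p_bleach
    return total_on_periods / expected_per_fluor
```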
A stochastic physical system approach to modeling river water quality
NASA Astrophysics Data System (ADS)
Curi, W. F.; Unny, T. E.; Kay, J. J.
1995-06-01
In this paper, concepts of network thermodynamics are applied to a river water quality model based on the Streeter-Phelps equations, in order to identify the corresponding physical components and their topology. The randomness in the parameters, input coefficients, and initial conditions is then modeled by Gaussian white noise. From the stochastic components of the physical-system description of the problem and concepts of physical system theory, a set of stochastic differential equations can be generated automatically in a computer, and recent developments in the automatic formulation of moment equations based on Itô calculus can be applied. This procedure is illustrated through the solution of an example stochastic river water quality problem, and it is also shown how other related problems with different configurations can be solved automatically using a single software package.
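A minimal Euler-Maruyama sketch of a stochastic Streeter-Phelps pair (BOD L and oxygen deficit D) is shown below. The multiplicative noise placement and all rate constants are assumptions for illustration, not the paper's formulation, which generates the SDE and moment equations automatically.

```python
import math
import random

def streeter_phelps_sde(kd=0.35, ka=0.7, L0=10.0, D0=2.0, t_end=10.0,
                        dt=0.01, sigma=0.05, seed=11):
    """One Euler-Maruyama path of a stochastic Streeter-Phelps system.

    dL = -kd L dt + sigma L dW1          (BOD decay)
    dD = (kd L - ka D) dt + sigma D dW2  (oxygen deficit)
    Rate constants and noise structure are assumed. Returns final (L, D).
    """
    random.seed(seed)
    L, D = L0, D0
    s = math.sqrt(dt)
    for _ in range(int(t_end / dt)):
        dW1, dW2 = random.gauss(0.0, s), random.gauss(0.0, s)
        L_new = L + (-kd * L) * dt + sigma * L * dW1
        D_new = D + (kd * L - ka * D) * dt + sigma * D * dW2
        L, D = L_new, D_new
    return L, D
```

Averaging many such paths (or integrating the moment equations directly, as the paper does) gives the mean sag curve plus its uncertainty band.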
Computational approaches to stochastic systems in physics and biology
NASA Astrophysics Data System (ADS)
Jeraldo Maldonado, Patricio Rodrigo
In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids which incorporates a conservative and a dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2, I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both the conservative and dissipative dynamics present in quantum fluids. I also couple the model to an external normal-fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples their dynamics with those of the condensate. In Chapter 3, I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
Ma, Xiao; Dong, Jin; Djouadi, Seddik M; Nutaro, James J; Kuruganti, Teja
2015-01-01
The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC systems. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, in which the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
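The unconstrained backbone of such a controller can be sketched as a scalar finite-horizon LQ problem: a backward Riccati recursion for the gains, then a forward simulation with Gaussian disturbances. The cSLQC constraints and the SDP reformulation are omitted here; the toy model x_{k+1} = a x_k + b u_k + w_k (a temperature deviation from setpoint) and all parameter values are assumptions.

```python
import random

def lqg_demo(a=1.02, b=0.5, q=1.0, r=0.1, horizon=200, sigma_w=0.1, seed=2):
    """Finite-horizon LQ regulation of a scalar system with Gaussian noise.

    Backward Riccati recursion:
        k_t = a b p / (r + b^2 p),  p <- q + a^2 p - a b p k_t,
    then forward closed-loop simulation x <- a x - b k x + w.
    Returns |x| at the final step (small if regulation works).
    """
    p = q
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)
        gains.append(k)
        p = q + a * a * p - a * b * p * k
    gains.reverse()  # gains in forward-time order

    random.seed(seed)
    x = 5.0  # initial deviation from the comfort setpoint
    for k in gains:
        u = -k * x
        x = a * x + b * u + random.gauss(0.0, sigma_w)
    return abs(x)
```

With a = 1.02 the uncontrolled state drifts away from the setpoint, while the LQ gains keep it within a small noise-driven band; the probabilistic constraints of cSLQC would then bound how often that band is exceeded.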
Implications of a stochastic approach to air-quality regulations
Witten, A.J.; Kornegay, F.C.; Hunsaker, D.B. Jr.; Long, E.C. Jr.; Sharp, R.D.; Walsh, P.J.; Zeighami, E.A.; Gordon, J.S.; Lin, W.L.
1982-09-01
This study explores the viability of a stochastic approach to air quality regulations. The stochastic approach considered here is one which incorporates the variability which exists in sulfur dioxide (SO2) emissions from coal-fired power plants. Emission variability arises from a combination of many factors including variability in the composition of as-received coal such as sulfur content, moisture content, ash content, and heating value, as well as variability which is introduced in power plant operations. The stochastic approach as conceived in this study addresses variability by taking the SO2 emission rate to be a random variable with specified statistics. Given the statistical description of the emission rate and known meteorological conditions, it is possible to predict the probability of a facility exceeding a specified emission limit or violating an established air quality standard. This study also investigates the implications of accounting for emissions variability by allowing compliance to be interpreted as an allowable probability of occurrence of given events. For example, compliance with an emission limit could be defined as the probability of exceeding a specified emission value, such as 1.2 lbs SO2/MMBtu, being less than 1%. In contrast, compliance is currently taken to mean that this limit shall never be exceeded, i.e., no exceedance probability is allowed. The focus of this study is on the economic benefits offered to facilities through the greater flexibility of the stochastic approach as compared with possible changes in air quality and health effects which could result.
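The exceedance-probability notion of compliance is easy to make concrete by Monte Carlo. Below, the emission rate is assumed lognormal with an assumed coefficient of variation; the study would instead fit these statistics to coal composition and plant operations data.

```python
import math
import random

def exceedance_probability(mean=1.0, cv=0.15, limit=1.2, n=100000, seed=9):
    """Monte Carlo estimate of Pr(emission rate > limit).

    The SO2 emission rate is modeled as lognormal with the given mean and
    coefficient of variation (distribution choice and parameters assumed).
    Stochastic compliance could then mean, e.g., this probability < 1%.
    """
    random.seed(seed)
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    s = math.sqrt(sigma2)
    hits = sum(1 for _ in range(n) if random.lognormvariate(mu, s) > limit)
    return hits / n
```

With these illustrative numbers a plant averaging 1.0 lbs SO2/MMBtu exceeds the 1.2 limit roughly 10% of the time, which is exactly the kind of quantity a probabilistic standard would regulate instead of forbidding any exceedance.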
A probabilistic graphical model based stochastic input model construction
Wan, Jiang; Zabaras, Nicholas
2014-09-01
Model reduction techniques have been widely used in modeling high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests on observation data. A probabilistic model of the joint PDF is thus obtained, and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed into estimating conditional distributions under reduced dimensions. To improve computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. Highlights:
• Data-driven stochastic input models without the assumption of independence of the reduced random variables.
• The problem is transformed into a Bayesian network structure learning problem.
• Examples are given for flows in random media.
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
NASA Astrophysics Data System (ADS)
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach.
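The stated objective (minimize the sum of point-to-fracture distances) can be sketched for a single 2D line segment with plain simulated annealing. The move sizes and cooling schedule below are assumptions; the paper's scheme fits many fractures with a much richer modification and clustering machinery.

```python
import math
import random

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def anneal_segment(points, n_iter=3000, seed=1):
    """Fit one 2D segment ('fracture') to a point cloud by simulated annealing.

    The objective is the sum of point-to-segment distances. Endpoints are
    perturbed with Gaussian moves; worse moves are accepted with the usual
    Metropolis probability exp(-(increase)/temperature). Returns the best
    (a, b, cost) found.
    """
    random.seed(seed)
    cost_of = lambda a, b: sum(point_segment_dist(p, a, b) for p in points)
    a, b = points[0], points[-1]
    cost = cost_of(a, b)
    best = (a, b, cost)
    for i in range(n_iter):
        temp = max(1e-3, 1.0 - i / n_iter)  # linear cooling schedule (assumed)
        ca = (a[0] + random.gauss(0, 0.05), a[1] + random.gauss(0, 0.05))
        cb = (b[0] + random.gauss(0, 0.05), b[1] + random.gauss(0, 0.05))
        c = cost_of(ca, cb)
        if c < cost or random.random() < math.exp((cost - c) / temp):
            a, b, cost = ca, cb, c
            if cost < best[2]:
                best = (a, b, cost)
    return best
```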
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problems solved with optimization usually bring about issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the problem in a straightforward manner, which enables it to avoid the aforementioned issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two ways. On the one hand, a fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of powerful variance-based global sensitivity analysis (GSA) for parameter selection, to exclude non-influential parameters from the calibration parameters; this yields a reduced-order model and further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
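Stripped of the GPM surrogate and the DRAM refinements, the Bayesian core is a random-walk Metropolis sampler over a calibration parameter. In the sketch below, `model` stands in for the fast surrogate of the FEM; the Gaussian likelihood, flat-positive prior, and step size are all assumptions.

```python
import math
import random

def metropolis_calibrate(obs, model, sigma_obs=0.05, n_samples=5000,
                         step=0.05, theta0=1.0, seed=4):
    """Random-walk Metropolis over a single positive parameter theta.

    obs: observed responses; model(theta): predicted response (surrogate).
    Gaussian likelihood with std sigma_obs, flat prior on theta > 0.
    Returns the full chain of samples (including burn-in).
    """
    random.seed(seed)

    def log_post(theta):
        if theta <= 0:
            return -math.inf
        return -sum((y - model(theta)) ** 2 for y in obs) / (2 * sigma_obs ** 2)

    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_samples):
        cand = theta + random.gauss(0.0, step)
        lp_c = log_post(cand)
        if math.log(random.random()) < lp_c - lp:  # Metropolis acceptance
            theta, lp = cand, lp_c
        samples.append(theta)
    return samples
```

For example, with a frequency-like surrogate `model(theta) = sqrt(theta)` (frequency growing with stiffness) and observations near sqrt(2), the chain concentrates around theta ≈ 2; DRAM would adapt `step` automatically instead of fixing it.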
Webster, Clayton G; Gunzburger, Max D
2013-01-01
We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions), and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse-grid stochastic collocation FEM approach. This allows for decoupled, moderately high-dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical
Stochastic physical ecohydrologic-based model for estimating irrigation requirement
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods that can take this kind of uncertainty into account are of primary importance for the management of ecosystems, especially agricultural ecosystems. A well-known problem in these ecosystems is the estimation of crop water requirements under climatic uncertainty. Both deterministic physically-based methods and stochastic time series modeling have been utilized in the literature. As in other fields of hydroclimatic science, there is broad scope in irrigation process modeling for developing approaches that integrate the physics of the process with its statistical aspects. This study derives closed-form expressions for the probability density function (p.d.f.) of the irrigation water requirement using a stochastic physically-based model, which considers important aspects of plant, soil, atmosphere, and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics at the root zone, is employed as a basis for deriving the expressions, considering the temporal stochasticity of rainfall. Due to the distinct nature of the stochastic processes governing micro and traditional irrigation applications, two different methodologies have been used. Micro-irrigation application has been modeled through a dichotomic process. The Chapman-Kolmogorov equation for the time integral of the dichotomic process under transient conditions has been solved to derive analytical expressions for the probability density function of the seasonal irrigation requirement. For traditional irrigation, irrigation application during the growing season has been modeled using a marked point process. Using renewal theory, the probability mass function of the seasonal irrigation requirement, which is a discrete-valued quantity, has been analytically derived. The methodology deals with estimation of the statistical properties of the total water requirement in a growing season that
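The marked-point-process setting can be illustrated with a daily bucket model: rainfall occurs with some probability per day with exponentially distributed depth (the marks), soil moisture loses a fixed evapotranspiration amount, and a traditional irrigation refill is applied when storage crosses a trigger. All parameter values below are assumptions; the study derives the distribution of the seasonal total analytically rather than by simulation.

```python
import random

def seasonal_irrigation(days=120, rain_prob=0.25, rain_mean=8.0,
                        et=4.0, capacity=100.0, trigger=30.0,
                        target=80.0, seed=6):
    """One realization of the seasonal irrigation requirement (mm).

    Rainfall: marked point process (daily occurrence prob rain_prob,
    exponential depths with mean rain_mean). Soil storage loses `et` mm/day
    and is refilled to `target` whenever it drops below `trigger`.
    """
    random.seed(seed)
    s = target
    total = 0.0
    for _ in range(days):
        if random.random() < rain_prob:
            s = min(capacity, s + random.expovariate(1.0 / rain_mean))
        s -= et
        if s < trigger:
            total += target - s  # traditional irrigation: refill to target
            s = target
    return total
```

Repeating this over many seasons yields the empirical probability mass function of the seasonal requirement, the quantity the paper obtains in closed form via renewal theory.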
Barkhausen discontinuities and hysteresis of ferromagnetics: New stochastic approach
Vengrinovich, Valeriy
2014-02-18
The magnetization of a ferromagnetic material is considered as a periodically inhomogeneous Markov process. The theory admits both statistically independent and correlated Barkhausen discontinuities. The model, based on the theory of chain evolution-type processes, assumes that the domain structure of a ferromagnet successively passes through the stages of linear growth, exponential acceleration, and domain annihilation to zero density at magnetic saturation. The solution of the stochastic Kolmogorov differential equation enables calculation of the hysteresis loop.
NASA Astrophysics Data System (ADS)
Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris
2016-04-01
Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e., wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to the road and electricity grid networks), environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is a set of GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system extended at the regional scale.
Nonlinear Aeroelastic Analysis of UAVs: Deterministic and Stochastic Approaches
NASA Astrophysics Data System (ADS)
Sukut, Thomas Woodrow
Aeroelastic aspects of unmanned aerial vehicles (UAVs) are analyzed by treating a typical section containing geometrical nonlinearities. Equations of motion are derived, and numerical integration of these equations subject to quasi-steady aerodynamic forcing is performed. Model properties are tailored to a high-altitude long-endurance unmanned aircraft. A harmonic balance approximation is employed, based on the steady-state oscillatory response to the aerodynamic forcing. Comparisons are made between time-integration results and the harmonic balance approximation. Close agreement between forcing and displacement oscillation frequencies is found, while amplitude agreement is off by a considerable margin. Additionally, stochastic forcing effects are examined: turbulent flow velocities generated from the von Karman spectrum are applied to the same nonlinear structural model. Similar qualitative behavior is found between the quasi-steady and stochastic forcing models, illustrating the importance of considering the non-steady nature of atmospheric turbulence when operating near the critical flutter velocity.
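The time-integration side of such a comparison can be sketched with a Duffing-type single degree of freedom under harmonic forcing: after transients decay, the response oscillates at the forcing frequency, mirroring the frequency agreement reported above. The equation, its parameters, and the zero-crossing frequency estimate are illustrative assumptions, not the aircraft model.

```python
import math

def duffing_response(omega_f=1.2, amp=0.3, zeta=0.05, k3=1.0,
                     t_end=200.0, dt=0.001):
    """RK4 integration of x'' + 2 zeta x' + x + k3 x^3 = amp cos(omega_f t).

    The cubic term stands in for geometric stiffening of the typical
    section. Returns the dominant response frequency (rad/s), estimated
    from upward zero crossings over the second half of the record.
    """
    def f(t, x, v):
        return v, amp * math.cos(omega_f * t) - 2.0 * zeta * v - x - k3 * x ** 3

    x, v, t = 0.0, 0.0, 0.0
    crossings, prev_x, t_first, t_last = 0, 0.0, None, None
    for _ in range(int(t_end / dt)):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if t > t_end / 2:  # skip the transient
            if prev_x < 0.0 <= x:
                crossings += 1
                t_last = t
                if t_first is None:
                    t_first = t
            prev_x = x
    if crossings > 1:
        return 2.0 * math.pi * (crossings - 1) / (t_last - t_first)
    return 0.0
```

A harmonic balance solution of the same equation would assume x = A cos(omega_f t + phi) from the outset; the frequency matches time integration closely, while the predicted amplitude can differ, as the abstract reports.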
NASA Astrophysics Data System (ADS)
Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle
2012-12-01
Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifacts, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches to blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the best denoising result for less noisy data. These results suggest that
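The normalized mean-squared error criterion used for the quantitative comparison is easy to state; a minimal sketch with synthetic spike-like data follows. A crude moving-average filter stands in for the CCA/ICA/EMD/WT denoisers, which are beyond a short example:

```python
import math, random

def nmse(clean, denoised):
    """Normalized mean-squared error ||clean - denoised||^2 / ||clean||^2."""
    num = sum((c - d)**2 for c, d in zip(clean, denoised))
    return num / sum(c**2 for c in clean)

random.seed(0)
t = [i/256 for i in range(1024)]                       # 4 s at 256 Hz
spike = [math.exp(-((ti - 2.0)/0.05)**2) for ti in t]  # spike-like transient
noisy = [s + random.gauss(0, 0.2) for s in spike]      # "myogenic" noise

# Crude moving-average "denoiser" standing in for CCA/ICA/EMD/WT:
w = 5
denoised = [sum(noisy[max(0, i - w):i + w + 1]) / len(noisy[max(0, i - w):i + w + 1])
            for i in range(len(noisy))]
print(f"NMSE noisy={nmse(spike, noisy):.3f}, denoised={nmse(spike, denoised):.3f}")
```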
Stochastic pumping of heat: approaching the Carnot efficiency.
Segal, Dvira
2008-12-31
Random noise can generate a unidirectional heat current across asymmetric nano-objects in the absence (or against) a temperature gradient. We present a minimal model for a molecular-level stochastic heat pump that may operate arbitrarily close to the Carnot efficiency. The model consists of a fluctuating molecular unit coupled to two solids characterized by distinct phonon spectral properties. Heat pumping persists for a broad range of system and bath parameters. Furthermore, by filtering the reservoirs' phonons the pump efficiency can approach the Carnot limit.
A Stochastic Differential Equation Approach To Multiphase Flow In Porous Media
NASA Astrophysics Data System (ADS)
Dean, D.; Russell, T.
2003-12-01
The motivation for using stochastic differential equations in multiphase flow systems stems from our work in developing an upscaling methodology for single-phase flow. The long-term goals of this project include: I. extending this work to a nonlinear upscaling methodology; II. developing a macro-scale stochastic theory of multiphase flow and transport that accounts for micro-scale heterogeneities and interfaces. In this talk, we present a stochastic differential equation approach to multiphase flow, a typical example of which is flow in the unsaturated domain. Specifically, a two-phase problem is studied which consists of a wetting phase and a non-wetting phase. This approach results in a nonlinear stochastic differential equation describing the position of a non-wetting-phase fluid particle. Our fundamental assumption is that the flow of fluid particles is described by a stochastic process and that the positions of the fluid particles over time are governed by the law of the process. It is this law which we seek to determine. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the position of the fluid particles in the experimental domain. The concept of a fluid particle is central to the development of the model described in this talk. Expressions for both saturation and volumetric fraction are developed using the fluid particle concept. Darcy's law and the continuity equation are then used to derive a Fokker-Planck equation using these expressions. The Ito calculus is then applied to derive a stochastic differential equation for the non-wetting phase. This equation has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDEs are typically used to model dispersion
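The talk's drift and diffusion coefficients depend on the volumetric fraction and are not given in the abstract; a generic Euler-Maruyama sketch for a particle-position SDE, with a linear toy drift standing in for the two-phase coefficients, illustrates the numerical machinery:

```python
import random, statistics

random.seed(1)
theta, sigma = 1.0, 0.5            # illustrative, not the talk's coefficients

def drift(x):     return -theta*x  # state-dependent in the general case
def diffusion(x): return sigma

def euler_maruyama(x0, dt, n_steps):
    """Integrate dX = drift(X) dt + diffusion(X) dW by Euler-Maruyama."""
    x = x0
    for _ in range(n_steps):
        x += drift(x)*dt + diffusion(x)*random.gauss(0.0, dt**0.5)
    return x

# Ensemble of particle positions; for this linear test case the stationary
# variance should approach sigma**2 / (2*theta) = 0.125.
ensemble = [euler_maruyama(0.0, 0.01, 500) for _ in range(1000)]
var = statistics.pvariance(ensemble)
print(f"ensemble variance {var:.3f} (theory 0.125)")
```

The ensemble density of such particles evolves according to the corresponding Fokker-Planck equation, which is the link exploited in the abstract.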
Approaching complexity by stochastic methods: From biological systems to turbulence
NASA Astrophysics Data System (ADS)
Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2011-09-01
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
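Step (i), reconstructing a Langevin equation from measured data, amounts to estimating conditional moments (Kramers-Moyal coefficients) above the Markov-Einstein scale. A minimal sketch on synthetic Ornstein-Uhlenbeck data, where the estimated drift can be checked against the known answer:

```python
import random

random.seed(2)
theta, sigma, dt = 1.0, 0.7, 0.01

# Synthetic Ornstein-Uhlenbeck trajectory: dx = -theta*x dt + sigma dW
x, traj = 0.0, []
for _ in range(200_000):
    traj.append(x)
    x += -theta*x*dt + sigma*random.gauss(0.0, dt**0.5)

# Drift coefficient D1(x) = <x(t+dt) - x(t) | x(t)=x> / dt, estimated in bins
bins = {}
for a, b in zip(traj, traj[1:]):
    bins.setdefault(round(a, 1), []).append((b - a)/dt)
d1 = {k: sum(v)/len(v) for k, v in bins.items() if len(v) > 500}
for k in sorted(d1):
    print(f"x={k:+.1f}  D1={d1[k]:+.2f}  (theory {-theta*k:+.2f})")
```

The diffusion coefficient D2(x) is estimated analogously from the conditional second moment of the increments.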
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
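The Karhunen-Loève decomposition at the core of KLME expands the log-conductivity field in eigenfunctions of its covariance. A small self-contained sketch for a discretized 1-D exponential covariance, with the leading modes extracted by power iteration and deflation (parameters are illustrative, not from the paper):

```python
import math

n, L, var_lnK, corr_len = 50, 1.0, 0.5, 0.2
xs = [i*L/(n - 1) for i in range(n)]
# Discretized exponential covariance C(x, y) = var * exp(-|x - y|/corr_len)
C = [[var_lnK*math.exp(-abs(xi - xj)/corr_len) for xj in xs] for xi in xs]

def matvec(A, v):
    return [sum(a*vi for a, vi in zip(row, v)) for row in A]

def leading_mode(A, iters=300):
    """Power iteration: dominant eigenpair of a symmetric PSD matrix."""
    v = [1.0]*len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = math.sqrt(sum(wi*wi for wi in w))
        v = [wi/norm for wi in w]
    lam = sum(vi*wi for vi, wi in zip(v, matvec(A, v)))
    return lam, v

eigvals = []
for _ in range(5):                       # extract the top 5 KL modes
    lam, v = leading_mode(C)
    eigvals.append(lam)
    # Deflate: remove the captured mode before extracting the next one
    C = [[cij - lam*v[i]*v[j] for j, cij in enumerate(row)]
         for i, row in enumerate(C)]
print("leading KL eigenvalues:", [round(e, 3) for e in eigvals])
```

A realization of the field is then ln K(x) = mean + sum_i sqrt(lambda_i) * phi_i(x) * xi_i with standard normal xi_i, truncated at a few modes; KLME additionally expands the dependent variables in polynomials of the xi_i.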
Stochastic control approaches for sensor management in search and exploitation
NASA Astrophysics Data System (ADS)
Hitchings, Darin Chester
new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of a RH controller for sensor allocation in the presence of moving objects We also consider an adaptive Search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve this problem. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor that partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on quality of supervisory control.
Majorana approach to the stochastic theory of line shapes
NASA Astrophysics Data System (ADS)
Komijani, Yashar; Coleman, Piers
2016-08-01
Motivated by recent Mössbauer experiments on strongly correlated mixed-valence systems, we revisit the Kubo-Anderson stochastic theory of spectral line shapes. Using a Majorana representation for the nuclear spin we demonstrate how to recast the classic line-shape theory in a field-theoretic and diagrammatic language. We show that the leading contribution to the self-energy can reproduce most of the observed line-shape features including splitting and line-shape narrowing, while the vertex and the self-consistency corrections can be systematically included in the calculation. This approach permits us to predict the line shape produced by an arbitrary bulk charge fluctuation spectrum providing a model-independent way to extract the local charge fluctuation spectrum of the surrounding medium. We also derive an inverse formula to extract the charge fluctuation from the measured line shape.
A microprocessor-based multichannel subsensory stochastic resonance electrical stimulator.
Chang, Gwo-Ching
2013-01-01
Stochastic resonance electrical stimulation is a novel intervention which provides potential benefits for improving postural control ability in the elderly, those with diabetic neuropathy, and stroke patients. In this paper, a microprocessor-based subsensory white noise electrical stimulator for the applications of stochastic resonance stimulation is developed. The proposed stimulator provides four independent programmable stimulation channels with constant-current output, possesses linear voltage-to-current relationship, and has two types of stimulation modes, pulse amplitude and width modulation.
A Model of Bone Remodelling Based on Stochastic Resonance
NASA Astrophysics Data System (ADS)
Rusconi, M.; Zaikin, A.; Marwan, N.; Kurths, J.
2008-06-01
One of the most crucial medical challenges for long-term space flights is the prevention of bone loss affecting astronauts and its dramatic consequences on their return to the gravitational field. Recently, a new noise-induced phenomenon in bone formation has been reported experimentally [1]. With this contribution we propose a model for these findings based on Stochastic Resonance [2]. Our simulations suggest new countermeasures for bone degeneration during long space flights using the effect of Stochastic Resonance.
Stochastic approach to modelling of near-periodic jumping loads
NASA Astrophysics Data System (ADS)
Racic, V.; Pavic, A.
2010-11-01
A mathematical model has been developed to generate stochastic synthetic vertical force signals induced by a single person jumping. The model is based on a unique database of experimentally measured individual jumping loads which has the most extensive range of possible jumping frequencies. The ability to replicate many of the temporal and spectral features of real jumping loads gives this model a definite advantage over the conventional half-sine models coupled with Fourier series analysis. This includes modelling of the omnipresent lack of symmetry of individual jumping pulses and jump-by-jump variations in amplitudes and timing. The model therefore belongs to a new generation of synthetic narrow band jumping loads which simulate reality better. The proposed mathematical concept for characterisation of near-periodic jumping pulses may be utilised in vibration serviceability assessment of civil engineering assembly structures, such as grandstands, spectator galleries, footbridges and concert or gym floors, to estimate more realistically dynamic structural response due to people jumping.
ENISI SDE: A New Web-Based Tool for Modeling Stochastic Processes.
Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep
2015-01-01
Modeling and simulation approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool with stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) discussion of SDE as a generic approach for stochastic modeling in computational biology; (2) development of ENISI SDE, a web-based, user-friendly SDE modeling tool that highly resembles regular ODE-based modeling; (3) application of the ENISI SDE modeling tool through a use case for studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differentiation ODE model has been published [8] and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model and shows the effectiveness of SDE as a stochastic modeling approach in biology in general and in immunology in particular, as well as the power of ENISI SDE.
A mathematical programming approach to stochastic and dynamic optimization problems
Bertsimas, D.
1994-12-31
We propose three ideas for constructing optimal or near-optimal policies: (1) for systems for which we have an exact characterization of the performance space we outline an adaptive greedy algorithm that gives rise to indexing policies (we illustrate this technique in the context of indexable systems); (2) we use integer programming to construct policies from the underlying descriptions of the performance space (we illustrate this technique in the context of polling systems); (3) we use linear control over polyhedral regions to solve deterministic versions for this class of problems. This approach gives interesting insights for the structure of the optimal policy (we illustrate this idea in the context of multiclass queueing networks). The unifying theme in the paper is the thesis that better formulations lead to deeper understanding and better solution methods. Overall the proposed approach for stochastic and dynamic optimization parallels efforts of the mathematical programming community in the last fifteen years to develop sharper formulations (polyhedral combinatorics and more recently nonlinear relaxations) and leads to new insights ranging from a complete characterization and new algorithms for indexable systems to tight lower bounds and new algorithms with provable a posteriori guarantees for their suboptimality for polling systems, multiclass queueing and loss networks.
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection - dispersion/attachment - detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
Text Classification Using ESC-Based Stochastic Decision Lists.
ERIC Educational Resources Information Center
Li, Hang; Yamanishi, Kenji
2002-01-01
Proposes a new method of text classification using stochastic decision lists, ordered sequences of IF-THEN-ELSE rules. The method can be viewed as a rule-based method for text classification having advantages of readability and refinability of acquired knowledge. Advantages of rule-based methods over non-rule-based ones are empirically verified.…
Zhijie Xu
2014-07-01
We present a new stochastic analysis for steady and transient one-dimensional heat conduction problems based on the homogenization approach. Thermal conductivity is assumed to be a random field K consisting of a total of N random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent, identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution (through the homogenized temperature) and the configurational contribution (through the random variable Ln(x)). The configurational contribution is shown to be proportional to the local gradient of the homogenized solution. Large uncertainty of the T field was found at locations with a large gradient of the homogenized solution, owing to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method, and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.
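For the steady 1-D case with prescribed flux, the homogenized (effective) conductivity is the harmonic mean of the local conductivities, which can be checked directly against Monte Carlo. A minimal sketch with i.i.d. uniform conductivities (illustrative values, not the paper's configuration):

```python
import random

random.seed(3)
N, q, dx = 40, 1.0, 1.0/40          # segments, heat flux, segment length

# Each realization: with prescribed flux q, the temperature drop over the
# rod is sum_i q*dx/K_i, with K_i i.i.d. uniform on (0.5, 1.5).
drops = []
for _ in range(5000):
    drops.append(sum(q*dx/random.uniform(0.5, 1.5) for _ in range(N)))
mc_mean = sum(drops)/len(drops)

# Homogenized answer: K_eff is the harmonic mean; E[1/K] = ln(1.5/0.5) = ln 3,
# so the homogenized drop is q*E[1/K] ~ 1.0986.
print(f"MC mean drop {mc_mean:.4f} vs homogenized 1.0986")
```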
Calculation of a double reactive azeotrope using stochastic optimization approaches
NASA Astrophysics Data System (ADS)
Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus
2013-02-01
A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises from real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (with industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots in nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
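Of the two metaheuristics, the Luus-Jaakola adaptive random search is the simpler to sketch: sample candidates in a region around the incumbent and contract the region each outer iteration. The toy two-root system below stands in for the azeotropy equations, which require a full thermodynamic model:

```python
import random

random.seed(4)

def residual(p):
    """Toy system with two roots in [0,2]^2 (NOT the azeotropy equations):
    f1 = x^2 + y^2 - 4, f2 = x*y - 1; returns the squared residual norm."""
    x, y = p
    return (x*x + y*y - 4.0)**2 + (x*y - 1.0)**2

def luus_jaakola(f, lo, hi, n_outer=100, n_inner=100, shrink=0.9):
    """Adaptive random search: sample around the incumbent, contract region."""
    center = [random.uniform(l, h) for l, h in zip(lo, hi)]
    size = [h - l for l, h in zip(lo, hi)]
    best = f(center)
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = [c + random.uniform(-s/2, s/2) for c, s in zip(center, size)]
            val = f(cand)
            if val < best:
                best, center = val, cand
        size = [s*shrink for s in size]   # contract the search region
    return center, best

root, res = luus_jaakola(residual, lo=[0.0, 0.0], hi=[2.0, 2.0])
print(f"root ~ ({root[0]:.4f}, {root[1]:.4f}), squared residual {res:.2e}")
```

Finding the second root, as in the multiple-root strategy mentioned above, can be done by penalizing the neighborhood of roots already found and re-running the search.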
NASA Astrophysics Data System (ADS)
Wang, Qing; Yao, Jing-Zheng
2010-12-01
Several algorithms were proposed relating to the development of a framework of the perturbation-based stochastic finite element method (PSFEM) for large variation nonlinear dynamic problems. For this purpose, algorithms and a framework related to SFEM based on the stochastic virtual work principle were studied. To prove the validity and practicality of the algorithms and framework, numerical examples for nonlinear dynamic problems with large variations were calculated and compared with the Monte-Carlo Simulation method. This comparison shows that the proposed approaches are accurate and effective for the nonlinear dynamic analysis of structures with random parameters.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Carlen, Eric Anders
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.
An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments
NASA Astrophysics Data System (ADS)
Kang, Yuncheol; Prabhu, Vittal
The science of preventing youth problems has significantly advanced in developing evidence-based prevention program (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
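A hypothetical miniature of such an MDP (implementation-fidelity states, a costly "coach" action; all states, actions and numbers below are invented for illustration, not taken from the paper) can be solved by standard value iteration:

```python
# Hypothetical EBP-implementation MDP: states are fidelity levels,
# actions are {monitor, coach}; coaching costs more but raises fidelity.
states = ["low", "medium", "high"]
actions = ["monitor", "coach"]
# P[a][s] -> transition probabilities to (low, medium, high)
P = {
    "monitor": {"low":    [0.8, 0.2, 0.0],
                "medium": [0.3, 0.5, 0.2],
                "high":   [0.1, 0.3, 0.6]},
    "coach":   {"low":    [0.4, 0.5, 0.1],
                "medium": [0.1, 0.5, 0.4],
                "high":   [0.0, 0.2, 0.8]},
}
reward = {"low": 0.0, "medium": 5.0, "high": 10.0}   # program outcome value
cost = {"monitor": 0.0, "coach": 2.0}                # action cost
gamma = 0.9                                          # discount factor

V = {s: 0.0 for s in states}
for _ in range(500):                                 # value iteration
    V = {s: max(reward[s] - cost[a]
                + gamma*sum(p*V[s2] for p, s2 in zip(P[a][s], states))
                for a in actions)
         for s in states}
policy = {s: max(actions,
                 key=lambda a: reward[s] - cost[a]
                 + gamma*sum(p*V[s2] for p, s2 in zip(P[a][s], states)))
          for s in states}
print(V, policy)
```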
Stochastic Coloured Petri Net Based Healthcare Infrastructure Interdependency Model
NASA Astrophysics Data System (ADS)
Nukavarapu, Nivedita; Durbha, Surya
2016-06-01
The Healthcare Critical Infrastructure (HCI) protects all sectors of society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or man-made disaster. However, for its continuity of operations and service delivery, HCI is dependent on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply System. During a mass casualty event due to disasters such as floods, a major challenge that arises for the HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue the HCI should be disaster-prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and provide an opportunity for all the stakeholders to work together in a coordinated response to a disaster. The paper presents interdependencies related to HCI based on a Stochastic Coloured Petri Net (SCPN) modelling and simulation approach, given a flood scenario as the disaster that disrupts the infrastructure nodes. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behaviour of the interdependency of the healthcare and related CI network in a geographically based environment.
Bogen, K T
2007-05-11
A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing-induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual-MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding-factor approach, similar to that used in reference dose procedures for classic toxicity endpoints, can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, the implications of DMOA uncertainty for low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically for the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage
Bieda, Bogusław
2013-01-01
The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in the MATLAB language, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of a frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that were used include available published figures as well as data concerning Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable to any MSW landfill compacted clay liner thickness design. PMID:23194922
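The Monte Carlo treatment can be sketched generically; the advective travel-time expression and the parameter distributions below are illustrative assumptions, not the paper's Z2 expression or the MSP site data:

```python
import random

random.seed(5)

def transit_time_years():
    """One MC realization of advective transit time through a compacted
    clay liner: t = TH * POROS / (HC * HG). Distributions are illustrative."""
    HG = random.uniform(1.0, 1.5)            # hydraulic gradient [-]
    HC = 10**random.uniform(-10, -9)         # hydraulic conductivity [m/s]
    POROS = random.uniform(0.3, 0.5)         # porosity [-]
    TH = random.uniform(0.9, 1.2)            # liner thickness [m]
    seconds = TH*POROS/(HC*HG)
    return seconds / (3600*24*365)

# Frequency chart / percentile summary analogous to the CB output report:
times = sorted(transit_time_years() for _ in range(10_000))
p5 = times[int(0.05*len(times))]
p50 = times[len(times)//2]
p95 = times[int(0.95*len(times))]
print(f"transit time [years]: p5={p5:.0f}, median={p50:.0f}, p95={p95:.0f}")
```

A tornado-style sensitivity analysis would repeat this while holding all but one parameter at its median.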
NASA Astrophysics Data System (ADS)
Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.
2013-07-01
Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.
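A micro-genetic algorithm of the kind contrasted with the FD method above can be sketched in a few lines. The tiny population, binary tournament, uniform crossover, and restart-on-convergence follow the generic μGA recipe, not the authors' implementation, and the test function is illustrative:

```python
import random

def micro_ga(cost, bounds, pop_size=5, generations=200, seed=0):
    """Micro-genetic algorithm sketch: tiny population, tournament selection,
    uniform crossover, elitism, and random restart when the population converges."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        pop.sort(key=cost)
        if cost(pop[0]) < cost(best):
            best = pop[0][:]
        # restart around the elite once the population has nearly converged
        spread = max(abs(a - b) for ind in pop[1:] for a, b in zip(ind, pop[0]))
        if spread < 1e-3:
            pop = [best[:]] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        new_pop = [pop[0][:]]  # elitism
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 2), key=cost)  # binary tournament
            p2 = min(rng.sample(pop, 2), key=cost)
            new_pop.append([a if rng.random() < 0.5 else b
                            for a, b in zip(p1, p2)])  # uniform crossover
        pop = new_pop
    return min(pop + [best], key=cost)

best = micro_ga(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [(-5, 5), (-5, 5)])
```

Because the search relies on restarts rather than gradients, it tolerates the ill-shaped cost surfaces that defeat the FD method.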
NASA Astrophysics Data System (ADS)
Lajus, D. L.; Sukhotin, A. A.
1998-06-01
One of the most effective techniques for evaluating stress is the analysis of developmental stability, measured by stochastic variation, particularly fluctuating asymmetry, i.e. variance in random deviations from perfect bilateral symmetry. However, the application of morphological methods is only possible when an organism lives under the tested conditions during a significant part of its ontogenesis. In contrast to morphological characters, behavior can change very quickly. Consequently, methods based on behavioural characters may have advantages over more traditional approaches. In this study we describe a technique for assessing stochastic variation using not morphological but behavioural characters. To measure stochastic variation of a behavioural response, we assessed the stability of the isolation reaction of the blue mussel Mytilus edulis under regular changes of salinity. With increasing temperature from +12°C to +20°C, stochastic variation of the isolation reaction increased, which is a common response to a change in environmental conditions. In this way, we have developed a method for assessing stochastic variation of behavioural responses in molluscs. This method may find a wide range of applications, because it does not require keeping animals under the tested conditions for a long time.
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
Revisiting the Cape Cod Bacteria Injection Experiment Using a Stochastic Modeling Approach
Maxwell, R M; Welty, C; Harvey, R W
2006-11-22
Bromide and resting-cell bacteria tracer tests carried out in a sand and gravel aquifer at the USGS Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach and Lagrangian particle tracking numerical methods. Bacteria transport was strongly coupled to colloid filtration through functional dependence of local-scale colloid transport parameters on hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Information on geostatistical characterization of the hydraulic conductivity (K) field from a nearby plot was utilized as input that was unavailable when the original analysis was carried out. A finite difference model for groundwater flow and a particle-tracking model of conservative solute transport were calibrated to the bromide-tracer breakthrough data using the aforementioned geostatistical parameters. An optimization routine was utilized to adjust the mean and variance of the lnK field over 100 realizations such that a best fit of a simulated, average bromide breakthrough curve was achieved. Once the optimal bromide fit was accomplished (based on adjusting the lnK statistical parameters in unconditional simulations), a stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of the mean bacteria breakthrough data were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech [1] equation for estimating single-collector efficiency were compared to those using the Rajagopalan and Tien [2] model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions, with the Rajagopalan and Tien model yielding approximately a 30% lower peak concentration and less tailing than the Tufenkji and Elimelech formulation. Simulations using a distribution
A cavitation model based on Eulerian stochastic fields
NASA Astrophysics Data System (ADS)
Magagnato, F.; Dumond, J.
2013-12-01
Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.
2014-08-01
In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower-triangular systems of algebraic equations which can be solved directly by forward substitution. The rate of convergence of the proposed method is also considered and is shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
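The final solve in such operational-matrix methods is a plain forward substitution on a lower-triangular system. A generic sketch (the matrix and right-hand side below are illustrative, not an actual stochastic operational matrix):

```python
def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L by marching down the rows:
    each unknown depends only on previously computed ones."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))  # contribution of solved unknowns
        y[i] = (b[i] - s) / L[i][i]
    return y

y = forward_substitution([[2, 0, 0], [1, 1, 0], [4, 2, 1]], [2, 3, 10])
```

The O(n^2) cost of this solve is what makes the transformed problem cheap compared with a general dense linear system.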
Exploring stochasticity and imprecise knowledge based on linear inequality constraints.
Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf
2016-09-01
This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge. PMID:26746217
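A minimal sketch of the Hit-and-Run sampler described above: pick a random direction, find where the line through the current point exits the polytope {x : Ax ≤ b}, and jump to a uniform point on that chord. The polytope (a unit box) and interior starting point are illustrative assumptions:

```python
import random

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Hit-and-Run sampler sketch for the polytope {x : A x <= b}.
    x0 must be a strictly interior starting point (assumed here)."""
    rng = random.Random(seed)
    x = list(x0)
    dim = len(x0)
    out = []
    for _ in range(n_samples):
        # random direction on the unit sphere
        d = [rng.gauss(0, 1) for _ in range(dim)]
        norm = sum(v * v for v in d) ** 0.5
        d = [v / norm for v in d]
        # chord [t_lo, t_hi] such that A(x + t d) <= b
        t_lo, t_hi = -float('inf'), float('inf')
        for row, bi in zip(A, b):
            ad = sum(r * v for r, v in zip(row, d))
            ax = sum(r * v for r, v in zip(row, x))
            if abs(ad) < 1e-12:
                continue  # direction parallel to this face
            t = (bi - ax) / ad
            if ad > 0:
                t_hi = min(t_hi, t)
            else:
                t_lo = max(t_lo, t)
        t = rng.uniform(t_lo, t_hi)  # uniform point on the chord
        x = [xi + t * di for xi, di in zip(x, d)]
        out.append(x[:])
    return out

# unit box [0,1]^2 written as A x <= b
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
samples = hit_and_run(A, b, [0.5, 0.5], 1000)
```

Only inequality evaluations are needed per step, which is why the method scales to the convex polytopes produced by many linear constraints.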
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. PMID:24996205
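The backward-induction structure behind such a stochastic dynamic program can be sketched compactly. The condition dynamics, offspring function, search costs, and horizon below are illustrative assumptions, not the paper's model:

```python
def optimal_policy(T, qualities, cond_levels, offspring, cost, p_down=0.3):
    """Backward induction sketch: V[t][ci] is the expected reward when still
    searching at time t in condition cond_levels[ci]; a male of quality q is
    accepted iff offspring(c, q) beats the continuation value."""
    V = [[0.0] * len(cond_levels) for _ in range(T + 1)]  # V[T] = 0: season over
    policy = {}
    for t in range(T - 1, -1, -1):
        for ci, c in enumerate(cond_levels):
            cj = max(ci - 1, 0)  # condition may deteriorate while searching
            cont = (1 - p_down) * V[t + 1][ci] + p_down * V[t + 1][cj] - cost(c)
            ev = 0.0
            for q in qualities:  # males encountered uniformly at random
                ev += max(offspring(c, q), cont) / len(qualities)
                policy[(t, c, q)] = offspring(c, q) >= cont  # accept?
            V[t][ci] = ev
    return V, policy

V, policy = optimal_policy(
    T=10, qualities=[1, 2, 3], cond_levels=[1, 2, 3],
    offspring=lambda c, q: c * q, cost=lambda c: 0.1 / c)
```

The acceptance rule `offspring(c, q) >= cont` is exactly a quality threshold that shifts with the female's condition and the time remaining, matching the form of the optimal policy reported above.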
A stochastic analysis approach for the calculation of hydrodynamic dampings
Karadeniz, H.
1995-12-31
This paper introduces an alternative linearization algorithm for the nonlinear loading terms occurring in the spectral analysis of offshore structures. The algorithm makes use of member consistent forces for the linearization, unlike the traditional linearization method. Different linearization criteria are used for different components of the member consistent forces. An equivalent second-moment criterion is used to linearize the force component due to wave velocities, while the components due to current and structural velocities are kept stochastic. Calculation of their mean values is presented for the analysis. A deterministic added mass matrix and a stochastic hydrodynamic damping matrix are derived from the force component due to structural deformations. It is demonstrated that the mean-value hydrodynamic damping ratios calculated in the paper are more realistic than those resulting from the linearization of Morison's equation.
An integrated fuzzy-stochastic modeling approach for risk assessment of groundwater contamination.
Li, Jianbing; Huang, Gordon H; Zeng, Guangming; Maqsood, Imran; Huang, Yuefei
2007-01-01
An integrated fuzzy-stochastic risk assessment (IFSRA) approach was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with site conditions, environmental guidelines, and health impact criteria. The contaminant concentrations in groundwater predicted from a numerical model were associated with probabilistic uncertainties due to the randomness in modeling input parameters, while the consequences of contaminant concentrations violating relevant environmental quality guidelines and health evaluation criteria were linked with fuzzy uncertainties. The contaminant of interest in this study was xylene. The environmental quality guideline was divided into three different strictness categories: "loose", "medium" and "strict". The environmental-guideline-based risk (ER) and health risk (HR) due to xylene ingestion were systematically examined to obtain the general risk levels through a fuzzy rule base. The ER and HR risk levels were divided into five categories of "low", "low-to-medium", "medium", "medium-to-high" and "high", respectively. The general risk levels included six categories ranging from "low" to "very high". The fuzzy membership functions of the related fuzzy events and the fuzzy rule base were established based on a questionnaire survey. Thus the IFSRA integrated fuzzy logic, expert involvement, and stochastic simulation within a general framework. The robustness of the modeling processes was enhanced through the effective reflection of the two types of uncertainties as compared with the conventional risk assessment approaches. The developed IFSRA was applied to a petroleum-contaminated groundwater system in western Canada. Three scenarios with different environmental quality guidelines were analyzed, and reasonable results were obtained. The risk assessment approach developed in this study offers a unique tool for systematically quantifying various uncertainties in contaminated site management, and it also
Stochastic dominance: an approach to decision making under risk.
Buckley, J J
1986-03-01
This paper introduces stochastic dominance as a technique to reduce the set of possible actions that a decision maker must consider in a decision problem under risk. The procedure usually does not choose an optimal action, but instead eliminates certain actions as unacceptable. Very little need be known about the decision maker's utility function. Two possible applications are presented: upgrading buildings to better withstand an earthquake; and choosing a site for a LNG facility.
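The elimination step described above reduces, for first-order stochastic dominance, to a pointwise comparison of the actions' cumulative distribution functions. A minimal sketch (the outcome grid and lotteries are illustrative):

```python
def dominates_fsd(F, G):
    """First-order stochastic dominance sketch: the action with CDF F dominates
    the one with CDF G if F(x) <= G(x) on a shared outcome grid, strictly
    somewhere, i.e. F shifts probability mass toward higher payoffs."""
    return all(f <= g for f, g in zip(F, G)) and any(f < g for f, g in zip(F, G))

# CDFs of two payoff lotteries evaluated on the same outcome grid [0, 1, 2, 3]
F = [0.0, 0.2, 0.5, 1.0]  # dominating lottery
G = [0.1, 0.4, 0.7, 1.0]  # dominated lottery: G can be discarded
```

Any decision maker with an increasing utility function prefers F to G here, which is why the dominated action can be eliminated without knowing the utility function in detail.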
Robust synthetic biology design: stochastic game theory approach
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-01-01
Motivation: Synthetic biology engineers artificial biological systems to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic of synthetic biology. Results: A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi–Sugeno (T–S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. Availability: http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf Contact: bschen@ee.nthu.edu.tw Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19435742
NASA Astrophysics Data System (ADS)
Dean, D. W.; Illangasekare, T. H.; Turner, A.; Russell, T. F.
2004-12-01
Modeling of the complex behavior of DNAPLs in naturally heterogeneous subsurface formations poses many challenges. Even though considerable progress has been made in developing improved numerical schemes to solve the governing partial differential equations, most of these methods still rely on a deterministic description of the processes. This research explores the use of stochastic differential equations to model multiphase flow in heterogeneous aquifers, specifically the flow of DNAPLs in saturated soils. The models developed are evaluated using experimental data generated in two-dimensional test systems. A fundamental assumption used in the model formulation is that the movement of a fluid particle in each phase is described by a stochastic process and that the positions of all fluid particles over time are governed by a specific law. It is this law which we seek to determine. The approach results in a nonlinear stochastic differential equation describing the position of the non-wetting-phase fluid particle. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the position of the fluid particles in the problem domain. The concept of a fluid particle is central to the development of the proposed model. Expressions for both saturation and volumetric fraction are developed using this concept of fluid particle. Darcy's law and the continuity equation are used to derive a Fokker-Planck equation governing flow. The Ito calculus is then applied to derive a stochastic differential equation (SDE) for the non-wetting phase. This SDE has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDEs are typically used to model diffusion processes. However, these models, in their usual form
NASA Astrophysics Data System (ADS)
Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.
2012-11-01
Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model, and applicability and efficiency of the respective methods were examined. One is a finite difference method based on a traditional gradient descent approach, while the other adopts genetic algorithms as an example of stochastic approaches. Several series of parameter optimization experiments were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. The finite difference method fails to estimate optimal parameters due to an ill-shaped nature of the cost function, whereas the genetic algorithms can effectively estimate near optimal parameters with a practical number of iterations. The results of the study indicate that a sophisticated stochastic approach is of practical use to a parameter optimization of a coupled ocean-sea ice model.
Intervention-Based Stochastic Disease Eradication
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira
2013-03-01
Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
NASA Astrophysics Data System (ADS)
Kim, Kwang-Ki K.; Braatz, Richard D.
2013-08-01
This paper considers the model predictive control of dynamic systems subject to stochastic uncertainties due to parametric uncertainties and exogenous disturbance. The effects of uncertainties are quantified using generalised polynomial chaos expansions with an additive Gaussian random process as the exogenous disturbance. With Gaussian approximation of the resulting solution trajectory of a stochastic differential equation using generalised polynomial chaos expansion, convex finite-horizon model predictive control problems are solved that are amenable to online computation of a stochastically robust control policy over the time horizon. Using generalised polynomial chaos expansions combined with convex relaxation methods, the probabilistic constraints are replaced by convex deterministic constraints that approximate the probabilistic violations. This approach to chance-constrained model predictive control provides an explicit way to handle a stochastic system model in the presence of both model uncertainty and exogenous disturbances.
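The moment propagation behind such generalised polynomial chaos treatments can be illustrated for a single standard-Gaussian input: expand y = f(ξ) in probabilist's Hermite polynomials and read the mean and variance off the coefficients. The expansion order, quadrature size, and test function below are illustrative:

```python
import math
import numpy as np

def pce_moments(f, order=4, quad=16):
    """Hermite PCE sketch for y = f(xi), xi ~ N(0,1): coefficients
    c_k = E[f(xi) He_k(xi)] / k! via Gauss-Hermite quadrature; then
    mean = c_0 and variance = sum_k>=1 k! c_k^2."""
    x, w = np.polynomial.hermite_e.hermegauss(quad)
    w = w / math.sqrt(2 * math.pi)  # normalize weights to a probability measure
    coeffs = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.HermiteE([0] * k + [1])  # He_k polynomial
        c_k = np.sum(w * f(x) * He_k(x)) / math.factorial(k)
        coeffs.append(c_k)
    mean = coeffs[0]
    var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    return mean, var

m, v = pce_moments(lambda x: x ** 2)  # E = 1, Var = 2 for a chi-square(1)
```

The same coefficient algebra is what lets chance constraints on y be rewritten as deterministic constraints on the expansion coefficients.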
Stochastic approach to reconstruction of dynamical systems: optimal model selection criterion
NASA Astrophysics Data System (ADS)
Gavrilov, A.; Mukhin, D.; Loskutov, E. M.; Feigin, A. M.
2011-12-01
Most known observable systems are complex and high-dimensional, which makes exact long-term forecasts of their behavior impossible. The stochastic approach to reconstruction of such systems offers hope of describing the important qualitative features of their behavior in a low-dimensional way, while all other dynamics is modelled as stochastic disturbance. This report is devoted to the application of Bayesian evidence to optimal stochastic model selection when reconstructing the evolution operator of an observed system. The idea of Bayesian evidence is to find a compromise between the model's predictiveness and the quality of fitting the model to the data. We represent the evolution operator of the investigated system in the form of a random dynamical system including deterministic and stochastic parts, both parameterized by an artificial neural network. We then use the Bayesian evidence criterion to estimate the optimal complexity of the model, i.e. both the number of parameters and the dimension corresponding to the most probable model given the data. We demonstrate on a number of model examples that a model with a non-uniformly distributed stochastic part (which corresponds to non-Gaussian perturbations of the evolution operator) is optimal in the general case. Further, we show that a simple stochastic model can be preferred for reconstruction of the evolution operator underlying complex observed dynamics, even in the case of a deterministic high-dimensional system. The applicability of the suggested approach to modeling and prognosis of real-measured geophysical dynamics is investigated.
A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
A stochastic approach to the hadron spectrum. III
Aron, J.C.
1986-12-01
The connection with the quarks of the stochastic model proposed in the two preceding papers is studied; the slopes of the baryon trajectories are calculated with reference to the quarks. Suggestions are made for the interpretation of the model (quadratic or linear addition of the contributions to the mass, dependence of the decay on the quantum numbers of the hadrons involved, etc.) and concerning its link with the quarkonium model, which describes mesons with charm or beauty. The controversial question of the "subquantum level" is examined.
Modified stochastic variational approach to non-Hermitian quantum systems
NASA Astrophysics Data System (ADS)
Kraft, Daniel; Plessas, Willibald
2016-08-01
The stochastic variational method has proven to be a very efficient and accurate tool to calculate especially bound states of quantum-mechanical few-body systems. It relies on the Rayleigh-Ritz variational principle for minimizing real eigenenergies of Hermitian Hamiltonians. From molecular to atomic, nuclear, and particle physics there is actually a great demand of describing also resonant states to a high degree of reliance. This is especially true with regard to hadron resonances, which have to be treated in a relativistic framework. So far standard methods of dealing with quantum chromodynamics have not yet succeeded in describing hadron resonances in a realistic manner. Resonant states can be handled by non-Hermitian quantum Hamiltonians. These states correspond to poles in the lower half of the unphysical sheet of the complex energy plane and are therefore intimately connected with complex eigenvalues. Consequently the Rayleigh-Ritz variational principle cannot be employed in the usual manner. We have studied alternative selection principles for the choice of test functions to treat resonances along the stochastic variational method. We have found that a stationarity principle for the complex energy eigenvalues provides a viable method for selecting test functions for resonant states in a constructive manner. We discuss several variants thereof and exemplify their practical efficiencies.
Time-Frequency Approach for Stochastic Signal Detection
Ghosh, Ripul; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2011-10-20
The detection of events in a stochastic signal has been a subject of great interest. The Fourier transform, one of the oldest signal processing techniques, contains information about the frequency content of a signal, but it cannot resolve the exact onset of changes in frequency; all temporal information is contained in the phase of the transform. The spectrogram, on the other hand, is better able to resolve the temporal evolution of frequency content, but trades off time resolution against frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of non-stationary signals. The Wigner-Ville distribution (WVD) is the most prominent quadratic time-frequency signal representation and is used for analysing frequency variations in signals. The WVD allows instantaneous frequency estimation at each data point, with a typical temporal resolution of fractions of a second. This paper describes, through simulations, the way time-frequency models are applied to the detection of events in a stochastic signal.
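A minimal discrete WVD can be sketched directly from its definition: at each time index, Fourier-transform the instantaneous autocorrelation over the lag variable. The signal length and tone frequency below are illustrative:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville sketch: for each time n, take the FFT over the
    lag tau of the instantaneous autocorrelation x[n+tau] * conj(x[n-tau])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)          # lags limited by the signal edges
        acf = np.zeros(N, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            acf[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.fft.fft(acf).real          # real-valued time-frequency slice
    return W

# for a pure tone at bin f0, the lag product oscillates at 2*f0,
# so the WVD concentrates its energy at frequency bin 2*f0
N, f0 = 64, 8
t = np.arange(N)
x = np.exp(2j * np.pi * f0 * t / N)
W = wigner_ville(x)
```

This O(N^2 log N) loop is only a pedagogical sketch; practical analyses use windowed (pseudo-)WVD variants to suppress cross-terms between signal components.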
A fast and scalable recurrent neural network based on stochastic meta descent.
Liu, Zhenzhen; Elhanany, Itamar
2008-09-01
This brief presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links. This yields a reduced storage and computational complexity of O(N^2). Stochastic meta-descent (SMD), an adaptive step-size scheme for stochastic gradient-descent problems, is employed as a means of incorporating curvature information in order to substantially accelerate the learning process. We also introduce a clustered version of our algorithm to further improve its scalability attributes. Despite the dramatic reduction in resource requirements, it is shown through simulation results that the approach outperforms regular RTRL by almost an order of magnitude. Moreover, the scheme lends itself to parallel hardware realization by virtue of the localized property that is inherent to the learning framework. PMID:18779096
NASA Astrophysics Data System (ADS)
Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.
2011-12-01
A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
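The sequential estimation idea can be illustrated with the standard state-augmentation trick: append the unknown parameter to the state and run an extended Kalman filter. The scalar toy system, noise levels, and constants below are assumptions for illustration, not the Prasad rainfall-runoff model:

```python
import random

def identify(a_true=0.8, steps=300, r=0.04, seed=10):
    """Estimate the unknown a in x[k+1] = a*x[k] + u[k] from noisy outputs."""
    rng = random.Random(seed)
    x = 1.0                       # true (unobserved) state
    xe, ae = 0.0, 0.5             # augmented state estimate (x, a)
    p = [[1.0, 0.0], [0.0, 1.0]]  # its 2x2 covariance
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)            # known, persistently exciting input
        x = a_true * x + u                    # true system
        y = x + rng.gauss(0.0, r ** 0.5)      # noisy measurement
        # EKF predict: f(x, a) = (a*x + u, a), Jacobian F = [[a, x], [0, 1]]
        f = [[ae, xe], [0.0, 1.0]]
        xe = ae * xe + u
        fp = [[sum(f[i][m] * p[m][j] for m in range(2)) for j in range(2)]
              for i in range(2)]
        p = [[sum(fp[i][m] * f[j][m] for m in range(2)) for j in range(2)]
             for i in range(2)]                # P <- F P F^T
        p[1][1] += 1e-4                        # small parameter random walk
        # EKF update with measurement matrix H = [1, 0]
        s = p[0][0] + r
        kx, ka = p[0][0] / s, p[1][0] / s
        innov = y - xe
        xe, ae = xe + kx * innov, ae + ka * innov
        p = [[p[0][0] - kx * p[0][0], p[0][1] - kx * p[0][1]],
             [p[1][0] - ka * p[0][0], p[1][1] - ka * p[0][1]]]
    return ae

print(round(identify(), 2))
```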
Integral-based event triggering controller design for stochastic LTI systems via convex optimisation
NASA Astrophysics Data System (ADS)
Mousavi, S. H.; Marquez, H. J.
2016-07-01
The presence of measurement noise in event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event-triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise, effectively reduces the flow of communication between plant and controller, and improves output performance. Using a Lyapunov approach, stability in the mean-square sense is proved. A simulated example illustrates the properties of our approach.
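The integral-triggering idea can be sketched for a scalar discrete-time plant. All gains, thresholds, and noise levels below are assumed for illustration; the paper treats general LTI systems with a Lyapunov-based design. Accumulating the squared error, rather than triggering on the instantaneous error, keeps zero-mean noise from firing events constantly:

```python
import random

def run(steps=400, a=0.5, b=1.0, k=0.4, thr=0.2, noise=0.05, seed=9):
    """Scalar plant x[k+1] = a*x[k] - b*k*x_held, updated only at events."""
    rng = random.Random(seed)
    x, x_held, acc, events = 1.0, 1.0, 0.0, 0
    for _ in range(steps):
        y = x + rng.gauss(0.0, noise)    # noisy measurement
        acc += (y - x_held) ** 2         # integral of the squared error
        if acc > thr:                    # event: transmit a fresh measurement
            x_held, acc, events = y, 0.0, events + 1
        x = a * x - b * k * x_held       # plant driven by the held value
    return x, events

x_final, n_events = run()
print(round(x_final, 2), n_events)
```

With these assumed numbers the state settles near zero while the controller receives far fewer than one sample per step.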
Stochastic queueing-theory approach to human dynamics
NASA Astrophysics Data System (ADS)
Walraevens, Joris; Demoor, Thomas; Maertens, Tom; Bruneel, Herwig
2012-02-01
Recently, numerous studies have shown that human dynamics cannot be described accurately by exponential laws. For instance, Barabási [Nature (London) 435, 207 (2005)] demonstrates that waiting times of tasks to be performed by a human are more suitably modeled by power laws. He presumes that these power laws are caused by a priority selection mechanism among the tasks. Priority models are well-developed in queueing theory (e.g., for telecommunication applications), and this paper demonstrates the (quasi-)immediate applicability of such a stochastic priority model to human dynamics. By calculating generating functions and by studying them in their dominant singularity, we prove that nonexponential tails result naturally. Contrary to popular belief, however, these are not necessarily triggered by the priority selection mechanism.
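The priority-selection mechanism Barabási proposed can be reproduced in a few lines. This is a toy simulation with assumed parameters, not the generating-function analysis of the paper; with p close to 1, the waiting-time distribution becomes heavily skewed:

```python
import random

def waiting_times(n_tasks=2, p=0.99999, steps=20000, seed=1):
    """Execute the highest-priority task w.p. p, a random one otherwise."""
    rng = random.Random(seed)
    prio = [rng.random() for _ in range(n_tasks)]
    age = [0] * n_tasks                     # steps each pending task has waited
    waits = []
    for _ in range(steps):
        if rng.random() < p:
            i = max(range(n_tasks), key=prio.__getitem__)
        else:
            i = rng.randrange(n_tasks)
        waits.append(age[i])                # record the executed task's wait
        prio[i], age[i] = rng.random(), 0   # replace it with a fresh task
        for j in range(n_tasks):
            if j != i:
                age[j] += 1
    return waits

w = waiting_times()
print(max(w), round(sum(w) / len(w), 1))
```

A low-priority task waits until a still-lower-priority task arrives, which yields the fat tail: the maximum wait dwarfs the mean.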
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
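The core argument can be sketched numerically: with autocatalytic growth P(t) = P0*exp(k*t), the threshold-crossing time is T = ln(Pth/P0)/k, so a log-normal rate constant k yields a log-normal incubation time T. The threshold ratio and log-normal parameters below are illustrative assumptions:

```python
import math, random, statistics

def incubation_times(n=5000, seed=2):
    """T = ln(P_th/P_0) / k with a log-normally distributed rate constant k."""
    rng = random.Random(seed)
    ratio = math.log(1e6)                 # ln(P_th / P_0), arbitrary threshold
    return [ratio / math.exp(rng.gauss(0.0, 0.4)) for _ in range(n)]

ts = incubation_times()
logs = sorted(math.log(t) for t in ts)    # log T should be normal
mean, med = statistics.fmean(logs), logs[len(logs) // 2]
print(round(mean, 2), round(med, 2), round(statistics.stdev(logs), 2))
```

Since log T = log(ratio) - log k, the mean and median of log T coincide and its spread equals that of log k, the signature of a log-normal distribution.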
Two-state approach to stochastic hair bundle dynamics.
Clausznitzer, Diana; Lindner, Benjamin; Jülicher, Frank; Martin, Pascal
2008-04-01
Hair cells perform the mechanoelectrical transduction of sound signals in the auditory and vestibular systems of vertebrates. The part of the hair cell essential for this transduction is the so-called hair bundle. In vitro experiments on hair cells from the sacculus of the American bullfrog have shown that the hair bundle comprises active elements capable of producing periodic deflections like a relaxation oscillator. Recently, a continuous nonlinear stochastic model of the hair bundle motion [Nadrowski, Proc. Natl. Acad. Sci. U.S.A. 101, 12195 (2004)] has been shown to reproduce the experimental data in stochastic simulations faithfully. Here, we demonstrate that a binary filtering of the hair bundle's deflection (experimental data and continuous hair bundle model) does not change significantly the spectral statistics of the spontaneous as well as the periodically driven hair bundle motion. We map the continuous hair bundle model to the FitzHugh-Nagumo model of neural excitability and discuss the bifurcations between different regimes of the system in terms of the latter model. Linearizing the nullclines and assuming perfect time-scale separation between the variables we can map the FitzHugh-Nagumo system to a simple two-state model in which each of the states corresponds to the two possible values of the binary-filtered hair bundle trajectory. For the two-state model, analytical expressions for the power spectrum and the susceptibility can be calculated [Lindner and Schimansky-Geier, Phys. Rev. E 61, 6103 (2000)] and show the same features as seen in the experimental data as well as in simulations of the continuous hair bundle model. PMID:18517650
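For the symmetric two-state (telegraph) limit discussed above, the analytical results reduce to an exponential correlation function C(t) = exp(-2*r*t), whose Fourier transform is a Lorentzian power spectrum S(w) = 4r/(4r^2 + w^2). A quick Monte Carlo check of C(t), with assumed rate, time step, and lag:

```python
import math, random

def telegraph_autocorr(r=1.0, dt=0.01, steps=200000, lag=50, seed=3):
    """Monte Carlo autocorrelation C(lag*dt) of a +/-1 telegraph process."""
    rng = random.Random(seed)
    x, xs = 1, []
    for _ in range(steps):
        if rng.random() < r * dt:          # switch state with probability r*dt
            x = -x
        xs.append(x)
    n = steps - lag
    return sum(xs[i] * xs[i + lag] for i in range(n)) / n

c_sim = telegraph_autocorr()
c_theory = math.exp(-2 * 1.0 * 50 * 0.01)  # C(t) = exp(-2*r*t) at t = lag*dt
print(round(c_sim, 3), round(c_theory, 3))
```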
NASA Astrophysics Data System (ADS)
Loch-Dehbi, S.; Dehbi, Y.; Gröger, G.; Plümer, L.
2016-10-01
This paper introduces a novel method for the automatic derivation of building floorplans and indoor models. Our approach is based on logical and stochastic reasoning using sparse observations such as building room areas. No further sensor observations such as 3D point clouds are needed. Our method benefits from extensive prior knowledge of functional dependencies and probability density functions of shape and location parameters of rooms depending on their functional use. The determination of posterior beliefs is performed using Bayesian networks. Stochastic reasoning is complex since the problem is characterized by a mixture of discrete and continuous parameters that are in turn correlated by non-linear constraints. To cope with this kind of complexity, the proposed reasoner combines statistical methods with constraint propagation. It generates a limited number of hypotheses in a model-based top-down approach. It predicts floorplans based on a priori localised windows. The use of Gaussian mixture models, constraint solvers and stochastic models helps to cope with the a priori infinite space of possible floorplan instantiations.
Zabaras, N. Ganapathysubramanian, B.
2008-04-20
Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainties. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed to a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing the range of applicability of the framework for the design in the presence of uncertainty of many other systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty including inverse heat conduction problems in random heterogeneous media are provided to showcase the developed framework.
Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.
2009-12-01
We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method of long-period (LP) and tremor using their amplitudes. The deterministic approach of the waveform inversion is useful to constrain the source mechanism and location, but is basically only applicable to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism better explained the VLP signals than an isotropic mechanism. These results indicated that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may be categorized as a stochastic approach based on the nature of scattering waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis
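The amplitude-based source location method can be sketched as a grid search assuming isotropic S-wave radiation, A_i = A0*exp(-B*r_i)/r_i with B = pi*f/(Q*beta). The station geometry, f = 8 Hz, Q = 60, and beta = 2.5 km/s below are illustrative assumptions:

```python
import math

def locate(stations, amps, f=8.0, q=60.0, beta=2.5):
    """Grid-search the source minimizing the amplitude-decay misfit."""
    b = math.pi * f / (q * beta)           # attenuation coefficient B
    best = None
    for gx in range(0, 21):                # coarse 2-D grid, km
        for gy in range(0, 21):
            preds = []
            for sx, sy in stations:
                r = max(math.hypot(gx - sx, gy - sy), 0.1)
                preds.append(math.exp(-b * r) / r)   # exp decay / spreading
            # least-squares source amplitude A0 for this trial location
            a0 = (sum(p * o for p, o in zip(preds, amps)) /
                  sum(p * p for p in preds))
            res = sum((o - a0 * p) ** 2 for o, p in zip(amps, preds))
            if best is None or res < best[0]:
                best = (res, gx, gy)
    return best[1], best[2]

stations = [(0, 0), (20, 0), (0, 20), (20, 20), (10, 0)]
b = math.pi * 8.0 / (60.0 * 2.5)
true = (7, 12)
amps = [math.exp(-b * math.hypot(true[0] - sx, true[1] - sy)) /
        math.hypot(true[0] - sx, true[1] - sy) for sx, sy in stations]
print(locate(stations, amps))
```

With noiseless synthetic amplitudes the search recovers the true grid node exactly; real applications add site-amplification corrections as described above.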
A stochastic process approach of the Drake equation parameters
NASA Astrophysics Data System (ADS)
Glade, Nicolas; Ballet, Pascal; Bastien, Olivier
2012-04-01
The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step to quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate for the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that will be able to provide both a temporal structure to the Drake equation (i.e. introduce time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
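The spirit of the proposal — replacing fixed Drake factors by random variables so that N comes with an error estimate — can be illustrated by simple Monte Carlo. All distributions below are assumptions for illustration, not the stochastic process constructed in the paper:

```python
import math, random, statistics

def drake_samples(n=10000, seed=4):
    """Sample N = R* fp ne fl fi fc L with uncertain factors."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r_star = rng.uniform(1.0, 3.0)        # star formation rate per year
        fp = rng.uniform(0.2, 1.0)            # fraction of stars with planets
        ne = rng.uniform(0.5, 2.0)            # habitable planets per system
        fl = rng.uniform(0.0, 1.0)            # fraction where life arises
        fi = rng.uniform(0.0, 1.0)            # fraction developing intelligence
        fc = rng.uniform(0.0, 0.2)            # fraction that communicates
        life = math.exp(rng.gauss(math.log(1000.0), 1.0))  # lifetime L, years
        out.append(r_star * fp * ne * fl * fi * fc * life)
    return out

ns = drake_samples()
print(round(statistics.fmean(ns)), round(statistics.stdev(ns)))
```

The point is the second number: a single-valued Drake product hides it, while a stochastic treatment reports N together with its dispersion.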
A Stochastic Approach to Noise Modeling for Barometric Altimeters
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-01-01
The question whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes (the effects of these two components are prominent for long-time and short-time motion tracking, respectively); and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component, and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters used alone or in combination with whitening filters learnt from ARMA model parameters are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking small-amplitude, low-frequency motions. PMID:24253189
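The three-component noise model can be sketched directly: a slowly varying mean, a first-order Gauss-Markov process, and white noise. All parameter values below are assumptions; the paper identifies them from data with ARMA techniques:

```python
import math, random, statistics

def altimeter_noise(n=10000, dt=0.1, tau=20.0, sigma_gm=0.3,
                    sigma_w=0.05, drift=0.001, seed=5):
    """Slow drift + first-order Gauss-Markov + white noise, in metres."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)                   # GM correlation coefficient
    q = sigma_gm * math.sqrt(1.0 - a * a)     # driving noise keeping std sigma_gm
    x, out = 0.0, []
    for k in range(n):
        x = a * x + rng.gauss(0.0, q)         # Gauss-Markov component
        out.append(drift * k * dt             # deterministic time-varying mean
                   + x + rng.gauss(0.0, sigma_w))   # + uncorrelated noise
    return out

z = altimeter_noise()
print(round(statistics.stdev(z), 2))
```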
A stochastic optimization approach for integrated urban water resource planning.
Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X
2013-01-01
Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructure. Planning of urban water resources thus needs not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change.
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal) which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model, resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identifying the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained applying both methodologies for wildfire susceptibility mapping, as well as the wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-07
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
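The rejection idea can be sketched for a single reaction A -> B with propensity a(x) = c*x (a toy system; the actual algorithm handles reaction networks with delays). Candidate firings are drawn at the upper-bound rate and thinned, so the exact propensity is evaluated only when the cheap accept test fails:

```python
import random

def rssa(x0=100, c=1.0, t_end=2.0, delta=0.1, seed=6):
    """Rejection-based SSA for A -> B with propensity a(x) = c*x."""
    rng = random.Random(seed)
    x, t, exact_evals = x0, 0.0, 0
    while t < t_end and x > 0:
        # fluctuation interval around the current copy number
        x_lo, x_hi = int(x * (1 - delta)), int(x * (1 + delta)) + 1
        a_lo, a_hi = c * x_lo, c * x_hi            # propensity bounds
        while x_lo <= x <= x_hi and t < t_end and x > 0:
            t += rng.expovariate(a_hi)             # candidate firing time
            u = rng.random()
            if u <= a_lo / a_hi:                   # cheap accept: no a(x) needed
                x -= 1
            elif u <= (c * x) / a_hi:              # exact-propensity test
                exact_evals += 1
                x -= 1
            else:
                exact_evals += 1                   # rejection; time still advances
    return x, exact_evals

x_final, n_exact = rssa()
print(x_final, n_exact)
```

The thinning makes the method exact (accepted events fire at the true rate c*x), while most steps use only the precomputed bounds, which is the saving the abstract describes.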
NASA Astrophysics Data System (ADS)
Vrac, M. R.; Hayhoe, K.; Stein, M.
2005-12-01
Downscaling methods try to derive local-scale values or characteristics from large-scale information such as AOGCM outputs. These methods can be useful to address climate change issues from a local point of view by understanding how this change will interact with existing local environmental features. Regional climate assessments require continuous time series for multiple scenarios and AOGCM drivers. This computational task is nowadays out of reach of most dynamical downscaling models. Here, advanced statistical clustering methods are applied to define original atmospheric patterns, which are included as the bases of a nonhomogeneous stochastic weather typing approach. This method provides accurate and rapid simulations of local-scale precipitation features for 37 raingauges in Illinois at low computational cost. Two different kinds of atmospheric states are defined: "circulation" patterns - developed by a model-based method applied to large-scale NCEP reanalysis data - and "precipitation" patterns - obtained through a hierarchical ascending clustering method applied directly to the observed rainfall amounts in Illinois with an original metric. By modelling the transition probabilities from one pattern to another by a nonhomogeneous Markov model - i.e. influenced by some large-scale atmospheric variables such as geopotential heights, humidity and dew point temperature depression - we see that the precipitation states allow us to model conditional distributions of precipitation given the current weather state - and then to simulate local precipitation intensities - more accurately than with the traditional approach based on upper-air circulation patterns alone.
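The nonhomogeneous Markov transition idea can be sketched with two weather patterns whose transition probability depends on a large-scale covariate through a logistic link. The states, coefficients, and rainfall distribution below are assumptions, not the fitted Illinois model:

```python
import math, random

def simulate(days=1000, seed=7):
    """Two weather patterns; the transition to 'wet' depends on a covariate."""
    rng = random.Random(seed)
    state, rain = 0, []                           # 0 = dry, 1 = wet pattern
    for t in range(days):
        z = math.sin(2 * math.pi * t / 365.0)     # seasonal large-scale covariate
        # nonhomogeneous transition probability with a persistence term
        logit = -1.0 + 2.0 * z + (1.5 if state == 1 else 0.0)
        p_wet = 1.0 / (1.0 + math.exp(-logit))
        state = 1 if rng.random() < p_wet else 0
        # rainfall drawn from a state-conditional distribution
        rain.append(rng.expovariate(0.2) if state == 1 else 0.0)
    return rain

r = simulate()
wet_days = sum(1 for x in r if x > 0)
print(wet_days, round(sum(r), 1))
```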
Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging.
Sansalone, Vittorio; Gagliardi, Davide; Desceliers, Christophe; Bousson, Valérie; Laredo, Jean-Denis; Peyrin, Françoise; Haïat, Guillaume; Naili, Salah
2016-02-01
Accurate and reliable assessment of bone quality requires predictive methods which could probe bone microstructure and provide information on bone mechanical properties. Multiscale modelling and simulation represent a fast and powerful way to predict bone mechanical properties based on experimental information on bone microstructure as obtained through X-ray-based methods. However, technical limitations of experimental devices used to inspect bone microstructure may produce blurry data, especially in in vivo conditions. Uncertainties affecting the experimental data (input) may question the reliability of the results predicted by the model (output). Since input data are uncertain, deterministic approaches are limited and new modelling paradigms are required. In this paper, a novel stochastic multiscale model is developed to estimate the elastic properties of bone while taking into account uncertainties on bone composition. Effective elastic properties of cortical bone tissue were computed using a multiscale model based on continuum micromechanics. Volume fractions of bone components (collagen, mineral, and water) were considered as random variables whose probabilistic description was built using the maximum entropy principle. The relevance of this approach was proved by analysing a human bone sample taken from the inferior femoral neck. The sample was imaged using synchrotron radiation micro-computed tomography. 3-D distributions of Haversian porosity and tissue mineral density extracted from these images supplied the experimental information needed to build the stochastic models of the volume fractions. Thus, the stochastic multiscale model provided reliable statistical information (such as mean values and confidence intervals) on bone elastic properties at the tissue scale. Moreover, the existence of a simpler "nominal model", accounting for the main features of the stochastic model, was investigated. It was shown that such a model does exist, and its relevance
Zhang, Jinjing; Zhang, Tao
2015-02-01
The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing employing the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost reduces from O(N^2) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal made by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm. PMID:25725879
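Stochastic resonance itself (though not the spectral-entropy parameter tuning of PSRSE) can be demonstrated with a noise-driven bistable system: the DFT bin at the weak signal's frequency stands out against neighbouring bins. All parameters below are assumptions for illustration:

```python
import cmath, math, random

def bistable_output(amp=0.2, f=0.01, noise=0.6, dt=0.1, n=20000, seed=8):
    """Euler-Maruyama path of dx = (x - x^3 + A*sin(2*pi*f*t)) dt + noise dW."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for k in range(n):
        s = amp * math.sin(2 * math.pi * f * k * dt)   # weak periodic signal
        x += (x - x ** 3 + s) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def bin_power(sig, freq, dt=0.1):
    """Periodogram power of one DFT bin at the given frequency."""
    w = cmath.exp(-2j * math.pi * freq * dt)
    return abs(sum(v * w ** k for k, v in enumerate(sig))) ** 2 / len(sig)

xs = bistable_output()
sig_power = bin_power(xs, 0.01)                     # bin at the drive frequency
bg_power = sum(bin_power(xs, 0.01 + m * 0.0005)     # nearby off-signal bins
               for m in range(5, 16)) / 11
print(round(sig_power, 1), round(bg_power, 1))
```

Noise-assisted hopping between the two wells synchronizes with the weak drive, so the signal bin carries far more power than the broadband background.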
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the conditions under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
Zhang, Jinjing; Zhang, Tao
2015-02-01
The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing with the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost is reduced from O(N^2) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal produced by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm.
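The spectral-entropy criterion that PSRSE uses to tune the resonance parameters can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the signal length, frequency, and noise level below are invented for the demonstration:

```python
import numpy as np

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum of x.

    Near 1 for a noise-like flat spectrum; near 0 when the power is
    concentrated in a few spectral lines (a clean tone).
    """
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd[1:]                        # drop the DC bin
    p = psd / psd.sum()                  # spectrum as a probability mass
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(len(psd)))

rng = np.random.default_rng(0)
n = 4096
tone = np.sin(2 * np.pi * (205 / n) * np.arange(n))   # weak-signal surrogate
noise = rng.standard_normal(n)                        # broadband noise
```

Minimizing this entropy over candidate resonance parameters selects the setting at which the output spectrum is most strongly concentrated around the signal frequency, which is the PSRSE tuning idea in miniature.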
Stochastically optimized monocular vision-based navigation and guidance
NASA Astrophysics Data System (ADS)
Watanabe, Yoko
A minimum-effort guidance (MEG) law for multiple target tracking is applied to the guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well known that the separation principle between estimation and control does not hold. In other words, vision-based estimation performance depends strongly on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law that achieves a given mission under the condition of using EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort, subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement at one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as the n-step-ahead (nSA) optimization for an arbitrary number n. Furthermore, the nSA suboptimal guidance law is extended to the p%-ahead suboptimal guidance by changing the value of n at each time step depending on the current time.
The nSA (including the OSA) and
Reliability-based design optimization under stationary stochastic process loads
NASA Astrophysics Data System (ADS)
Hu, Zhen; Du, Xiaoping
2016-08-01
Time-dependent reliability-based design ensures the satisfaction of reliability requirements for a given period of time, but with a high computational cost. This work improves the computational efficiency by extending the sequential optimization and reliability analysis (SORA) method to time-dependent problems with both stationary stochastic process loads and random variables. The challenge of the extension is the identification of the most probable point (MPP) associated with time-dependent reliability targets. Since a direct relationship between the MPP and reliability target does not exist, this work defines the concept of equivalent MPP, which is identified by the extreme value analysis and the inverse saddlepoint approximation. With the equivalent MPP, the time-dependent reliability-based design optimization is decomposed into two decoupled loops: deterministic design optimization and reliability analysis, and both are performed sequentially. Two numerical examples are used to show the efficiency of the proposed method.
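The distinction between point-in-time and time-dependent reliability that motivates the extreme-value analysis can be illustrated with a brute-force Monte Carlo sketch. This is not the SORA decomposition itself; the capacity distribution and the AR(1) stationary load process below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Limit state g = capacity - load(t); failure when load exceeds capacity
# at any time in [0, T].
n_paths, n_t = 20000, 100
capacity = rng.normal(6.0, 0.5, size=(n_paths, 1))   # random variable

# Stationary Gaussian load process: mean 3, unit variance, AR(1) in time.
rho = 0.9
load = np.empty((n_paths, n_t))
load[:, 0] = rng.standard_normal(n_paths)
for k in range(1, n_t):
    load[:, k] = rho * load[:, k - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
load = 3.0 + load

pf_instant = np.mean(load[:, 0] > capacity[:, 0])        # point-in-time failure
pf_timedep = np.mean(load.max(axis=1) > capacity[:, 0])  # extreme value over [0, T]
```

Because the time-dependent event is a union over all time instants, pf_timedep always dominates pf_instant; the equivalent-MPP construction in the paper is a way to obtain this extreme-value probability without brute-force sampling.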
Linking agent-based models and stochastic models of financial markets.
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene
2012-05-29
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
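A minimal Cont-Bouchaud-style herding sketch reproduces the qualitative claim that shared strategies fatten the return tails. The power-law cluster-size distribution and its tail exponent are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps = 20000

# Herding: at each step one trading cluster acts; its size is heavy-tailed
# because traders copy each other's technical strategies.
sizes = (1.0 - rng.random(n_steps)) ** (-1.0 / 1.5)   # Pareto tail, alpha = 1.5
signs = rng.choice([-1.0, 1.0], size=n_steps)
returns = signs * sizes / sizes.mean()

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

gauss = rng.standard_normal(n_steps)   # Gaussian benchmark for comparison
```

The synchronized clusters produce a return series whose sample kurtosis vastly exceeds the Gaussian value, i.e. "fat" tails emerge from imitation alone.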
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
On the Performance of Stochastic Model-Based Image Segmentation
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Sewchand, Wilfred
1989-11-01
A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a finite normal mixture. The technique utilizes an information-theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How often does this technique over- or under-estimate the number of region images? What is the probability of segmentation error for this technique? This paper addresses these two problems and is a continuation of [1,2].
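The pipeline the abstract describes (fit a finite normal mixture by EM, then segment with a Bayesian classifier) can be sketched on synthetic intensities. This is a two-region 1-D example with invented parameters, not the CT data of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "image": pixel intensities drawn from a two-region normal mixture.
x = np.concatenate([rng.normal(50, 5, 3000), rng.normal(100, 8, 2000)])
truth = np.concatenate([np.zeros(3000, int), np.ones(2000, int)])

def component_densities(x, w, mu, sd):
    return w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

w, mu, sd = np.array([0.5, 0.5]), np.array([40.0, 120.0]), np.array([10.0, 10.0])
for _ in range(200):                        # EM iterations
    r = component_densities(x, w, mu, sd)   # E-step: responsibilities
    r /= r.sum(axis=1, keepdims=True)
    n = r.sum(axis=0)                       # M-step: re-estimate parameters
    w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)

seg = component_densities(x, w, mu, sd).argmax(axis=1)  # Bayesian (MAP) classifier
accuracy = (seg == truth).mean()
```

With well-separated region means the EM estimates converge to the true mixture parameters and the MAP rule recovers the region labels almost perfectly; the paper's questions concern exactly how this degrades when regions overlap or their number is misestimated.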
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357
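The hybrid optimizer itself is not given in the abstract, but the retrieval-by-optimization idea can be sketched with a plain (mu + lambda) evolution strategy recovering the parameters of a synthetic monomodal curve. All numbers below (grid, curve shape, population sizes) are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "scattering" data: a monomodal Gaussian curve whose center and
# width stand in for the PSD parameters to be retrieved.
xg = np.linspace(0.0, 10.0, 200)
def model(p):
    mu, s = p
    return np.exp(-0.5 * ((xg - mu) / s) ** 2)
target = model((4.0, 1.0))

def loss(p):
    return float(np.sum((model(p) - target) ** 2))

# (mu + lambda) evolution strategy with a decaying mutation step.
pop = rng.uniform([0.0, 0.2], [10.0, 3.0], size=(30, 2))
step = 1.0
for gen in range(200):
    children = pop[rng.integers(0, 30, 120)] + step * rng.standard_normal((120, 2))
    children[:, 1] = np.abs(children[:, 1]) + 1e-3     # keep the width positive
    union = np.vstack([pop, children])
    union = union[np.argsort([loss(p) for p in union])]
    pop, step = union[:30], step * 0.98                # elitist survival
best = pop[0]
```

The elitist population converges to the generating parameters; the paper's HEOA additionally mixes in locally weighted linear regression to speed up this final refinement.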
Scott, Bobby R., Ph.D.
2003-06-27
OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE) Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for stochastic effects induced in cells by low-dose radiation and genotoxic chemicals. Understanding the shape of the dose-response curve for these cellular effects helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on
Stochastic Approach to Phonon-Assisted Optical Absorption
NASA Astrophysics Data System (ADS)
Zacharias, Marios; Patrick, Christopher E.; Giustino, Feliciano
2015-10-01
We develop a first-principles theory of phonon-assisted optical absorption in semiconductors and insulators which incorporates the temperature dependence of the electronic structure. We show that the Hall-Bardeen-Blatt theory of indirect optical absorption and the Allen-Heine theory of temperature-dependent band structures can be derived from the present formalism by retaining only one-phonon processes. We demonstrate this method by calculating the optical absorption coefficient of silicon using an importance sampling Monte Carlo scheme, and we obtain temperature-dependent line shapes and band gaps in good agreement with experiment. The present approach opens the way to predictive calculations of the optical properties of solids at finite temperature.
Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.
Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A
2007-12-01
By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifiers gaps are found in the transition between the continuous processes and their associated noises.
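The Bandt-Pompe construction behind these quantifiers reduces a time series to ordinal patterns. A normalized permutation entropy can be written in a few lines (an illustrative order-3 version of the entropy only, not the intensive statistical complexity measure of the paper):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3):
    """Normalized Bandt-Pompe permutation entropy (0 = regular, 1 = noise-like)."""
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(np.argsort(x[i:i + order]))   # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

rng = np.random.default_rng(5)
noise = rng.standard_normal(10000)   # all patterns equally likely
ramp = np.arange(10000.0)            # a single ascending pattern
```

White noise visits all ordinal patterns uniformly (entropy near 1), while a monotone signal produces a single pattern (entropy 0); self-similar processes fall in between, which is what the quantifier gaps in the abstract exploit.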
Pacini, Simone
2014-01-01
Mesenchymal stromal cells (MSCs) have enormous intrinsic clinical value due to their multi-lineage differentiation capacity, support of hemopoiesis, immunoregulation, and growth factor/cytokine secretion. MSCs have thus been the object of extensive research for decades. After completion of many pre-clinical and clinical trials, MSC-based therapy is now facing a challenging phase. Several clinical trials have reported moderate, non-durable benefits, which caused initial enthusiasm to wane and indicated an urgent need to optimize the efficacy of the therapeutic platform for MSC-based treatment. Recent investigations suggest the presence of multiple in vivo MSC ancestors in a wide range of tissues, which contribute to the heterogeneity of the starting material for the expansion of MSCs. This variability in the MSC culture-initiating cell population, together with the different types of enrichment/isolation and cultivation protocols applied, is hampering progress in the definition of MSC-based therapies. International regulatory statements require a precise risk/benefit analysis ensuring the safety and efficacy of treatments. GMP validation allows for quality certification, but the prediction of a clinical outcome after MSC-based therapy is correlated not only with possible morbidity derived from the cell production process, but also with the biology of the MSCs themselves, which is highly sensitive to unpredictable fluctuations of the isolation and culture conditions. Risk exposure and efficacy of MSC-based therapies should be evaluated by pre-clinical studies, but the batch-to-batch variability of the final medicinal product could significantly limit the predictability of these studies. The future success of MSC-based therapies could lie not only in rational optimization of therapeutic strategies, but also in a stochastic approach during the assessment of benefit and risk factors.
A wavelet-based computational method for solving stochastic Itô–Volterra integral equations
Mohammadi, Fakhrodin
2015-10-01
This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix are applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method some numerical examples are included.
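For orientation (and as a useful check on any spectral method), the simplest alternative discretization of a stochastic Itô-Volterra equation is plain Euler-Maruyama, not the Chebyshev operational-matrix approach of the paper. The test equation below, X_t = 1 + ∫₀ᵗ X_s dW_s, is a standard benchmark with exact mean E[X_t] = 1:

```python
import numpy as np

rng = np.random.default_rng(6)

# Euler-Maruyama for X_t = 1 + \int_0^t X_s dW_s (zero drift kernel,
# diffusion kernel b(X) = X); the exact solution has E[X_t] = 1 for all t.
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps
X = np.ones(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    X = X + X * dW

mean_X = X.mean()   # should stay close to the exact mean 1
```

The discretized process is a martingale, so the Monte Carlo mean stays near 1 up to sampling error; a wavelet-based solver should reproduce the same moment with far fewer degrees of freedom.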
On a stochastic approach to a code performance estimation
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.
2016-06-01
The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important source of overhead arises from random variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, using 3D surface plots produced by the function surf. The idea can be used for estimating program efficiency.
Robustness and security assessment of image watermarking techniques by a stochastic approach
NASA Astrophysics Data System (ADS)
Conotter, V.; Boato, G.; Fontanari, C.; De Natale, F. G. B.
2009-02-01
In this paper we propose to evaluate both the robustness and the security of digital image watermarking techniques by considering the perceptual quality of un-marked images in terms of weighted PSNR. The proposed tool is based on genetic algorithms and is suitable for researchers evaluating the robustness of newly developed watermarking methods. Given a combination of selected attacks, the proposed framework looks for a fine-tuned parameterization of them that ensures a perceptual quality of the un-marked image below a given threshold. Correspondingly, a novel metric for robustness assessment is introduced. On the other hand, this tool also proves useful in scenarios where an attacker tries to remove the watermark to overcome copyright issues. Security assessment is provided by a stochastic search for the minimum degradation that needs to be introduced in order to obtain an un-marked version of the image as close as possible to the given one. Experimental results show the effectiveness of the proposed approach.
Runoff modelling using radar data and flow measurements in a stochastic state space approach.
Krämer, S; Grum, M; Verworn, H R; Redder, A
2005-01-01
In urban drainage the estimation of runoff with the help of models is a complex task. This is in part due to the fact that rainfall, the most important input to urban drainage modelling, is highly uncertain. Added to the uncertainty of rainfall is the complexity of performing accurate flow measurements. In terms of deterministic modelling techniques these are needed for calibration and evaluation of the applied model. Therefore, the uncertainties of rainfall and flow measurements have a severe impact on the model parameters and results. To overcome these problems a new methodology has been developed which is based on simple rain plane and runoff models that are incorporated into a stochastic state space model approach. The state estimation is done by using the extended Kalman filter in combination with a maximum likelihood criterion and an off-line optimization routine. This paper presents the results of this new methodology with respect to the combined consideration of uncertainties in distributed rainfall derived from radar data and uncertainties in measured flows in an urban catchment within the Emscher river basin, Germany.
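The state-space idea can be reduced to a scalar sketch: a linear-reservoir runoff state driven by rainfall, observed through noisy flow measurements, and estimated with a Kalman filter (the extended Kalman filter of the paper collapses to this in the linear case). All coefficients and noise variances below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear-reservoir state: runoff decays (a) and is forced by rainfall u_k;
# both the dynamics and the flow measurement carry noise.
n, a = 200, 0.9
q_var, r_var = 0.05, 0.5                 # process / measurement noise variances
u = np.clip(rng.normal(0.1, 0.3, n), 0.0, None)   # nonnegative rainfall forcing

x_true, z = np.zeros(n), np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + u[k] + rng.normal(0, np.sqrt(q_var))
    z[k] = x_true[k] + rng.normal(0, np.sqrt(r_var))   # noisy flow measurement

# Scalar Kalman filter
x_hat, P = np.zeros(n), 1.0
for k in range(1, n):
    xp, Pp = a * x_hat[k - 1] + u[k], a * a * P + q_var   # predict
    K = Pp / (Pp + r_var)                                 # Kalman gain
    x_hat[k] = xp + K * (z[k] - xp)                       # measurement update
    P = (1 - K) * Pp

rmse_filter = np.sqrt(np.mean((x_hat - x_true) ** 2))
rmse_meas = np.sqrt(np.mean((z - x_true) ** 2))
```

The filtered state tracks the true runoff more closely than the raw flow measurements, which is the benefit the paper obtains, at larger scale, by filtering radar rainfall and flow data jointly.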
NASA Astrophysics Data System (ADS)
Kerachian, Reza; Karamouz, Mohammad
2006-12-01
In this study, an algorithm combining a water quality simulation model and a deterministic/stochastic conflict resolution technique is developed for determining optimal reservoir operating rules. As different decision makers and stakeholders are involved in reservoir operation, the Nash bargaining theory is used to resolve the existing conflict of interests. The utility functions of the proposed models are developed on the basis of the reliability of the water supply to downstream demands, water storage, and the quality of the withdrawn water. The expected value of the Nash product is taken as the objective function of the stochastic model, which can incorporate the inherent uncertainty of reservoir inflow. A water quality simulation model is also developed to simulate the thermal stratification cycle and the reservoir discharge quality through a selective withdrawal structure. The optimization models are solved using a new variant of genetic algorithms called the varying chromosome length genetic algorithm (VLGA). In this algorithm the chromosome length is sequentially increased to provide a good initial solution for the final traditional GA-based optimization model. The proposed stochastic optimization model can also reduce the computational burden of previously proposed models such as stochastic dynamic programming (SDP) by reducing the number of state transitions in each stage. The proposed models, called VLGAQ and SVLGAQ, are applied to the 15-Khordad Reservoir in the central part of Iran. The results show that the proposed models can reduce the salinity of the water allocated to different demands as well as the salinity buildup in the reservoir.
Stochastic switching in slow-fast systems: a large-fluctuation approach.
Heckman, Christoffer R; Schwartz, Ira B
2014-02-01
In this paper we develop a perturbation method to predict the rate of occurrence of rare events for singularly perturbed stochastic systems using a probability density function approach. In contrast to a stochastic normal form approach, we model rare event occurrences due to large fluctuations probabilistically and employ a WKB ansatz to approximate their rate of occurrence. This results in a two-point boundary value problem that models the interaction of the state variables and the most likely noise force required to induce a rare event. The resulting equations of motion describing the phenomenon are shown to be singularly perturbed. Vastly different time scales among the variables are leveraged to reduce the dimension and predict the dynamics on the slow manifold in a deterministic setting. The resulting constrained equations of motion may be used to directly compute an exponent that determines the probability of rare events. To verify the theory, a stochastic damped Duffing oscillator with three equilibrium points (two sinks separated by a saddle) is analyzed. The predicted switching time between states is computed using the optimal path that resides in an expanded phase space. We show that the exponential scaling of the switching rate as a function of system parameters agrees well with numerical simulations. Moreover, the dynamics of the original system and the reduced system via center manifolds are shown to agree in an exponentially scaling sense. PMID:25353557
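The exponential scaling of switching times with noise intensity can be checked directly on the overdamped double well dx = (x - x³) dt + √(2D) dW, a reduced stand-in for the damped Duffing system of the paper (all parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(8)

def mean_switch_time(D, n_trials=40, dt=0.01):
    """Mean first passage time from the left well (x = -1) past x = 0.8."""
    times = []
    for _ in range(n_trials):
        x, t = -1.0, 0.0
        while x < 0.8:
            # Euler-Maruyama step for dx = (x - x^3) dt + sqrt(2 D) dW
            x += (x - x**3) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

t_weak, t_strong = mean_switch_time(0.10), mean_switch_time(0.25)
```

Kramers-type scaling (switching time roughly proportional to exp(ΔV/D), with barrier ΔV = 1/4 here) makes the weak-noise escapes far slower than the strong-noise ones, which is the exponent the WKB construction computes analytically.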
Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches
NASA Astrophysics Data System (ADS)
Egging, Rudolf Gerardus
This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in combining a detailed representation of the actors in the natural gas markets and the transport options, detailed regional and global coverage, a multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, seasonal variation in demand, and representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insight into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and 78,768 variables. The scenarios vary in the possibility of gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in
Non-perturbative approach for curvature perturbations in stochastic δN formalism
Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro E-mail: kawasaki@icrr.u-tokyo.ac.jp
2014-10-01
In our previous paper [1], we proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in an inflationary universe using the stochastic approach. Since this algorithm does not need a perturbative expansion with respect to the inflaton fields on super-horizon scales, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or their non-Gaussianities are sizable, the perturbative expansion may break down, but our algorithm still enables us to calculate the curvature perturbations. In this paper we apply it to two well-known inflation models, chaotic and hybrid inflation. Especially for hybrid inflation, where the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.
Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.
2009-01-01
Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc. are often the basis for systems biology models. Low-dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF-β-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low-dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationship to more than one substrate species that perturbs pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.
Division time-based amplifiers for stochastic gene expression.
Wang, Haohua; Yuan, Zhanjiang; Liu, Peijiang; Zhou, Tianshou
2015-09-01
While cell-to-cell variability is a phenotypic consequence of gene expression noise, sources of this noise may be complex - apart from intrinsic sources such as the random birth/death of mRNA and stochastic switching between promoter states, there are also extrinsic sources of noise such as cell division where division times are either constant or random. However, how this time-based division affects gene expression as well as how it contributes to cell-to-cell variability remains unexplored. Using a computational model combined with experimental data, we show that the cell-cycle length defined as the difference between two sequential division times can significantly impact the expression dynamics. Specifically, we find that both divisions (constant or random) always increase the mean level of mRNA and lengthen the mean first passage time. In contrast to constant division, random division always amplifies expression noise but tends to stabilize its temporal level, and unimodalizes the mRNA distribution, but makes its tail longer. These qualitative results reveal that cell division based on time is an effective mechanism for both increasing expression levels and enhancing cell-to-cell variability.
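A minimal stochastic simulation of this setting (birth-death mRNA kinetics interrupted by constant-length cell cycles with binomial partitioning) shows how division injects variability beyond the Poisson birth-death floor. Rates and cycle length are invented; this is a sketch of the model class, not the authors' computational model:

```python
import numpy as np

rng = np.random.default_rng(9)

# Birth-death mRNA dynamics (production rate k, degradation rate gamma)
# with binomial partitioning of molecules at every constant-length cycle.
k, gamma, cycle, n_cycles = 20.0, 1.0, 2.0, 2000
m, samples = 0, []
for _ in range(n_cycles):
    t = 0.0
    while True:                               # Gillespie SSA within one cycle
        rate = k + gamma * m
        t += rng.exponential(1.0 / rate)
        if t > cycle:
            break
        if rng.random() < k / rate:
            m += 1                            # transcription event
        else:
            m -= 1                            # degradation event
    samples.append(m)                         # record pre-division copy number
    m = rng.binomial(m, 0.5)                  # partitioning at division

samples = np.array(samples[200:])             # discard transient cycles
cv2 = samples.var() / samples.mean() ** 2     # squared coefficient of variation
```

Sampling just before division gives a stationary copy-number distribution whose mean and noise can then be compared across constant versus random cycle lengths, which is the comparison the paper carries out.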
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
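The fuzzy-linguistic-quantifier step can be made concrete with Yager's ordered weighted averaging (OWA) operator: a regular increasing monotone quantifier such as "most" induces order weights used to aggregate each alternative's criteria scores. The quantifier breakpoints and the scores below are invented for the illustration:

```python
import numpy as np

def quantifier(r, a=0.3, b=0.8):
    """Piecewise-linear RIM quantifier for 'most' (0 below a, 1 above b)."""
    return np.clip((r - a) / (b - a), 0.0, 1.0)

def owa_weights(n):
    i = np.arange(1, n + 1)
    return quantifier(i / n) - quantifier((i - 1) / n)

def owa(scores):
    """Order-weighted average of one alternative's criteria scores."""
    w = owa_weights(len(scores))
    return float(np.sum(w * np.sort(scores)[::-1]))

# Rank three intervention strategies scored against four criteria in [0, 1].
alternatives = {"A": [0.9, 0.4, 0.8, 0.6],
                "B": [0.7, 0.7, 0.7, 0.7],
                "C": [0.2, 0.9, 0.5, 0.3]}
ranking = sorted(alternatives, key=lambda k: owa(alternatives[k]), reverse=True)
```

Because "most" discounts the single best score, the uniformly solid alternative B outranks A, whose strength is concentrated in two criteria; shifting the quantifier breakpoints is one simple way to encode a stakeholder's degree of optimism.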
Stochastic resonance-enhanced laser-based particle detector.
Dutta, A; Werner, C
2009-01-01
This paper presents a Laser-based particle detector whose response was enhanced by modulating the Laser diode with a white-noise generator. A Laser sheet was generated to cast a shadow of the object on a 200 dots-per-inch, 512 x 1 pixel linear sensor array. The Laser diode was modulated with a white-noise generator to achieve stochastic resonance. The white-noise generator essentially amplified the wide-bandwidth (several hundred MHz) noise produced by a reverse-biased zener diode operating in junction-breakdown mode. The gain of the amplifier in the white-noise generator was set such that the Receiver Operating Characteristic plot provided the best discriminability. A monofiber 40 AWG (approximately 80 μm) wire was detected with approximately an 88% True Positive rate and a 19% False Positive rate in the presence of white-noise modulation, and with approximately a 71% True Positive rate and a 15% False Positive rate in its absence.
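The Receiver Operating Characteristic analysis used to set the noise gain can be sketched as follows, with invented Gaussian score distributions standing in for the sensor response with and without the wire:

```python
import numpy as np

rng = np.random.default_rng(10)

# Detector scores: sensor response with the wire present vs. absent.
present = 1.0 + rng.standard_normal(5000)   # signal + noise
absent = rng.standard_normal(5000)          # noise only

thresholds = np.linspace(-4.0, 5.0, 200)
tpr = np.array([(present > th).mean() for th in thresholds])  # True Positive rate
fpr = np.array([(absent > th).mean() for th in thresholds])   # False Positive rate

# Area under the ROC curve by trapezoidal integration
# (fpr decreases as the threshold increases).
auc = float(np.sum((fpr[:-1] - fpr[1:]) * (tpr[:-1] + tpr[1:]) / 2.0))
```

Sweeping the noise-generator gain and picking the setting with the largest area under this curve is the discriminability criterion the paper describes; an AUC of 0.5 means the detector is no better than chance.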
NASA Astrophysics Data System (ADS)
Nogueira, M.; Barros, A. P.; Miranda, P. M.
2012-04-01
Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges in scale most likely should not be artificially split apart into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes on the resolved flow. This is a pragmatic approach to the problem and not the complete solution to it. These models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields due to implicit and explicit numerical diffusion as well as physically based subgrid scale turbulent mixing, resulting in smoother and less intermittent fields as compared to observations. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches equipped with a possible realization of subgrid processes, and potentially coupled to the resolved scales over the range of significant scale interactions, provide one alternative to address the problem. Stochastic multifractal models based on the cascade phenomenology of the atmosphere and its governing equations in particular are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior in the resolved scales. This result is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has attracted increasing attention in the past decades. An assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
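A minimal sketch of hybrid fuzzy-stochastic propagation in the spirit of this approach; the triangular membership function, the lognormal concentration distribution, and the dose parameters are invented for illustration and are not the IIFMS/IFSOSRA models:

```python
import numpy as np

rng = np.random.default_rng(1)

def tri_alpha_cut(a, b, c, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

def hazard_quotient(conc, intake=0.002, rfd=0.1):
    """Illustrative risk metric: exposure concentration scaled by an assumed
    intake rate and divided by an assumed reference dose."""
    return conc * intake / rfd

# Hybrid propagation: a stochastic source concentration is sampled by Monte
# Carlo, while a fuzzy dilution factor is propagated through alpha-cuts,
# yielding an interval of 95th-percentile risk at each membership level.
conc = rng.lognormal(mean=1.0, sigma=0.5, size=5000)     # mg/L, assumed
risk_intervals = {}
for alpha in (0.0, 0.5, 1.0):
    lo, hi = tri_alpha_cut(0.8, 1.0, 1.3, alpha)         # fuzzy dilution factor
    risk_intervals[alpha] = (
        float(np.quantile(hazard_quotient(conc / hi), 0.95)),
        float(np.quantile(hazard_quotient(conc / lo), 0.95)),
    )
```

The intervals nest as alpha rises toward 1, which is the defining property of alpha-cut propagation: more membership confidence yields a tighter risk interval.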
Variance decomposition in stochastic simulators
NASA Astrophysics Data System (ADS)
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
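A rough sketch of the idea, assuming a first-reaction Gillespie scheme in which each reaction channel owns an independent random stream (mirroring the separate-Poisson-process reformulation); the pick-freeze correlation at the end is a crude stand-in for the paper's Sobol-Hoeffding machinery:

```python
import numpy as np

def birth_death(t_end, rate_b, rate_d, x0, rng_b, rng_d):
    """First-reaction Gillespie simulation of a birth-death process where each
    channel draws its waiting times from its own independent RNG stream."""
    t, x = 0.0, x0
    while True:
        tb = rng_b.exponential(1.0 / rate_b)
        td = rng_d.exponential(1.0 / (rate_d * x)) if x > 0 else np.inf
        dt = min(tb, td)
        if t + dt > t_end:
            return x
        t += dt
        x += 1 if tb < td else -1

# Pick-freeze sketch of a first-order variance contribution: paired runs
# share the birth stream but use fresh death streams.
n = 400
ya = np.array([birth_death(2.0, 5.0, 0.5, 10,
                           np.random.default_rng(i),
                           np.random.default_rng(10_000 + i)) for i in range(n)])
yb = np.array([birth_death(2.0, 5.0, 0.5, 10,
                           np.random.default_rng(i),
                           np.random.default_rng(20_000 + i)) for i in range(n)])
s_birth = (np.mean(ya * yb) - ya.mean() * yb.mean()) / ya.var()
```

Correlation between the paired outputs measures how much of the output variance is attributable to the birth channel's noise alone.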
Ohkubo, Jun
2015-10-01
An alternative application of duality relations of stochastic processes is demonstrated. Whereas conventional uses of duality relations require analytical solutions for the dual processes, here I employ numerical solutions of the dual processes and investigate their usefulness. As a demonstration, estimation problems of hidden variables in stochastic differential equations are discussed. Employing algebraic probability theory, a somewhat complicated birth-death process is derived from the stochastic differential equations, and an estimation method based on the ensemble Kalman filter is proposed. As a result, the possibility of faster computational algorithms based on duality concepts is shown.
Atzori, A S; Tedeschi, L O; Cannas, A
2013-05-01
The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21
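The two-stage index construction (PCA on the inputs, then regression of profitability on the component scores) can be sketched on synthetic data; the latent factors, loadings, and coefficients below are invented, not the study's farm variables:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 'farms': observable inputs X driven by a few latent factors,
# plus a profitability outcome (an IOFC stand-in). All numbers illustrative.
factors = rng.normal(size=(5000, 3))
X = factors @ rng.normal(size=(3, 6)) + 0.3 * rng.normal(size=(5000, 6))
iofc = factors @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.normal(size=5000)

# (1) PCA on the standardized inputs (profitability excluded).
Z = (X - X.mean(0)) / X.std(0)
vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
top = vecs[:, np.argsort(vals)[::-1][:4]]          # 4 leading components
scores = Z @ top

# (2) Regress profitability on the component scores: the fitted values
# are the ranking index.
A = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(A, iofc, rcond=None)
ri = A @ beta                                      # predicted IOFC
ranking = np.argsort(ri)[::-1]                     # farms, best to worst
```

In the study the eigenvectors fitted on the synthetic database are re-applied to the 135 real farms; here one data set plays both roles for brevity.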
NASA Astrophysics Data System (ADS)
Foster, T.; Butler, A. P.; McIntyre, N.
2012-12-01
Increasing water demands from growing populations, coupled with changing water availability, for example due to climate change, are likely to increase water scarcity. Agriculture will be exposed to risk because reliable water supplies are an essential input to crop production. Assessing the efficiency of agricultural adaptation options requires a sound understanding of the relationship between crop growth and water application. However, most water resource planning models quantify agricultural water demand using highly simplified, temporally lumped estimates of crop-water production functions (CWPFs). Such CWPFs fail to capture the biophysical complexities in crop-water relations and mischaracterise farmers' ability to respond to water scarcity. Application of these models in policy analyses will be ineffective and may lead to unsustainable water policies. Crop simulation models provide an alternative means of defining the complex nature of the CWPF. Here we develop a daily water-limited crop model for this purpose. The model is based on the approach used in the FAO's AquaCrop model, balancing biophysical and computational complexities. We further develop the model by incorporating improved simulation routines to calculate the distribution of water through the soil profile. Consequently we obtain a more realistic representation of the soil water balance, with concurrent improvements in the prediction of water-limited yield. We introduce a methodology to utilise this model for the generation of stochastic crop-water production functions (SCWPFs). This is achieved by running the model iteratively with both time series of climatic data and variable quantities of irrigation water, employing a realistic rule-based approach to farm irrigation scheduling. This methodology improves the representation of potential crop yields, capturing both the variable effects of water deficits on crop yield and the stochastic nature of the CWPF due to climatic variability. Application to
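The notion of a stochastic CWPF can be sketched with a toy bucket model: stochastic daily rainfall plus a fixed irrigation depth feed a single soil-water store, and yield is penalized by the accumulated transpiration deficit. Every parameter below is invented; this is not the AquaCrop-based model of the paper:

```python
import numpy as np

rng = np.random.default_rng(9)

def seasonal_yield(irrigation, n_days=120):
    """Toy water-limited relative yield for one season: a single soil bucket
    receives stochastic rain plus a fixed daily irrigation depth, and yield
    falls in proportion to the accumulated evapotranspiration deficit."""
    soil, capacity, et_demand = 60.0, 120.0, 5.0   # mm, assumed values
    deficit = 0.0
    for _ in range(n_days):
        rain = rng.exponential(1.5) if rng.random() < 0.3 else 0.0
        soil = min(capacity, soil + rain + irrigation)
        supply = min(et_demand, soil)
        soil -= supply
        deficit += (et_demand - supply) / (et_demand * n_days)
    return max(0.0, 1.0 - 1.2 * deficit)           # Ky-style penalty, assumed

# Stochastic CWPF: a distribution of yields at each irrigation level,
# rather than a single deterministic curve.
scwpf = {w: [seasonal_yield(w) for _ in range(200)] for w in (0.0, 2.0, 4.0)}
```

The spread of each yield distribution is exactly the climate-driven stochasticity the SCWPF concept is meant to expose to water-planning models.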
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
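A stripped-down sketch of regularized gPC regression with fewer samples than basis terms: a ridge (Gaussian-prior MAP) penalty stands in for the paper's full Bayesian model-averaging machinery, and the "solver" is a cheap invented function:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)

def solver(xi):
    """Cheap stand-in for an expensive deterministic solver."""
    return np.exp(0.5 * xi[:, 0]) * (1.0 + 0.3 * xi[:, 1])

d, p = 2, 5
# Total-degree multi-index set: 21 gPC terms for d = 2, p = 5.
idx = [(i, j) for i in range(p + 1) for j in range(p + 1) if i + j <= p]
xi = rng.standard_normal((15, d))              # only 15 samples < 21 terms
Phi = np.column_stack([
    hermeval(xi[:, 0], np.eye(p + 1)[i]) * hermeval(xi[:, 1], np.eye(p + 1)[j])
    for i, j in idx])                           # probabilists' Hermite basis
y = solver(xi)

# Under-determined fit: the ridge penalty keeps the coefficients from
# over-fitting the few available samples.
lam = 1e-2
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(idx)), Phi.T @ y)
```

The paper goes further by placing inclusion probabilities on the basis terms and averaging over models with MCMC; the ridge fit above is only the simplest point in that family.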
Effects of extrinsic mortality on the evolution of aging: a stochastic modeling approach.
Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam
2014-01-01
The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non
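The paper couples simulated annealing to an agent-based simulation; a plain simulated-annealing optimizer on a toy one-dimensional objective shows the search mechanism itself (the objective and all tuning constants are invented):

```python
import math
import random

random.seed(10)

def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=4000):
    """Plain simulated annealing: always accept improving moves, accept
    worsening moves with Boltzmann probability exp(-delta / T), cool T."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)
        fc = objective(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy stand-in for tuning one model parameter against simulation output:
# a quadratic bowl with a small sinusoidal ripple (local minima).
best, fbest = anneal(lambda x: (x - 2.0) ** 2 + 0.1 * math.sin(20 * x), 10.0)
```

In the study each "objective evaluation" would be a full agent-based population simulation rather than a closed-form function.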
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated on the basis of its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimating stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators computed with the estimated nonlinear model and with its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
Kryvohuz, Maksym; Mukamel, Shaul
2015-06-07
Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Acceleration of stochastic seismic inversion in OpenCL-based heterogeneous platforms
NASA Astrophysics Data System (ADS)
Ferreirinha, Tomás; Nunes, Rúben; Azevedo, Leonardo; Soares, Amílcar; Pratas, Frederico; Tomás, Pedro; Roma, Nuno
2015-05-01
Seismic inversion is an established approach to model the geophysical characteristics of oil and gas reservoirs, and it is one of the bases of the decision-making process in the oil and gas exploration industry. However, the required accuracy levels can only be attained by processing significant amounts of data, often leading to long execution times. To overcome this issue and to allow the development of larger and higher-resolution elastic models of the subsurface, a novel parallelization approach is herein proposed, targeting the exploitation of GPU-based heterogeneous systems through a unified OpenCL programming framework, to accelerate a state-of-the-art Stochastic Seismic Amplitude versus Offset Inversion algorithm. To increase the parallelization opportunities while ensuring model fidelity, the proposed approach is based on a careful and selective relaxation of some spatial dependencies. Furthermore, to take into consideration the heterogeneity of modern computing systems, usually composed of several different accelerating devices, multi-device parallelization strategies are also proposed. When executed on a dual-GPU system, the proposed approach reduces the execution time by up to 30 times, without compromising the quality of the obtained models.
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
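The trajectory-wise bookkeeping can be illustrated for an overdamped particle dragged by a harmonic trap: with Sekimoto's splitting of each time step into a protocol move (work) and a bath-driven relaxation (heat), the first-law balance holds exactly along a single simulated trajectory. Parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

k, gamma, D, v = 1.0, 1.0, 0.5, 1.0      # stiffness, friction, noise, trap speed
dt, n = 1e-3, 20_000

def U(x, lam):
    """Potential energy of a particle in a harmonic trap centred at lam."""
    return 0.5 * k * (x - lam) ** 2

x, W, Q = 0.0, 0.0, 0.0
for i in range(n):
    lam0, lam1 = v * i * dt, v * (i + 1) * dt
    W += U(x, lam1) - U(x, lam0)          # work: the protocol moves the trap
    x_new = x - (k / gamma) * (x - lam1) * dt \
        + np.sqrt(2 * D * dt) * rng.standard_normal()
    Q += U(x_new, lam1) - U(x, lam1)      # heat: relaxation at fixed trap
    x = x_new
dU = U(x, v * n * dt) - U(0.0, 0.0)       # trajectory-wise first law: dU = W + Q
```

Repeating this over an ensemble of trajectories turns W and Q into the fluctuating distributions that second-law-like relations (e.g. Jarzynski-type equalities) constrain.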
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and the River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which the parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
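The parametric component alone can be sketched as a periodic AR(1) (PAR(1)) generator; the seasonal parameters below are invented, and the MABB bootstrap and NSGA-II search are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Season-dependent mean, lag-1 coefficient, and noise scale (assumed values).
mu = np.array([4.0, 9.0, 6.0, 3.0])       # seasonal mean flows
phi = np.array([0.5, 0.3, 0.6, 0.4])      # periodic AR(1) coefficients
sig = np.array([0.8, 1.5, 1.0, 0.6])      # seasonal noise standard deviations

def par1(n_years):
    """Generate multi-season flows from a periodic AR(1) (PAR(1)) model:
    x[y, s] = mu[s] + phi[s] * (x_prev - mu[s-1]) + sig[s] * eps."""
    x, prev = [], mu[-1]                   # start at the last season's mean
    for _ in range(n_years):
        for s in range(4):
            # mu[s - 1] wraps to the previous season (mu[3] when s == 0)
            prev = mu[s] + phi[s] * (prev - mu[s - 1]) + sig[s] * rng.standard_normal()
            x.append(prev)
    return np.array(x).reshape(n_years, 4)

flows = par1(5000)
# A storage-style statistic a hybrid model would be tuned to reproduce:
annual = flows.sum(axis=1)
```

In the paper's framework the storage capacity computed from such synthetic series is what the genetic algorithm's objective functions score.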
A stochastic approach for quantifying immigrant integration: the Spanish test case
NASA Astrophysics Data System (ADS)
Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia
2014-10-01
We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous-time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
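The diffusive-versus-ballistic distinction can be checked on simulated walks by fitting the spread exponent H in E|x(t)| ~ t**H; the step distributions below are assumptions for illustration, not the Spanish quantifier data:

```python
import numpy as np

rng = np.random.default_rng(6)

def spread_exponent(step_fn, n_walks=2000, n_steps=512):
    """Fit H in E|x(t) - x(0)| ~ t**H over an ensemble of random walks."""
    steps = step_fn((n_walks, n_steps))
    paths = np.cumsum(steps, axis=1)
    t = np.array([8, 32, 128, 512])
    m = np.abs(paths[:, t - 1]).mean(axis=0)       # mean absolute displacement
    H, _ = np.polyfit(np.log(t), np.log(m), 1)     # log-log slope
    return float(H)

# Zero-mean steps: diffusive spread, H close to 0.5.
H_diff = spread_exponent(lambda s: rng.standard_normal(s))
# Steps with a drift: ballistic spread, H close to 1.
H_ball = spread_exponent(lambda s: 1.0 + 0.1 * rng.standard_normal(s))
```

In the paper's terms, the social quantifiers behave like the first ensemble and the economic quantifiers like the second.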
Consentaneous agent-based and stochastic model of the financial markets.
Gontis, Vygintas; Kononovicius, Aleksejus
2014-01-01
We are looking for the agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent-based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach, we combine two origins of the noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and the log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short-term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed on the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
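A two-state Kirman-style herding model, a simpler cousin of the three-state model in the paper, already shows the noise-driven switching of the agent population that underlies the derived stochastic differential equations; all rates below are assumed:

```python
import numpy as np

rng = np.random.default_rng(7)

def kirman(n_agents=200, eps=0.01, h=0.2, n_steps=20_000, dt=0.01):
    """Two-state herding model (Kirman): an agent switches group either
    spontaneously (rate eps) or by imitating a member of the other group
    (rate proportional to that group's fraction, strength h)."""
    x = n_agents // 2                      # agents currently in state 1
    frac = np.empty(n_steps)
    for i in range(n_steps):
        up = (n_agents - x) * (eps + h * x / n_agents)
        dn = x * (eps + h * (n_agents - x) / n_agents)
        r = rng.random()
        if r < up * dt:                    # one switching event per small step
            x += 1
        elif r < (up + dn) * dt:
            x -= 1
        frac[i] = x / n_agents
    return frac

f = kirman()                               # fraction of agents in state 1
```

In the paper the analogous population fraction drives the log-price dynamics, and the exogenous information-flow noise is layered on top of trajectories like `f`.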
Deng, Haishan; Xiang, Bingren; Liao, Xuewei; Xie, Shaofei
2006-12-01
A simple stochastic resonance algorithm based on linear modulation was developed to amplify and detect weak chromatographic peaks. The output chromatographic peak is often distorted when using the traditional stochastic resonance algorithm due to the presence of high levels of noise. In the new algorithm, a linear modulated double-well potential is introduced to correct for the distortion of the output peak. Method parameter selection is convenient and intuitive for linear modulation. In order to achieve a better signal-to-noise ratio for the output signal, the performance of two-layer stochastic resonance was evaluated by comparing it with wavelet-based stochastic resonance. The proposed algorithm was applied to the quantitative analysis of dimethyl sulfide and the determination of chloramphenicol residues in milk, and the good linearity of the method demonstrated that it is an effective tool for detecting weak chromatographic peaks.
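The classic (unmodulated) double-well mechanism underlying such algorithms can be sketched in a few lines; the potential parameters `a` and `b`, the time step, and the synthetic test signal below are invented for illustration and do not reproduce the paper's linear-modulation scheme:

```python
import math
import random

def stochastic_resonance(signal, a=1.0, b=1.0, dt=0.01):
    """Pass a noisy signal through an overdamped bistable double-well
    system, dx/dt = a*x - b*x**3 + s(t). Noise-assisted transitions
    between the wells can amplify a weak periodic component."""
    x, out = 0.0, []
    for s in signal:
        x += (a * x - b * x ** 3 + s) * dt
        out.append(x)
    return out

# a weak sine buried in Gaussian noise (stand-in for a weak peak)
rng = random.Random(1)
sig = [0.3 * math.sin(2 * math.pi * 0.01 * i) + rng.gauss(0.0, 1.5)
       for i in range(2000)]
out = stochastic_resonance(sig)
```

Tuning `a` and `b` (or, as in the paper, modulating the well depths linearly) trades off amplification against distortion of the output peak.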
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D, and 40D (in random space) elliptic stochastic partial differential equations.
Wallace, Chris; Cutler, Antony J; Pontikos, Nikolas; Pekalski, Marcin L; Burren, Oliver S; Cooper, Jason D; García, Arcadio Rubio; Ferreira, Ricardo C; Guo, Hui; Walker, Neil M; Smyth, Deborah J; Rich, Stephen S; Onengut-Gumuscu, Suna; Sawcer, Stephen J; Ban, Maria; Richardson, Sylvia; Todd, John A; Wicker, Linda S
2015-06-01
Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism, rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≃ 0.3) and our two SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data.
Ground movement analysis based on stochastic medium theory.
Fei, Meng; Wu, Li-chun; Zhang, Jia-sheng; Deng, Guo-dong; Ni, Zhi-hui
2014-01-01
In order to calculate the ground movement induced by displacement piles driven into horizontal layered strata, an axisymmetric model was built, and the vertical and horizontal ground movement functions were then deduced using stochastic medium theory. Results show that the vertical ground movement follows a normal distribution function, while the horizontal ground movement follows an exponential function. Using field-measured data, the parameters of these functions can be obtained by back analysis, and an example is employed to verify the model. Results show that stochastic medium theory is suitable for calculating ground movement in pile driving, and there is no need to consider a constitutive model of the soil or the contact between pile and soil. This method is applicable in practice. PMID:24701184
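The two deduced movement functions can be sketched as simple closed forms. The Gaussian and exponential shapes follow the abstract, but the parameter names (`w_max`, `sigma`, `u_max`, `beta`) and numeric values are hypothetical placeholders for the quantities a back analysis of field data would supply:

```python
import math

def vertical_movement(r, w_max=0.05, sigma=2.0):
    """Vertical ground settlement assumed Gaussian in radial distance r
    (metres), with peak settlement w_max at the pile axis."""
    return w_max * math.exp(-r ** 2 / (2.0 * sigma ** 2))

def horizontal_movement(r, u_max=0.02, beta=0.5):
    """Horizontal ground movement assumed to decay exponentially with r."""
    return u_max * math.exp(-beta * r)

# evaluate both profiles at a few radial distances
profile = [(r, vertical_movement(r), horizontal_movement(r))
           for r in (0.0, 1.0, 2.0, 4.0)]
```

Both profiles peak at the pile axis and decay monotonically with distance, which is the qualitative behaviour the field measurements are fitted against.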
A wavelet approach for development and application of a stochastic parameter simulation system
NASA Astrophysics Data System (ADS)
Miron, Adrian
2001-07-01
In this research a Stochastic Parameter Simulation System (SPSS) computer program employing wavelet techniques was developed. The SPSS was designed to fulfill two key functional requirements: (1) To be able to analyze any steady state plant signal, decompose it into its deterministic and stochastic components, and then reconstruct a new, simulated signal that possesses exactly the same statistical noise characteristics as the actual signal; and (2) To be able to filter out the principal serially-correlated, deterministic components from the analyzed signal so that the remaining stochastic signal can be analyzed with signal validation tools that are designed for signals drawn from independent random distributions. The results obtained using SPSS were compared to those obtained using the Argonne National Laboratory Reactor Parameter Simulation System (RPSS) which uses a Fourier transform methodology to achieve the same objectives. RPSS and SPSS results were compared for three sets of stationary signals, representing sensor readings independently recorded at three nuclear power plants. For all of the recorded signals, the wavelet technique provided a better approximation of the original signal than the Fourier procedure. For each signal, many wavelet-based decompositions were found by the SPSS methodology, all of which produced white and normally distributed signal residuals. In most cases, the Fourier-based analysis failed to completely eliminate the original signal serial-correlation from the residuals. The reconstructed signals produced by SPSS are also statistically closer to the original signal than the RPSS reconstructed signal. Another phase of the research demonstrated that SPSS could be used to enhance the reliability of the Multivariate Sensor Estimation Technique (MSET). MSET uses the Sequential Probability Ratio Test (SPRT) for its fault detection algorithm. By eliminating the MSET residual serial-correlation in the MSET training phase, the SPRT user
Li, Xiao; Ji, Guanghua; Zhang, Hui
2015-02-15
We use the stochastic Cahn–Hilliard equation to simulate the phase transitions of macromolecular microsphere composite (MMC) hydrogels under a random disturbance. Based on the Flory–Huggins lattice model and the Boltzmann entropy theorem, we develop a reticular free energy suited to the network structure of MMC hydrogels. Taking the random factor into account, with the time-dependent Ginzburg-Landau (TDGL) mesoscopic simulation method, we set up a stochastic Cahn–Hilliard equation, designated herein as the MMC-TDGL equation. The stochastic term in the equation is constructed to satisfy the fluctuation-dissipation theorem and is discretized on a spatial grid for the simulation. A semi-implicit difference scheme is adopted to solve the MMC-TDGL equation numerically. Numerical experiments are performed with different parameters. The results are consistent with the physical phenomena, verifying that the stochastic term is simulated well.
NASA Astrophysics Data System (ADS)
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. Domestic rainwater harvesting is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and is relevant to development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected for their availability and ease of use by water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
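The selected model class, a first-order Markov occurrence chain with a mixed exponential amounts model, can be sketched directly; all parameter values below (transition probabilities, mixture weight, component means) are invented for illustration, not fitted West African values:

```python
import random

def simulate_rainfall(days=365, p_wet_given_wet=0.6, p_wet_given_dry=0.25,
                      alpha=0.7, mu1=3.0, mu2=15.0, seed=7):
    """Daily rainfall generator: a first-order Markov chain decides
    wet/dry occurrence, and on wet days the amount (mm) is drawn from a
    mixed exponential: mean mu1 with probability alpha, else mean mu2."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        if wet:
            mean = mu1 if rng.random() < alpha else mu2
            series.append(rng.expovariate(1.0 / mean))
        else:
            series.append(0.0)
    return series

rain = simulate_rainfall()
wet_days = sum(1 for x in rain if x > 0.0)
```

Feeding such synthetic series through a storage-balance model of a roof/drum system is the usual way the DRWH yield and reliability estimates are produced.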
Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies in the Bangladesh stock market, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. In the time-varying environment, the value of technical efficiency was high for the investment group and low for the bank group compared with other groups in the DSE market under both distributions, whereas in the time-invariant setting it was high for the investment group but low for the ceramic group under both distributions. PMID:22629352
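The data-generating process behind a Cobb-Douglas stochastic frontier can be sketched by simulation; the coefficient values, the half-normal inefficiency assumption shown here (one of the paper's two distributional options), and the two-input setup are illustrative only:

```python
import math
import random

def simulate_frontier(n=500, beta0=1.0, beta1=0.4, beta2=0.5,
                      sigma_v=0.1, sigma_u=0.3, seed=9):
    """Simulate a Cobb-Douglas stochastic frontier:
    ln y = b0 + b1*ln x1 + b2*ln x2 + v - u, with symmetric noise
    v ~ N(0, sigma_v^2) and half-normal inefficiency u = |N(0, sigma_u^2)|.
    Returns (x1, x2, y) observations and the technical efficiencies
    TE = exp(-u), each in (0, 1]."""
    rng = random.Random(seed)
    data, te = [], []
    for _ in range(n):
        x1 = math.exp(rng.gauss(0.0, 1.0))   # input 1 (e.g. labor)
        x2 = math.exp(rng.gauss(0.0, 1.0))   # input 2 (e.g. capital)
        v = rng.gauss(0.0, sigma_v)          # statistical noise
        u = abs(rng.gauss(0.0, sigma_u))     # inefficiency, u >= 0
        ln_y = beta0 + beta1 * math.log(x1) + beta2 * math.log(x2) + v - u
        data.append((x1, x2, math.exp(ln_y)))
        te.append(math.exp(-u))
    return data, te

data, te = simulate_frontier()
mean_te = sum(te) / len(te)
```

In an actual frontier study the betas and the variance components are estimated by maximum likelihood from firm data, and TE scores are recovered per firm from the composed error.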
A multi-objective stochastic approach to combinatorial technology space exploration
NASA Astrophysics Data System (ADS)
Patel, Chirag B.
Historically, aerospace development programs have frequently been marked by performance shortfalls, cost growth, and schedule slippage. New technologies included in systems are considered to be one of the major sources of this programmatic risk. Decisions regarding the choice of technologies to include in a design are therefore crucial for a successful development program. This problem of technology selection is a challenging exercise in multi-objective decision making. The complexity of this selection problem is compounded by the geometric growth of the combinatorial space with the number of technologies being considered and the uncertainties inherent in the knowledge of the technological attributes. These problems are not typically addressed in the selection methods employed in common practice. Consequently, a method is desired to aid the selection of technologies for complex systems design with consideration of the combinatorial complexity, multi-dimensionality, and the presence of uncertainties. Several categories of techniques are explored to address the shortcomings of current approaches and to realize the goal of an efficient and effective combinatorial technology space exploration method. For the multi-objective decision making, a posteriori preference articulation is implemented. To realize this, a stochastic algorithm for Pareto optimization is formulated based on the concepts of SPEA2. Techniques to address the uncertain nature of technology impact on the system are also examined. Monte Carlo simulations using the surrogate models are used for uncertainty quantification. The concepts of graph theory are used for modeling and analyzing compatibility constraints among technologies and assessing their impact on the technology combinatorial space. The overall decision making approach is enabled by the application of an uncertainty quantification technique under the framework of an efficient probabilistic Pareto optimization algorithm. As a result, multiple
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, box-whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, different ARIMA models are identified by considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series. The model that gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predictions from the best ARIMA models are compared to the observed data and show reasonably good agreement. The comparison of the mean and variance of three years (2002-2004) of observed versus predicted data from the selected best models shows that the boron model from the ARIMA modeling approach could be used in a safe manner, since the predicted values from these models preserve the basic
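The minimum-AIC identification step can be illustrated with a toy version: fit a single AR(1) candidate by least squares and compute its AIC. A real identification run would score many (S)ARIMA orders the same way and keep the minimum; the synthetic series and its AR coefficient below are invented for the demonstration:

```python
import math
import random

def ar1_fit_aic(x):
    """Least-squares fit of an AR(1) model x[t+1] = phi * x[t] + e[t];
    returns (phi, AIC) with AIC = n*log(sigma^2) + 2k for k = 1
    parameter. One candidate from a minimum-AIC model search."""
    n = len(x) - 1
    num = sum(x[t] * x[t + 1] for t in range(n))
    den = sum(v * v for v in x[:-1]) or 1.0
    phi = num / den
    rss = sum((x[t + 1] - phi * x[t]) ** 2 for t in range(n))
    sigma2 = rss / n
    aic = n * math.log(sigma2) + 2 * 1
    return phi, aic

# synthetic AR(1) series with true phi = 0.8
rng = random.Random(11)
series, x = [], 0.0
for _ in range(500):
    x = 0.8 * x + rng.gauss(0.0, 1.0)
    series.append(x)
phi, aic = ar1_fit_aic(series)
```

Repeating this for each candidate order and choosing the smallest AIC is exactly the selection rule described in the abstract; diagnostic checks on the residuals then confirm the chosen model.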
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent in the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while accounting for hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.
2009-12-01
Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools tends to decrease, particularly as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very sparse, limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherited within the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying uncertainty in the reduced probability distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on the flow and transport using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and
A stochastic context free grammar based framework for analysis of protein sequences
Dyrka, Witold; Nebel, Jean-Christophe
2009-01-01
Background: In the last decade, there have been many applications of formal language theory in bioinformatics, such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have largely limited the application of formal language theory to grammars whose expressive power is no higher than stochastic regular grammars. These grammars, like other state-of-the-art methods, cannot capture higher-order dependencies such as the nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a stochastic context-free grammar (SCFG) based framework for the analysis of protein sequences in which grammars are induced using a genetic algorithm. Results: This framework was implemented in a system aimed at producing binding-site descriptors. These descriptors not only allow detection of protein regions involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet, and structural constraints were imposed on the grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity, described either by PROSITE patterns, domain profiles, or a set of patterns. Results show the produced binding-site descriptors are human-readable and hence highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, the findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to dealing with patterns shared by non-homologous proteins. Conclusion: A new stochastic context-free grammar based framework has been
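The core computation for scoring a sequence under a stochastic context-free grammar is the inside (CYK) algorithm; a minimal version for a grammar in Chomsky normal form is sketched below. The tiny two-symbol grammar at the end is invented for the demonstration and has nothing to do with the induced binding-site grammars:

```python
def inside_prob(word, rules, lexical, start="S"):
    """Inside (CYK) probability of `word` under a stochastic CFG in
    Chomsky normal form. `rules` maps A -> [(B, C, prob), ...] for
    binary rules A -> B C; `lexical` maps A -> {terminal: prob}."""
    n = len(word)
    # chart[i][k] maps nonterminal -> inside probability over word[i:k]
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, t in enumerate(word):
        for a, terms in lexical.items():
            if t in terms:
                chart[i][i + 1][a] = terms[t]
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            cell = chart[i][k]
            for j in range(i + 1, k):          # split point
                for a, prods in rules.items():
                    for b, c, p in prods:
                        pb = chart[i][j].get(b, 0.0)
                        pc = chart[j][k].get(c, 0.0)
                        if pb and pc:
                            cell[a] = cell.get(a, 0.0) + p * pb * pc
    return chart[0][n].get(start, 0.0)

# toy grammar: S -> A B; A and B emit (hypothetical) residue classes
rules = {"S": [("A", "B", 1.0)]}
lexical = {"A": {"h": 0.6, "p": 0.4}, "B": {"h": 0.3, "p": 0.7}}
p = inside_prob(["h", "p"], rules, lexical)
```

A genetic algorithm, as in the paper, would mutate and recombine the rule probabilities and structures, using such inside scores on positive and negative sequences as the fitness signal.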
The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.
Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.
2013-09-01
The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs incurred by China and the 'implicit beyond the border' costs of China's trading partners. Results indicate that the significance level of the independent variables changes dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.
NASA Astrophysics Data System (ADS)
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J. Andrew
2016-08-01
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing molecular dynamics studies. Our work is a first step toward the inclusion of
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
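The underlying idea of parametric robustness, the fraction of parameter space over which a design meets its behavioural specification, can be shown with a deliberately simple toy: a linear second-order system stands in for "the circuit oscillates", which is far simpler than the Bayesian-evidence measure used in the paper:

```python
import random

def robustness(n=10_000, seed=3):
    """Monte Carlo estimate of parametric robustness: the fraction of
    uniformly sampled (c, k) pairs for which x'' + c x' + k x = 0 is
    underdamped (c^2 < 4k), i.e. the toy 'design' oscillates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        c = rng.uniform(0.0, 2.0)   # damping coefficient
        k = rng.uniform(0.0, 2.0)   # stiffness
        if c * c < 4.0 * k:
            hits += 1
    return hits / n

r = robustness()
```

The paper's approach replaces the yes/no criterion with a likelihood of oscillatory behaviour under stochastic dynamics and uses an MCMC sampler to concentrate effort in high-robustness regions rather than sampling uniformly.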
ALLUVSIM: A program for event-based stochastic modeling of fluvial depositional systems
NASA Astrophysics Data System (ADS)
Pyrcz, M. J.; Boisvert, J. B.; Deutsch, C. V.
2009-08-01
This paper presents an algorithm for the construction of event-based fluvial models. The event-based approach may be applied to construct stochastic pseudo-process-based fluvial models for a variety of fluvial styles, with conditioning to sparse well data (1-5 wells) and to areal and vertical trends. The initial models are generated by placing large-scale features, such as channels and crevasse splays, into the model as geometric objects. These large-scale features are controlled by geometric input parameters provided by the user and are placed into the model so as to roughly honor well data through a rejection and updating method. Some model-to-well-data mismatch may nevertheless still occur, due to inconsistencies in the size and positioning of complicated features relative to the well data. An image processing algorithm is used to post-process realizations to exactly honor all well data. The final cell-based models have no data mismatch and contain both geologically realistic fluvial features that would be difficult to obtain with other pixel-based methods and precise conditioning that is difficult to obtain with object-based methods.
Hahl, Sayuri K.; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful as computing architectures become increasingly parallel. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral, as opposed to the extensive matrix operations of traditional approaches. PMID:25969148
NASA Astrophysics Data System (ADS)
Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
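Closed-form prices like those derived above are typically validated against exactly this kind of Monte Carlo benchmark. The sketch below prices a plain vanilla European call under a Heston-type stochastic-volatility model with a full-truncation Euler scheme; the parameter values and the discretization choice are our own illustrative assumptions, not taken from the paper.

```python
import math
import random

def heston_mc_call(s0, k, r, t, v0, kappa, theta, xi, rho,
                   n_paths=5000, n_steps=100, seed=1):
    """Euler (full-truncation) Monte Carlo price of a European call under a
    Heston-type stochastic-volatility model (illustrative sketch)."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(n_steps):
            z1 = rng.gauss(0.0, 1.0)
            # correlate the price and variance Brownian increments
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            vp = max(v, 0.0)                         # full truncation
            s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
        total += max(s - k, 0.0)
    return math.exp(-r * t) * total / n_paths

# at-the-money call with a 20% long-run volatility; the result is close to
# the Black-Scholes price of about 8.9 for these parameters
price = heston_mc_call(s0=100.0, k=100.0, r=0.02, t=1.0,
                       v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7)
print(round(price, 2))
```

Such a simulation converges only at the usual O(1/sqrt(n_paths)) Monte Carlo rate, which is exactly why analytical formulas of the kind derived in the paper are valuable.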
Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio
2014-10-01
This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed.
Stochastic approach to correlations beyond the mean field with the Skyrme interaction
Fukuoka, Y.; Nakatsukasa, T.; Funaki, Y.; Yabana, K.
2012-10-20
Large-scale calculation based on the multi-configuration Skyrme density functional theory is performed for the light N=Z even-even nucleus ¹²C. Stochastic procedures and the imaginary-time evolution are utilized to prepare many Slater determinants. Each state is projected on eigenstates of parity and angular momentum. Then, performing the configuration mixing calculation with the Skyrme Hamiltonian, we obtain low-lying energy eigenstates and their explicit wave functions. The generated wave functions are completely free from any assumption or symmetry restriction. Excitation spectra and transition probabilities are well reproduced, not only for the ground-state band but also for negative-parity excited states and the Hoyle state.
Chen, Bor-Sen; Chang, Yu-Te; Wang, Yu-Chao
2008-02-01
Molecular noises in gene networks come from intrinsic fluctuations, transmitted noise from upstream genes, and the global noise affecting all genes. Knowledge of molecular noise filtering in gene networks is crucial to understanding the signal processing in gene networks and to designing noise-tolerant gene circuits for synthetic biology. A nonlinear stochastic dynamic model is proposed for describing a gene network under intrinsic molecular fluctuations and extrinsic molecular noises. The stochastic molecular-noise-processing scheme of gene regulatory networks for attenuating these molecular noises is investigated from the nonlinear robust stabilization and filtering perspective. In order to improve the robust stability and noise filtering, a robust gene circuit design for gene networks is proposed based on the nonlinear robust H∞ stochastic stabilization and filtering scheme, which requires solving a nonlinear Hamilton-Jacobi inequality. To avoid solving these complicated nonlinear stabilization and filtering problems, a fuzzy approximation method is employed that interpolates several linear stochastic gene networks at different operation points via fuzzy bases to approximate the nonlinear stochastic gene network. The linear matrix inequality (LMI) technique can then be employed to simplify the gene circuit design problems and to improve the robust stability and molecular-noise-filtering ability of gene networks against intrinsic molecular fluctuations and extrinsic molecular noises. PMID:18270080
A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach
NASA Astrophysics Data System (ADS)
Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent
2016-04-01
The stochastic mean-field (SMF) approach makes it possible to treat correlations beyond the mean field using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders while retaining only specific terms of the general BBGKY hierarchy. The use of the simplified BBGKY hierarchy is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak coupling regime, to gain physical insight into effects beyond the mean field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak and strong coupling regimes. In the strong coupling regime, it can only be used for short time evolution. In that case, it gives information on the evolution time scale close to a saddle point associated with a quantum phase transition. For long time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and only the full SMF with initial sampling leads to reasonable results.
NASA Astrophysics Data System (ADS)
Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi
2014-08-01
We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear-scaling GW calculations, breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_e > 3000 electrons.
Attainability analysis in the stochastic sensitivity control
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2015-02-01
For a nonlinear stochastic dynamic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity function technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for the stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.
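The stochastic sensitivity function of an equilibrium is, up to the noise intensity, the stationary covariance of the fluctuations of the linearized dynamics. As an illustrative sketch (our own code, not the authors' implementation), for a 2-D linear SDE dx = A x dt + S dW with stable A, that covariance V solves the Lyapunov equation A V + V A^T + S S^T = 0, a 3-unknown linear system once symmetry of V is used:

```python
def solve3(m, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    a = [row[:] + [bi] for row, bi in zip(m, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

def stationary_covariance_2d(A, S):
    """Stationary covariance V of dx = A x dt + S dW for a stable 2x2 A:
    solves the Lyapunov equation A V + V A^T + S S^T = 0 with V symmetric."""
    Q = [[sum(S[i][k] * S[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # equations from the (0,0), (0,1) and (1,1) entries of A V + V A^T = -Q
    M = [[2 * A[0][0], 2 * A[0][1], 0.0],
         [A[1][0], A[0][0] + A[1][1], A[0][1]],
         [0.0, 2 * A[1][0], 2 * A[1][1]]]
    v00, v01, v11 = solve3(M, [-Q[0][0], -Q[0][1], -Q[1][1]])
    return [[v00, v01], [v01, v11]]

# uncoupled check: relaxation rates 1 and 2 with unit noise -> V = diag(1/2, 1/4)
V = stationary_covariance_2d([[-1.0, 0.0], [0.0, -2.0]],
                             [[1.0, 0.0], [0.0, 1.0]])
print(V[0][0], V[1][1])  # 0.5 0.25
```

The confidence ellipses discussed in the abstract are level sets of the quadratic form defined by the inverse of this covariance, so a regulator that shapes V shapes the ellipses.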
NASA Astrophysics Data System (ADS)
Dong, Cong; Huang, Guohe; Tan, Qian; Cai, Yanpeng
2014-03-01
Water resources are fundamental for the support of regional development. Effective planning can facilitate sustainable management of water resources that balances socioeconomic development and water conservation. In this research, coupled planning of water resources and agricultural land use was undertaken through the development of an inexact-stochastic programming approach. This inexact modeling approach integrates interval linear programming and chance-constrained programming methods, and thereby tackles uncertainty in the form of the interval numbers and probabilistic distributions that exist in water resource systems. It was applied to a typical regional water resource system, demonstrating its applicability and validity by generating efficient system solutions. Based on the modeling formulation and result analysis, the developed model can help identify optimal water resource utilization patterns and the corresponding agricultural land-use schemes in three sub-regions. Furthermore, a number of decision alternatives were generated under multiple water-supply conditions, which can help decision makers identify desired management policies.
Hybrid approaches for multiple-species stochastic reaction–diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean-field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface between the two domains occupies exactly one lattice site and is chosen such that the mean-field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean-field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
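The flavour of such a coupling can be conveyed with a toy example. The sketch below is our own drastic simplification, not the authors' scheme: a 1-D pure-diffusion domain is split into a stochastic region carrying integer particle counts and a mean-field region carrying real-valued counts, and the mean-field-to-stochastic interface flux is rounded stochastically so that the total particle number is conserved exactly, one of the properties the abstract emphasises.

```python
import math
import random

def jumps(n, p, rng):
    """Count how many of n particles hop left / right (probability p each way;
    requires 2*p <= 1)."""
    left = right = 0
    for _ in range(n):
        u = rng.random()
        if u < p:
            left += 1
        elif u < 2.0 * p:
            right += 1
    return left, right

def hybrid_step(stoch, det, p, rng):
    """One step of a 1-D hybrid diffusion sketch: integer counts in `stoch`
    (stochastic region, reflecting left wall) coupled to real-valued counts
    in `det` (discretised mean-field region, reflecting right wall)."""
    k, m = len(stoch), len(det)
    new_s, new_d = list(stoch), list(det)
    for i, n in enumerate(stoch):
        l, r = jumps(n, p, rng)
        new_s[i] -= l + r
        new_s[i - 1 if i > 0 else 0] += l            # reflect at left wall
        if i < k - 1:
            new_s[i + 1] += r
        else:
            new_d[0] += r                            # cross into mean-field region
    for i in range(m):
        out_l = p * det[i] if i > 0 else 0.0         # det[0] -> stoch handled below
        out_r = p * det[i] if i < m - 1 else 0.0     # reflect at right wall
        new_d[i] -= out_l + out_r
        if i > 0:
            new_d[i - 1] += out_l
        if i < m - 1:
            new_d[i + 1] += out_r
    back = p * det[0]                                # mean det -> stoch flux
    back_int = math.floor(back) + (1 if rng.random() < back - math.floor(back) else 0)
    new_d[0] -= back_int                             # stochastic rounding keeps
    new_s[-1] += back_int                            # the stoch region integer
    return new_s, new_d

rng = random.Random(0)
stoch, det = [50, 50, 50, 50], [0.0, 0.0, 0.0, 0.0]
for _ in range(200):
    stoch, det = hybrid_step(stoch, det, 0.2, rng)
print(round(sum(stoch) + sum(det)))  # 200: total particle number is conserved
```

The paper's actual scheme handles reactions, multiple species and moving interfaces; this sketch only illustrates the mass-conserving hand-off at a fixed interface.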
Stochastic investigation of two-dimensional cross sections of rocks based on the climacogram
NASA Astrophysics Data System (ADS)
Kalamioti, Anna; Dimitriadis, Panayiotis; Tzouka, Katerina; Lerias, Eleutherios; Koutsoyiannis, Demetris
2016-04-01
The statistical properties of soil and rock formations are essential for the characterization of the porous medium's geological structure, as well as for the prediction of its transport properties in groundwater modelling. We investigate two-dimensional cross sections of rocks in terms of the stochastic structure of their morphology, quantified by the climacogram (i.e., the variance of the averaged process vs. scale). The analysis is based on both microscale and macroscale data, specifically Scanning Electron Microscope (SEM) pictures and field photos, respectively. We identify and quantify the stochastic properties, with emphasis on the type of decay at large scales (exponential or power-type, the latter known as Hurst-Kolmogorov behaviour). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
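The climacogram used in this abstract is straightforward to estimate empirically. The sketch below is one-dimensional for brevity (the paper works on 2-D images): it computes the variance of the block-averaged process at several averaging scales k. For white noise the climacogram decays like 1/k, whereas Hurst-Kolmogorov behaviour would show a slower power-law decay.

```python
import random

def climacogram(x, scales):
    """Empirical climacogram: variance of the locally averaged process
    as a function of the averaging scale k (1-D illustrative sketch)."""
    n = len(x)
    gamma = {}
    for k in scales:
        m = n // k                                   # number of whole blocks
        means = [sum(x[i * k:(i + 1) * k]) / k for i in range(m)]
        mu = sum(means) / m
        gamma[k] = sum((v - mu) ** 2 for v in means) / (m - 1)
    return gamma

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(10000)]
g = climacogram(white, [1, 10, 100])
# for white noise both ratios are close to 10 (a 1/k decay)
print(g[1] / g[10], g[10] / g[100])
```

Fitting the log-log slope of such a curve at large k is one simple way to discriminate exponential from power-type (Hurst-Kolmogorov) structure.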
Passivity-based sliding mode control for a polytopic stochastic differential inclusion system.
Liu, Leipo; Fu, Zhumu; Song, Xiaona
2013-11-01
Passivity-based sliding mode control for a polytopic stochastic differential inclusion (PSDI) system is considered. A control law is designed such that the reachability of sliding motion is guaranteed. Moreover, sufficient conditions for mean square asymptotic stability and passivity of sliding mode dynamics are obtained by linear matrix inequalities (LMIs). Finally, two examples are given to illustrate the effectiveness of the proposed method.
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than the summation of influential vectors classically assumed by most models. In parallel, we present experimental results for zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in the biological, physical and robotic sciences. PMID:26909173
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights: • A new fractional programming (SLFP) method is developed for waste management. • SLFP can solve ratio optimization problems associated with random inputs. • A case study of waste flow allocation demonstrates its applicability. • SLFP helps compare objectives of two aspects and reflect system efficiency. • This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
Evolutionary dynamics of imatinib-treated leukemic cells by stochastic approach
NASA Astrophysics Data System (ADS)
Pizzolato, Nicola; Valenti, Davide; Adorno, Dominique; Spagnolo, Bernardo
2009-09-01
The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a statistical approach. Cancer progression is explored by applying a Monte Carlo method to simulate the stochastic behavior of cell reproduction and death in a population of blood cells which can experience genetic mutations. In CML, front-line therapy is the tyrosine kinase inhibitor imatinib, which strongly affects the reproduction of leukemic cells only. In this work, we analyze the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. Several scenarios of the evolutionary dynamics of imatinib-treated leukemic cells are described as a consequence of the efficacy of the different modelled therapies. We show how the patient's response to the therapy changes when the mutation rate from healthy to cancerous cells is high. Our results are in agreement with clinical observations. Unfortunately, development of resistance to imatinib is observed in a fraction of patients, whose blood cells are characterized by an increasing number of genetic alterations. We find that the occurrence of resistance to the therapy can be related to a progressive increase of deleterious mutations.
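A bare-bones version of such a Monte Carlo population simulation is easy to sketch. All the numbers below (division and death probabilities, mutation rate, the way the drug scales leukemic division) are our own illustrative assumptions, not the parameters of the study:

```python
import random

def simulate_cml(n0=1000, mu=1e-3, steps=200, drug_factor=0.2, seed=3):
    """Discrete-time Monte Carlo sketch of healthy (h) and leukemic (l)
    cell populations. Each step every cell divides or dies with fixed
    probabilities; a healthy division yields a leukemic daughter with
    probability mu, and the drug scales the leukemic division probability."""
    rng = random.Random(seed)
    birth, death = 0.10, 0.09
    h, l = n0, 0
    for _ in range(steps):
        h_births = sum(rng.random() < birth for _ in range(h))
        h_deaths = sum(rng.random() < death for _ in range(h))
        mutants = sum(rng.random() < mu for _ in range(h_births))
        l_births = sum(rng.random() < birth * drug_factor for _ in range(l))
        l_deaths = sum(rng.random() < death for _ in range(l))
        h = max(h + h_births - mutants - h_deaths, 0)
        l = max(l + l_births + mutants - l_deaths, 0)
    return h, l

h, l = simulate_cml()
# with an effective drug the leukemic clone is held at a low level
print(h, l)
```

Raising mu, or setting drug_factor close to 1 (an ineffective or resisted therapy), lets the leukemic clone grow, qualitatively mirroring the resistance scenarios discussed in the abstract.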
Li, Jing; He, Li; Lu, Hongwei; Fan, Xing
2014-08-30
An optimal design approach for groundwater remediation is developed by incorporating numerical simulation, health risk assessment, uncertainty analysis and nonlinear optimization within a general framework. Stochastic analysis and goal programming are introduced into the framework to handle uncertainties in real-world groundwater remediation systems. Carcinogenic risks associated with remediation actions are further evaluated at four confidence levels. The differences between ideal and predicted constraints are minimized by goal programming. The approach is then applied to a contaminated site in western Canada to create a set of optimal remediation strategies. Results from the case study indicate that factors including environmental standards, health risks and technical requirements mutually affect and constrain one another. Stochastic uncertainty exists throughout the entire process of remediation optimization and should be taken into consideration in groundwater remediation design.
High-order distance-based multiview stochastic learning in image classification.
Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng
2014-12-01
How do we find all images in a larger set of images which have specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications such as content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional scheme of concatenating features of different views into one long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace the pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL and obtain the view coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
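For reference, the MC-based EnKF update that the moment-equation approach replaces can be sketched in a few lines. The example below is the textbook perturbed-observation update for a single scalar measurement, applied to a toy Gaussian prior; it is our own illustration, not the authors' implementation:

```python
import random

def enkf_update(ensemble, h, y_obs, obs_var, rng):
    """Textbook perturbed-observation EnKF update for one scalar observation
    y = h(x) + noise; `ensemble` is a list of state vectors."""
    n, dim = len(ensemble), len(ensemble[0])
    hx = [h(x) for x in ensemble]                    # predicted observations
    hbar = sum(hx) / n
    xbar = [sum(x[i] for x in ensemble) / n for i in range(dim)]
    # cross-covariance Cov(x, h(x)) and predicted-observation variance
    cxy = [sum((x[i] - xbar[i]) * (hj - hbar)
               for x, hj in zip(ensemble, hx)) / (n - 1) for i in range(dim)]
    cyy = sum((hj - hbar) ** 2 for hj in hx) / (n - 1)
    gain = [c / (cyy + obs_var) for c in cxy]        # Kalman gain
    updated = []
    for x, hj in zip(ensemble, hx):
        y_pert = y_obs + rng.gauss(0.0, obs_var ** 0.5)
        updated.append([xi + gi * (y_pert - hj) for xi, gi in zip(x, gain)])
    return updated

rng = random.Random(7)
ens = [[rng.gauss(0.0, 1.0)] for _ in range(200)]    # prior ensemble ~ N(0, 1)
post = enkf_update(ens, lambda x: x[0], y_obs=2.0, obs_var=0.25, rng=rng)
post_mean = sum(x[0] for x in post) / len(post)
# the ensemble mean moves from the prior mean 0 toward the observation 2
print(round(post_mean, 2))
```

The filter-inbreeding problem mentioned above shows up here as an underestimated ensemble spread when n is small, which in turn shrinks the gain in later cycles; the moment-equation variant avoids the sampling altogether.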
Incompressible Limit for Compressible Fluids with Stochastic Forcing
NASA Astrophysics Data System (ADS)
Breit, Dominic; Feireisl, Eduard; Hofmanová, Martina
2016-11-01
We study the asymptotic behavior of the isentropic Navier-Stokes system driven by a multiplicative stochastic forcing in the compressible regime, where the Mach number approaches zero. Our approach is based on the recently developed concept of a weak martingale solution to the primitive system, uniform bounds derived from a stochastic analogue of the modulated energy inequality, and careful analysis of acoustic waves. A stochastic incompressible Navier-Stokes system is identified as the limit problem.
Control of confidence domains in the problem of stochastic attractors synthesis
Bashkirtseva, Irina
2015-03-10
A nonlinear stochastic control system is considered. We discuss a problem of the synthesis of stochastic attractors and suggest a constructive approach based on the design of the stochastic sensitivity and corresponding confidence domains. Details of this approach are demonstrated for the problem of the control of confidence ellipses near the equilibrium. An example of the control for stochastic Van der Pol equation is presented.
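The stochastic Van der Pol equation mentioned in the example can be simulated directly with the Euler-Maruyama scheme; the confidence-domain synthesis itself is not reproduced here. A minimal sketch, with an assumed additive-noise form dx = y dt, dy = (eps(1 - x^2)y - x) dt + sigma dW and illustrative parameter names:

```python
import numpy as np

def van_der_pol_em(x0, y0, eps, sigma, dt, n_steps, rng):
    """Euler-Maruyama simulation of a stochastic Van der Pol oscillator:
    dx = y dt,  dy = (eps*(1 - x^2)*y - x) dt + sigma dW."""
    xs = np.empty(n_steps + 1)
    ys = np.empty(n_steps + 1)
    xs[0], ys[0] = x0, y0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment
        xs[k + 1] = xs[k] + ys[k] * dt
        ys[k + 1] = ys[k] + (eps * (1.0 - xs[k]**2) * ys[k] - xs[k]) * dt + sigma * dW
    return xs, ys
```

With sigma = 0 and eps = 0 the scheme reduces to an explicit Euler integration of the harmonic oscillator, which is a convenient sanity check.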
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, the characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the
NASA Astrophysics Data System (ADS)
Kabamba, P. T.; Meerkov, S. M.; Ossareh, H. R.
2015-01-01
This paper considers feedback systems with asymmetric (i.e., non-odd) nonlinear actuators and sensors. While the stability of such systems can be investigated using the theory of absolute stability and its extensions, the current paper provides a method for their performance analysis, i.e., reference tracking and disturbance rejection. As in the case of symmetric nonlinearities considered in earlier work, the development is based on the method of stochastic linearisation (which is akin to describing functions, but intended to study general properties of dynamics rather than periodic regimes). Unlike the symmetric case, however, the nonlinearities considered here must be approximated not only by a quasilinear gain, but by a quasilinear bias as well. This paper derives transcendental equations for the quasilinear gain and bias, provides necessary and sufficient conditions for the existence of their solutions, and, using simulations, investigates the accuracy of these solutions as a tool for predicting the quality of reference tracking and disturbance rejection. The method developed is then applied to the performance analysis of specific systems, and the effect of asymmetry on their behaviour is investigated. In addition, this method is used to justify the recently discovered phenomenon of noise-induced loss of tracking in feedback systems with PI controllers, anti-windup, and sensor noise.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
Based on Gegenbauer polynomial expansion theory and the regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and random parameters in the time domain, and the forward model of dynamic load identification is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions of the system. Random parameters are approximated through random variables with λ-probability density functions (λ-PDFs) or their derivative PDFs. For this kind of random variable, Gegenbauer polynomial expansion is the unique correct choice for transforming the load identification problem for a stochastic structure into an equivalent deterministic system. Via this equivalent deterministic system, the load identification problem of a stochastic structure can be solved by any available deterministic method. With measured responses containing noise, the improved regularization operator is adopted to overcome the ill-posedness of load reconstruction and to obtain stable, approximate solutions of the inverse problem and valid assessments of the statistics of the identified loads. Numerical simulations demonstrate that, for stochastic structures, the identification and assessment of dynamic loads are achieved steadily and effectively by the presented method.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids. PMID:25532191
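The scenario-generation step described above (fitting an empirical CDF to quantile points, then Monte Carlo sampling from it) reduces to inverse-transform sampling on a piecewise-linear inverse CDF. A minimal sketch, with illustrative names and no claim to match the paper's exact procedure:

```python
import numpy as np

def sample_from_quantiles(probs, quantiles, n_scenarios, rng):
    """Generate scenarios by inverse-transform sampling from an empirical
    CDF defined by (probability, quantile) points.

    probs     : increasing probability levels, e.g. [0.05, 0.10, ..., 0.95]
    quantiles : corresponding wind power quantiles (same length, increasing)
    """
    u = rng.uniform(0.0, 1.0, size=n_scenarios)
    # np.interp evaluates the piecewise-linear inverse CDF at each u
    return np.interp(u, probs, quantiles)
```

Each sampled value is a wind power scenario for the hour whose prediction intervals produced the quantile points; repeating per hour yields scenario paths for the stochastic SCUC.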
Stochastic acid-based quenching in chemically amplified photoresists: a simulation study
NASA Astrophysics Data System (ADS)
Mack, Chris A.; Biafore, John J.; Smith, Mark D.
2011-04-01
BACKGROUND: The stochastic nature of acid-base quenching in chemically amplified photoresists leads to variations in the resulting acid concentration during post-exposure bake, which leads to line-edge roughness (LER) of the resulting features. METHODS: Using a stochastic resist simulator, we predicted the mean and standard deviation of the acid concentration after post-exposure bake for an open-frame exposure and fit the results to empirical expressions. RESULTS: The mean acid concentration after quenching can be predicted using the reaction-limited rate equation and an effective rate constant. The effective quenching rate constant is predicted by an empirical expression. A second empirical expression for the standard deviation of the acid concentration matched the output of the PROLITH stochastic resist model to within a few percent. CONCLUSIONS: Predicting the stochastic uncertainty in acid concentration during post-exposure bake for 193-nm and extreme ultraviolet resists allows optimization of resist processing and formulations, and may form the basis of a comprehensive LER model.
NASA Astrophysics Data System (ADS)
Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene
2016-04-01
Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile on the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution are accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are approximated as moment magnitudes on the fault plane based on empirical scaling laws with the ensemble producing a magnitude range of 7.8 - 9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth and magnitude dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant
Hybrid approaches for multiple-species stochastic reaction-diffusion models
NASA Astrophysics Data System (ADS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As the main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the amount of production and the volume of transportation between plants and customers.
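The Latin Hypercube Sampling step named above can be sketched compactly: each dimension is stratified into n equal-probability bins, one sample is drawn per bin, and the bins are shuffled independently per dimension. This is generic LHS on the unit cube, not the paper's specific implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin Hypercube Sample on the unit hypercube: each dimension is
    split into n_samples equal strata, and each stratum is hit exactly once."""
    u = rng.uniform(size=(n_samples, n_dims))            # jitter within each stratum
    samples = (np.arange(n_samples)[:, None] + u) / n_samples
    for d in range(n_dims):
        # shuffle the strata independently in every dimension
        samples[:, d] = samples[rng.permutation(n_samples), d]
    return samples
```

To obtain correlated demand scenarios, the uniform marginals would then be mapped through the demands' inverse CDFs and a correlation-inducing transform, followed by scenario reduction as the abstract describes.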
Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.
1993-01-01
Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
Desynchronization of stochastically synchronized chemical oscillators
Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh; Showalter, Kenneth; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan
2015-12-15
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
Ultra-fast data-mining hardware architecture based on stochastic computing.
Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L
2015-01-01
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic pattern-recognition analysis of huge database sets. The proposed technique consists of the hardware implementation of a parallel architecture implementing a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area.
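The core idea of stochastic computing, which the architecture above builds on, is that a value in [0, 1] is encoded as the 1-density of a random bitstream, so a single AND gate multiplies two independent streams. A minimal software sketch of that encoding (illustrative, not the authors' hardware design):

```python
import numpy as np

def to_bitstream(p, n_bits, rng):
    """Encode a probability p in [0, 1] as a random bitstream with P(bit=1) = p."""
    return (rng.uniform(size=n_bits) < p).astype(np.uint8)

def sc_multiply(stream_a, stream_b):
    """An AND gate on two independent streams multiplies their encoded values."""
    return stream_a & stream_b

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return stream.mean()
```

The attraction for hardware is that the multiplier is a single gate per bit; the cost is that precision improves only as the square root of the stream length.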
Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.
2009-05-04
After the success of longitudinal stochastic cooling of the bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
Baars, Woutijn J.; Tinney, Charles E.
2014-05-15
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
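The POD step underlying the method above is commonly computed via an SVD of the mean-subtracted snapshot matrix; the higher-order spectral kernels are not reproduced here. A minimal sketch under that standard formulation, with illustrative variable names:

```python
import numpy as np

def pod(snapshots):
    """Proper Orthogonal Decomposition of a (n_space, n_time) snapshot matrix.

    Returns orthonormal spatial modes, the energy captured by each mode,
    the time coefficients a_k(t), and the temporal mean field.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U                 # orthonormal spatial modes (columns)
    energy = s**2             # modal energies, in decreasing order
    coeffs = np.diag(s) @ Vt  # time coefficients of each mode
    return modes, energy, coeffs, mean
```

In the POD-based HOSE setting, the low-dimensional time coefficients then serve as the single input/output signals for the higher-order spectral transfer kernels.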
NASA Astrophysics Data System (ADS)
Maerker, Michael; Bolus, Michael
2014-05-01
We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between site locations and environmental indices. In order to assess the relations between the Neanderthal sites and the environmental variables described above, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify the model that best fits the relationship between the attribute set (the environmental predictor variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the model for the geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristic (ROC) curves for each Neanderthal class, for both training and test data. In a ROC curve, the sensitivity is plotted against the false positive rate (1-specificity) for all possible cut-off points. The quality of a ROC curve is quantified by the area under the curve. The dependent (target) variable in this study is the locations of Neanderthal sites described by latitude and longitude. The information on site locations was collected from the literature and our own research. All sites were checked for accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show a wide correspondence in the selection of the most important predictor variables, generally showing less
Poisson-Vlasov in a strong magnetic field: A stochastic solution approach
Vilela Mendes, R.
2010-04-15
Stochastic solutions are obtained for the Maxwell-Vlasov equation in the approximation where magnetic field fluctuations are neglected and the electrostatic potential is used to compute the electric field. This is a reasonable approximation for plasmas in a strong external magnetic field. Both Fourier and configuration space solutions are constructed.
The design and testing of a first-order logic-based stochastic modeling language.
Pless, Daniel J.; Rammohan, Roshan; Chakrabarti, Chayan; Luger, George F.
2005-06-01
We have created a logic-based, Turing-complete language for stochastic modeling. Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic. Traditional Bayesian networks have limited expressive power, basically constrained to finite domains as in the propositional calculus. Our language contains variables that can capture general classes of situations, events and relationships. A first-order language is also able to reason about potentially infinite classes and situations using constructs such as hidden Markov models (HMMs). Our language uses Expectation-Maximization (EM) type learning of parameters. This has a natural fit with the loopy belief propagation used for inference, since both can be viewed as iterative message-passing algorithms. We present the syntax and theoretical foundations for our Loopy Logic language. We then demonstrate three examples of stochastic modeling and diagnosis that explore the representational power of the language. A mechanical fault detection example displays how Loopy Logic can model time-series processes using an HMM variant. A digital circuit example exhibits the probabilistic modeling capabilities, and finally, a parameter fitting example demonstrates the power for learning unknown stochastic values.
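As background for the HMM construct mentioned above, the forward algorithm that computes the likelihood of an observation sequence can be written in a few lines. This is the standard algorithm, not Loopy Logic's message-passing inference:

```python
import numpy as np

def hmm_forward(init, trans, emit, obs):
    """Forward algorithm: P(obs sequence) under an HMM.

    init  : (n_states,) initial state distribution
    trans : (n_states, n_states) transition matrix, rows sum to 1
    emit  : (n_states, n_symbols) emission matrix, rows sum to 1
    obs   : sequence of observed symbol indices
    """
    alpha = init * emit[:, obs[0]]           # joint prob of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o] # propagate, then condition on symbol
    return alpha.sum()
```

For long sequences a practical implementation would rescale `alpha` at each step to avoid underflow; that is omitted here for brevity.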
NASA Astrophysics Data System (ADS)
Francipane, A.; Fatichi, S.; Ivanov, V. Y.; Noto, L. V.
2015-03-01
Hydrologic and geomorphic responses of watersheds to changes in climate are difficult to assess due to projection uncertainties and nonlinearity of the processes that are involved. Yet such assessments are increasingly needed and call for mechanistic approaches within a probabilistic framework. This study employs an integrated hydrology-geomorphology model, the Triangulated Irregular Network-based Real-time Integrated Basin Simulator (tRIBS)-Erosion, to analyze runoff and erosion sensitivity of seven semiarid headwater basins to projected climate conditions. The Advanced Weather Generator is used to produce two climate ensembles representative of the historic and future climate conditions for the Walnut Gulch Experimental Watershed located in the southwest U.S. The former ensemble incorporates the stochastic variability of the observed climate, while the latter includes the stochastic variability and the uncertainty of multimodel climate change projections. The ensembles are used as forcing for tRIBS-Erosion that simulates runoff and sediment basin responses leading to probabilistic inferences of future changes. The results show that annual precipitation for the area is generally expected to decrease in the future, with lower hourly intensities and similar daily rates. The smaller hourly rainfall generally results in lower mean annual runoff. However, a non-negligible probability of runoff increase in the future is identified, resulting from stochastic combinations of years with low and high runoff. On average, the magnitudes of mean and extreme events of sediment yield are expected to decrease with a very high probability. Importantly, the projected variability of annual sediment transport for the future conditions is comparable to that for the historic conditions, despite the fact that the former account for a much wider range of possible climate "alternatives." This result demonstrates that the historic natural climate variability of sediment yield is already so
Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri
2015-01-01
Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces. PMID:25875019
Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia
NASA Astrophysics Data System (ADS)
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2014-12-01
This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, applied specifically to Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation. This estimation model is then demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by female vector mosquitoes of the species Aedes aegypti and Aedes albopictus. Finally, the findings of the relative risk estimation analysis for both Dengue and Chikungunya are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence and can be used as a tool for prevention and control strategies for both diseases.
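A minimal illustration of the SIR-SI transmission dynamics described above, as a discrete-time chain-binomial simulation. The functional form of the infection probabilities, the rates and the population sizes are all hypothetical, and the paper's Bayesian relative-risk estimation layer is not reproduced.

```python
import random

def sir_si_step(Sh, Ih, Rh, Sv, Iv, beta_hv, beta_vh, gamma, rng):
    """One discrete-time step of a chain-binomial SIR-SI model.
    Humans: S -> I via bites from infective vectors, I -> R by recovery.
    Vectors: S -> I by biting infective humans (no vector recovery)."""
    Nh = Sh + Ih + Rh
    # Per-susceptible infection probabilities (hypothetical chain-binomial form).
    p_h = 1.0 - (1.0 - beta_hv / Nh) ** Iv
    p_v = 1.0 - (1.0 - beta_vh / Nh) ** Ih
    new_Ih = sum(rng.random() < p_h for _ in range(Sh))   # new human infections
    new_Rh = sum(rng.random() < gamma for _ in range(Ih))  # human recoveries
    new_Iv = sum(rng.random() < p_v for _ in range(Sv))   # new vector infections
    return (Sh - new_Ih, Ih + new_Ih - new_Rh, Rh + new_Rh,
            Sv - new_Iv, Iv + new_Iv)

rng = random.Random(1)
state = (990, 10, 0, 2000, 20)  # (Sh, Ih, Rh, Sv, Iv)
for _ in range(50):
    state = sir_si_step(*state, beta_hv=0.3, beta_vh=0.3, gamma=0.1, rng=rng)
```

Running many such realizations gives the between-run variability that the stochastic (rather than deterministic) formulation is meant to capture.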
A stochastic analysis of a Brownian ratchet model for actin-based motility.
Qian, Hong
2004-12-01
In recent single-particle tracking (SPT) measurements on Listeria monocytogenes motility in cells [Kuo and McGrath (2000)], the actin-based stochastic dynamics of the bacterium movement has been analyzed statistically in terms of the mean-square displacement (MSD) of the trajectory. We present a stochastic analysis of a simplified polymerization Brownian ratchet (BR) model in which motions are limited by the bacterium movement. Analytical results are obtained and statistical data analyses are investigated. It is shown that the MSD of the stochastic bacterium movement is a monotonic quadratic function while the MSD for detrended trajectories is linear. Both the short-time relaxation and the long-time kinetics, in terms of the mean velocity and effective diffusion constant of the propelled bacterium, are obtained from the MSD analysis. The MSD of the gap between the actin tip and the bacterium exhibits an oscillatory behavior when there is a large resistant force from the bacterium. For comparison, a continuous diffusion formalism of the BR model with great analytical simplicity is also studied.
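The quadratic-versus-linear MSD behaviour noted above can be reproduced with a simple biased Brownian trajectory as a stand-in for the propelled bacterium; v, D and dt below are hypothetical values, not fitted to the SPT data.

```python
import math, random

def msd(traj, lag):
    """Mean-square displacement at a given lag, averaged over the trajectory."""
    d = [(traj[i + lag] - traj[i]) ** 2 for i in range(len(traj) - lag)]
    return sum(d) / len(d)

# Biased Brownian motion: x_{k+1} = x_k + v*dt + sqrt(2*D*dt)*xi.
rng = random.Random(0)
v, D, dt = 1.0, 0.5, 0.01
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0))

# Detrending removes the mean drift, so its MSD should grow linearly (~2*D*lag*dt),
# while the raw MSD also picks up the quadratic (v*lag*dt)**2 drift term.
detrended = [xi - v * dt * i for i, xi in enumerate(x)]
```

Comparing `msd(x, lag)` with `msd(detrended, lag)` over several lags shows the quadratic versus linear growth the abstract refers to.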
Synchronization and stochastic resonance of the small-world neural network based on the CPG.
Lu, Qiang; Tian, Juan
2014-06-01
According to biological knowledge, the central nervous system controls the central pattern generator (CPG) to drive locomotion. The brain is a complex system consisting of different functions and different interconnections, and its topological properties display the features of a small-world network. Synchronization and stochastic resonance have important roles in neural information transmission and processing. In order to study the synchronization and stochastic resonance of the brain based on the CPG, we establish a model which captures the relationship between the small-world neural network (SWNN) and the CPG. We analyze the synchronization of the SWNN when the amplitude and frequency of the CPG are changed, and the effects on the CPG when the SWNN's parameters are changed. We also study stochastic resonance in the SWNN. The main findings include: (1) when the CPG is added into the SWNN, there exists a parameter space of the CPG and the SWNN which makes the synchronization of the SWNN optimal; (2) there exists an optimal noise level at which the resonance factor Q reaches its peak value, and the correlation between the pacemaker frequency and the dynamical response of the network is resonantly dependent on the noise intensity. These results could have important implications for biological processes involving interaction between a neural network and the CPG.
Suboptimal stochastic controller for an n-body spacecraft
NASA Technical Reports Server (NTRS)
Larson, V.
1973-01-01
The problem of determining a stochastic optimal controller for an n-body spacecraft is studied. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained for a complicated spacecraft model uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector, can be easily implemented, and significantly improves system performance.
Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach
NASA Astrophysics Data System (ADS)
Krishnamurthy, Vikram; Rojas, Cristian R.
2014-12-01
This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
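For orientation, the recursion being bounded is the standard O(X^2) HMM forward filter; a minimal sketch follows (the paper's low-rank bounding matrices and copositive-constrained optimization are not reproduced, and the example matrices are hypothetical).

```python
def hmm_filter(prior, P, B, observations):
    """Standard HMM forward filter: predict with the transition matrix,
    correct with the observation likelihoods, then normalize.
    P[i][j] = P(x_next = j | x = i); B[j][y] = P(obs = y | x = j)."""
    post = list(prior)
    n = len(post)
    for y in observations:
        pred = [sum(post[i] * P[i][j] for i in range(n)) for j in range(n)]
        unnorm = [B[j][y] * pred[j] for j in range(n)]
        z = sum(unnorm)  # likelihood of the new observation
        post = [u / z for u in unnorm]
    return post

# Example: two hidden states, two observation symbols (illustrative values).
P = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]
posterior = hmm_filter([0.5, 0.5], P, B, [0, 0])
```

Each time update costs O(X^2) in the number of states X, which is the cost the paper's rank-R construction reduces to O(XR).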
Hwang, Beom Seuk; Chen, Zhen
2015-01-01
In estimating ROC curves of multiple tests, some a priori constraints may exist, either between the healthy and diseased populations within a test or between tests within a population. In this paper, we proposed an integrated modeling approach for ROC curves that jointly accounts for stochastic and variability orders. The stochastic order constrains the distributional centers of the diseased and healthy populations within a test, while the variability order constrains the distributional spreads of the tests within each of the populations. Under a Bayesian nonparametric framework, we used features of the Dirichlet process mixture to incorporate these order constraints in a natural way. We applied the proposed approach to data from the Physician Reliability Study that investigated the accuracy of diagnosing endometriosis using different clinical information. To address the issue of no gold standard in the real data, we used a sensitivity analysis approach that exploited diagnosis from a panel of experts. To demonstrate the performance of the methodology, we conducted simulation studies with varying sample sizes, distributional assumptions and order constraints. Supplementary materials for this article are available online. PMID:26839441
A method based on stochastic resonance for the detection of weak analytical signal.
Wu, Xiaojing; Guo, Weiming; Cai, Wensheng; Shao, Xueguang; Pan, Zhongxiao
2003-12-23
An effective method for the detection of weak analytical signals against a strong noise background is proposed based on the theory of stochastic resonance (SR). Compared with conventional SR-based algorithms, the proposed algorithm is simplified in that only one parameter needs to be changed to realize weak signal detection. Simulation studies revealed that the method performs well in detecting analytical signals at very high noise levels and is suitable for detecting signals with different noise levels by tuning this single parameter. Applications of the method to experimental weak signals from X-ray diffraction and Raman spectra are also investigated. It is found that reliable results can be obtained.
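The classic stochastic-resonance setting behind such detectors is the overdamped bistable system with a single tunable noise parameter. Below is a hedged Euler-Maruyama sketch of that generic system, not the paper's specific algorithm or parameterization; all values are hypothetical.

```python
import math, random

def bistable_filter(signal, dt, D, rng):
    """Euler-Maruyama integration of the overdamped bistable system
    dx = (x - x**3 + s(t)) dt + sqrt(2*D) dW.
    D is the single tuning parameter; near resonance the output hops
    between the wells at +-1 in step with the weak periodic component."""
    x, out = 0.0, []
    for s in signal:
        x += (x - x ** 3 + s) * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# A weak periodic component buried in measurement noise (hypothetical values).
rng = random.Random(3)
dt = 0.01
weak = [0.3 * math.sin(2.0 * math.pi * 0.05 * k * dt) + 0.5 * rng.gauss(0.0, 1.0)
        for k in range(20000)]
filtered = bistable_filter(weak, dt, D=0.2, rng=rng)
```

Sweeping D and scoring the output at the drive frequency reproduces the characteristic resonance curve: detection quality peaks at an intermediate noise level.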
Drummond, Jennifer D; Davies-Colley, Robert J; Stott, Rebecca; Sukias, James P; Nagels, John W; Sharp, Alice; Packman, Aaron I
2015-07-01
Long-term survival of pathogenic microorganisms in streams enables long-distance disease transmission. In order to manage water-borne diseases more effectively, we need to better predict how microbes behave in freshwater systems, particularly how they are transported downstream in rivers. Microbes continuously immobilize and resuspend during downstream transport owing to a variety of processes including gravitational settling, attachment to in-stream structures such as submerged macrophytes, and hyporheic exchange and filtration within underlying sediments. We developed a stochastic model to describe these microbial transport and retention processes in rivers that also accounts for microbial inactivation. We used the model to assess the transport, retention, and inactivation of Escherichia coli in a small stream and the underlying streambed sediments as measured from multitracer injection experiments. The results demonstrate that the combination of laboratory experiments on sediment cores, stream reach-scale tracer experiments, and multiscale stochastic modeling improves assessment of microbial transport in streams. This study (1) presents new observations of microbial dynamics in streams with improved data quality relative to prior studies, (2) advances a stochastic modeling framework to include microbial inactivation processes that we observed to be important in these streams, and (3) synthesizes new and existing data to evaluate seasonal dynamics.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, while the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
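The two sampling schemes named in this abstract can be sketched directly; the densities, rates and bounds used here are illustrative, not the fitted borehole distributions.

```python
import math, random

def exp_inverse_transform(lam, rng):
    """Inverse-transform sampling from Exp(lam): F(x) = 1 - exp(-lam*x),
    so F^{-1}(u) = -ln(1 - u) / lam (the negative exponential case,
    as used for the karst-cave lengths)."""
    return -math.log(1.0 - rng.random()) / lam

def accept_reject(pdf, pdf_max, lo, hi, rng):
    """Acceptance-rejection sampling from a bounded density on [lo, hi] with
    a uniform proposal -- the fallback when, as for the carbonatite lengths,
    no standard distribution (hence no closed-form inverse CDF) fits."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.random() * pdf_max <= pdf(x):
            return x

rng = random.Random(42)
cave_lengths = [exp_inverse_transform(2.0, rng) for _ in range(5000)]
# Example target for the rejection sampler: triangular density p(x) = 2x on [0, 1].
tri = [accept_reject(lambda x: 2.0 * x, 2.0, 0.0, 1.0, rng) for _ in range(5000)]
```

In practice the empirical borehole histogram (or a fitted envelope of it) would play the role of `pdf`.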
NASA Astrophysics Data System (ADS)
Castillo-Garit, Juan A.; Martinez-Santiago, Oscar; Marrero-Ponce, Yovani; Casañola-Martín, Gerardo M.; Torrens, Francisco
2008-10-01
The recently introduced bilinear indices are applied to QSAR/QSPR studies of heteroatomic molecules. These novel atom-based molecular fingerprints are used to predict the boiling point of 28 alkyl alcohols as well as the partition coefficient, specific rate constant and antibacterial activity of 34 2-furylethylene derivatives. The obtained models are statistically significant and show very good stability in a cross-validation experiment. Comparison with other approaches shows the good behavior of our method in these QSPR studies. The obtained results suggest that, with the present method, it is possible to obtain a good estimation of physical, chemical and physicochemical properties of organic compounds.
Wang, Huanqing; Liu, Kefu; Liu, Xiaoping; Chen, Bing; Lin, Chong
2015-09-01
In this paper, we consider the problem of observer-based adaptive neural output-feedback control for a class of stochastic nonlinear systems with nonstrict-feedback structure. To overcome the design difficulty from the nonstrict-feedback structure, a variable separation approach is introduced by using the monotonically increasing property of system bounding functions. On the basis of the state observer, and by combining the adaptive backstepping technique with radial basis function neural networks' universal approximation capability, an adaptive neural output feedback control algorithm is presented. It is shown that the proposed controller can guarantee that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded in the sense of mean quartic value. Simulation results are provided to show the effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Cross, David; Onof, Christian; Bernardara, Pietro
2016-04-01
With the COP21 drawing to a close in December 2015, storms Desmond, Eva and Frank, which swept across the UK and Ireland causing widespread flooding and devastation, acted as a timely reminder of the need for reliable estimation of rainfall extremes in a changing climate. The frequency and intensity of rainfall extremes are predicted to increase in the UK under anthropogenic climate change, and it is notable that the UK's 24-hour rainfall record of 316mm, set at Seathwaite, Cumbria in 2009, was broken on 5 December 2015 with 341mm by storm Desmond at Honister Pass, also in Cumbria. Immediate analysis of the latter by the Centre for Ecology and Hydrology (UK) on 8 December 2015 estimated that this is approximately equivalent to a 1300-year return period event (Centre for Ecology & Hydrology, 2015). Rainfall extremes are typically estimated using extreme value analysis and intensity-duration-frequency curves. This study investigates the potential for using stochastic rainfall simulation with mechanistic rectangular pulse models for estimation of extreme rainfall. These models have been used since the late 1980s to generate synthetic rainfall time-series at point locations for scenario analysis in hydrological studies and climate impact assessment at the catchment scale. Routinely they are calibrated to the full historical hyetograph and used for continuous simulation. However, their extremal performance is variable, with a tendency to underestimate short duration (hourly and sub-hourly) rainfall extremes, which are often associated with heavy convective rainfall in temperate climates such as the UK. Focussing on hourly and sub-hourly rainfall, a censored modelling approach is proposed in which rainfall below a low threshold is set to zero prior to model calibration. It is hypothesised that synthetic rainfall time-series are poor at estimating extremes because the majority of the training data are not representative of the climatic conditions which give rise to
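A minimal Poisson rectangular pulse generator illustrates the mechanistic model class referred to above. This is the simple unclustered variant with hypothetical parameters, not the clustered Bartlett-Lewis or Neyman-Scott models usually calibrated in such studies.

```python
import random

def rectangular_pulses(rate, mean_dur, mean_int, n_steps, dt, rng):
    """Poisson rectangular pulse rainfall: pulses arrive as a Poisson process
    in time; each has an exponential duration and intensity and deposits a
    constant intensity while active. Returns the aggregated series on a dt
    grid (cell coverage is approximated to whole cells for simplicity)."""
    series = [0.0] * n_steps
    t = rng.expovariate(rate)
    while t < n_steps * dt:
        dur = rng.expovariate(1.0 / mean_dur)
        inten = rng.expovariate(1.0 / mean_int)
        for k in range(int(t / dt), min(int((t + dur) / dt), n_steps)):
            series[k] += inten
        t += rng.expovariate(rate)
    return series

rng = random.Random(11)
# Hourly series: ~one pulse every 10 h, 2 h mean duration, 3 mm/h mean intensity.
rain = rectangular_pulses(rate=0.1, mean_dur=2.0, mean_int=3.0,
                          n_steps=20000, dt=1.0, rng=rng)
```

The long-run mean intensity of such a process is approximately rate x mean_dur x mean_int, which is one of the moment relations used when fitting these models to historical records.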
Stochastic differential games with inside information
NASA Astrophysics Data System (ADS)
Draouil, Olfa; Øksendal, Bernt
2016-08-01
We study stochastic differential games of jump diffusions, where the players have access to inside information. Our approach is based on anticipative stochastic calculus, white noise, Hida-Malliavin calculus, forward integrals and the Donsker delta functional. We obtain a characterization of Nash equilibria of such games in terms of the corresponding Hamiltonians. This is used to study applications to insider games in finance, specifically optimal insider consumption and optimal insider portfolio under model uncertainty.
NASA Astrophysics Data System (ADS)
Llopis-Albert, Carlos; Merigó, José M.; Xu, Yejun
2016-09-01
This paper presents an alternative approach to dealing with seawater intrusion problems that overcomes some of the limitations of previous works by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named the GC method. On the one hand, the SWI2 package simulates vertically integrated variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems, reducing the number of required model cells and eliminating the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method deals with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing the reliability of meeting environmental standards. The methodology has been successfully applied to the transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal system, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also partially compensates for the neglected diffusion and dispersion processes, since conditioning reduces the uncertainty and brings results closer to the available data.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
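The core reason the stochastic pit value exceeds the classical one can be seen at the single-block level: per-block profit is convex in grade, so the expected profit over conditional simulations is at least the profit of the expected grade (Jensen's inequality). The sketch below uses hypothetical economics; the actual method optimizes whole pits via maximum closure, which this does not attempt.

```python
def block_profit(grade, price=10.0, treat_cost=4.0, mine_cost=1.0):
    """Per-block profit with a free ore/waste call: treat the block as ore
    only when revenue covers treatment; mining cost is paid either way.
    All economic parameters are hypothetical."""
    return max(grade * price - treat_cost, 0.0) - mine_cost

def classical_value(sim_grades):
    """Classical approach: profit computed on the block's expected grade."""
    return block_profit(sum(sim_grades) / len(sim_grades))

def stochastic_value(sim_grades):
    """Stochastic approach: expected profit over conditional simulations.
    Since block_profit is convex in grade, Jensen's inequality makes this
    at least as large as the classical value."""
    return sum(block_profit(g) for g in sim_grades) / len(sim_grades)
```

For two equally likely simulated grades 0.1 and 0.9, the classical value (profit of the mean grade 0.5) is 0, while the stochastic value is 1.5: the optionality of the ore/waste decision has positive worth under uncertainty.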
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen
2015-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and show an example problem (taken from the open literature) of a laminated CMC in off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna
2016-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Definition of scarcity-based water pricing policies through hydro-economic stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2014-05-01
One of the greatest current issues in integrated water resources management is to find and apply efficient and flexible management policies. Efficient management is needed to deal with increased water scarcity and river basin closure. Flexible policies are required to handle the stochastic nature of the water cycle. Scarcity-based pricing policies are one of the most promising alternatives, which deal not only with the supply costs but also consider the opportunity costs associated with the allocation of water. The opportunity cost of water, which varies dynamically in space and time according to the imbalances between supply and demand, can be assessed using hydro-economic models. This contribution presents a procedure to design a pricing policy based on hydro-economic modelling and on the assessment of the Marginal Resource Opportunity Cost (MROC). Firstly, MROC time series associated with the optimal operation of the system are derived from a stochastic hydro-economic model. Secondly, these MROC time series must be post-processed in order to combine the different space-and-time MROC values into a single generalized indicator of the marginal opportunity cost of water. Finally, step scarcity-based pricing policies are determined after establishing a relationship between the MROC and the corresponding state of the system at the beginning of the time period (month). The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series and four agricultural demand sites currently managed using historical (XIVth century) rights. A hydro-economic model of the system has been built using stochastic dynamic programming. A reoptimization procedure is then implemented using SDP-derived benefit-to-go functions and historical flows to produce the time series of MROC values. MROC values are then aggregated and a statistical analysis is carried out to define (i) pricing policies and (ii) the relationship between MROC and
Multimode fiber laser beam cleanup based on stochastic parallel gradient descent algorithm
NASA Astrophysics Data System (ADS)
Zhao, Hai-Chuan; Ma, Hao-Tong; Zhou, Pu; Wang, Xiao-Lin; Ma, Yan-Xing; Li, Xiao; Xu, Xiao-Jun; Zhao, Yi-Jun
2011-01-01
We present experimental research on multimode fiber laser beam cleanup based on a stochastic parallel gradient descent (SPGD) algorithm. The multimode laser is obtained by injecting a single mode fiber laser with a central wavelength of 1064 nm into a multimode fiber, and the system is set up using phase-only liquid crystal spatial light modulators (LC-SLM). The quality evaluation function is increased by a factor of 10.5, and 65% of the laser energy is encircled in the central lobe when the system evolves from the open-loop to the closed-loop state. Experimental results indicate the feasibility of multimode fiber laser beam cleanup by adaptive optics (AO).
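The SPGD update at the heart of such experiments can be sketched as follows. The metric here is a toy quadratic standing in for the measured beam-quality function; the gain, perturbation size and channel count are hypothetical, and no LC-SLM hardware interface is modelled.

```python
import random

def spgd_maximize(metric, u0, gain, delta, iters, rng):
    """Stochastic parallel gradient descent (ascent form): perturb all control
    channels simultaneously by +-delta, measure the two-sided metric change,
    and step each channel along its own perturbation scaled by that change."""
    u = list(u0)
    for _ in range(iters):
        d = [delta if rng.random() < 0.5 else -delta for _ in u]
        dJ = (metric([ui + di for ui, di in zip(u, d)])
              - metric([ui - di for ui, di in zip(u, d)]))
        u = [ui + gain * dJ * di for ui, di in zip(u, d)]
    return u

# Toy beam-quality metric: a single peak at u = (1, 1, 1).
metric = lambda u: -sum((ui - 1.0) ** 2 for ui in u)
rng = random.Random(5)
u = spgd_maximize(metric, [0.0, 0.0, 0.0], gain=0.5, delta=0.1, iters=300, rng=rng)
```

The appeal for adaptive optics is that only scalar metric measurements are needed, with no per-channel gradient sensing, so the same loop drives many SLM actuators in parallel.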
Li, Chaojie; Yu, Wenwu; Huang, Tingwen
2014-06-01
In this paper, a novel impulsive control law is proposed for synchronization of stochastic discrete complex networks with time delays and switching topologies, where average dwell time and average impulsive interval are taken into account. The side effect of time delays is estimated by the Lyapunov-Razumikhin technique, which quantitatively bounds the rate of increase of the Lyapunov function. By considering the compensation of the decreasing interval, a better impulsive control law is recast in terms of average dwell time and average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn.
Mental stress in Ireland, 1994-2000: a stochastic dominance approach.
Madden, David
2009-10-01
The General Health Questionnaire (GHQ) is frequently used as a measure of mental well-being, with those whose values fall below a certain threshold regarded as suffering from mental stress. Comparisons of mental stress levels across populations may then be sensitive to the chosen threshold. This paper uses stochastic dominance techniques to show that mental stress fell in Ireland over the 1994-2000 period regardless of the threshold chosen. Decomposition techniques suggest that changes in the proportion unemployed and in the protective effect of income, education and marital status upon mental health were the principal factors underlying this fall.
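The threshold-robust comparison rests on first-order stochastic dominance of empirical distributions; a sketch with hypothetical GHQ-style scores (not the Irish survey data) follows.

```python
def first_order_dominates(sample_a, sample_b, thresholds):
    """First-order stochastic dominance over a grid of cutoffs: A dominates B
    when F_A(t) <= F_B(t) at every threshold t, i.e. A places no more mass
    below ANY cutoff -- so the 'less stress' conclusion cannot be reversed
    by moving the caseness threshold."""
    def ecdf(sample, t):
        return sum(x <= t for x in sample) / len(sample)
    return all(ecdf(sample_a, t) <= ecdf(sample_b, t) for t in thresholds)

# Hypothetical well-being scores (higher = better) for two survey waves.
wave_2000 = [2, 3, 3, 4, 5, 6, 6, 7]
wave_1994 = [1, 2, 2, 3, 4, 5, 6, 6]
grid = range(0, 8)
```

If `first_order_dominates(wave_2000, wave_1994, grid)` holds, then for every choice of stress threshold the 2000 wave has a (weakly) smaller fraction below it, which is exactly the threshold-free conclusion the paper draws.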
van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes
2014-01-01
The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators. But such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi on the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data gained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure the antibodies to O. ostertagi and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional form. To assess the efficiency scores, milk production was considered as the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection in the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9L/cow per year for Farm Accountancy Data Network farms and 63, 49, and
Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew
2011-01-01
The randomness of transient events, and the variability in factors which influence the magnitudes of the resultant pressure fluctuations, ensure that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; the statistical characteristics and probability distributions of annual maximum waterhammer pressures are then simulated using the Monte Carlo method and verified against an analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, sustaining period and probability distribution) of hydraulic loads for both steady and transient states are discussed. Illustrated with an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. In particular, the normative load (usually the unfavorable value at the 95% point) for the steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for the waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.
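A Monte Carlo sketch of the annual-maximum simulation and the 95% normative load described above; the distributions, parameters and event counts are hypothetical, not the statistics derived from the Chinese hydro-plant records.

```python
import random

def annual_max_head(n_transients, rng):
    """One simulated year: a steady/static head plus the largest waterhammer
    rise among that year's transient events (all distributions hypothetical)."""
    static = rng.gauss(100.0, 2.0)                        # static head, m
    rises = [rng.expovariate(1.0 / 15.0) for _ in range(n_transients)]
    return static + max(rises)

def normative_load(samples, q=0.95):
    """Unfavourable value at the q-quantile of the simulated distribution."""
    s = sorted(samples)
    return s[int(q * len(s)) - 1]

rng = random.Random(7)
annual_maxima = [annual_max_head(20, rng) for _ in range(5000)]
design_head = normative_load(annual_maxima)
```

Treating the static head and the transient rise as separate random loads, each with its own normative quantile, mirrors the split into two partial-factor loads that the paper recommends.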
NASA Astrophysics Data System (ADS)
Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.
2009-04-01
Drought is one of the major weather-driven natural hazards, with more harmful impacts on environmental, agricultural, and hydrological systems than most other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", which has been hit by severe drought events. These events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently, and to provide guidance on how drought planning can be accomplished. For this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. Accordingly, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it was agreed that the most appropriate model structure for our purposes is a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded in a Markov Chain-Monte Carlo (MCMC) algorithm for estimating multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, and the vertical and horizontal distribution of soil moisture), taking the groundwater table as the lower and the hydrometeorological fields as the upper boundary condition; - and the MCMC-based stochastic component generates time series of daily weather
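As a toy version of the stochastic weather-generator component, the sketch below uses a two-state (dry/wet) first-order Markov chain with assumed transition probabilities (not fitted Hungarian data) and extracts dry-spell durations from the synthetic daily series:

```python
import random

# Two-state (dry/wet) first-order Markov chain for daily weather occurrence;
# the transition probabilities are illustrative assumptions, not fitted values.
P_DRY_GIVEN_DRY = 0.85
P_DRY_GIVEN_WET = 0.55

def dry_spell_lengths(n_days, rng):
    """Lengths of all consecutive-dry-day runs in one synthetic daily series."""
    spells, run, dry = [], 0, True
    for _ in range(n_days):
        p_dry = P_DRY_GIVEN_DRY if dry else P_DRY_GIVEN_WET
        dry = rng.random() < p_dry
        if dry:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    if run:
        spells.append(run)
    return spells

rng = random.Random(1)
spells = dry_spell_lengths(365 * 100, rng)
longest_spell = max(spells)
mean_spell = sum(spells) / len(spells)
print(longest_spell, round(mean_spell, 2))
```

Repeating such multi-year syntheses many times is what allows drought frequency and duration statistics to be estimated beyond the observed record.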
Reboul, Cyril F; Bonnet, Frederic; Elmlund, Dominika; Elmlund, Hans
2016-06-01
A critical step in the analysis of novel cryogenic electron microscopy (cryo-EM) single-particle datasets is the identification of homogeneous subsets of images. Methods for solving this problem are important for data quality assessment, ab initio 3D reconstruction, and analysis of population diversity due to the heterogeneous nature of macromolecules. Here we formulate a stochastic algorithm for identification of homogeneous subsets of images. The purpose of the method is to generate improved 2D class averages that can be used to produce a reliable 3D starting model in a rapid and unbiased fashion. We show that our method overcomes inherent limitations of widely used clustering approaches and proceed to test the approach on six publicly available experimental cryo-EM datasets. We conclude that, in each instance, ab initio 3D reconstructions of quality suitable for initialization of high-resolution refinement are produced from the cluster centers. PMID:27184214
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-03-01
Many real-life systems, such as computer systems, manufacturing systems and logistics systems, are modelled as stochastic-flow networks (SFNs) to evaluate network reliability. Here, network reliability, defined as the probability that the network successfully transmits d units of data/commodity from an origin to a destination, is a performance indicator of the systems. Maximizing network reliability is a typical objective, but a costly one for many system supervisors. This article solves the multi-objective problem of reliability maximization and cost minimization by finding the optimal component assignment for the SFN, in which a set of multi-state components is ready to be assigned to the network. A two-stage approach integrating the Non-dominated Sorting Genetic Algorithm II and simple additive weighting is proposed to solve this problem, where network reliability is evaluated in terms of minimal paths and the recursive sum of disjoint products. Several practical examples related to computer networks are utilized to demonstrate the proposed approach.
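The simple-additive-weighting stage can be illustrated on a toy Pareto front. The alternatives, objective values, and weights below are invented for the example; the paper's actual networks are not reproduced:

```python
def simple_additive_weighting(alternatives, weights):
    """Rank alternatives by a weighted sum of normalized criteria.

    `alternatives` maps name -> (reliability, cost). Reliability is a benefit
    criterion (normalized by the maximum); cost is a cost criterion
    (normalized as min / value).
    """
    rels = [r for r, _ in alternatives.values()]
    costs = [c for _, c in alternatives.values()]
    scores = {
        name: weights[0] * r / max(rels) + weights[1] * min(costs) / c
        for name, (r, c) in alternatives.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical non-dominated component assignments: (reliability, cost)
pareto_front = {"A": (0.98, 120.0), "B": (0.95, 90.0), "C": (0.90, 80.0)}
ranking = simple_additive_weighting(pareto_front, weights=(0.6, 0.4))
print(ranking)
```

With these weights the cheapest assignment wins despite its lower reliability; shifting weight toward reliability reverses the ranking, which is exactly the trade-off the two-stage approach exposes.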
Xu, Hao; Jagannathan, Sarangapani
2015-03-01
The stochastic optimal control of nonlinear networked control systems (NNCSs) using neuro-dynamic programming (NDP) over a finite time horizon is a challenging problem due to terminal constraints, system uncertainties, and unknown network imperfections, such as network-induced delays and packet losses. Since the traditional iteration or time-based infinite horizon NDP schemes are unsuitable for NNCS with terminal constraints, a novel time-based NDP scheme is developed to solve finite horizon optimal control of NNCS by mitigating the above-mentioned challenges. First, an online neural network (NN) identifier is introduced to approximate the control coefficient matrix that is subsequently utilized in conjunction with the critic and actor NNs to determine a time-based stochastic optimal control input over finite horizon in a forward-in-time and online manner. Eventually, Lyapunov theory is used to show that all closed-loop signals and NN weights are uniformly ultimately bounded with ultimate bounds being a function of initial conditions and final time. Moreover, the approximated control input converges close to optimal value within finite time. The simulation results are included to show the effectiveness of the proposed scheme. PMID:25720004
Stochastic regularization operators on unstructured meshes
NASA Astrophysics Data System (ADS)
Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan
2016-04-01
Most geophysical inverse problems require the solution of underdetermined systems of equations. In order to solve such inverse problems, appropriate regularization is required. Ideally, this regularization includes information on the expected model variability and spatial correlation. Based on geostatistical covariance functions, which can be adapted to the specific situation, stochastic regularization can be used to add auxiliary constraints to a given inverse problem. Stochastic regularization operators have been successfully applied to geophysical inverse problems formulated on regular grids. Here, we demonstrate the calculation of stochastic regularization operators for unstructured meshes. Unstructured meshes are advantageous with regard to incorporating arbitrary topography, undulating geological interfaces and complex acquisition geometries into the inversion. However, compared to regular grids, unstructured meshes have variable cell sizes, complicating the calculation of stochastic operators. The stochastic operators proposed here are based on a 2D exponential correlation function, allowing spatial correlation lengths to be predefined. The regularization thus acts over an imposed correlation length rather than only taking into account neighbouring cells, as in regular smoothing constraints. Correlation over a spatial length partly removes the effects of the variable cell sizes of unstructured meshes on the regularization. Synthetic models having large-scale interfaces as well as small-scale stochastic variations are used to analyse the performance and behaviour of the stochastic regularization operators. The resulting inverted models obtained with stochastic regularization are compared against the results of standard regularization approaches (damping and smoothing). Besides using stochastic operators for regularization, we plan to incorporate the footprint of the stochastic operator in further applications such as the calculation of the cross-gradient functions
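A minimal sketch of assembling such an operator from a 2D exponential correlation function over irregular cell centres. The coordinates and correlation length are illustrative; a real inversion code would use the inverse (or a factor) of this covariance matrix as the regularization operator:

```python
import math

def exponential_covariance(centers, corr_len, sigma2=1.0):
    """Dense covariance matrix C[i][j] = sigma2 * exp(-d_ij / corr_len) between
    cell centres; valid for arbitrary (unstructured) centre coordinates."""
    n = len(centers)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.dist(centers[i], centers[j])
            C[i][j] = sigma2 * math.exp(-d / corr_len)
    return C

# Illustrative irregular cell centres, mimicking the variable cell sizes of an
# unstructured mesh (coordinates invented for the example)
cells = [(0.0, 0.0), (0.3, 0.1), (1.5, 0.2), (2.0, 2.0)]
C = exponential_covariance(cells, corr_len=1.0)
print(round(C[0][1], 3), round(C[0][3], 3))
```

Note that the correlation depends only on inter-centre distance, not on cell adjacency, which is how the operator acts over an imposed correlation length rather than over neighbouring cells.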
Detailed numerical investigation of the dissipative stochastic mechanics based neuron model.
Güler, Marifi
2008-10-01
Recently, a physical approach for the description of neuronal dynamics under the influence of ion channel noise was proposed in the realm of dissipative stochastic mechanics (Güler, Phys Rev E 76:041918, 2007). Led by the presence of a multiple number of gates in an ion channel, the approach establishes a viewpoint that ion channels are exposed to two kinds of noise: the intrinsic noise, associated with the stochasticity in the movement of gating particles between the inner and the outer faces of the membrane, and the topological noise, associated with the uncertainty in accessing the permissible topological states of open gates. Renormalizations of the membrane capacitance and of a membrane voltage dependent potential function were found to arise from the mutual interaction of the two noisy systems. The formalism therein was scrutinized using a special membrane with some tailored properties giving the Rose-Hindmarsh dynamics in the deterministic limit. In this paper, the resultant computational neuron model of the above approach is investigated in detail numerically for its dynamics using time-independent input currents. The following are the major findings obtained. The intrinsic noise gives rise to two significant coexisting effects: it initiates spiking activity even in some range of input currents for which the corresponding deterministic model is quiet and causes bursting in some other range of input currents for which the deterministic model fires tonically. The renormalization corrections are found to augment the above behavioral transitions from quiescence to spiking and from tonic firing to bursting, and, therefore, the bursting activity is found to take place in a wider range of input currents for larger values of the correction coefficients. Some findings concerning the diffusive behavior in the voltage space are also reported.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics; radiation; clouds; turbulence; convection; gravity waves; surface interaction; radiation interaction; cloud and aerosol microphysics; complexity (vegetation, biogeochemistry); radiation versus turbulence/convection stochastic approach; non-linearities; Monte Carlo; high resolutions; large-eddy simulations; cloud structure; plumes; saturation in tropics; forecasting; parameterizations; stochastic; radiation-cloud interaction; hurricane forecasts
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
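The constant-peak-width ("gradient") case of such a stochastic estimate can be reproduced in a few lines: place m peaks uniformly on a unit retention window and count the trials in which every adjacent pair is at least one peak width (1/peak capacity) apart. Parameter values are illustrative:

```python
import random

def p_success(m, peak_capacity, trials=20000, rng=None):
    """Monte Carlo probability that m randomly placed peaks are all resolved.

    Peaks fall uniformly on a unit retention window; with constant peak width
    ("gradient" conditions), two peaks are resolved when they are at least
    1 / peak_capacity apart.
    """
    rng = rng or random
    width = 1.0 / peak_capacity
    hits = 0
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(m))
        if all(b - a >= width for a, b in zip(xs, xs[1:])):
            hits += 1
    return hits / trials

p = p_success(10, 100, rng=random.Random(7))
print(round(p, 3))  # point-process theory for this case: (1 - 9/100)**10 ~ 0.389
```

Even a modest component count on a generous peak capacity leaves well under a one-in-two chance of complete resolution, which is the qualitative message of such probability-of-separation analyses.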
The transition between strong and weak chaos in delay systems: Stochastic modeling approach
NASA Astrophysics Data System (ADS)
Jüngling, Thomas; D'Huys, Otti; Kinzel, Wolfgang
2015-06-01
We investigate the scaling behavior of the maximal Lyapunov exponent in chaotic systems with time delay. In the large-delay limit, it is known that one can distinguish between strong and weak chaos depending on the delay scaling, analogously to strong and weak instabilities for steady states and periodic orbits. Here we show that the Lyapunov exponent of chaotic systems shows significant differences in its scaling behavior compared to constant or periodic dynamics due to fluctuations in the linearized equations of motion. We reproduce the chaotic scaling properties with a linear delay system with multiplicative noise. We further derive analytic limit cases for the stochastic model illustrating the mechanisms of the emerging scaling laws.
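The stochastic modeling idea can be mimicked numerically. The sketch below (an illustrative construction, not the authors' system) estimates the maximal Lyapunov exponent of a linear delay map with multiplicative noise, x[n+1] = a·x[n] + b·ξ[n]·x[n−τ], from the log-growth of the renormalized delay-state norm:

```python
import math
import random

def lyapunov_delay(a, b, tau, n_steps=20000, rng=None):
    """Maximal Lyapunov exponent of x[n+1] = a*x[n] + b*xi[n]*x[n-tau],
    xi ~ N(0,1), estimated via log-norm growth with per-step renormalization
    of the delay-state history (Benettin-style)."""
    rng = rng or random
    hist = [1.0] * (tau + 1)  # state history x[n-tau..n]
    log_growth = 0.0
    for _ in range(n_steps):
        x_new = a * hist[-1] + b * rng.gauss(0.0, 1.0) * hist[0]
        hist = hist[1:] + [x_new]
        norm = max(abs(v) for v in hist)
        if norm > 0:
            log_growth += math.log(norm)
            hist = [v / norm for v in hist]
    return log_growth / n_steps

rng = random.Random(9)
lam_weak = lyapunov_delay(0.9, 0.05, tau=50, rng=rng)     # instantaneous part stable
lam_strong = lyapunov_delay(1.1, 0.05, tau=50, rng=rng)   # instantaneous part unstable
print(round(lam_weak, 4), round(lam_strong, 4))
```

When the instantaneous coefficient exceeds one the exponent stays of order one ("strong" instability), whereas a stable instantaneous part leaves only the delayed, noise-mediated feedback, giving a much smaller exponent.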
Salazar-Cavazos, Emanuel; Santillán, Moisés
2014-02-01
In this work, we develop a detailed, stochastic, dynamical model for the tryptophan operon of E. coli, and estimate all of the model parameters from reported experimental data. We further employ the model to study the system performance, considering the amount of biochemical noise in the trp level, the system rise time after a nutritional shift, and the amount of repressor molecules necessary to maintain an adequate level of repression, as indicators of the system performance regime. We demonstrate that the level of cooperativity between repressor molecules bound to the first two operators in the trp promoter affects all of the above enlisted performance characteristics. Moreover, the cooperativity level found in the wild-type bacterial strain optimizes a cost-benefit function involving low biochemical noise in the tryptophan level, short rise time after a nutritional shift, and low number of regulatory molecules. PMID:24307084
Getting a stochastic process from a conservative Lagrangian: A first approach
NASA Astrophysics Data System (ADS)
Ramírez, J. E.; Herrera, J. N.; Martínez, M. I.
2016-04-01
The transition probability P_V for a stochastic process generated by a conservative Lagrangian L = L_0 - εV is obtained at first order from a perturbation series found using a path integral. This P_V corresponds to the transition probability for a random walk with a probability density given by the sum of a normal distribution and a perturbation which may be understood as the contribution of the interaction of the random walk with the external field. It is also found that the moment-generating function for P_V can be expressed as the generating function of a normal distribution modified by a perturbation. Applications of these results to a linear potential, a harmonic oscillator potential, and an exponentially decaying potential are shown.
Solution of stochastic media transport problems using a numerical quadrature-based method
Pautz, S. D.; Franke, B. C.; Prinja, A. K.; Olson, A. J.
2013-07-01
We present a new conceptual framework for analyzing transport problems in random media. We decompose such problems into stratified subproblems according to the number of material pseudo-interfaces within realizations. For a given subproblem we assign pseudo-interface locations in each realization according to product quadrature rules, which allows us to deterministically generate a fixed number of realizations. Quadrature integration of the solutions of these realizations thus approximately solves each subproblem; the weighted superposition of solutions of the subproblems approximately solves the general stochastic media transport problem. We revisit some benchmark problems to determine the accuracy and efficiency of this approach in comparison to randomly generated realizations. We find that this method is very accurate and fast when the number of pseudo-interfaces in a problem is generally low, but that these advantages quickly degrade as the number of pseudo-interfaces increases. (authors)
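The quadrature idea can be illustrated in one dimension, outside any transport code: average a toy response over a single uniformly distributed pseudo-interface location, once with a 5-point Gauss-Legendre rule and once with random realizations. The response function is invented for the example:

```python
import math
import random

def response(x):
    """Toy 'transmittance' of a slab whose single material interface sits at x."""
    return math.exp(-2.0 * x) + 0.5 * math.exp(-0.5 * (1.0 - x))

# 5-point Gauss-Legendre nodes and weights mapped to [0, 1]
NODES = [0.04691007703067, 0.23076534494716, 0.5, 0.76923465505284, 0.95308992296933]
WEIGHTS = [0.11846344252810, 0.23931433524968, 0.28444444444444,
           0.23931433524968, 0.11846344252810]

# Deterministic: 5 carefully placed realizations, weighted and summed
quad_mean = sum(w * response(x) for w, x in zip(WEIGHTS, NODES))

# Stochastic: 200 randomly sampled interface locations
rng = random.Random(0)
mc_mean = sum(response(rng.random()) for _ in range(200)) / 200

print(round(quad_mean, 4), round(mc_mean, 4))
```

Five quadrature realizations match what two hundred random realizations estimate, which is the advantage the method exploits when the number of pseudo-interfaces is low.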
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization
Liu, Guodong; Xu, Yan; Tomsovic, Kevin
2016-01-01
In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
Correlated noise-based switches and stochastic resonance in a bistable genetic regulation system
NASA Astrophysics Data System (ADS)
Wang, Can-Jun; Yang, Ke-Li
2016-07-01
The correlated noise-based switches and stochastic resonance are investigated in a bistable single-gene switching system driven by an additive noise (environmental fluctuations) and a multiplicative noise (fluctuations of the degradation rate). The correlation between the two noise sources originates from the lysis-lysogeny pathway of the λ phage. The steady-state probability distribution is obtained by solving the time-independent Fokker-Planck equation, and the effects of the noises are analyzed. The effect of the noises on the switching time between the two stable states (the mean first passage time) is investigated by numerical simulation. The stochastic resonance phenomenon is analyzed via the power amplification factor. The results show that the multiplicative noise can induce switching from "on" → "off" in the protein production, while the additive noise and the correlation between the noise sources can induce the inverse switching "off" → "on". A nonmonotonic behaviour of the average switching time versus the multiplicative noise intensity, for different cross-correlation and additive noise intensities, is observed in the genetic system. There exist optimal values of the additive noise, multiplicative noise and cross-correlation intensities for which a weak signal can be optimally amplified.
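The mean-first-passage-time simulation can be sketched with a generic bistable SDE (not the λ-phage model itself): Euler-Maruyama paths of dx = (x − x³)dt + σ dW started in the "off" well at x = −1 and timed until they reach the barrier at x = 0. Larger noise shortens the mean switching time:

```python
import math
import random

def mean_first_passage(sigma, n_paths=100, dt=1e-3, rng=None):
    """Euler-Maruyama estimate of the mean switching time out of the x = -1
    well of the bistable toy SDE dx = (x - x**3) dt + sigma dW."""
    rng = rng or random
    times = []
    for _ in range(n_paths):
        x, t = -1.0, 0.0
        while x < 0.0:  # barrier at the unstable state x = 0
            x += (x - x**3) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return sum(times) / len(times)

rng = random.Random(3)
t_strong_noise = mean_first_passage(0.8, rng=rng)
t_weak_noise = mean_first_passage(0.6, rng=rng)
print(round(t_weak_noise, 2), round(t_strong_noise, 2))
```

The Kramers-type exponential dependence of the switching time on noise intensity is what produces the nonmonotonic behaviour reported once correlated noise sources are added.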
Localization of nonlinear damage using state-space-based predictions under stochastic excitation
NASA Astrophysics Data System (ADS)
Liu, Gang; Mao, Zhu; Todd, Michael; Huang, Zongming
2014-02-01
This paper presents a study on localizing damage under stochastic excitation by state-space-based methods, where the damaged response contains some nonlinearity. Two state-space-based modeling algorithms, namely auto- and cross-prediction, are employed in this paper; for localization, the greatest prediction error is expected at the sensor pair closest to the actual damage. To quantify the distinction between prediction error distributions obtained at different sensor locations, the Bhattacharyya distance is adopted as the quantification metric. Two lab-scale test-beds are adopted as validation platforms: a two-story plane steel frame with bolt-loosening damage and a three-story benchmark aluminum frame with a simulated tunable crack. Band-limited Gaussian noise is applied to the systems through an electrodynamic shaker. Testing results indicate that the damage detection capability of the state-space-based method depends on the nonlinearity-induced high-frequency responses. Since those high-frequency components attenuate quickly in time and space, the results show great capability for damage localization, i.e., the highest deviation of the Bhattacharyya distance coincides with the sensors close to the physical damage location. This work extends the state-space-based damage detection method for localizing damage to a stochastically excited scenario, which provides the advantage of compatibility with ambient excitations. Moreover, results from both experiments indicate that the state-space-based method is sensitive only to nonlinearity-induced damage, and thus it can be utilized in parallel with linear classifiers or normalization strategies to insulate against operational and environmental variability, which often affects the system response in a linear fashion.
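For Gaussian-approximated prediction-error distributions the Bhattacharyya distance has a closed form; the sketch below compares a hypothetical sensor pair near the damage (shifted, inflated errors) against one far from it. All numbers are invented for illustration:

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    (mu1-mu2)^2 / (4*(var1+var2)) + 0.5*ln((var1+var2) / (2*sqrt(var1*var2)))."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# (mean, variance) of prediction errors: baseline vs. two hypothetical sensors
baseline = (0.0, 1.0)
near_damage = (1.5, 2.5)      # errors shifted and spread by local nonlinearity
far_from_damage = (0.2, 1.1)  # errors barely changed

d_near = bhattacharyya_gauss(*baseline, *near_damage)
d_far = bhattacharyya_gauss(*baseline, *far_from_damage)
print(round(d_near, 3), round(d_far, 3))
```

Ranking sensor pairs by this distance, the largest value flags the pair closest to the damage, mirroring the localization logic described above.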
2016-01-01
Public expenditure on health is one of the most important issues for governments, and increased expenditures are putting pressure on public budgets. Health policy makers have therefore focused on the performance of their health systems, and many countries have introduced reforms to improve that performance. This study investigates the most important determinants of healthcare efficiency for OECD countries using a two-stage approach based on Bayesian Stochastic Frontier Analysis (BSFA). First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. In the second stage, we examine the relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression. PMID:27118987
Bao, Haibo; Park, Ju H; Cao, Jinde
2016-01-01
This paper deals with the exponential synchronization of coupled stochastic memristor-based neural networks with probabilistic time-varying delay coupling and time-varying impulsive delay. The delayed coupling contains one probabilistic transmittal delay, described by a Bernoulli stochastic variable satisfying a conditional probability distribution. The disturbance is described by a Wiener process. Based on Lyapunov functions, the Halanay inequality, and linear matrix inequalities, sufficient conditions that depend on the probability distribution of the delay coupling and the impulsive delay are obtained. Numerical simulations are used to show the effectiveness of the theoretical results.
NASA Astrophysics Data System (ADS)
Zhong, Kai; Zhu, Song; Yang, Qiqi
2016-11-01
In recent years, the stability problems of memristor-based neural networks have been studied extensively. This paper not only takes the unavoidable noise into consideration but also investigates the global exponential stability of stochastic memristor-based neural networks with time-varying delays. The obtained criteria are essentially new and complement previously known ones, and they can be easily validated with the parameters of the system itself. In addition, the study of the nonlinear dynamics of the addressed neural networks may be helpful in the qualitative analysis of general stochastic systems. Finally, two numerical examples are provided to substantiate our results.
Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process
NASA Astrophysics Data System (ADS)
Golosovsky, Michael; Solomon, Sorin
2012-08-01
We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
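A minimal growth simulation of superlinear preferential attachment with exponent α in the reported 1.25-1.3 range. Network sizes and the references-per-paper count are arbitrary, and the citation-memory (non-Markov) effect reported in the paper is not modeled:

```python
import random

def grow_citations(n_new, n_initial=50, refs_per_paper=5, alpha=1.28, rng=None):
    """Citation counts grown by superlinear preferential attachment: paper i
    receives each new citation with probability proportional to (k_i + 1)**alpha."""
    rng = rng or random
    counts = [0] * n_initial
    for _ in range(n_new):
        # weights frozen while one paper's reference list is drawn (simplification)
        weights = [(k + 1) ** alpha for k in counts]
        total = sum(weights)
        for _ in range(refs_per_paper):
            r = rng.random() * total
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1  # guard against floating-point round-off
        counts.append(0)  # the new paper enters uncited
    return counts

counts = sorted(grow_citations(400, rng=random.Random(11)), reverse=True)
print(counts[:5], sum(counts))
```

With α > 1 the citation distribution becomes strongly winner-take-most: a handful of early papers absorb a disproportionate share of the 2000 citations issued.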
Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou
2014-07-29
In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for the quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
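The moving-least-squares reconstruction step can be illustrated independently of the SR stage: a local linear fit with Gaussian weights recovers an impulse-like feature from noise. The signal, noise level, and bandwidth below are illustrative choices, not the paper's settings:

```python
import math
import random

def mls_smooth(xs, ys, h):
    """Moving least squares: at each x, fit a local line with Gaussian weights
    w = exp(-((xi - x) / h)**2) and return the fitted value at x."""
    out = []
    for x in xs:
        s0 = s1 = s2 = 0.0
        t0 = t1 = 0.0
        for xi, yi in zip(xs, ys):
            w = math.exp(-((xi - x) / h) ** 2)
            d = xi - x
            s0 += w
            s1 += w * d
            s2 += w * d * d
            t0 += w * yi
            t1 += w * d * yi
        det = s0 * s2 - s1 * s1
        out.append((s2 * t0 - s1 * t1) / det)  # intercept of the local fit
    return out

rng = random.Random(2)
xs = [i / 100.0 for i in range(200)]
clean = [math.exp(-(((x - 1.0) / 0.05) ** 2)) for x in xs]  # impulse-like feature
noisy = [c + rng.gauss(0.0, 0.2) for c in clean]
smooth = mls_smooth(xs, noisy, h=0.03)
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(xs)
err_smooth = sum((a - b) ** 2 for a, b in zip(smooth, clean)) / len(xs)
print(round(err_noisy, 4), round(err_smooth, 4))
```

Because the fit is local, the sharp impulse is preserved far better than with a global polynomial fit, which is why MLS suits impulse-shape restoration after the SR stage.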
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Adaptive stochastic cellular automata: Applications
NASA Astrophysics Data System (ADS)
Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.
1990-09-01
The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.
Modeling stochasticity in biochemical reaction networks
NASA Astrophysics Data System (ADS)
Constantino, P. H.; Vlysidis, M.; Smadbeck, P.; Kaznessis, Y. N.
2016-03-01
Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity, the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts.
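For a network whose moment hierarchy closes exactly, the moment-equation approach reduces to integrating a small ODE system. The birth-death process below is a standard textbook illustration (not one of the review's networks); for nonlinear networks a closure scheme such as the zero-information-entropy closure would be required:

```python
def birth_death_moments(k, g, t_end=50.0, dt=1e-3):
    """Integrate the first two moment equations of the birth-death CME
    (0 -> X at rate k, X -> 0 at rate g per molecule) with forward Euler:
      d<x>/dt   = k - g<x>
      d<x^2>/dt = k + (2k + g)<x> - 2g<x^2>
    For this linear network the hierarchy closes exactly."""
    m1, m2 = 0.0, 0.0            # <x>, <x^2>, starting from an empty system
    for _ in range(int(t_end / dt)):
        dm1 = k - g * m1
        dm2 = k + (2 * k + g) * m1 - 2 * g * m2
        m1 += dt * dm1
        m2 += dt * dm2
    return m1, m2 - m1 ** 2      # stationary mean and variance

mean, var = birth_death_moments(k=20.0, g=2.0)
print(round(mean, 2), round(var, 2))   # Poisson statistics: mean = var = k/g
```

At steady state the exact CME solution is Poisson with mean k/g, so the recovered mean and variance both equal 10 here, which checks the moment equations.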
Stochastic approach to diffusion inside the chaotic layer of a resonance.
Mestre, Martín F; Bazzani, Armando; Cincotta, Pablo M; Giordano, Claudia M
2014-01-01
We model chaotic diffusion in a symplectic four-dimensional (4D) map by using the result of a theorem that was developed for stochastically perturbed integrable Hamiltonian systems. We explicitly consider a map defined by a free rotator (FR) coupled to a standard map (SM). We focus on the diffusion process in the action I of the FR, obtaining a seminumerical method to compute the diffusion coefficient. We study two cases corresponding to a thick and a thin chaotic layer in the SM phase space, and we discuss a related conjecture stated in the past. In the first case, the numerically computed probability density function for the action I is well interpolated by the solution of a Fokker-Planck (FP) equation, whereas in the second case it presents a nonconstant time shift with respect to the concomitant FP solution, suggesting the presence of an anomalous diffusion time scale. The explicit calculation of a diffusion coefficient for a 4D symplectic map can be useful to understand the slow diffusion observed in celestial mechanics and accelerator physics. PMID:24580301
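The diffusion-coefficient estimate rests on the linear growth of the ensemble variance of the action. A minimal sketch with a purely stochastic kick at each iterate (an illustrative stand-in, not the paper's FR-SM map; the kick amplitude eps and kick law are assumptions) is:

```python
import numpy as np

# Estimate D from <(I - I0)^2> ~ 2 D n for an action kicked stochastically
# at each map iterate.
rng = np.random.default_rng(1)
n_steps, n_orbits, eps = 1000, 2000, 0.05
kicks = eps * np.sin(2 * np.pi * rng.random((n_steps, n_orbits)))
I = np.cumsum(kicks, axis=0)              # action relative to its start
var = (I ** 2).mean(axis=1)               # ensemble variance at each iterate n
D = np.polyfit(np.arange(1, n_steps + 1), var, 1)[0] / 2.0
print(f"D = {D:.2e} (theory eps^2/4 = {eps**2 / 4:.2e})")
```

Since the kick variance is eps^2/2 per iterate, the variance grows as n·eps^2/2 and the fitted slope recovers D = eps^2/4, the same construction used to extract D from ensembles of map orbits.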
Stochastic parametrization of multiscale processes using a dual-grid approach.
Shutts, Glenn; Allen, Thomas; Berner, Judith
2008-07-28
Some speculative proposals are made for extending current stochastic sub-gridscale parametrization methods using techniques adopted from the fields of computer graphics and flow visualization. The idea is to emulate sub-filter-scale physical process organization and time evolution on a fine grid and couple the implied coarse-grained tendencies with a forecast model. A two-way interaction is envisaged so that fine-grid physics (e.g. deep convective clouds) responds to forecast model fields. The fine-grid model may be as simple as a two-dimensional cellular automaton or as computationally demanding as a cloud-resolving model, similar to the coupling strategy envisaged in 'super-parametrization'. Computer codes used in computer games and visualization software illustrate the potential for cheap but realistic simulation where emphasis is placed on algorithmic stability and visual realism rather than pointwise accuracy in a predictive sense. In an ensemble prediction context, a computationally cheap technique would be essential, and some possibilities are outlined. An idealized proof-of-concept simulation is described, which highlights technical problems such as the nature of the coupling.
A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem
NASA Astrophysics Data System (ADS)
Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao
A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses have recently attracted much attention thanks to their enhanced ability to save energy by making full use of renewable energy and by achieving power grid stability despite an increased power draw for installed PV systems. Yet running a smart house's power system, with its multiple power sources and power storages, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek to achieve the optimal power schedule running at the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.
Kim, Oleg; McMurdy, John; Jay, Gregory; Lines, Collin; Crawford, Gregory; Alber, Mark
2014-01-01
A combination of a stochastic photon propagation model in multilayered human eyelid tissue and reflectance spectroscopy was used to study palpebral conjunctiva spectral reflectance for hemoglobin (Hgb) determination. The developed model is the first biologically relevant model of eyelid tissue and was shown to provide a very good approximation to the measured spectra. Tissue optical parameters were defined using previous histological and microscopy studies of a human eyelid. After calibration of the model parameters, the responses of the reflectance spectra to variations in Hgb level and blood oxygenation were calculated. The simulated reflectance spectra in adults with normal and low Hgb levels agreed well with experimental data for Hgb concentrations from 8.1 to 16.7 g/dL. The extracted Hgb levels were compared with in vitro Hgb measurements; the root mean square error of cross-validation was 1.64 g/dL. The method was shown to provide 86% sensitivity for clinically diagnosed anemia cases. A combination of the model with spectroscopy measurements provides a new tool for noninvasive study of the human conjunctiva to aid in diagnosing blood disorders such as anemia. PMID:24744871
Arthur, Aaron D.; Le Roux, Jakobus A.
2013-08-01
Observations by the plasma and magnetic field instruments on board the Voyager 2 spacecraft suggest that the termination shock is weak, with a compression ratio of ≈2. However, this is contrary to the observations of accelerated particle spectra at the termination shock, where standard diffusive shock acceleration theory predicts a compression ratio closer to ≈2.9. Using our focused transport model, we investigate pickup proton acceleration at a stationary spherical termination shock with a moderately strong compression ratio of 2.8 to include both the subshock and precursor. We show that for the particle energies observed by the Voyager 2 Low Energy Charged Particle (LECP) instrument, pickup protons will have effective length scales of diffusion that are larger than the combined subshock and precursor termination shock structure observed. As a result, the particles will experience a total effective termination shock compression ratio that is larger than values inferred by the plasma and magnetic field instruments for the subshock and similar to the value predicted by diffusive shock acceleration theory. Furthermore, using a stochastically varying magnetic field angle, we are able to qualitatively reproduce the multiple power-law structure observed for the LECP spectra downstream of the termination shock.
A stochastic approach to uncertainty in the equations of MHD kinematics
Phillips, Edward G.; Elman, Howard C.
2015-03-01
The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
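The sampling idea behind such stopping rules can be sketched on a toy two-stage (newsvendor-style) problem: the quality of a candidate first-stage decision is estimated by a sample average with an attached confidence interval. All prices, the demand distribution, and the candidate decision below are illustrative assumptions, not the paper's hydroelectric problems:

```python
import numpy as np

rng = np.random.default_rng(2)
price, cost = 5.0, 3.0                    # illustrative unit revenue and cost

def profit(order, demand):
    """Second-stage (recourse) value: sell only what demand allows."""
    return price * np.minimum(order, demand) - cost * order

# Monte Carlo scenarios for the uncertain demand.
demand = rng.exponential(scale=100.0, size=20000)

order = 100.0                             # candidate first-stage decision
vals = profit(order, demand)
mean = vals.mean()
half_width = 1.96 * vals.std(ddof=1) / np.sqrt(vals.size)
print(f"expected profit {mean:.1f} +/- {half_width:.1f} (95% CI)")
```

A sampling-based algorithm would shrink this interval (or compare bound estimates built the same way) until a prespecified tolerance is met, which is the role of the stopping-rule theory the abstract describes.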
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN under different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
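For reference, the classical SCS-CN relation that the paper generalizes fits in a few lines. This is the standard formulation with initial abstraction Ia = 0.2 S and depths in inches; the sample storm and curve number are illustrative:

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Event runoff depth Q (inches) for storm depth P (inches):
    S = 1000/CN - 10, Ia = lam*S, Q = (P - Ia)^2 / (P - Ia + S)."""
    S = 1000.0 / CN - 10.0           # potential maximum retention (inches)
    Ia = lam * S                     # initial abstraction
    if P <= Ia:
        return 0.0                   # storm fully absorbed before runoff starts
    return (P - Ia) ** 2 / (P - Ia + S)

print(round(scs_cn_runoff(P=4.0, CN=80), 2))   # → 2.04
```

The hard threshold at P = Ia and the single lumped parameter CN are exactly the features whose spatial variability the paper's stochastic extension is designed to represent.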
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
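The two phases can be sketched as a backward Dynamic Programming tabulation over a discretized state grid, followed by an online lookup. The grid, control set, quality target, and the linear toy process model below are illustrative assumptions, not the paper's sheet-metal processes:

```python
states = [round(0.1 * i, 1) for i in range(0, 21)]   # discretized state grid
controls = [-0.5, -0.25, 0.0, 0.25, 0.5]
target = 1.0                                         # final quality requirement

def step(x, u):
    """Toy process model: next state, rounded to and clipped onto the grid."""
    return min(2.0, max(0, round(0.8 * x + u, 1)))

def solve(n_processes):
    """Phase (1): tabulate, backwards over the chain, the control that best
    meets the final quality target for every possible intermediate state."""
    cost_to_go = {x: abs(x - target) for x in states}  # terminal quality loss
    policy = []
    for _ in range(n_processes):                       # backward recursion
        best_u = {x: min(controls, key=lambda u: cost_to_go[step(x, u)])
                  for x in states}
        cost_to_go = {x: cost_to_go[step(x, best_u[x])] for x in states}
        policy.insert(0, best_u)
    return policy

policy = solve(n_processes=3)

# Phase (2): apply the tabulated controls online from the detected state.
x = 0.2
for stage_policy in policy:
    x = step(x, stage_policy[x])
print(x)   # → 1.0 (the quality target is reached exactly)
```

Because the table covers every encountered end state of each predecessor, the runtime phase is a cheap lookup, which mirrors the paper's split between offline optimization and online application.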
Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.
Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto
2008-08-01
Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is considered as the input for the level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm). PMID:18666164
NASA Astrophysics Data System (ADS)
Wu, Zhizhang; Huang, Zhongyi
2016-07-01
In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
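The surrogate idea can be sketched as follows: a handful of runs of an "expensive" model fit a polynomial response surface, through which Monte Carlo samples are then pushed cheaply. The square-root stand-in model, the quadratic surface (in place of the paper's incomplete fourth-order RSM), and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(k):
    """Stand-in for a costly simulation, e.g. a frequency vs. a stiffness k."""
    return np.sqrt(k)

# Design of experiments: a few model runs to fit the response surface.
k_doe = np.linspace(0.5, 1.5, 7)
coeffs = np.polyfit(k_doe, expensive_model(k_doe), 2)

# Monte Carlo through the surrogate (10^5 samples is cheap here).
k_samples = rng.normal(1.0, 0.05, 100_000)
freq = np.polyval(coeffs, k_samples)
print(round(freq.mean(), 3), round(freq.std(), 3))
```

The mean and standard deviation obtained this way are the statistics that the inverse step then matches against test data by adjusting the parameter distributions.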
Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid
Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu
2016-01-01
Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost of the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs under both the UCS and the LCS. PMID:27322281
Gérard, Claude; Gonze, Didier; Lemaigre, Frédéric; Novák, Béla
2014-01-01
Recently, a molecular pathway linking inflammation to cell transformation has been discovered. This molecular pathway rests on a positive inflammatory feedback loop between NF-κB, Lin28, Let-7 microRNA and IL6, which leads to an epigenetic switch allowing cell transformation. A transient activation of an inflammatory signal, mediated by the oncoprotein Src, activates NF-κB, which elicits the expression of Lin28. Lin28 decreases the expression of Let-7 microRNA, which results in higher level of IL6 than achieved directly by NF-κB. In turn, IL6 can promote NF-κB activation. Finally, IL6 also elicits the synthesis of STAT3, which is a crucial activator for cell transformation. Here, we propose a computational model to account for the dynamical behavior of this positive inflammatory feedback loop. By means of a deterministic model, we show that an irreversible bistable switch between a transformed and a non-transformed state of the cell is at the core of the dynamical behavior of the positive feedback loop linking inflammation to cell transformation. The model indicates that inhibitors (tumor suppressors) or activators (oncogenes) of this positive feedback loop regulate the occurrence of the epigenetic switch by modulating the threshold of inflammatory signal (Src) needed to promote cell transformation. Both stochastic simulations and deterministic simulations of a heterogeneous cell population suggest that random fluctuations (due to molecular noise or cell-to-cell variability) are able to trigger cell transformation. Moreover, the model predicts that oncogenes/tumor suppressors respectively decrease/increase the robustness of the non-transformed state of the cell towards random fluctuations. Finally, the model accounts for the potential effect of competing endogenous RNAs, ceRNAs, on the dynamics of the epigenetic switch. Depending on their microRNA targets, the model predicts that ceRNAs could act as oncogenes or tumor suppressors by regulating the occurrence of
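The core of such a model is an irreversible bistable switch driven by positive feedback. A minimal one-variable sketch with self-activation through a Hill term (illustrative parameters, not the NF-κB/Lin28/Let-7/IL6 network) shows the two stable states and the permanent flip caused by a transient stimulus:

```python
def simulate(x0, s, alpha=0.1, beta=4.0, K=1.0, gamma=2.0,
             t_end=50.0, dt=0.01):
    """Integrate dx/dt = s + alpha + beta*x^2/(K^2 + x^2) - gamma*x
    (basal production alpha, self-activation beta, decay gamma, stimulus s)
    by forward Euler and return the settled state."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (s + alpha + beta * x**2 / (K**2 + x**2) - gamma * x)
    return x

low = simulate(x0=0.0, s=0.0)    # settles on the non-transformed state
high = simulate(x0=5.0, s=0.0)   # settles on the transformed state
print(round(low, 2), round(high, 2))

# A transient stimulus (playing the role of the Src signal) flips the
# switch; after the stimulus is removed, the system stays in the high state.
pulsed = simulate(x0=low, s=1.0, t_end=10.0)
settled = simulate(x0=pulsed, s=0.0)
print(abs(settled - high) < 0.01)
```

The persistence of the high state after the stimulus is withdrawn is the deterministic core of the "epigenetic switch"; in the paper's stochastic simulations, molecular noise can trigger the same transition spontaneously.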
A stochastic chemical dynamic approach to correlate autoimmunity and optimal vitamin-D range.
Roy, Susmita; Shrinivas, Krishna; Bagchi, Biman
2014-01-01
Motivated by several recent experimental observations that vitamin-D can interact with antigen-presenting cells (APCs) and T-lymphocyte cells (T-cells) to promote and regulate different stages of the immune response, we developed a coarse-grained but general kinetic model in an attempt to capture the role of vitamin-D in immunomodulatory responses. Our kinetic model, developed using the ideas of chemical network theory, leads to a system of nine coupled equations that we solve both by direct and by stochastic (Gillespie) methods. Both analyses consistently provide detailed information on the dependence of the immune response on the variation of critical rate parameters. We find that although vitamin-D plays a negligible role in the initial immune response, it exerts a profound influence in the long term, especially in helping the system achieve a new, stable steady state. The study explores the role of vitamin-D in preserving an observed bistability in the phase diagram (spanned by system parameters) of immune regulation, thus allowing the response to tolerate a wide range of pathogenic stimulation, which could help in resisting autoimmune diseases. We also study how vitamin-D affects the time-dependent population of dendritic cells that connect innate and adaptive immune responses. Variations in the dose-dependent response of anti-inflammatory and pro-inflammatory T-cell populations to vitamin-D correlate well with recent experimental results. Our kinetic model allows for an estimation of the range of optimum levels of vitamin-D required for smooth functioning of the immune system and for control of both hyper-regulation and inflammation. Most importantly, the present study reveals that an overdose or toxic level of vitamin-D or any steroid analogue could give rise to too large a tolerant response, leading to an inefficacy in adaptive immune function. PMID:24971516
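The stochastic part of such an analysis can be illustrated with the Gillespie direct method on a one-variable toy module with self-limiting production; the propensities and rate constants below are illustrative assumptions, not the paper's nine-equation vitamin-D network:

```python
import math
import random

random.seed(4)

def gillespie(x0, t_end, k_on=50.0, K=20.0, k_off=1.0):
    """Gillespie direct method for a toy module: production at a regulated
    (self-limiting) rate k_on*K/(K + x) and first-order decay k_off*x."""
    t, x = 0.0, x0
    while t < t_end:
        a1 = k_on * K / (K + x)        # regulated production propensity
        a2 = k_off * x                 # decay propensity
        a0 = a1 + a2
        t += -math.log(random.random()) / a0   # exponential waiting time
        if random.random() * a0 < a1:          # pick which reaction fires
            x += 1
        else:
            x -= 1
    return x

# Long-run level balances production and decay: k_on*K/(K + x*) = k_off*x*,
# giving a deterministic steady state x* of about 23.2 for these constants.
samples = [gillespie(0, 40.0) for _ in range(200)]
mean = sum(samples) / len(samples)
print(20 < mean < 26)
```

Averaging many such trajectories recovers the deterministic steady state, while individual trajectories display the fluctuations that the direct (deterministic) solution cannot capture.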
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO (the S-PSO and the binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO thoroughly outperformed both the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-12-01
The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned-values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.
Matsumoto, Tomotaka; Mineta, Katsuhiko; Osada, Naoki; Araki, Hitoshi
2015-01-01
Recent studies suggest the existence of a stochasticity in gene expression (SGE) in many organisms, and its non-negligible effect on their phenotype and fitness. To date, however, how SGE affects the key parameters of population genetics is not well understood. SGE can increase the phenotypic variation and act as a load for individuals, if they are at the adaptive optimum in a stable environment. On the other hand, part of the phenotypic variation caused by SGE might become advantageous if individuals at the adaptive optimum become genetically less adaptive, for example due to an environmental change. Furthermore, SGE of unimportant genes might have little or no fitness consequences. Thus, SGE can be advantageous, disadvantageous, or selectively neutral depending on its context. In addition, there might be a genetic basis that regulates the magnitude of SGE, which is often referred to as "modifier genes," but little is known about the conditions under which such an SGE-modifier gene evolves. In the present study, we conducted individual-based computer simulations to examine these conditions in a diploid model. In the simulations, we considered a single locus that determines organismal fitness for simplicity, and that SGE on the locus creates fitness variation in a stochastic manner. We also considered another locus that modifies the magnitude of SGE. Our results suggested that SGE was always deleterious in stable environments and increased the fixation probability of deleterious mutations in this model. Even under frequently changing environmental conditions, only very strong natural selection made SGE adaptive. These results suggest that the evolution of SGE-modifier genes requires strict balance among the strength of natural selection, magnitude of SGE, and frequency of environmental changes. However, the degree of dominance affected the condition under which SGE becomes advantageous, indicating a better opportunity for the evolution of SGE in different genetic
Nguyen, Hoa Q; Stamatis, Stephen D; Kirsch, Lee E
2015-09-01
Patient safety risk due to toxic degradation products is a potentially critical quality issue for a small group of useful drug substances. Although the pharmacokinetics of toxic drug degradation products may impact product safety, these data are frequently unavailable. The objective of this study is to incorporate the prediction capability of physiologically based pharmacokinetic (PBPK) models into a rational drug degradation product risk assessment procedure using a series of model drug degradants (substituted anilines). The PBPK models were parameterized using a combination of experimental and literature data and computational methods. The impact of model parameter uncertainty was incorporated into a stochastic risk assessment procedure for estimating human safe exposure levels, based on the novel use of a statistical metric called "PROB" for comparing the probability that a human toxicity-target tissue exposure exceeds the rat exposure level at a critical no-observed-adverse-effect level. When compared with traditional risk assessment calculations, this novel PBPK approach appeared to provide a rational basis for drug instability risk assessment by focusing on target tissue exposure and leveraging physiological, biochemical, and biophysical knowledge of compounds and species. PMID:25900395
NASA Astrophysics Data System (ADS)
Yiotis, Andreas G.; Kainourgiakis, Michael E.; Charalambopoulou, Georgia C.; Stubos, Athanassios K.
2016-07-01
A novel process-based methodology is proposed for the stochastic reconstruction and accurate characterisation of Carbon fiber-based matrices, which are commonly used as Gas Diffusion Layers in Proton Exchange Membrane Fuel Cells. The modeling approach efficiently complements standard methods used for the description of the anisotropic deposition of carbon fibers with a rigorous model simulating the spatial distribution of the graphitized resin that is typically used to enhance the structural properties and thermal/electrical conductivities of the composite Gas Diffusion Layer materials. The model uses as input typical pore and continuum scale properties (average porosity, fiber diameter, resin content and anisotropy) of such composites, which are obtained from X-ray computed microtomography measurements on commercially available carbon papers. This information is then used for the digital reconstruction of realistic composite fibrous matrices. By solving the corresponding conservation equations at the microscale in the obtained digital domains, their effective transport properties, such as Darcy permeabilities, effective diffusivities, thermal/electrical conductivities and void tortuosity, are determined focusing primarily on the effects of medium anisotropy and resin content. The calculated properties match very well with those of Toray carbon papers for reasonable values of the model parameters that control the anisotropy of the fibrous skeleton and the material's resin content.
NASA Astrophysics Data System (ADS)
Ma, H.; Fan, C.; Zhang, P.; Zhang, J.; Qiao, C.; Wang, H.
2012-03-01
An adaptive optics system utilizing a Shack-Hartmann wavefront sensor and a deformable mirror can successfully correct a distorted wavefront by the conjugation principle. However, if a wave propagates over such a path that scintillation is not negligible, the appearance of branch points makes least-squares reconstruction fail to estimate the wavefront effectively. An adaptive optics technique based on the stochastic parallel gradient descent (SPGD) control algorithm is an alternative approach which does not need wavefront information but optimizes the performance metric directly. Performance was evaluated by simulating an SPGD control system and conventional adaptive correction with least-squares reconstruction in the context of a laser beam projection system. We also examined, through an example, how the SPGD technique copes with branch points. All studies were carried out assuming noise-free measurements and infinite control bandwidth. Results indicate that the SPGD adaptive system always performs better than the system based on the least-squares wavefront reconstruction technique in the presence of relatively serious intensity scintillations. The reason is that the SPGD adaptive system has the ability of compensating a discontinuous phase, although the phase is not detected and reconstructed.
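The SPGD loop described above is simple enough to sketch. The metric below is a hypothetical stand-in for a measured beam-quality metric, not the paper's laser-projection system; a two-sided SPGD (ascent) iteration applies random parallel perturbations to all control channels and steps along the measured metric difference:

```python
import random

def spgd_maximize(metric, u, gamma=0.5, sigma=0.1, iters=500, seed=1):
    """Stochastic parallel gradient descent (here ascent): perturb all
    control channels at once with random +/- sigma, measure the metric
    on both perturbed settings, and update along the difference."""
    rng = random.Random(seed)
    for _ in range(iters):
        delta = [sigma * (1 if rng.random() < 0.5 else -1) for _ in u]
        j_plus = metric([ui + di for ui, di in zip(u, delta)])
        j_minus = metric([ui - di for ui, di in zip(u, delta)])
        dj = j_plus - j_minus
        u = [ui + gamma * dj * di for ui, di in zip(u, delta)]
    return u

# Toy "performance metric": peaks when the controls match a target
# (the target plays the role of the unknown phase correction).
target = [0.3, -0.7, 1.2, 0.0]
metric = lambda u: -sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u_opt = spgd_maximize(metric, [0.0, 0.0, 0.0, 0.0])
```

Note that the loop never evaluates a gradient or a wavefront; it only needs two metric measurements per iteration, which is exactly why SPGD tolerates branch points where least-squares reconstruction fails.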
NASA Astrophysics Data System (ADS)
Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.
2009-04-01
Drought is one of the major weather-driven natural hazards, and it has more harmful impacts on environmental, agricultural and hydrological systems than most other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", which has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently, and to provide guidance on how drought planning can be accomplished. With regard to this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. According to these aims, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it has been agreed that the most appropriate model structure for our purposes is a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm for estimating multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, vertical and horizontal distribution of soil moisture, etc.), taking the groundwater table as the lower and the hydrometeorological fields as the upper boundary condition; - and the MCMC-based stochastic component generates time series of daily weather
Stochastic Models of Human Growth.
ERIC Educational Resources Information Center
Goodrich, Robert L.
Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…
NASA Astrophysics Data System (ADS)
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract the damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out the statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper by using the system matrices of the forward innovation model based on the covariance-driven stochastic subspace identification of a vibrating system. To overcome the variations of the system matrices, a non-singularity transposition matrix is introduced so that the system matrices are normalized to their standard forms. For reducing the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to the noise and false estimation of the system ranks. The method is capable of locating damage of the beam structures under different types of excitations. The robustness of the proposed damage detection method to the variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise
Hong, Jialin; Zhang, Liying
2014-07-01
In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of variational principle, we find a way to obtain the stochastic multi-symplectic structure of three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties, which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.
Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance
NASA Astrophysics Data System (ADS)
Kato, H.; Ito, K.
2009-01-01
A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using the analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
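The surrogate-plus-sensitivity workflow above can be sketched end to end. A cheap analytic function stands in for the expensive RANS evaluation (all sample sizes, the kernel width, and the response function are illustrative assumptions, and the simple binned estimator below only approximates first-order Sobol indices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one expensive "simulation" of a response
# (e.g. pressure recovery) as a function of two geometric inputs.
def simulation(x):
    return np.sin(x[:, 0]) + 4.0 * x[:, 1] ** 2      # x2 dominates

# 1) Train a Gaussian RBF network on a modest sample of "simulations".
X = rng.uniform(-1.0, 1.0, size=(80, 2))
y = simulation(X)
eps = 2.0                                            # kernel width (assumed)
def phi(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)
w = np.linalg.solve(phi(X, X) + 1e-10 * np.eye(len(X)), y)
surrogate = lambda Z: phi(Z, X) @ w

# 2) Cheap Monte Carlo on the surrogate: first-order Sobol indices
#    S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning X_i.
Z = rng.uniform(-1.0, 1.0, size=(20000, 2))
Y = surrogate(Z)
total_var = Y.var()
S = []
for i in range(2):
    bins = np.digitize(Z[:, i], np.linspace(-1, 1, 21))
    labels = np.unique(bins)
    cond_means = np.array([Y[bins == b].mean() for b in labels])
    counts = np.array([(bins == b).sum() for b in labels])
    S.append(np.average((cond_means - Y.mean()) ** 2, weights=counts) / total_var)
```

Because the response here is additive, the two first-order indices should nearly sum to one, with the quadratic input dominating; in the paper's setting the gap between the sum of first-order indices and one would flag parameter interactions.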
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the amplitude square of the noise in SKMF. This makes SKMF an ideal tool also for statistical purposes.
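The real SKMF works with composition-dependent exchange rates on a 3-D lattice; the toy sketch below keeps only the two essential ingredients, a mean-field exchange (drift) term and additive Langevin noise, on a 1-D ring of site occupations, with all parameters invented:

```python
import random

def skmf_step(c, rate, dt, noise_amp, rng):
    """One explicit Euler step of a 1-D mean-field exchange model with
    additive Langevin noise (a toy stand-in for the SKMF dynamics).
    c holds site occupation probabilities, clipped to [0, 1]."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left, right = c[(i - 1) % n], c[(i + 1) % n]
        drift = rate * (0.5 * (left + right) - c[i])   # exchange with neighbours
        noise = noise_amp * rng.gauss(0.0, 1.0)        # Langevin term
        new[i] = min(1.0, max(0.0, c[i] + dt * drift + dt ** 0.5 * noise))
    return new

rng = random.Random(42)
c = [1.0] * 10 + [0.0] * 10          # sharp A/B interface
for _ in range(2000):
    c = skmf_step(c, rate=1.0, dt=0.05, noise_amp=0.05, rng=rng)
# The interface broadens by interdiffusion while the noise keeps the
# occupations fluctuating around the mean-field profile.
```

Setting `noise_amp = 0` recovers the deterministic KMF limit, which mirrors the paper's statement that averaging many noisy runs reproduces the mean-field behaviour.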
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates the real mean of the target value being forecast.
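The two-factor hidden-Markov machinery of the paper is more involved than fits here; the sketch below shows only the underlying idea of fitting a transition matrix to a fuzzified (discretized) series and forecasting by Monte Carlo, with the series and state labels invented:

```python
import random
from collections import Counter

def fit_markov(states, n_states):
    """Estimate a first-order Markov transition matrix from a
    discretized (fuzzified) series by counting transitions."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        tot = sum(row)
        P.append([c / tot if tot else 1.0 / n_states for c in row])
    return P

def mc_forecast(P, start, horizon, n_runs=5000, seed=7):
    """Monte Carlo forecast: simulate many chains and return the
    empirical distribution of the state after `horizon` steps."""
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(n_runs):
        s = start
        for _ in range(horizon):
            r, acc = rng.random(), 0.0
            for j, p in enumerate(P[s]):
                acc += p
                if r < acc:
                    s = j
                    break
        tally[s] += 1
    return {s: c / n_runs for s, c in tally.items()}

# Toy fuzzified temperature series with three states: cool/mild/warm.
series = [0, 0, 1, 1, 2, 2, 1, 0, 1, 2, 2, 1, 1, 0, 0, 1, 2, 1, 1, 0]
P = fit_markov(series, 3)
dist = mc_forecast(P, start=1, horizon=2)
```

Averaging many Monte Carlo runs in this way is also what lets the paper invoke the central limit theorem for its forecasts.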
Definition of efficient scarcity-based water pricing policies through stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.
2015-01-01
Finding ways to improve the efficiency in water usage is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares river basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during the scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
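The hydro-economic model itself is not available here; assuming a hypothetical year of storage states and marginal water values, the post-processing of the marginal-value series into a state-dependent step pricing policy can be sketched as:

```python
import statistics

# Hypothetical monthly series: reservoir storage (fraction of capacity)
# and the marginal value of water from a hydro-economic model (EUR/m3).
storage  = [0.9, 0.8, 0.7, 0.55, 0.45, 0.35, 0.25, 0.3, 0.5, 0.65, 0.8, 0.85]
marginal = [0.00, 0.01, 0.02, 0.05, 0.09, 0.16, 0.30, 0.22, 0.07, 0.03, 0.01, 0.00]

# Post-process into a step pricing policy: one price per storage band,
# taken as the mean marginal value observed while in that band.
bands = [(0.0, 0.3), (0.3, 0.6), (0.6, 1.01)]
policy = {}
for lo, hi in bands:
    vals = [m for s, m in zip(storage, marginal) if lo <= s < hi]
    policy[(lo, hi)] = statistics.mean(vals) if vals else 0.0
```

By construction the price rises as storage falls, which is exactly the scarcity-based incentive for conservation the abstract describes.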
Definition of efficient scarcity-based water pricing policies through stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.
2015-09-01
Finding ways to improve the efficiency in water usage is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares River basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during the scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun
2016-05-01
Many real-world data can be represented as dynamic networks which are the evolutionary networks with timestamps. Analyzing dynamic attributes is important to understanding the structures and functions of these complex networks. Especially, studying the influential nodes is significant to exploring and analyzing networks. In this paper, we propose a method to identify influential nodes in dynamic social networks based on identifying such nodes in the temporal communities which make up the dynamic networks. Firstly, we detect the community structures of all the snapshot networks based on the degree-corrected stochastic block model (DCBM). After getting the community structures, we capture the evolution of every community in the dynamic network by the extended Jaccard’s coefficient which is defined to map communities among all the snapshot networks. Then we obtain the initial influential nodes of the dynamic network and aggregate them based on three widely used centrality metrics. Experiments on real-world and synthetic datasets demonstrate that our method can identify influential nodes in dynamic networks accurately; at the same time, we also observe some interesting phenomena, consistent with conclusions that have been validated in complex-network and social-science research.
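The Jaccard-based mapping of communities across snapshots can be sketched directly (community names, memberships, and the matching threshold below are invented for illustration; the paper's extended coefficient refines this basic form):

```python
def jaccard(a, b):
    """Jaccard coefficient between two node sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def map_communities(prev, curr, threshold=0.3):
    """Match each community of snapshot t to its best-overlapping
    community of snapshot t+1, keeping matches above a threshold."""
    mapping = {}
    for name, nodes in prev.items():
        best_name, best_nodes = max(curr.items(),
                                    key=lambda kv: jaccard(nodes, kv[1]))
        if jaccard(nodes, best_nodes) >= threshold:
            mapping[name] = best_name
    return mapping

t0 = {"C1": {1, 2, 3, 4}, "C2": {5, 6, 7}}
t1 = {"D1": {1, 2, 3, 8}, "D2": {5, 6, 7, 9}}
print(map_communities(t0, t1))   # C1 -> D1, C2 -> D2
```

Chaining such mappings across all snapshots yields the community "lifelines" along which the per-snapshot influential nodes are aggregated.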
Detailed characterization of a fractured limestone formation using stochastic inverse approaches
Gupta, A.D.; Vasco, D.W.; Long, J.C.S.
1994-07-01
We discuss here two inverse approaches to construction of fracture flow models and their application in characterizing a fractured limestone formation. The first approach creates "equivalent discontinuum" models that conceptualize the fracture system as a partially filled lattice of conductors which are locally connected or disconnected to reproduce the observed hydrologic behavior. An alternative approach, viz. "variable aperture lattice" models, represents the fracture system as a fully filled network composed of conductors of varying apertures. The fracture apertures are sampled from a specified distribution, usually log-normal, consistent with field data. The spatial arrangement of apertures is altered through inverse modeling so as to fit the available hydrologic data. Unlike traditional fracture network approaches which rely on fracture geometry to reproduce flow and transport behavior, the inverse methods directly incorporate hydrologic data in deriving the fracture networks and thus naturally emphasize the underlying features that impact the fluid flow and transport. However, hydrologic models derived by inversion are non-unique in general. We have addressed such non-uniqueness by examining an ensemble of models that satisfy the observational data within acceptable limits. We then determine properties which are shared by the ensemble of models as well as their associated uncertainties to create a conceptual model of the fracture system.
A general stochastic approach to unavailability analysis of standby safety systems
Van Der Weide, H.; Pandey, M. D.
2013-07-01
The paper presents a general analytical framework to analyze unavailability caused by latent failures in standby safety systems used in nuclear plants. The proposed approach is general in a sense that it encompasses a variety of inspection and maintenance policies and relaxes restrictive assumptions regarding the distributions of time to failure (or aging) and duration of repair. A key result of the paper is a general integral equation for point unavailability, which can be tailored to any specific maintenance policy. (authors)
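The paper's general integral equation is not reproduced in the abstract; for the simplest special case it covers, a periodically inspected standby component with negligible repair time and failure-time CDF F, the interval-averaged point unavailability reduces to (1/T) times the integral of F over an inspection interval. A numeric sketch (rate and inspection interval invented):

```python
import math

def average_unavailability(cdf, T, n=100000):
    """Interval-averaged point unavailability of a standby component
    inspected every T hours: at time u after an inspection the
    component is (latently) unavailable with probability F(u).
    Computed as (1/T) * integral_0^T F(u) du by the trapezoid rule."""
    h = T / n
    s = 0.5 * (cdf(0.0) + cdf(T))
    for k in range(1, n):
        s += cdf(k * h)
    return s * h / T

lam = 1.0e-5                      # constant failure rate, per hour (assumed)
F = lambda t: 1.0 - math.exp(-lam * t)
u_avg = average_unavailability(F, T=8760.0)   # yearly inspection
# For small lam*T this is close to lam*T/2 (here about 0.043).
```

The paper's framework generalizes exactly this quantity to aging (non-exponential) failure distributions and non-negligible repair durations.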
Internal wave signal processing: A model-based approach
Candy, J.V.; Chambers, D.H.
1995-02-22
A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. These models are then generalized to the stochastic case where an approximate Gauss-Markov theory applies. The resulting Gauss-Markov representation, in principle, allows the inclusion of stochastic phenomena such as noise and modeling errors in a consistent manner. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves. In particular, a processor is designed that allows in situ recursive estimation of the required velocity functions. Finally, it is shown that the associated residual or so-called innovation sequence that ensues from the recursive nature of this formulation can be employed to monitor the model's fit to the data.
Enns, Eva A; Brandeau, Margaret L
2015-04-21
For many communicable diseases, knowledge of the underlying contact network through which the disease spreads is essential to determining appropriate control measures. When behavior change is the primary intervention for disease prevention, it is important to understand how to best modify network connectivity using the limited resources available to control disease spread. We describe and compare four algorithms for selecting a limited number of links to remove from a network: two "preventive" approaches (edge centrality, R0 minimization), where the decision of which links to remove is made prior to any disease outbreak and depends only on the network structure; and two "reactive" approaches (S-I edge centrality, optimal quarantining), where information about the initial disease states of the nodes is incorporated into the decision of which links to remove. We evaluate the performance of these algorithms in minimizing the total number of infections that occur over the course of an acute outbreak of disease. We consider different network structures, including both static and dynamic Erdös-Rényi random networks with varying levels of connectivity, a real-world network of residential hotels connected through injection drug use, and a network exhibiting community structure. We show that reactive approaches outperform preventive approaches in averting infections. Among reactive approaches, removing links in order of S-I edge centrality is favored when the link removal budget is small, while optimal quarantining performs best when the link removal budget is sufficiently large. The budget threshold above which optimal quarantining outperforms the S-I edge centrality algorithm is a function of both network structure (higher for unstructured Erdös-Rényi random networks compared to networks with community structure or the real-world network) and disease infectiousness (lower for highly infectious diseases). We conduct a value-of-information analysis of knowing which
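The preventive edge-centrality strategy above can be sketched with Brandes' algorithm for edge betweenness on a toy graph (a six-node "barbell" stands in for the hotel and random networks of the study):

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Brandes' algorithm (unweighted): for each edge, accumulate the
    share of all shortest paths that traverse it."""
    cb = defaultdict(float)
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        dist = {s: 0}
        sigma = defaultdict(float); sigma[s] = 1.0
        preds = defaultdict(list)
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate path dependencies onto the edges.
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                cb[tuple(sorted((v, w)))] += c
                delta[v] += c
    return cb

# Two triangles joined by a bridge; the bridge carries all cross-group
# shortest paths, so it should be the first link a preventive
# edge-centrality policy removes.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
scores = edge_betweenness(adj)
bridge = max(scores, key=scores.get)
```

Removing the top-ranked edges in this order is the purely structural, outbreak-agnostic baseline; the reactive S-I variant described above would re-rank edges using the nodes' current infection states.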
NASA Astrophysics Data System (ADS)
Mohamed, Majeed; Narayan Kar, Indra
2015-11-01
This paper focuses on a stochastic version of contraction theory to construct an observer-controller structure for a flight dynamic system with noisy velocity measurement. A nonlinear stochastic observer is designed to estimate the pitch rate, the pitch angle, and the velocity of an aircraft example model using stochastic contraction theory. Estimated states are used to compute feedback control for solving a tracking problem. The structure and gain selection of the observer are carried out using Itô's stochastic differential equations and the contraction theory. The contraction property of the integrated observer-controller structure is derived to ensure the exponential convergence of the trajectories of the closed-loop nonlinear system. The upper bound of the state estimation error is explicitly derived and the efficacy of the proposed observer-controller structure has been shown through the numerical simulations.
A view of EPR non-locality problems based on Aron's stochastic foundation of relativity
NASA Astrophysics Data System (ADS)
Scheer, Jens
1990-12-01
It is argued that the problem of causal anomalies that still may exist in Vigier's explanation of superluminal EPR type correlations may be removed in the framework of Aron's stochastic foundation of relativity.
Research on spatial state conversion rule mining and stochastic predicting based on CA
NASA Astrophysics Data System (ADS)
Li, Xinyun; Kong, Xiangqiang
2007-06-01
Spatial dynamic prediction in GIS is the process of spatial calculation that infers future thematic maps from historical thematic maps; it is a space-time calculation from map to map. Spatial dynamic prediction has great application value in land planning, urban land-use planning and town planning, but present methods and techniques remain imperfect. The main technical difficulty is the extraction and expression of spatial state conversion rules. To address the deficiencies of spatial dynamic prediction using CA, a method which mines spatial state conversion rules based on spatial data mining was put forward. A stochastic simulation mechanism was introduced into the prediction calculation based on the state conversion rules. The result of prediction is more rational, and the relation between the prediction steps and the time course is clearer. The method was applied to predicting changes in the spatial structure of urban land use in Jinan. Urban land-use change maps for 2006 and 2010 were predicted using the land-use maps of 1998 and 2002. Analysis showed the results of this test to be reasonable.
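The mined conversion rules of the paper are not given; a minimal stochastic cellular automaton with an invented urban-growth rule (conversion probability per urban neighbour) illustrates the stochastic simulation mechanism applied to a land-use grid:

```python
import random

def ca_step(grid, p_grow, rng):
    """One step of a stochastic CA on a toroidal grid: a non-urban
    cell (0) converts to urban (1) with probability p_grow per urban
    neighbour in its Moore neighbourhood."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                urban = sum(grid[(i + di) % n][(j + dj) % n]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
                # probability at least one neighbour "converts" the cell
                if rng.random() < 1 - (1 - p_grow) ** urban:
                    new[i][j] = 1
    return new

rng = random.Random(3)
grid = [[0] * 10 for _ in range(10)]
grid[5][5] = 1                      # a single urban seed cell
for _ in range(8):
    grid = ca_step(grid, p_grow=0.2, rng=rng)
# The result is a stochastic, irregular urban cluster rather than the
# deterministic diamond a fixed-threshold rule would produce.
```

In the paper's method, the fixed `p_grow` rule would be replaced by the state-conversion probabilities mined from the historical map pairs.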
NASA Astrophysics Data System (ADS)
Zhang, Xiaofei; Hu, Niaoqing; Hu, Lei; Fan, Bin; Cheng, Zhe
2012-05-01
With signal pre-whitening based on cepstrum editing, envelope analysis can be performed over the full bandwidth of the pre-whitened signal, which enhances the bearing characteristic frequencies. Bearing fault detection can thus be enhanced without knowledge of the optimum frequency bands to demodulate; however, envelope analysis over the full bandwidth introduces more noise interference. Stochastic resonance (SR), which is now often used in weak-signal detection, is an important nonlinear effect. Through a normalized scale transform, SR can be applied to weak-signal detection in machinery systems. In this paper, signal pre-whitening based on cepstrum editing and SR theory are combined to enhance the detection of bearing faults. The envelope-spectrum kurtosis of the bearing fault characteristic components is used as an indicator of bearing faults. Detection results for seeded bearing inner-race faults on a test rig show the enhanced detection effect of the proposed method, and the indicators of bearing inner-race faults enhanced by SR are compared to those without enhancement to validate the proposed method.
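The normalized scale transform and the bearing signals themselves are beyond a short sketch, but the classic bistable system below (drive amplitude and noise levels invented) shows the effect the method exploits: with too little noise a weak sub-threshold drive cannot move the state between wells, while moderate noise produces inter-well hopping loosely synchronized with the drive:

```python
import math
import random

def bistable_response(amp, freq, noise_d, dt=1e-3, n=200000, seed=5):
    """Euler-Maruyama simulation of the overdamped bistable system
    dx = (x - x**3 + amp*sin(2*pi*freq*t)) dt + sqrt(2*D) dW.
    Returns the spectral amplitude at the drive frequency and the
    number of barrier crossings (sign changes of x)."""
    rng = random.Random(seed)
    x, re, im, hops = -1.0, 0.0, 0.0, 0
    for k in range(n):
        t = k * dt
        drive = amp * math.sin(2 * math.pi * freq * t)
        x_new = (x + (x - x ** 3 + drive) * dt
                 + math.sqrt(2 * noise_d * dt) * rng.gauss(0.0, 1.0))
        if x_new * x < 0:
            hops += 1              # crossed the barrier at x = 0
        x = x_new
        re += x * math.cos(2 * math.pi * freq * t)
        im += x * math.sin(2 * math.pi * freq * t)
    return math.hypot(re, im) / n, hops

amp_weak, hops_weak = bistable_response(amp=0.3, freq=0.1, noise_d=0.01)
amp_sr, hops_sr = bistable_response(amp=0.3, freq=0.1, noise_d=0.25)
# With D = 0.01 the particle stays trapped in one well; with D = 0.25
# it hops between wells, typically boosting the response at the drive
# frequency -- the stochastic resonance effect.
```

In the paper's scheme, the "drive" is the weak bearing fault component surviving pre-whitening, and the normalized scale transform maps its high characteristic frequency into the low-frequency range where SR operates.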
NASA Astrophysics Data System (ADS)
Song, Jiyun; Wang, Zhi-Hua
2016-05-01
Urban land-atmosphere interactions can be captured by a numerical modeling framework with coupled land surface and atmospheric processes, but model performance depends largely on accurate input parameters. In this study, we use an advanced stochastic approach to quantify parameter uncertainty and model sensitivity of a coupled numerical framework for urban land-atmosphere interactions. We find that the development of the urban boundary layer is highly sensitive to the surface characteristics of built terrains. Changes in both urban land use and geometry impose significant impacts on the overlying urban boundary layer dynamics through modifications of the bottom boundary conditions, i.e., by altering surface energy partitioning and surface aerodynamic resistance, respectively. Hydrothermal properties of conventional and green roofs have different impacts on atmospheric dynamics due to their different surface energy partitioning mechanisms. Urban geometry (represented by the canyon aspect ratio) has a significant nonlinear impact on boundary layer structure and temperature. In addition, managing rooftop roughness provides an alternative way to change the thermal state of the boundary layer by modifying vertical turbulent transport. The sensitivity analysis deepens our insight into the fundamental physics of urban land-atmosphere interactions and provides useful guidance for urban planning under the challenges of a changing climate and continuing global urbanization.
Stochastic description of quantum Brownian dynamics
NASA Astrophysics Data System (ADS)
Yan, Yun-An; Shao, Jiushu
2016-08-01
Classical Brownian motion has been thoroughly investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus, so that the system and the bath are not directly entangled during evolution; rather, they are correlated through the complex white noises introduced. The influence of the bath on the system is thereby encoded in an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of the bath-induced fields. In general, the plain implementation of the stochastic formulation is useful only for short-time dynamics and is not efficient for long-time dynamics, as the statistical errors grow rapidly. For linear and other specific systems, the stochastic Liouville equation is a good starting point for deriving the master equation. For general systems with decomposable bath-induced processes, a hierarchical approach in the form of a set of deterministic equations of motion is derived from the stochastic formulation and provides an effective means of simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts rate dynamics in the overdamped regime. Challenging problems
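Schematically, and with constants and noise conventions that vary between derivations (this is not the paper's exact notation), the stochastic Liouville equation described above takes the form:

```latex
\frac{\partial \tilde\rho_s(t)}{\partial t}
  = -\frac{i}{\hbar}\left[H_s,\tilde\rho_s(t)\right]
    -\frac{i}{\hbar}\,\bar g(t)\left[f,\tilde\rho_s(t)\right],
\qquad
\rho_s(t) = \mathbb{M}\!\left\{\tilde\rho_s(t)\right\},
```

where $f$ is the system operator coupled to the bath, $\bar g(t)$ is the bath-induced stochastic field built from the introduced white noises and the bath correlation function, and $\mathbb{M}$ denotes the stochastic average over noise realizations that recovers the exact reduced density matrix.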
X. Frank Xu
2010-03-30
Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, geo-systems, etc. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work had been done on the integration of the two, and no method was formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The MSFEM theory decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practice, in which fine-scale microstructure is approximated by effective constitutive constants and which can be solved using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly encountered in conventional approaches by means of a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way for the first computational tool/software on uncertainty quantification of multiscale systems. Applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquakes), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will
A stochastic approach to project planning in an R and D environment: Final report
Seyedghasemipour, S.J.
1987-02-01
This study describes a simulation approach to project planning in an R and D environment using a network model. GERT (Graphical Evaluation and Review Technique), a network model, was utilized to model a hypothetical research and development project. GERT is capable of including randomness in activity durations, probabilistic branching, feedback loops, and multiple terminal nodes in project planning. These capabilities make it more suitable for modeling research and development projects than previous approaches such as CPM and PERT. The SLAM II simulation language is utilized for simulation of the network model. SLAM II relies heavily on GASP IV and Q-GERTS, with powerful modeling capability in a single integrated framework. The simulation is performed on a hypothetical ''standard'' research and development project. Two cases of project planning are considered. In the first case, a traditional simulation of the network model of the hypothetical R and D project is performed. In the second case, a learning factor is incorporated into the simulation process. A learning factor, in the context of project planning, means that the mean and variance of the probability distribution representing an activity's duration are discounted (reduced) each time that activity is repeated. The results and statistics of each case study, concerning the expected duration of successful completion of the project, the probability of washouts, and the realization times of milestones, are presented in detail. The differences between the two cases (i.e., with and without the learning factor) are discussed. 19 refs.
A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type
Hosking, John Joseph Absalom
2012-12-15
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011), which constructs an SMP for a mean-field type optimal stochastic control problem. However, our use of second-order adjoint processes of a second type to deal with what we refer to as the second form of quadratic-type terms represents an alternative to adapting, to our setting, the approach used in their article for the analogous type of term.
Stochastic mapping of the Michaelis-Menten mechanism
NASA Astrophysics Data System (ADS)
Dóka, Éva; Lente, Gábor
2012-02-01
The Michaelis-Menten mechanism is an extremely important tool for understanding enzyme-catalyzed transformation of substrates into final products. In this work, a computationally viable, full stochastic description of the Michaelis-Menten kinetic scheme is introduced based on a stochastic equivalent of the steady-state assumption. The full solution derived is free of restrictions on amounts of substance or parameter values and is used to create stochastic maps of the Michaelis-Menten mechanism, which show the regions in the parameter space of the scheme where the use of the stochastic kinetic approach is inevitable. The stochastic aspects of recently published examples of single-enzyme kinetic studies are analyzed using these maps.
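Because the abstract centers on when stochastic kinetics become unavoidable, a concrete reference point is the exact stochastic simulation of the Michaelis-Menten scheme E + S ⇌ ES → E + P. The sketch below uses the standard Gillespie algorithm rather than the paper's steady-state-based solution, and all copy numbers and rate constants are arbitrary illustrative values.

```python
import numpy as np

def gillespie_mm(e0, s0, k1, km1, k2, rng, t_max=50.0):
    """Exact stochastic simulation of E + S <-> ES -> E + P."""
    E, S, ES, P = e0, s0, 0, 0
    t = 0.0
    while t < t_max:
        a = np.array([k1 * E * S, km1 * ES, k2 * ES])  # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break                                      # no reactions possible
        t += rng.exponential(1 / a0)                   # time to next event
        r = rng.choice(3, p=a / a0)                    # which reaction fires
        if r == 0:
            E, S, ES = E - 1, S - 1, ES + 1            # binding
        elif r == 1:
            E, S, ES = E + 1, S + 1, ES - 1            # unbinding
        else:
            E, ES, P = E + 1, ES - 1, P + 1            # catalysis
    return P

rng = np.random.default_rng(42)
products = gillespie_mm(e0=5, s0=100, k1=0.01, km1=0.1, k2=0.1, rng=rng)
```

Averaging many such trajectories, and comparing their spread with the deterministic rate-equation prediction, is one way to locate the parameter regions where the stochastic description becomes indispensable.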
Evaluation of soil characterization technologies using a stochastic, value-of-information approach
Kaplan, P.G.
1993-11-01
The US Department of Energy has initiated an integrated demonstration program to develop and compare new technologies for the characterization of uranium-contaminated soils. As part of this effort, a performance-assessment task was funded in February 1993 to evaluate the field-tested technologies. Performance assessment can be defined as the analysis that evaluates a system's, or technology's, ability to meet the criteria specified for performance. Four new technologies were field tested at the Fernald Environmental Management Restoration Co. in Ohio. In the next section, the goals of this performance-assessment task are discussed. The following section discusses issues that must be resolved if the goals are to be met. The author concludes with a discussion of the potential benefits to performance assessment of the approach taken. This paper is intended to be the first in a series of documentation describing the work. Also in this proceedings are a paper on the field demonstration at the Fernald site with a description of the technologies (Tidwell et al., 1993) and a paper on the application of advanced geostatistical techniques (Rautman, 1993). The overall approach is simply to demonstrate the applicability of concepts that are well described in the literature but are not routinely applied to problems in environmental remediation, restoration, and waste management. The basic geostatistical concepts are documented in Clark (1979) and in Isaaks and Srivastava (1989). Advanced concepts and applications, along with software, are discussed in Deutsch and Journel (1992). Integration of geostatistical modeling with a decision-analytic framework is discussed in Freeze et al. (1992). Information-theoretic and probabilistic concepts are borrowed from the work of Shannon (1948), Jaynes (1957), and Harr (1987). The author sees the task as one of introducing and applying robust methodologies with demonstrated applicability in other fields to the problem at hand.
Value of Geographic Diversity of Wind and Solar: Stochastic Geometry Approach; Preprint
Diakov, V.
2012-08-01
Based on the available geographically dispersed data for the continental U.S. (excluding Alaska), we analyze the extent to which the geographic diversity of wind and solar resources can offset their variability. A geometric model provides a convenient measure of resource variability and shows the synergy between wind and solar resources.
Stochastic Kinetics of Viral Capsid Assembly Based on Detailed Protein Structures
Hemberg, Martin; Yaliraki, Sophia N.; Barahona, Mauricio
2006-01-01
We present a generic computational framework for the simulation of viral capsid assembly which is quantitative and specific. Starting from PDB files containing atomic coordinates, the algorithm builds a coarse-grained description of protein oligomers based on graph rigidity. These reduced protein descriptions are used in an extended Gillespie algorithm to investigate the stochastic kinetics of the assembly process. The association rates are obtained from a diffusive Smoluchowski equation for rapid coagulation, modified to account for water shielding and protein structure. The dissociation rates are derived by interpreting the splitting of oligomers as a process of graph partitioning akin to the escape from a multidimensional well. This modular framework is quantitative yet computationally tractable, with a small number of physically motivated parameters. The methodology is illustrated using two different viruses which are shown to follow quantitatively different assembly pathways. We also show how in this model the quasi-stationary kinetics of assembly can be described as a Markovian cascading process, in which only a few intermediates and a small proportion of pathways are present. The observed pathways and intermediates can be related a posteriori to structural and energetic properties of the capsid oligomers. PMID:16473916
Luo, B; Li, J B; Huang, G H; Li, H L
2006-05-15
This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural non-point source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters provided by statistical analysis of the simulation outcomes of a distributed water quality model. The developed model can handle the tradeoff between agricultural revenue and "off-site" water quality concerns under random effluent discharge for a land retirement scheme, by minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the croplands most sensitive with respect to pollution can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. The obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be kept within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties.
NASA Astrophysics Data System (ADS)
Ross, Steven M.
A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.
Eigenvalue density of linear stochastic dynamical systems: A random matrix approach
NASA Astrophysics Data System (ADS)
Adhikari, S.; Pastur, L.; Lytova, A.; Du Bois, J.
2012-02-01
Eigenvalue problems play an important role in the dynamic analysis of engineering systems modeled using the theory of linear structural mechanics. When uncertainties are considered, the eigenvalue problem becomes a random eigenvalue problem. In this paper the density of the eigenvalues of a discretized continuous system with uncertainty is discussed by considering the model where the system matrices are Wishart random matrices. An analytical expression involving the Stieltjes transform is derived for the density of the eigenvalues when the dimension of the corresponding random matrix becomes asymptotically large. The mean matrices and the dispersion parameters associated with the mass and stiffness matrices are all that is needed to obtain the density of the eigenvalues in the framework of the proposed approach. The applicability of a simple eigenvalue density function, known as the Marčenko-Pastur (MP) density, is investigated. The analytical results are demonstrated by numerical examples involving a plate and the tail boom of a helicopter with uncertain properties. The new results are validated using an experiment on a vibrating plate with randomly attached spring-mass oscillators, where 100 nominally identical samples are physically created and individually tested within a laboratory framework.
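For a white Wishart ensemble the asymptotic eigenvalue density mentioned above has a closed form. The following sketch, with arbitrary matrix dimensions, compares the eigenvalues of a sample covariance matrix of i.i.d. Gaussian data against the Marčenko-Pastur support; it illustrates the MP density itself, not the paper's dispersion-parameter formulation for mass and stiffness matrices.

```python
import numpy as np

n, p = 1000, 250                    # samples x dimension; ratio c = p/n = 0.25
c = p / n
rng = np.random.default_rng(0)
X = rng.standard_normal((n, p))
W = X.T @ X / n                     # sample covariance (white Wishart matrix)
eigs = np.linalg.eigvalsh(W)

# Marchenko-Pastur support edges and density (unit variance, c <= 1)
lam_minus = (1 - np.sqrt(c)) ** 2
lam_plus = (1 + np.sqrt(c)) ** 2

def mp_density(x):
    inside = (x > lam_minus) & (x < lam_plus)
    d = np.zeros_like(x)
    d[inside] = np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus)) / (
        2 * np.pi * c * x[inside])
    return d

# Empirical check: essentially all eigenvalues fall inside the MP support
frac_in = np.mean((eigs > lam_minus - 0.1) & (eigs < lam_plus + 0.1))
```

A histogram of `eigs` against `mp_density` makes the agreement visible even at these moderate dimensions, which is why the MP density is a useful baseline for random eigenvalue problems.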
NASA Astrophysics Data System (ADS)
Cantet, P.; Arnaud, P.
2012-10-01
Over the last decade, copulas have become more and more widespread in the construction of hydrological models. Unlike the traditionally used multivariate statistics, this tool enables scientists to model different dependence structures without their drawbacks. The authors propose to apply copulas to improve the performance of an existing model. The hourly rainfall stochastic model SHYPRE is based on the simulation of descriptive variables; it generates long series of hourly rainfall and enables the estimation of distribution quantiles for different climates. The paper focuses on the relationship between two variables describing the rainfall signal. First, Kendall's tau is estimated at each of the 217 rain gauge stations in France; then the False Discovery Rate procedure is used to identify stations for which the dependence is significant. Among three common Archimedean copulas, a single 2-copula is chosen to model this dependence at every station. Modelling the dependence leads to an obvious improvement in the reproduction of the standard and extreme statistics of maximum rainfall, especially for sub-daily rainfall. An accuracy test for the extreme values shows the good asymptotic behaviour of the new rainfall generator version and the impact of the copula choice on extreme quantile estimation.
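The link between an Archimedean copula parameter and Kendall's tau can be illustrated with the Clayton copula, for which tau = theta/(theta + 2). The sketch below is illustrative only: the abstract does not state which copula was selected, and the sample size and parameter here are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 2.0                  # Clayton parameter; Kendall tau = theta/(theta+2)
n = 2000

# Conditional sampling of the Clayton copula: draw u, then invert C(v|u) = w
u = rng.random(n)
w = rng.random(n)
v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Empirical Kendall's tau via a pairwise concordance sign count
def kendall_tau(x, y):
    s = 0.0
    for i in range(len(x)):
        s += np.sum(np.sign((x[i] - x[i + 1:]) * (y[i] - y[i + 1:])))
    return 2 * s / (len(x) * (len(x) - 1))

tau_emp = kendall_tau(u, v)
tau_theory = theta / (theta + 2)
```

In practice the step runs in reverse: tau is estimated from station data and inverted to fit the copula parameter, after which the fitted copula drives the joint simulation of the two rainfall descriptors.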
Assessment of hydraulic parameters in the phreatic aquifer of Settolo (Italy): a stochastic approach
NASA Astrophysics Data System (ADS)
Salandin, P.; Zovi, F.; Camporese, M.
2012-12-01
In this work we present and test against real field data an inversion approach for the identification of hydraulic parameters at the aquifer scale. Our test field is the alluvial phreatic aquifer of Settolo, located along the left bank of the Piave River in a piedmont area in Northeastern Italy, with an extension of approximately 6 km² and exhibiting heterogeneities of the geological structures at both the local and intermediate scales. The area is characterized by the alluvial sediments (mainly gravel in a sandy matrix) deposited by the Piave River during the Last Glacial Maximum; the subsurface, with an average aquifer thickness of 50 m, is crossed by paleo-riverbeds that probably represent the main hydrogeological units from which water is withdrawn. The interactions between watercourses and the aquifer, the recharge linked to precipitation, and the dynamics of partially penetrating extraction wells must be properly reproduced for an effective protection and a sustainable exploitation of the water resources. To this end, in cooperation with Alto Trevigiano Servizi S.r.l., the local water resources management company, a careful site characterization has been in progress since 2009, involving a number of different measurements and scales. Besides surface ERT, water quality surveys, and a tracer test, we highlight here the role of 18 continuously monitored observation wells, available in the study area for the measurement of water table dynamics and the calibration/validation of groundwater models. A preliminary comparison with the results of a three-dimensional Richards model demonstrated that the site can be properly described by a two-dimensional finite element solver of the nonlinear Dupuit-Boussinesq equation, saving CPU time and computer storage. Starting from an ensemble of randomly generated and spatially correlated hydraulic conductivity (K) fields, the fit between water table observations and model predictions is measured.
A stochastic filtering approach to recover strain images from quasi-static ultrasound elastography
2014-01-01
Background Model-based reconstruction algorithms have shown potential over conventional strain-based methods in quasi-static elastographic imaging by using realistic finite element (FE) or biomechanical model constraints. However, it remains difficult to properly handle the discrepancies between the model constraint and the ultrasound data, and the measurement noise. Methods In this paper, we explore the use of a Kalman filtering algorithm for strain estimation in quasi-static ultrasound elastography. The proposed strategy formulates the displacement distribution through biomechanical models, and the ultrasound-derived measurements through observation equations. Within this filtering strategy, the discrepancies are quantitatively modelled as one Gaussian white noise process, and the measurement noise of the ultrasound data as another, independent Gaussian white noise process. The optimal estimate of the kinematic functions, i.e. the full displacement and velocity field, is computed by the Kalman filter, and the strain images are then easily calculated from the estimated displacement field. Results The accuracy and robustness of the proposed framework are first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from elastography phantoms and patients, with favourable results. Conclusions The potential of our algorithm is to provide the distribution of mechanically meaningful strain under a proper biomechanical model constraint. We address the model-data discrepancy and the measurement noise by introducing process noise and measurement noise into our framework, and the mechanically meaningful strain is then estimated by the Kalman filter in the minimum mean square error (MMSE) sense. PMID:24521481
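The filtering strategy described above can be reduced to its linear-Gaussian core. The sketch below is a minimal one-dimensional illustration, assuming a constant-velocity kinematic model in place of the paper's biomechanical FE constraint; the process noise Q stands in for the model-data discrepancy and R for the ultrasound measurement noise, with all values invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# State: [displacement, velocity]; constant-velocity kinematic model
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])                     # only displacement is measured
Q = 1e-4 * np.eye(2)                           # process noise (model-data gap)
R = np.array([[0.05]])                         # measurement noise

# Simulate a true trajectory and noisy ultrasound-like measurements
x_true = np.array([0.0, 0.5])
xs, zs = [], []
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    xs.append(x_true.copy())
    zs.append(H @ x_true + rng.normal(0, np.sqrt(R[0, 0])))

# Kalman filter: predict with the model, correct with each measurement
x = np.zeros(2)
P = np.eye(2)
est = []
for z in zs:
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()           # correct
    P = (np.eye(2) - K @ H) @ P
    est.append(x.copy())

err = np.mean([abs(e[0] - t[0]) for e, t in zip(est, xs)])
```

In the elastography setting the scalar displacement becomes the full nodal displacement field of the FE mesh, but the predict-correct structure and the MMSE interpretation carry over unchanged.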
Solan, Eilon; Vieille, Nicolas
2015-01-01
In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883
A rainwater harvesting system reliability model based on nonparametric stochastic rainfall generator
NASA Astrophysics Data System (ADS)
Basinger, Matt; Montalto, Franco; Lall, Upmanu
2010-10-01
The reliability with which harvested rainwater can be used for flushing toilets, irrigating gardens, and topping off air-conditioning units serving multifamily residential buildings in New York City is assessed using a new rainwater harvesting (RWH) system reliability model. Although demonstrated with a specific case study, the model is portable because it is based on a nonparametric rainfall generation procedure utilizing a bootstrapped Markov chain. Precipitation occurrence is simulated using transition probabilities derived for each day of the year from the historical probability of wet- and dry-day state changes. Precipitation amounts are selected from a matrix of historical values within a moving 15-day window centered on the target day. RWH system reliability is determined for user-specified catchment area and tank volume ranges using precipitation ensembles generated with the described stochastic procedure. The reliability with which NYC backyard gardens can be irrigated and air-conditioning units supplied with water harvested from local roofs exceeds 80% and 90%, respectively, for the entire range of catchment areas and tank volumes considered in the analysis. For RWH systems installed on the most commonly occurring rooftop catchment areas found in NYC (51-75 m²), toilet flushing demand can be met with 7-40% reliability, with the lower end of the range representing buildings with high-flow toilets and no storage elements, and the upper end representing buildings featuring low-flow fixtures and storage tanks of up to 5 m³. When the reliability curves developed here are used to size RWH systems to flush the low-flow toilets of all multifamily buildings found in a typical residential neighborhood in the Bronx, rooftop runoff inputs to the sewer system are reduced by approximately 28% over an average rainfall year, and potable water demand is reduced by approximately 53%.
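The occurrence-and-amount structure of the generator can be sketched with a first-order two-state Markov chain plus bootstrap resampling. The example below pools all wet-day amounts instead of using the paper's moving 15-day seasonal window, and the "historical" record is synthetic; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 30-year daily record: wet-day flags and rainfall depths (mm)
n_days = 30 * 365
wet_hist = rng.random(n_days) < 0.3
amounts_hist = np.where(wet_hist, rng.gamma(2.0, 5.0, n_days), 0.0)

# First-order Markov chain occurrence: P(wet | dry) and P(wet | wet)
p01 = np.mean(wet_hist[1:][~wet_hist[:-1]])   # dry -> wet
p11 = np.mean(wet_hist[1:][wet_hist[:-1]])    # wet -> wet

# Generate one synthetic year; amounts are bootstrapped from historical wet days
wet_amounts = amounts_hist[wet_hist]
sim = np.zeros(365)
wet = False
for d in range(365):
    wet = rng.random() < (p11 if wet else p01)
    if wet:
        sim[d] = rng.choice(wet_amounts)

wet_frac = np.mean(sim > 0)
```

Feeding many such synthetic years through a tank mass-balance model, and counting the fraction of days demand is met, yields the reliability curves used to size catchment areas and tanks.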
Enhanced detection of rolling element bearing fault based on stochastic resonance
NASA Astrophysics Data System (ADS)
Zhang, Xiaofei; Hu, Niaoqing; Cheng, Zhe; Hu, Lei
2012-11-01
Early bearing faults can generate a series of weak impacts, and all the influencing factors in measurement may degrade the vibration signal. Currently, enhanced bearing fault detection based on stochastic resonance (SR) requires expensive computation and a high sampling rate, which demands high-quality software and hardware for fault diagnosis. In order to extract the bearing characteristic frequency components, SR normalized scale transform procedures are presented and a circuit module is designed based on parameter-tuned bistable SR. In a simulation test, discrete and analog sinusoidal signals under heavy noise are enhanced by the SR normalized scale transform and the circuit module, respectively. Two enhanced bearing fault detection strategies are proposed. One is realized by pure computation, applying the normalized scale transform to the sampled vibration signal; the other is carried out by the designed SR hardware, applying the circuit module to the analog vibration signal directly. The first strategy is flexible for discrete signal processing, while the second demands a much lower sampling frequency and less computational cost. Application of the two strategies to bearing inner-race fault detection on a test rig shows that the local signal-to-noise ratio of the characteristic components obtained by the proposed methods is enhanced by about 50% compared with band-pass envelope analysis for the bearing with the weaker fault. In addition, helicopter transmission bearing fault detection validates the effectiveness of the hardware-based enhanced detection strategy. The combination of the SR normalized scale transform and the circuit module can meet the needs of different application fields and conditions, thus providing a practical scheme for enhanced detection of bearing faults.
NASA Astrophysics Data System (ADS)
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated by performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since the uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, land use change dynamics were derived for a 15-year planning horizon. Finally, a maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained, together with the corresponding land use allocations in the three planning periods. The resulting soil erosion was found to be reduced and controlled at a tolerable level over the watershed. These results confirm that the developed model is a useful tool for implementing land use management: not only does it allow local decision makers to optimize land use allocation, it can also help answer how to accomplish land use changes.
A suboptimal stochastic controller for an N-body spacecraft
NASA Technical Reports Server (NTRS)
Larson, V.
1973-01-01
Considerable attention in the open literature is being focused on the problem of developing a suitable set of deterministic dynamical equations for a complex spacecraft. This paper considers the problem of determining a stochastic optimal controller for an N-body spacecraft. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained herein for a complicated model of a spacecraft uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector. It can be easily implemented, and it enables system performance to be significantly improved.
Existence Theory for Stochastic Power Law Fluids
NASA Astrophysics Data System (ADS)
Breit, Dominic
2015-06-01
We consider the equations of motion for an incompressible non-Newtonian fluid in a bounded Lipschitz domain during the time interval (0, T), together with a stochastic perturbation driven by a Brownian motion W. The balance of momentum involves the velocity v, the pressure, and an external volume force f. We assume the common power law model and show the existence of a martingale weak solution for a suitable range of the power-law exponent. Our approach is based on the L∞-truncation and a harmonic pressure decomposition, which are adapted to the stochastic setting.
NASA Astrophysics Data System (ADS)
Gammaitoni, Luca; Hänggi, Peter; Jung, Peter; Marchesoni, Fabio
1998-01-01
Over the last two decades, stochastic resonance has continuously attracted considerable attention. The term is given to a phenomenon that is manifest in nonlinear systems whereby generally feeble input information (such as a weak signal) can be amplified and optimized by the assistance of noise. The effect requires three basic ingredients: (i) an energetic activation barrier or, more generally, a form of threshold; (ii) a weak coherent input (such as a periodic signal); (iii) a source of noise that is inherent in the system, or that adds to the coherent input. Given these features, the response of the system undergoes resonance-like behavior as a function of the noise level; hence the name stochastic resonance. The underlying mechanism is fairly simple and robust. As a consequence, stochastic resonance has been observed in a large variety of systems, including bistable ring lasers, semiconductor devices, chemical reactions, and mechanoreceptor cells in the tail fan of a crayfish. In this paper, the authors report, interpret, and extend much of the current understanding of the theory and physics of stochastic resonance. They introduce the readers to the basic features of stochastic resonance and its recent history. Definitions of the characteristic quantities that are important to quantify stochastic resonance, together with the most important tools necessary to actually compute those quantities, are presented. The essence of classical stochastic resonance theory is presented, and important applications of stochastic resonance in nonlinear optics, solid state devices, and neurophysiology are described and put into context with stochastic resonance theory. More elaborate and recent developments of stochastic resonance theory are discussed, ranging from fundamental quantum properties, which are important at low temperatures, over spatiotemporal aspects in spatially distributed systems, to realizations in chaotic maps. In conclusion the authors summarize the achievements
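The three ingredients listed above can be reproduced in a minimal sketch, assuming an overdamped double-well potential V(x) = -x²/2 + x⁴/4 with a weak, sub-threshold periodic drive, integrated by Euler-Maruyama; parameter values are illustrative, not from the paper.

```python
import numpy as np

def simulate(D, A=0.1, omega=0.01, dt=0.01, n_steps=200_000, seed=0):
    """Euler-Maruyama for dx = (x - x^3 + A cos(omega t)) dt + sqrt(2D) dW.
    A = 0.1 is below the static switching threshold, so without noise
    the particle cannot leave its well."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0  # start in the right well
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    for i in range(1, n_steps):
        t = i * dt
        x[i] = x[i-1] + (x[i-1] - x[i-1]**3 + A * np.cos(omega * t)) * dt + noise[i]
    return x

def well_crossings(x):
    """Count sign changes, i.e. hops between the two wells."""
    s = np.sign(x)
    s = s[s != 0]
    return int(np.sum(s[1:] != s[:-1]))
```

Without noise the weak drive alone produces no hops; with a moderate noise level the trajectory hops between wells, which is the raw material the resonance condition organizes.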
Stochastically driven genetic circuits
NASA Astrophysics Data System (ADS)
Tsimring, L. S.; Volfson, D.; Hasty, J.
2006-06-01
Transcriptional regulation in small genetic circuits exhibits large stochastic fluctuations. Recent experiments have shown that a significant fraction of these fluctuations is caused by extrinsic factors. In this paper we review several theoretical and computational approaches to modeling of small genetic circuits driven by extrinsic stochastic processes. We propose a simplified approach to this problem, which can be used in the case when extrinsic fluctuations dominate the stochastic dynamics of the circuit (as appears to be the case in eukaryotes). This approach is applied to a model of a single nonregulated gene that is driven by a certain gating process that affects the rate of transcription, and to a simplified version of the galactose utilization circuit in yeast.
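A minimal version of a single gene whose transcription rate is gated by an extrinsic two-state (telegraph) process can be simulated with Gillespie's algorithm. The rates below are illustrative assumptions; the time-averaged mRNA count should approach k·p_on/γ with p_on = a/(a+b).

```python
import numpy as np

def gillespie_gated_gene(k=10.0, gamma=1.0, a=1.0, b=1.0, T=5000.0, seed=1):
    """SSA for: gate off->on (rate a), gate on->off (rate b),
    transcription 0 -> m+1 (rate k while the gate is on),
    degradation m -> m-1 (rate gamma*m). Returns the time-averaged m."""
    rng = np.random.default_rng(seed)
    t, gate, m = 0.0, 1, 0
    acc = 0.0  # running time integral of m
    while t < T:
        rates = [a * (1 - gate), b * gate, k * gate, gamma * m]
        total = sum(rates)
        dt = min(rng.exponential(1.0 / total), T - t)
        acc += m * dt
        t += dt
        if t >= T:
            break
        u = rng.uniform(0.0, total)
        if u < rates[0]:
            gate = 1
        elif u < rates[0] + rates[1]:
            gate = 0
        elif u < rates[0] + rates[1] + rates[2]:
            m += 1
        else:
            m -= 1
    return acc / T
```

With a = b = 1, k = 10, γ = 1 the stationary mean is k·p_on/γ = 5, while the gating makes the fluctuations super-Poissonian, the signature of extrinsic noise.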
NASA Astrophysics Data System (ADS)
Dabaghi, Mayssa Nabil
A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as the natural variability of motions for a given earthquake and site scenario. By fitting the model to a database of recorded near-fault ground motions with known earthquake source and site characteristics, empirical "observations" of the model parameters are obtained. These observations are used to develop predictive equations for the model parameters in terms of a small number of earthquake source and site characteristics. Functional forms for the predictive equations that are consistent with seismological theory are employed. A site-based simulation procedure that employs the proposed stochastic model and predictive equations is developed to generate synthetic near-fault ground motions at a site. The procedure is formulated in terms of information about the earthquake design scenario that is normally available to a design engineer. Not all near-fault ground motions contain a forward directivity pulse, even when the conditions for such a pulse are favorable. The proposed procedure produces pulselike and non-pulselike motions in the same proportions as they naturally occur among recorded near-fault ground motions for a given design scenario. The proposed models and simulation procedure are validated by several means. Synthetic ground motion time series with fitted parameter values are compared with the corresponding recorded motions. The proposed empirical predictive relations are compared to similar relations available in the literature. The overall simulation procedure is
Boedicker, J.; Li, L; Kline, T; Ismagilov, R
2008-01-01
This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as stochastic confinement. Using the microfluidic hybrid method, this technology was used to determine the antibiogram, or chart of antibiotic sensitivity, of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.
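The statistics of stochastic confinement follow from Poisson loading of plugs: when the mean occupancy is well below one cell per plug, almost every occupied plug holds exactly one bacterium. A small illustrative sketch (not from the paper):

```python
import math

def occupancy(lam):
    """Poisson plug loading with mean lam cells/plug:
    returns P(empty), P(exactly one cell), P(two or more)."""
    p0 = math.exp(-lam)
    p1 = lam * math.exp(-lam)
    return p0, p1, 1.0 - p0 - p1

def single_cell_fraction(lam):
    """Among occupied plugs, the fraction containing exactly one cell."""
    p0, p1, _ = occupancy(lam)
    return p1 / (1.0 - p0)
```

At a loading of 0.1 cells per plug, over 95% of occupied plugs are single-cell, which is what makes per-plug readouts effectively single-cell measurements.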
NASA Astrophysics Data System (ADS)
Pivovarov, Dmytro; Steinmann, Paul
2016-09-01
In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.
Jenny, Patrick; Torrilhon, Manuel; Heinz, Stefan
2010-02-20
In this paper, a stochastic model is presented to simulate the flow of gases which are not in thermodynamic equilibrium, as in rarefied or micro situations. For the interaction of a particle with others, statistical moments of the local ensemble have to be evaluated, but unlike in molecular dynamics simulations or DSMC, no collisions between computational particles are considered. In addition, a novel integration technique allows for time steps independent of the stochastic time scale. The stochastic model represents a Fokker-Planck equation in the kinetic description, which can be viewed as an approximation to the Boltzmann equation. This allows for a rigorous investigation of the relation between the new model and classical fluid and kinetic equations. The fluid dynamic equations of Navier-Stokes and Fourier are fully recovered for small relaxation times, while for larger values the new model extends into the kinetic regime. Numerical studies demonstrate that the stochastic model is consistent with Navier-Stokes in that limit, but also that the results become significantly different if the conditions for equilibrium are invalid. The application to the Knudsen paradox demonstrates the correctness and relevance of this development, and comparisons with existing kinetic equations and standard solution algorithms reveal its advantages. Moreover, results of a test case with geometrically complex boundaries are presented.
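At the particle level, a Fokker-Planck description corresponds to a Langevin model for the velocities. A minimal sketch with Ornstein-Uhlenbeck relaxation (illustrative parameters, far simpler than the paper's model) shows an ensemble relaxing to a Maxwellian of variance kT regardless of its initial state:

```python
import numpy as np

def ou_velocity(n=50_000, trel=1.0, kT=1.0, dt=0.01, n_steps=500, seed=9):
    """Euler-Maruyama for dv = -v/trel dt + sqrt(2 kT/trel) dW applied to an
    ensemble of n particles started far from equilibrium at v = 3."""
    rng = np.random.default_rng(seed)
    v = np.full(n, 3.0)
    s = np.sqrt(2.0 * kT / trel * dt)  # noise amplitude per step
    for _ in range(n_steps):
        v += -v / trel * dt + s * rng.standard_normal(n)
    return v
```

After five relaxation times the ensemble mean has decayed to nearly zero and the velocity variance sits at kT, the equilibrium (Maxwellian) value.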
Wang, Xugao; Wiegand, Thorsten; Kraft, Nathan J B; Swenson, Nathan G; Davies, Stuart J; Hao, Zhanqing; Howe, Robert; Lin, Yiching; Ma, Keping; Mi, Xiangcheng; Su, Sheng-Hsin; Sun, I-fang; Wolf, Amy
2016-02-01
Recent theory predicts that stochastic dilution effects may result in species-rich communities with statistically independent species spatial distributions, even if the underlying ecological processes structuring the community are driven by deterministic niche differences. Stochastic dilution is a consequence of the stochastic geometry of biodiversity where the identities of the nearest neighbors of individuals of a given species are largely unpredictable. Under such circumstances, the outcome of deterministic species interactions may vary greatly among individuals of a given species. Consequently, nonrandom patterns in the biotic neighborhoods of species, which might be expected from coexistence or community assembly theory (e.g., individuals of a given species are neighbored by phylogenetically similar species), are weakened or do not emerge, resulting in statistical independence of species spatial distributions. We used data on phylogenetic and functional similarity of tree species in five large forest dynamics plots located across a gradient of species richness to test predictions of the stochastic dilution hypothesis. To quantify the biotic neighborhood of a focal species we used the mean phylogenetic (or functional) dissimilarity of the individuals of the focal species to all species within a local neighborhood. We then compared the biotic neighborhood of species to predictions from stochastic null models to test if a focal species was surrounded by more or less similar species than expected by chance. The proportions of focal species that showed spatial independence with respect to their biotic neighborhoods increased with total species richness. Locally dominant, high-abundance species were more likely to be surrounded by species that were statistically more similar or more dissimilar than expected by chance. Our results suggest that stochasticity may play a stronger role in shaping the spatial structure of species rich tropical forest communities than it
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano
2011-12-01
The evaluation of the expected peak ground motion caused by an earthquake is an important problem in earthquake seismology. It is particularly important for regions where strong-motion data are lacking. With the approach presented in this study of using data from small earthquakes, it is possible to extrapolate the peak motion parameters beyond the magnitude range of the weak-motion data set on which they are calculated. To provide a description of the high-frequency attenuation and ground motion parameters in southern Italy, we used seismic recordings coming from two different projects: SAPTEX (Southern Apennines Tomography Experiment) and CAT/SCAN (Calabria Apennine Tyrrhenian - Subduction Collision Accretion Network). We used about 10,000 records with magnitudes between M=2.5 and M=4.7. Using a regression model with the large number of weak-motion data, the regional propagation and the absolute source scaling were determined. To properly calibrate the source scaling it was necessary to compute moment magnitudes of several events in the data set. We computed the moment tensor solutions using the "Cut And Paste" and the SLUMT methods. Both methods determine the source depth, moment magnitude and focal mechanisms using a grid search technique. The methods provide quality solutions in the area for events in a magnitude range (2.5-4.5) that has been too small to be included in the Italian national earthquake catalogues. The derived database of focal mechanisms allowed us to better detail the transitional area in the Messina Strait between the extensional domain related to subduction trench retreat (southern Calabria) and the compressional one associated with continental collision (central-western Sicily). Stochastic simulations are generated for finite-fault ruptures using the derived propagation parameters to predict the absolute peaks of the ground acceleration for several faults, magnitudes, and distance ranges, as well as beyond the magnitude range of the weak-motion data set.
Cui, Baotong
2010-01-01
This paper analyzes the global robust exponential stability, in the mean square sense, of stochastic discrete-time genetic regulatory networks with stochastic delays and parameter uncertainties. In contrast to previous work, the time-varying delays are assumed to be stochastic, and their variation ranges and probability distributions are explored. Based on the stochastic analysis approach and some analysis techniques, several sufficient criteria for the global robust exponential stability in the mean square sense of the networks are derived. Moreover, two numerical examples are presented to show the effectiveness of the obtained results. PMID:21629588
Stochastic Turing patterns in the Brusselator model.
Biancalani, Tommaso; Fanelli, Duccio; Di Patti, Francesca
2010-04-01
A stochastic version of the Brusselator model is proposed and studied via the system size expansion. The mean-field equations are derived and shown to yield organized Turing patterns within a specific parameter region. When determining the Turing condition for instability, we pay particular attention to the role of cross-diffusive terms, often neglected in the heuristic derivation of reaction-diffusion schemes. Stochastic fluctuations are shown to give rise to spatially ordered solutions, sharing the same quantitative characteristics of the mean-field based Turing scenario in terms of excited wavelengths. Interestingly, the region of parameters yielding the stochastic self-organization is wider than that determined via the conventional Turing approach, suggesting that the condition for spatial order to appear can be less stringent than customarily believed. PMID:20481815
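A single-compartment (well-mixed) stochastic Brusselator can be simulated with the standard Gillespie algorithm. This sketch omits diffusion and the system size expansion, and the parameters are illustrative choices in the stable regime (b < 1 + a²), where the fluctuating X count hovers around the deterministic fixed point.

```python
import numpy as np

def brusselator_ssa(a=2.0, b=3.0, omega=100, T=100.0, seed=2):
    """Gillespie SSA for the well-mixed Brusselator
    (0 -> X, X -> Y, 2X + Y -> 3X, X -> 0), system size omega.
    Deterministic fixed point: X* = a*omega, Y* = (b/a)*omega.
    Returns the time-averaged X count."""
    rng = np.random.default_rng(seed)
    x, y = int(a * omega), int(b / a * omega)
    t, acc = 0.0, 0.0
    while t < T:
        r1 = a * omega                      # 0 -> X
        r2 = b * x                          # X -> Y
        r3 = x * (x - 1) * y / omega**2     # 2X + Y -> 3X
        r4 = 1.0 * x                        # X -> 0
        total = r1 + r2 + r3 + r4
        dt = min(rng.exponential(1.0 / total), T - t)
        acc += x * dt
        t += dt
        if t >= T:
            break
        u = rng.uniform(0.0, total)
        if u < r1:
            x += 1
        elif u < r1 + r2:
            x, y = x - 1, y + 1
        elif u < r1 + r2 + r3:
            x, y = x + 1, y - 1
        else:
            x -= 1
    return acc / T
```

With a = 2, b = 3 and Ω = 100 the fixed point is X* = 200; the stochastic time average stays close to it, while the O(√Ω) fluctuations are the raw material that, with diffusion added, seeds stochastic Turing patterns.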
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
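The collapse to a one-dimensional summation can be seen in the simplest setting of one Gaussian random parameter, where collocation reduces to Gauss-Hermite quadrature over deterministic solver evaluations. This sketch uses NumPy's probabilists' Hermite nodes and is illustrative, not the report's algorithm verbatim.

```python
import numpy as np

def collocation_mean(f, n_nodes=20):
    """Mean of f(xi) for xi ~ N(0,1) via probabilists' Gauss-Hermite
    collocation: the 'solver' f is evaluated at deterministic nodes and
    the statistics come from a single 1-D weighted sum."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)  # weight exp(-x^2/2)
    return float(np.dot(w, f(x)) / np.sqrt(2.0 * np.pi))
```

For f = exp the exact mean is E[e^ξ] = e^{1/2}; twenty nodes already reproduce it to high precision, far cheaper than Monte Carlo sampling of the same statistic.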
Materiality in a Practice-Based Approach
ERIC Educational Resources Information Center
Svabo, Connie
2009-01-01
Purpose: The paper aims to provide an overview of the vocabulary for materiality which is used by practice-based approaches to organizational knowing. Design/methodology/approach: The overview is theoretically generated and is based on the anthology Knowing in Organizations: A Practice-based Approach edited by Nicolini, Gherardi and Yanow. The…
2014-01-01
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations. PMID:24939084
NASA Astrophysics Data System (ADS)
Gonçalves, Bruno; Ajelli, Marco; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José; Merler, Stefano; Vespignani, Alessandro
2010-03-01
We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the evolution of a baseline pandemic event in Italy. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. Both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes.
Forgoston, Eric; Billings, Lora; Yecko, Philip; Schwartz, Ira B.
2011-01-01
We consider the problem of stochastic prediction and control in a time-dependent stochastic environment, such as the ocean, where escape from an almost invariant region occurs due to random fluctuations. We determine high-probability control-actuation sets by computing regions of uncertainty, almost invariant sets, and Lagrangian coherent structures. The combination of geometric and probabilistic methods allows us to design regions of control, which provide an increase in loitering time while minimizing the amount of control actuation. We show how the loitering time in almost invariant sets scales exponentially with respect to the control actuation, causing an exponential increase in loitering times with only small changes in actuation force. The result is that the control actuation makes almost invariant sets more invariant. PMID:21456830
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints compared to a stochastic model with nonanticipative constraints, which means our proposed model reduces the computation complexity. PMID:26180842
Craven, C Jeremy
2016-01-01
We present a reanalysis of the stochastic model of organelle production and show that the equilibrium distributions for the organelle numbers predicted by this model can be readily calculated in three different scenarios. These three distributions can be identified as standard distributions, and the corresponding exact formulae for their mean and variance can therefore be used in further analysis. This removes the need to rely on stochastic simulations or approximate formulae (derived using the fluctuation dissipation theorem). These calculations allow for further analysis of the predictions of the model. On the basis of this we question the extent to which the model can be used to conclude that peroxisome biogenesis is dominated by de novo production when Saccharomyces cerevisiae cells are grown on glucose medium. DOI: http://dx.doi.org/10.7554/eLife.10167.001 PMID:26783763
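For the simplest such scenario, a birth-death chain with de novo production at rate k, fission at rate λn, and decay at rate γn, the stationary distribution can indeed be computed exactly via detailed balance, and its mean k/(γ−λ) and variance kγ/(γ−λ)² follow in closed form. A sketch with illustrative rates (not the paper's fitted values):

```python
import numpy as np

def stationary_pmf(k=2.0, lam=0.5, gamma=1.0, n_max=500):
    """Stationary distribution of the organelle-number birth-death chain:
    up-rate k + lam*n (de novo + fission), down-rate gamma*(n+1).
    Detailed balance gives p[n+1] = p[n] * (k + lam*n) / (gamma*(n+1))."""
    p = np.empty(n_max)
    p[0] = 1.0
    for n in range(n_max - 1):
        p[n + 1] = p[n] * (k + lam * n) / (gamma * (n + 1))
    return p / p.sum()

def mean_var(p):
    n = np.arange(len(p))
    m = float(np.dot(n, p))
    return m, float(np.dot((n - m) ** 2, p))
```

With k = 2, λ = 0.5, γ = 1 the exact mean is 2/0.5 = 4 and the variance 2/0.25 = 8 (Fano factor 2), recovered here without any stochastic simulation, which is the point of the reanalysis.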
Stochastic models of intracellular calcium signals
NASA Astrophysics Data System (ADS)
Rüdiger, Sten
2014-01-01
Cellular signaling operates in a noisy environment shaped by low molecular concentrations and cellular heterogeneity. For calcium release through intracellular channels, one of the most important cellular signaling mechanisms, feedback by liberated calcium endows fluctuations with critical functions in signal generation and formation. This review first describes under which general conditions the environment makes stochasticity relevant, and under which conditions approximate or deterministic equations suffice. This analysis provides a framework from which one can deduce an efficient hybrid description combining stochastic and deterministic evolution laws. Within the hybrid approach, Markov chains model the gating of channels, while the concentrations of calcium and calcium-binding molecules (buffers) are described by reaction-diffusion equations. The article further focuses on the spatial representation of subcellular calcium domains related to intracellular calcium channels. It presents analysis for single channels and clusters of channels and reviews the effects of buffers on the calcium release. For clustered channels, we discuss the application and validity of coarse-graining as well as approaches based on continuous gating variables (Fokker-Planck and chemical Langevin equations). Comparison with recent experiments substantiates the stochastic and spatial approach, identifies minimal requirements for realistic modeling, and facilitates an understanding of collective channel behavior. At the end of the review, implications of stochastic and local modeling for the generation and properties of cell-wide release and the integration of calcium dynamics into cellular signaling models are discussed.
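The hybrid idea, Markov gating coupled to deterministic concentration dynamics, can be sketched for a single two-state channel driving a linear calcium ODE that is integrated exactly between jumps. Parameters are illustrative; the time-averaged concentration should approach J·τ·p_open with p_open = α/(α+β).

```python
import numpy as np

def hybrid_calcium(alpha=1.0, beta=1.0, J=10.0, tau=0.5, T=2000.0, seed=4):
    """Hybrid simulation: a two-state channel (opening rate alpha, closing
    rate beta) gates the influx in dc/dt = J*open - c/tau. Between jumps the
    ODE is solved exactly; returns the time-averaged concentration."""
    rng = np.random.default_rng(seed)
    t, c, open_ = 0.0, 0.0, 0
    acc = 0.0
    while t < T:
        rate = alpha if open_ == 0 else beta
        d = min(rng.exponential(1.0 / rate), T - t)
        c_inf = J * open_ * tau  # asymptotic level during this interval
        # exact time integral of c over the interval of length d
        acc += c_inf * d + (c - c_inf) * tau * (1.0 - np.exp(-d / tau))
        c = c_inf + (c - c_inf) * np.exp(-d / tau)
        t += d
        open_ ^= 1  # Markov jump: toggle the channel state
    return acc / T
```

With α = β = 1, J = 10 and τ = 0.5 the stationary mean is 10·0.5·0.5 = 2.5; treating only the gating stochastically while the concentration evolves deterministically is exactly the efficiency gain of the hybrid description.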
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.
2014-12-01
This research explores the interactions between data quantity, data quality and heterogeneity resolution on stochastic inversion of a physically based model. To further investigate aquifer heterogeneity, simulations are used to examine the impact of geostatistical models on inversion quality, as well as the spatial sensitivity to heterogeneity using local and global methods. The model domain is a two-dimensional steady-state confined aquifer with lateral flows through two hydrofacies with alternating patterns. To examine general effects, the control variable method was adopted to reveal the impact of three factors on estimated hydraulic conductivity (K) and hydraulic head boundary conditions (BCs): (1) data availability, (2) data error, and (3) characterization of heterogeneity. Results show that fewer data increase model sensitivity to measurement error and heterogeneity. Extremely large data errors can cause severe model deterioration, regardless of sufficient data availability or high resolution representation of heterogeneity. Smaller data errors can alleviate the bias caused by the limited observations. For heterogeneity resolution, once general patterns of geological structures are captured, its influence is minimal compared to the other factors. Next, two geostatistical models (spherical and exponential variograms) were used to explore the representation of heterogeneity under the same nugget effects. The results show that stochastic inversion based on the exponential variogram improves both the precision and accuracy of the inverse model, as compared to the spherical variogram. This difference is particularly important for determining accurate BCs through stochastic inversion. Last, sensitivity analysis was conducted to further investigate the effect of varying the K of each hydrofacies on model inversion. Results from the partial local method show that the inversion is more sensitive to perturbations of K in regions with high heterogeneity. Using the
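The two geostatistical models compared above differ in how they reach the sill: the spherical variogram attains it exactly at the range, while the exponential variogram approaches it asymptotically (practical range about three times its range parameter). A sketch of the two standard functions, with unit sill and range and zero nugget by default:

```python
import numpy as np

def spherical(h, sill=1.0, a=1.0, nugget=0.0):
    """Spherical variogram: reaches the sill exactly at the range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, np.where(h == 0, 0.0, g))

def exponential(h, sill=1.0, a=1.0, nugget=0.0):
    """Exponential variogram: approaches the sill asymptotically;
    practical range (95% of the sill) is about 3*a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.0 - np.exp(-h / a))
    return np.where(h == 0, 0.0, g)
```

The exponential model's steeper rise near the origin implies rougher short-range variability, one plausible reason the two choices lead to different inversion quality under the same nugget effect.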
NASA Astrophysics Data System (ADS)
Shioya, Tsubasa; Fujimoto, Yasutaka
In this paper, we introduce a simulator for ice thermal storage systems. The refrigeration system is modeled as a linear discrete-time system, and the least-squares method is used for system identification. However, it is difficult to accurately identify the switching time of the electromagnetic valve of the brine pipes attached to the showcases by this method. In order to overcome this difficulty, a simulator based on the stochastic switched ARX model is developed. The data obtained from the simulator are compared with actual data. We verify the effectiveness of the proposed simulator.
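The least-squares identification step can be sketched for a single first-order ARX model (the stochastic switching between regimes is omitted here). Data are generated from a known system with illustrative coefficients, which the fit then recovers:

```python
import numpy as np

def identify_arx(y, u):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1]; returns (a, b)."""
    Phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta

# synthetic data from a known first-order system (a=0.8, b=0.5)
rng = np.random.default_rng(5)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
a_hat, b_hat = identify_arx(y, u)
```

The switched ARX model in the paper runs one such regression per regime; the hard part, as the abstract notes, is deciding which regime (valve state) each sample belongs to.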
NASA Astrophysics Data System (ADS)
Sankaran, Sethuraman; Feinstein, Jeffrey; Marsden, Alison
2009-11-01
In the field of cardiovascular medicine, predictive finite element simulations that compute the hemodynamics of blood flow, particle residence times, and shear stresses induced on arterial walls could aid in surgical intervention. These simulations lack accurate input data and are often polluted with uncertainties in model geometry, blood inlet velocities and outlet boundary conditions. We develop a robust design framework to optimize geometrical parameters in cardiovascular simulations that accounts for diverse sources of uncertainty. Stochastic cost functions are incorporated into the design framework using their lower-order statistical moments. An adaptive stochastic collocation technique embedded within a derivative-free optimization technique is employed. Numerical examples representative of cardiovascular geometries, including robust design of various anastomoses, are presented, and the efficiency of the adaptive collocation algorithm is shown.
NASA Astrophysics Data System (ADS)
Wang, Hui; Wellmann, Florian
2016-04-01
It is generally accepted that 3D geological models inferred from observed data contain a certain amount of uncertainty. Uncertainty quantification and stochastic sampling methods are essential for gaining insight into the geological variability of subsurface structures. In the community of deterministic or traditional modelling techniques, classical geostatistical methods using boreholes (hard data sets) are still the most widely accepted, although they suffer certain drawbacks. Modern geophysical measurements provide us with regional data sets in 2D or 3D spaces, either directly from sensors or indirectly from inverse problem solving using observed signals (soft data sets). We propose a stochastic modelling framework to extract subsurface heterogeneity from multiple and complementary types of data. In the presented work, subsurface heterogeneity is considered as the "hidden link" among multiple spatial data sets as well as inversion results. Hidden Markov random field models are employed to perform 3D segmentation, which is the representation of the "hidden link". Finite Gaussian mixture models are adopted to characterize the statistical parameters of the multiple data sets. The uncertainties are quantified via a Gibbs sampling process under the Bayesian inferential framework. The proposed modelling framework is validated using two numerical examples, and the model behavior and convergence are well examined. It is shown that the presented stochastic modelling framework is a promising tool for 3D data fusion in the geological modelling and geophysics communities.
NASA Astrophysics Data System (ADS)
Wurm, Patrick; Ulz, Manfred H.
2016-10-01
The aim of this work is to provide an improved information exchange in hierarchical atomistic-to-continuum settings by applying stochastic approximation methods. For this purpose, a typical model belonging to this class is chosen and enhanced. On the macroscale of this particular two-scale model, the balance equations of continuum mechanics are solved using a nonlinear finite element formulation. The microscale, on which a canonical ensemble of statistical mechanics is simulated using molecular dynamics, replaces a classical material formulation. The constitutive behavior is obtained on the microscale by computing time averages. However, these time averages are corrupted by thermal noise, as the microscale cannot, in practice, be tracked for a sufficiently long period of time due to limited computational resources. This noise prevents the model from exhibiting classical convergence behavior and creates a setting that shows remarkable resemblance to iteration schemes known from stochastic approximation. This resemblance justifies the use of two averaging strategies known to improve the convergence behavior of stochastic approximation schemes under certain, fairly general, conditions. To demonstrate the effectiveness of the proposed strategies, three numerical examples are studied.
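The core idea, that averaging suppresses noise in a stochastic approximation iteration, can be sketched with a Polyak-Ruppert-style running average on a noisy Robbins-Monro recursion. The quadratic objective and step-size schedule below are illustrative assumptions, not the paper's two-scale model.

```python
import random

def sa_with_averaging(grad, x0, steps, gamma0=0.5, seed=1):
    """Robbins-Monro iteration with a Polyak-Ruppert running average.

    The raw iterate x_k is corrupted by additive noise (an analogue of
    thermal noise in the time averages); the running average xbar_k
    converges more smoothly under standard step-size conditions.
    """
    rng = random.Random(seed)
    x = x0
    xbar = 0.0
    for k in range(1, steps + 1):
        gamma = gamma0 / k ** 0.7                      # slowly decaying step size
        noisy_grad = grad(x) + rng.gauss(0.0, 1.0)     # noise-corrupted observation
        x -= gamma * noisy_grad
        xbar += (x - xbar) / k                         # running (Polyak) average
    return x, xbar

# Minimize f(x) = (x - 2)^2 / 2, so grad(x) = x - 2; optimum at x* = 2
x_raw, x_avg = sa_with_averaging(lambda x: x - 2.0, x0=0.0, steps=20000)
```

The averaged iterate sits closer to the optimum than any single noisy iterate would for an aggressive step size, which is exactly the property the paper exploits for the microscale time averages.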
An extended structure-based model based on a stochastic eddy-axis evolution equation
NASA Technical Reports Server (NTRS)
Kassinos, S. C.; Reynolds, W. C.
1995-01-01
We have proposed and implemented an extension of the structure-based model for weak deformations. It was shown that the extended model will correctly reduce to the form of standard k-e models for the case of equilibrium under weak mean strain. The realizability of the extended model is guaranteed by the method of its construction. The predictions of the proposed model were very good for rotating homogeneous shear flows and for irrotational axisymmetric contraction, but were seriously deficient in the case of plane strain and axisymmetric expansion. We have concluded that the problem behind these difficulties lies in the algebraic constitutive equation relating the Reynolds stresses to the structure parameters rather than in the slow model developed here. In its present form, this equation assumes that under irrotational strain the principal axes of the Reynolds stresses remain locked onto those of the eddy-axis tensor. This is correct in the RDT limit, but inappropriate under weaker mean strains, when the non-linear eddy-eddy interactions tend to misalign the two sets of principal axes and create some non-zero theta and gamma.
A stochastic framework for the micromechanical analysis of composites
Williams, T. O.
2004-01-01
The formulation of a stochastic micromechanical framework for the history-dependent analysis of composite materials is presented. The theory introduces the statistics of the underlying material microstructure through the incorporation of probability density functions (PDFs) for the different types of concentration tensors. Thus, the theory takes a fundamentally different approach to the analysis of stochastic composites than those theories that follow the current trend of treating stochastic effects by specifying PDFs for the different microstructural arrangements. The general equations governing the behavior of the different types of localization effects within the composite are presented. Based on these governing equations it is shown that in the case of two-phase composites the entire stochastic description for the history-dependent material behavior reduces to the description only of the PDFs for the elastic localization effects in conjunction with any desired set of deterministic constitutive relations for the individual phases.
Modeling a Nanocantilever-Based Biosensor Using a Stochastically Perturbed Harmonic Oscillator
NASA Astrophysics Data System (ADS)
Snyder, Patrick; Joshi, Amitabh; Serna, Juan D.
2014-05-01
Nanoscale biosensors are devices designed to detect analytes by combining biological components and physicochemical detectors. A well-known design of these sensors involves the implementation of nanocantilevers. These microscopic diving boards are coated with binding probes that have an affinity for a particular amino acid, enzyme or protein in living organisms. When these probes attract target particles, such as biomolecules, the binding of these particles changes the vibrating frequency of the cantilever. This process is random in nature and produces fluctuations in the frequency and damping of the cantilever. In this paper, we studied the effect of these fluctuations using a stochastically perturbed, classical harmonic oscillator.
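The model class described above can be sketched as a damped oscillator whose natural frequency jitters randomly at each time step, a toy analogue of binding events perturbing the cantilever's resonance. The parameter values and the simple per-step Gaussian frequency perturbation are illustrative assumptions, not the paper's formulation.

```python
import math
import random

def noisy_oscillator(omega0=1.0, gamma=0.05, sigma=0.02,
                     dt=1e-3, steps=50000, seed=2):
    """Semi-implicit Euler integration of a damped harmonic oscillator
    with a stochastically perturbed frequency:

        x'' + 2*gamma*x' + omega(t)^2 * x = 0,  omega(t) = omega0 + noise.
    """
    rng = random.Random(seed)
    x, v = 1.0, 0.0                                   # initial displacement, velocity
    for _ in range(steps):
        omega = omega0 + sigma * rng.gauss(0.0, 1.0)  # fluctuating frequency
        v += (-2.0 * gamma * v - omega * omega * x) * dt
        x += v * dt
    energy = 0.5 * (v * v + omega0 * omega0 * x * x)
    return x, v, energy

x_f, v_f, e_f = noisy_oscillator()
```

For weak frequency noise the damping dominates and the oscillation energy decays almost as in the deterministic case; stronger multiplicative noise broadens the resonance, which is the signature such sensors read out.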
Mass sensing based on deterministic and stochastic responses of elastically coupled nanocantilevers.
Gil-Santos, Eduardo; Ramos, Daniel; Jana, Anirban; Calleja, Montserrat; Raman, Arvind; Tamayo, Javier
2009-12-01
Coupled nanomechanical systems and their entangled eigenstates offer unique opportunities for the detection of ultrasmall masses. In this paper we show theoretically and experimentally that the stochastic and deterministic responses of a pair of coupled nanocantilevers provide different and complementary information about the added mass of an analyte and its location. This method allows the sensitive detection of minute quantities of mass even in the presence of large initial differences in the active masses of the two cantilevers. Finally, we show the fundamental limits in mass detection of this sensing paradigm.
NASA Astrophysics Data System (ADS)
Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi
2015-02-01
A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integration are added into the calculation of the incremental control signal. Based on the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to this reference is transformed into proportional and integral terms to produce an adaptive step-size update law for the SPGD algorithm. The adaptation of the step size steers the blind optimization toward the desired goal and helps it escape from local extrema. Different from the conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulence, based on optimization of influence function coefficients, were carried out using a 128-by-128 spatial light modulator, a photodetector and a control computer. The results revealed that the presented algorithm offered better performance in both cases. The step-size update based on the residual error and its temporal integration was shown to resolve the severe local lock-in problem of the SPGD algorithm in high-resolution adaptive optics.
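The SPGD core with a PI-style adaptive gain can be sketched as follows. This is a loose sketch of the idea described above, not the paper's algorithm: the metric, the goal value, the perturbation amplitude and the gain constants are all illustrative assumptions.

```python
import random

def spgd(metric, u, goal, iters=2000, gain0=0.3, ki=0.02, seed=3):
    """Stochastic parallel gradient descent with a PI-style adaptive gain:
    the gain is scaled by the residual (goal - J) plus its running
    integral, loosely following the SPPID idea.  All constants here are
    illustrative, not from the paper.
    """
    rng = random.Random(seed)
    n = len(u)
    integral = 0.0
    for _ in range(iters):
        # Bernoulli parallel perturbation of all control channels at once
        delta = [rng.choice((-1.0, 1.0)) * 0.02 for _ in range(n)]
        j_plus = metric([a + d for a, d in zip(u, delta)])
        j_minus = metric([a - d for a, d in zip(u, delta)])
        dj = j_plus - j_minus                    # two-sided metric difference
        residual = goal - metric(u)              # distance to the goal value
        integral += residual
        gain = gain0 * (residual + ki * integral)  # PI-adaptive step size
        u = [a + gain * dj * d for a, d in zip(u, delta)]
    return u

# Maximize J(u) = -sum((u_i - 1)^2): optimum at u = (1, ..., 1), J* = 0
metric = lambda u: -sum((a - 1.0) ** 2 for a in u)
u_opt = spgd(metric, [0.0] * 5, goal=0.0)
```

As the metric approaches the goal the residual term shrinks, so the integral term takes over the gain, which mirrors the self-adapting behaviour the abstract attributes to SPPID.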
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
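The reservoir-computing algorithm underlying the hardware design can be sketched in software as a minimal echo state network with a ridge-regression readout. The stochastic-logic hardware mapping of the paper is not reproduced here; the reservoir size, spectral radius and the sine forecasting task are illustrative assumptions.

```python
import numpy as np

def esn_forecast(seq, n_res=100, rho=0.9, seed=6):
    """Minimal echo state network: a fixed random reservoir plus a linear
    readout trained by ridge regression to predict seq[t+1] from seq[t].
    Returns the training RMSE of the one-step-ahead prediction.
    """
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= rho / max(abs(np.linalg.eigvals(w)))       # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u in seq[:-1]:
        x = np.tanh(w @ x + w_in * u)               # reservoir update
        states.append(x.copy())
    X = np.array(states)                            # collected reservoir states
    y = np.array(seq[1:])                           # one-step-ahead targets
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    pred = X @ w_out
    return float(np.sqrt(np.mean((pred - y) ** 2)))

t = np.linspace(0, 20 * np.pi, 2000)
rmse = esn_forecast(np.sin(t))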
GridCell: a stochastic particle-based biological system simulator
Boulianne, Laurier; Al Assaad, Sevin; Dumontier, Michel; Gross, Warren J
2008-01-01
Background Realistic biochemical simulators aim to improve our understanding of many biological processes that would be otherwise very difficult to monitor in experimental studies. Increasingly accurate simulators may provide insights into the regulation of biological processes due to stochastic or spatial effects. Results We have developed GridCell as a three-dimensional simulation environment for investigating the behaviour of biochemical networks under a variety of spatial influences including crowding, recruitment and localization. GridCell enables the tracking and characterization of individual particles, leading to insights on the behaviour of low copy number molecules participating in signaling networks. The simulation space is divided into a discrete 3D grid that provides ideal support for particle collisions without distance calculation and particle search. SBML support enables existing networks to be simulated and visualized. The user interface provides intuitive navigation that facilitates insights into species behaviour across spatial and temporal dimensions. We demonstrate the effect of crowding on a Michaelis-Menten system. Conclusion GridCell is an effective stochastic particle simulator designed to track the progress of individual particles in a three-dimensional space in which spatial influences such as crowding, co-localization and recruitment may be investigated. PMID:18651956
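A well-mixed (non-spatial) stochastic baseline for the Michaelis-Menten system mentioned above can be sketched with the Gillespie direct method; GridCell's 3D grid, crowding and collision handling are not reproduced here, and all rate constants and copy numbers are illustrative assumptions.

```python
import math
import random

def gillespie_mm(s=100, e=10, k1=0.01, k2=0.1, k3=0.1, t_end=1000.0, seed=4):
    """Gillespie direct-method simulation of the well-mixed
    Michaelis-Menten scheme  S + E <-> ES -> P + E.
    Returns final copy numbers (S, ES, P).
    """
    rng = random.Random(seed)
    es, p, t = 0, 0, 0.0
    while t < t_end:
        a1 = k1 * s * e          # S + E -> ES  (binding)
        a2 = k2 * es             # ES -> S + E  (unbinding)
        a3 = k3 * es             # ES -> P + E  (catalysis)
        a0 = a1 + a2 + a3
        if a0 == 0.0:
            break                # all substrate converted to product
        t += -math.log(rng.random()) / a0   # exponential waiting time
        r = rng.random() * a0
        if r < a1:
            s, e, es = s - 1, e - 1, es + 1
        elif r < a1 + a2:
            s, e, es = s + 1, e + 1, es - 1
        else:
            es, e, p = es - 1, e + 1, p + 1
    return s, es, p

s_f, es_f, p_f = gillespie_mm()
```

Total substrate material (S + ES + P) is conserved throughout the run; in a crowded spatial simulator such as GridCell the effective binding propensity would additionally depend on local particle density.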
Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad
2016-01-01
Introduction Hospitals are the most expensive health services provider in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). Methods This was a cross-sectional and retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admission was considered as the output. The data were analyzed using Frontier 4.1 software. Results The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to the active beds and that the elasticity of nurses was negative. Also, the return to scale was increasing. Conclusion The results of this study indicated that the performances of the hospitals were not appropriate in terms of technical efficiency. In addition, compared with the most efficient hospitals studied, there was capacity to enhance the output of the hospitals by about 33%. It is suggested that the effect of various factors, such as the quality of health care and the patients’ satisfaction, be considered in future studies to assess hospitals’ performances. PMID:27054014
Stochastic averaging based on generalized harmonic functions for energy harvesting systems
NASA Astrophysics Data System (ADS)
Jiang, Wen-An; Chen, Li-Qun
2016-09-01
A stochastic averaging method is proposed for nonlinear vibration energy harvesters subject to Gaussian white noise excitation. The generalized harmonic transformation scheme is applied to decouple the electromechanical equations, yielding an equivalent nonlinear system that is uncoupled from the electric circuit. The frequency function is given through the equivalent potential energy, which is independent of the total energy. The stochastic averaging method is developed using the generalized harmonic functions. The averaged Itô equations are derived via the proposed procedure, and the Fokker-Planck-Kolmogorov (FPK) equations of the decoupled system are established. The exact stationary solution of the averaged FPK equation is used to determine the probability densities of the amplitude and the power of the stationary response. The procedure is applied to three different types of Duffing vibration energy harvesters under Gaussian white noise excitation. The effects of the system parameters on the mean-square voltage and the output power are examined. It is demonstrated that quadratic nonlinearity alone and quadratic combined with suitably chosen cubic nonlinearities can increase the mean-square voltage and the output power, respectively. The approximate analytical outcomes are qualitatively and quantitatively supported by Monte Carlo simulations.
Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is becoming clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution that is to be mapped. Its performance is demonstrated in the context of dose mapping based on b-spline registration. It is based on evaluating the sensitivity of the dose mapping to variations of the b-spline coefficients, combined with evaluating the sensitivity of the registration metric with respect to the same variations. The approach was evaluated on patient data deformed according to a breathing model, for which the ground truth of the deformation, and hence the actual dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from areas of the same image where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. The method was tested for dose mapping, but it may be applied to other mapping tasks as well.
NASA Astrophysics Data System (ADS)
Sapin, J. R.; Saito, L.; Rajagopalan, B.; Caldwell, R. J.
2013-12-01
Preservation of the Chinook salmon fishery on the Sacramento River in California has been a major concern since the winter-run Chinook was listed as threatened in 1989. The construction of Shasta Dam and Reservoir in 1945 prevented the salmon from reaching their native cold-water spawning habitat, resulting in severe population declines. The temperature control device (TCD) installed at Shasta Dam in 1997 provides increased capabilities of supplying cold-water habitat downstream of the dam to stimulate salmon spawning. However, increased air temperatures due to climate change could make it more difficult to meet downstream temperature targets with the TCD. By coupling stochastic hydroclimatology generation with two-dimensional hydrodynamic modeling of the reservoir we can simulate TCD operations under extreme climate conditions. This is accomplished by stochastically generating climate and inflow scenarios (created with historical data from NOAA, USGS and USBR) as input into a CE-QUAL-W2 model of the reservoir that can simulate TCD operations. Simulations will investigate if selective withdrawal from multiple gates of the TCD are capable of meeting temperature targets downstream of the dam under extreme hydroclimatic conditions. Moreover, our non-parametric methods for stochastically generating climate and inflow scenarios are capable of producing statistically representative years of extreme wet or extreme dry conditions beyond what is seen in the historical record. This allows us to simulate TCD operations for unprecedented hydroclimatic conditions with implications for climate changes in the watershed. Preliminary results of temperature outputs from simulations of TCD operations under extreme climate conditions with CE-QUAL-W2 will be presented. The conditions chosen for simulation are grounded to real-world managerial concerns by utilizing collaborative workshops with reservoir managers to establish which hydroclimatic scenarios would be of most concern for
Yang, Xinsong; Cao, Jinde; Qiu, Jianlong
2015-05-01
This paper concerns the pth moment synchronization in an array of generally coupled memristor-based neural networks with time-varying discrete delays, unbounded distributed delays, as well as stochastic perturbations. Hybrid controllers are designed to cope with the uncertainties caused by the state-dependent parameters: (a) state feedback controllers combined with delayed impulsive controller; (b) adaptive controller combined with delayed impulsive controller. Based on an impulsive differential inequality, the properties of random variables, the framework of Filippov solution, and Lyapunov functional method, sufficient conditions are derived to guarantee that the considered coupled memristor-based neural networks can be pth moment globally exponentially synchronized onto an isolated node under both of the two classes of hybrid impulsive controllers. Finally, numerical simulations are given to show the effectiveness of the theoretical results.
Bizzozero, Ilaria; Capitani, Erminio; Faglioni, Pietro; Lucchelli, Federica; Saetti, Maria C; Spinnler, Hans
2008-02-01
Recollection of media-mediated past events was examined in 96 healthy participants to investigate the interaction between the age of the subject and the "age" of memories. The results provided evidence that people older than 75 years recall recent events significantly worse than remote ones. Younger participants (47-60 years old) showed the reverse pattern. The implementation of a Markov chains latent-variable stochastic model suggested that reduced efficiency of retrieval rather than storage processes accounts for these results. The findings were interpreted with reference to models of memory trace consolidation, assuming that memory for past public events is dependent on hippocampal structures.
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than classical Monte Carlo while being equally accurate. To validate what we believe to be the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single-scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved with the stencil approach compared with classical Monte Carlo. PMID:20448777
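The Mueller-matrix formalism the abstract relies on can be illustrated with a textbook example rather than the paper's scattering matrices: applying the Mueller matrix of an ideal horizontal linear polarizer to a Stokes vector. The matrix and input below are standard textbook values, not data from the paper.

```python
def apply_mueller(M, S):
    """Apply a 4x4 Mueller matrix to a Stokes vector (I, Q, U, V)."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

# Textbook Mueller matrix of an ideal horizontal linear polarizer
POL_H = [[0.5, 0.5, 0.0, 0.0],
         [0.5, 0.5, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]

# Unpolarized input light of unit intensity
S_in = [1.0, 0.0, 0.0, 0.0]
S_out = apply_mueller(POL_H, S_in)   # half the intensity, fully Q-polarized
```

A polarized Monte Carlo simulation chains exactly such matrix products along each photon path, with the single-scattering Mueller matrix (from Lorenz-Mie theory, in the paper's case) applied at every scattering event.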
NASA Astrophysics Data System (ADS)
Imtanavanich, Prasit; Gupta, Surendra M.
2005-11-01
In this paper, we concentrate on the disassembly-to-order (DTO) system, where end-of-life (EOL) products are taken back from last users to be disassembled to fulfill the demands for components and materials. The objective is to determine the number of EOL products that would be needed to maximize the profit and minimize the costs of the system. The conditions of EOL products are not always certain, which makes the problem difficult. We use a heuristic approach which transforms the stochastic disassembly yields into their deterministic equivalents and use a multi-criteria decision-making technique to solve the problem. In addition, we take the products' ages (and thus their deterioration) into account to determine their yield rates (e.g., older products tend to have lower yield rates for usable components) and generate the DTO plans for multiple periods. A numerical example is considered to illustrate the implementation of the approach.
NASA Astrophysics Data System (ADS)
Goda, Katsuichiro; Yasuda, Tomohiro; Mori, Nobuhito; Mai, P. Martin
2015-06-01
The sensitivity and variability of spatial tsunami inundation footprints in coastal cities and towns due to a megathrust subduction earthquake in the Tohoku region of Japan are investigated by considering different fault geometry and slip distributions. Stochastic tsunami scenarios are generated based on the spectral analysis and synthesis method with regards to an inverted source model. To assess spatial inundation processes accurately, tsunami modeling is conducted using bathymetry and elevation data with 50 m grid resolutions. Using the developed methodology for assessing variability of tsunami hazard estimates, stochastic inundation depth maps can be generated for local coastal communities. These maps are important for improving disaster preparedness by understanding the consequences of different situations/conditions, and by communicating uncertainty associated with hazard predictions. The analysis indicates that the sensitivity of inundation areas to the geometrical parameters (i.e., top-edge depth, strike, and dip) depends on the tsunami source characteristics and the site location, and is therefore complex and highly nonlinear. The variability assessment of inundation footprints indicates significant influence of slip distributions. In particular, topographical features of the region, such as ria coast and near-shore plain, have major influence on the tsunami inundation footprints.
NASA Astrophysics Data System (ADS)
Suo, M. Q.; Li, Y. P.; Huang, G. H.
2011-09-01
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect the transfer batch (the quantity transferred at one time) and the transfer period (the corresponding cycle time) in decision-making problems. A case study of water allocation in water resources management planning demonstrates the applicability of this method. Under different flow levels, different transfer measures are generated by this method when the promised water allocation cannot be met. Moreover, interval solutions associated with different transfer costs are also provided. They can be used for generating decision alternatives and thus help water resources managers identify desired policies. Compared with the ITSP method, the IB-ITSP model provides a positive measure for solving water shortage problems and affords useful information for decision makers under uncertainty.
Holden, H.; Hu, Yaozhong
1996-12-31
In modelling the pressure p(x,ω) at x ∈ D ⊂ R^d of an incompressible fluid in a heterogeneous, isotropic medium with a stochastic permeability k(x,ω) ≥ 0, Holden, Lindstrom, Oksendal, Uboe and Zhang [HLOUZ95] studied the following stochastic differential equation: div(k(x,ω) ◊ ∇p(x,ω)) = -f(x) for x ∈ D, with p(x,ω) = 0 for x ∈ ∂D, where f is a given source and ◊ denotes the Wick product. They proved existence of an explicit solution in (S)^(-1). In this paper we define a finite difference scheme to approximate the above equation and prove that this scheme converges in (S)^(-1). To solve the obtained finite difference equation, we prove that an adapted Jacobi iterative method is convergent. The paper is organized as follows: (1) Introduction. (2) The finite difference equation. (3) Uniform integrability of the stopping times. (4) Proof of Theorem 1.2. (5) Continuity of some functionals on D(0,∞). (6) Jacobi's iterative method. (A) The space (S)^(-1). (B) Weak convergence. 13 refs.
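A deterministic analogue of the scheme, a finite difference discretization solved by Jacobi iteration, can be sketched as follows. The Wick-calculus/(S)^(-1) machinery of the paper is not reproduced; the 1D Poisson problem with constant permeability k ≡ 1 and unit source is an illustrative assumption, chosen because its exact solution u(x) = x(1-x)/2 is known.

```python
def jacobi_poisson_1d(n=50, iters=5000):
    """Jacobi iteration for the finite-difference discretization of
    -u'' = 1 on (0, 1) with u(0) = u(1) = 0.

    Each sweep replaces every interior value by the average of its
    neighbours plus the scaled source term.
    """
    h = 1.0 / n
    u = [0.0] * (n + 1)          # includes both boundary points
    for _ in range(iters):
        new = u[:]
        for i in range(1, n):
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * 1.0)
        u = new
    return u

u = jacobi_poisson_1d()
mid_err = abs(u[25] - 0.125)     # exact solution gives u(1/2) = 1/8
```

In the stochastic setting of the paper the same iteration acts on Wick-product coefficients, and the contribution is precisely the proof that this Jacobi-type iteration still converges there.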
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for general stochastic nonlinear systems modeled by Itô-type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding-mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely from the initial time onward, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that results on universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain them, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
Stochastic architecture for Hopfield neural nets
NASA Technical Reports Server (NTRS)
Pavel, Sandy
1992-01-01
An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.
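The hardware saving behind stochastic processing can be illustrated with its best-known primitive: multiplying two unipolar stochastic numbers with a single AND gate per bit. This is a generic stochastic-computing example, not the chip's actual architecture, and the bitstream length below is an illustrative assumption.

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a length-n stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100000, seed=5):
    """Multiply two unipolar stochastic numbers: ANDing independent
    bitstreams yields a stream whose ones-density estimates p*q, so a
    hardware multiplier reduces to one AND gate per bit.
    """
    rng = random.Random(seed)
    a = to_bitstream(p, n, rng)
    b = to_bitstream(q, n, rng)
    anded = [x & y for x, y in zip(a, b)]
    return sum(anded) / n    # estimates p*q

est = stochastic_multiply(0.6, 0.5)
```

The accuracy grows only with the square root of the stream length, which is the classic trade-off of stochastic architectures: very cheap gates, but long bitstreams for high precision.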
Lux, Slawomir A; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin
2016-01-01
The paper reports the application of a Markov-like stochastic process agent-based model and a "virtual farm" concept for the enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a "bottom-up ethological" approach and emulates the behavior of the "primary IPM actors" (large cohorts of individual insects) within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect the behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the "virtual farm" approach were discussed. PMID:27602000
On impulsive integrated pest management models with stochastic effects
Akman, Olcay; Comar, Timothy D.; Hrozencik, Daniel
2015-01-01
We extend existing impulsive differential equation models for integrated pest management (IPM) by including stage structure for both predator and prey as well as by adding stochastic elements in the birth rate of the prey. Based on our model, we propose an approach that incorporates various competing stochastic components. This approach enables us to select a model with optimally determined weights for maximum accuracy and precision in parameter estimation. This is significant in the case of IPM because the proposed model accommodates varying unknown environmental and climatic conditions, which affect the resources needed for pest eradication. PMID:25954144
Stochastic optical active rheology
NASA Astrophysics Data System (ADS)
Lee, Hyungsuk; Shin, Yongdae; Kim, Sun Taek; Reinherz, Ellis L.; Lang, Matthew J.
2012-07-01
We demonstrate a stochastic method for performing active rheology using optical tweezers. By monitoring the displacement of an embedded particle in response to stochastic optical forces, a rapid estimate of the frequency-dependent shear moduli of a sample is achieved in the range of 10^-1 to 10^3 Hz. We utilize the method to probe the linear viscoelastic properties of hydrogels at varied cross-linker concentrations. Combined with fluorescence imaging, our method demonstrates non-linear changes of bond strength between T cell receptors and an antigenic peptide due to force-induced cell activation.
NASA Astrophysics Data System (ADS)
Al-Rashid, Md Mamun; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha
2015-03-01
Switching of single domain multiferroic nanomagnets with electrically generated mechanical strain and with spin torque due to spin current generated via the giant spin Hall effect are two promising energy-efficient methods to switch nanomagnets in magnetic computing devices. However, switching of nanomagnets is always error-prone at room temperature owing to the effect of thermal noise. In this work, we model the strain-based and spin-Hall-effect-based switching of nanomagnetic devices using the stochastic Landau-Lifshitz-Gilbert (LLG) equation and present a quantitative comparison in terms of switching time, reliability and energy dissipation. This work is supported by the US National Science Foundation under the SHF-Small Grant CCF-1216614, CAREER Grant CCF-1253370, NEB 2020 Grant ECCS-1124714 and SRC under NRI Task 2203.001.
Jingyi, Zhu
2015-01-01
The detection mechanism of a carbon nanotube gas sensor based on a multi-stable stochastic resonance (MSR) model was studied in this paper. A numerical simulation model based on MSR was established, and a gas-ionizing experiment was performed in which electronic white noise was added to induce a 1.65 MHz periodic component in the carbon nanotube gas sensor. It was found that the signal-to-noise ratio (SNR) spectrum displayed two maximal values, in accordance with the change of the broken-line potential function. The gas-ionizing experiment demonstrated that the 1.65 MHz periodic component exhibited multiple MSR phenomena, in agreement with the numerical simulation results. In this way, numerical simulation provides an innovative method for investigating the detection mechanism of carbon nanotube gas sensors. PMID:26198910
NASA Astrophysics Data System (ADS)
Liu, Zhiyuan; Meng, Qiang
2014-05-01
This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares to time units, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions introduce randomness into the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem with consideration of interactions between auto flows and transit (bus) flows. Then, a fixed-point model with unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested by a network example.
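The convergent cost-averaging idea for a fixed-point equilibrium model can be sketched with the method of successive averages (MSA) on a toy two-route network. Everything here is an illustrative assumption: a logit split stands in for the paper's probit-based choice model, and the cost functions, dispersion parameter, and demand are invented for the sketch.

```python
import math

def logit_split(c1, c2, theta, demand):
    # share of demand choosing route 1 (logit stand-in for the paper's probit)
    e1, e2 = math.exp(-theta * c1), math.exp(-theta * c2)
    return demand * e1 / (e1 + e2)

def cost1(f1): return 10.0 + 0.05 * f1   # e.g. auto route: cheap but congestible
def cost2(f2): return 12.0 + 0.02 * f2   # e.g. P&R route: higher base cost, flatter

def msa_equilibrium(demand=100.0, theta=0.3, iters=200):
    f1 = demand / 2.0
    for k in range(1, iters + 1):
        # auxiliary flow from current costs, then a cost-averaging (MSA) step
        target = logit_split(cost1(f1), cost2(demand - f1), theta, demand)
        f1 += (target - f1) / k
    return f1

f1 = msa_equilibrium()   # converges to the fixed point of the split map
```

The decaying 1/k step size is what makes the averaging scheme convergent here; a fixed step could oscillate when the cost-to-flow map is strongly responsive.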
Hipp, John R.; Wang, Cheng; Butts, Carter T.; Jose, Rupa; Lakon, Cynthia M.
2015-01-01
Although stochastic actor-based models (e.g., as implemented in the SIENA software program) are growing in popularity as a technique for estimating longitudinal network data, a relatively understudied issue is the consequence of missing network data for longitudinal analysis. We explore this issue in our research note by utilizing data from four schools in an existing dataset (the AddHealth dataset) over three time points, assessing the substantive consequences of using four different strategies for addressing missing network data. The results indicate that whereas some measures in such models are estimated relatively robustly regardless of the strategy chosen for addressing missing network data, some of the substantive conclusions will differ based on the missing data strategy chosen. These results have important implications for this burgeoning applied research area, implying that researchers should more carefully consider how they address missing data when estimating such models. PMID:25745276
Stochastic extension of cellular manufacturing systems: a queuing-based analysis
NASA Astrophysics Data System (ADS)
Fardis, Fatemeh; Zandi, Afagh; Ghezavati, Vahidreza
2013-07-01
Clustering parts and machines into part families and machine cells is a major decision in the design of cellular manufacturing systems, known as cell formation. This paper presents a non-linear mixed integer programming model to design cellular manufacturing systems which assumes that the arrival rate of parts into cells and the machine service rate are stochastic parameters described by exponential distributions. Uncertain situations may create a queue behind each machine; therefore, we consider the average waiting time of parts behind each machine in order to have an efficient system. The objective function minimizes the sum of the idleness cost of machines, the sub-contracting cost for exceptional parts, the non-utilized machine cost, and the holding cost of parts in the cells. Finally, the linearized model is solved by the Cplex solver of GAMS, and sensitivity analysis is performed to illustrate the effectiveness of the parameters.
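With exponential inter-arrival and service times as assumed above, the queue behind each machine is an M/M/1 queue, and the average waiting time has a closed form. A minimal sketch with illustrative rates (not taken from the paper):

```python
def mm1_metrics(lam, mu):
    # steady-state averages for an M/M/1 queue: Poisson arrivals at rate lam,
    # exponential service at rate mu (the queue assumed behind each machine)
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                   # machine utilization
    wq = lam / (mu * (mu - lam))     # mean waiting time of a part in the queue
    lq = lam * wq                    # mean number of parts waiting, by Little's law
    return rho, wq, lq

# e.g. parts arrive at 4 per hour and the machine serves 5 per hour
rho, wq, lq = mm1_metrics(lam=4.0, mu=5.0)   # rho=0.8, wq=0.8 h, lq=3.2 parts
```

Terms like the idleness cost (proportional to 1 - rho) and the holding cost (proportional to lq or wq) in the objective function can be built from exactly these quantities.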
Variance-based sensitivity indices for stochastic models with correlated inputs
Kala, Zdeněk
2015-03-10
The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
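A pick-freeze Monte Carlo estimator for first-order Sobol indices can be sketched as follows. This covers only the independent-input case (i.e. before the article's decorrelation procedures are applied), and the test function, distributions, and sample size are illustrative assumptions.

```python
import random
import statistics

def sobol_first_order(f, dim, n=20000, seed=1):
    # pick-freeze Monte Carlo: for each input i, compare runs that share
    # coordinate i but redraw all other coordinates independently
    rng = random.Random(seed)
    a = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    b = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    ya = [f(x) for x in a]
    mean = statistics.fmean(ya)
    var = statistics.pvariance(ya)
    indices = []
    for i in range(dim):
        # freeze coordinate i from sample A, take the rest from sample B
        yab = [f(b[k][:i] + [a[k][i]] + b[k][i+1:]) for k in range(n)]
        cov = statistics.fmean((ya[k] - mean) * (yab[k] - mean) for k in range(n))
        indices.append(cov / var)   # S_i = Cov(Y_A, Y_AB_i) / Var(Y)
    return indices

# linear model Y = 2*X1 + X2 with standard-normal inputs: S1 = 0.8, S2 = 0.2
s = sobol_first_order(lambda x: 2 * x[0] + x[1], dim=2)
```

The article's decorrelation step would be inserted between sampling and evaluation, transforming the independent draws into correlated inputs for each permutation of the procedure.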
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
Hossain, Md Kamrul; Kamil, Anton Abdulbasah; Baten, Md Azizul; Mustafa, Adli
2012-01-01
The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro), using the most recent data available, covering the period 1989-2008. Results indicate that technical efficiency was highest for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same levels for both Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecast with an ARIMA(2,0,0) model; an ARIMA(1,0,0) model is used to forecast the TFP of Aman from the DEA estimation. PMID:23077500
Debris-flow risk analysis in a managed torrent based on a stochastic life-cycle performance.
Ballesteros Cánovas, J A; Stoffel, M; Corona, C; Schraml, K; Gobiet, A; Tani, S; Sinabell, F; Fuchs, S; Kaitna, R
2016-07-01
Two key factors can affect the functional ability of protection structures in mountain torrents, namely (i) maintenance of existing infrastructure (a majority of existing works are in the second half of their life cycle), and (ii) changes in debris-flow activity as a result of ongoing and expected future climatic changes. Here, we explore the applicability of a stochastic life-cycle performance to assess debris-flow risk in the heavily managed Wartschenbach torrent (Lienz region, Austria) and to quantify associated expected economic losses. We do so by considering maintenance costs to restore infrastructure in the aftermath of debris-flow events as well as by assessing the probability of check dam failure (e.g., as a result of overload). Our analysis comprises two different management strategies as well as three scenarios defining future changes in debris-flow activity resulting from climatic changes. At the study site, an average debris-flow frequency of 21 events per decade was observed for the period 1950-2000; activity at the site is projected to change by +38% to -33%, depending on the climate scenario used. Comparison of the different management alternatives suggests that the current mitigation strategy will reduce expected damage to infrastructure and population almost fully (by 89%). However, to guarantee a comparable level of safety, maintenance costs are expected to increase by 57-63%, with an increase of ca. 50% for each intervention. Our analysis therefore also highlights the importance of taking maintenance costs into account in risk assessments for managed torrent systems, as they result from both progressive and event-related deterioration. We conclude that the stochastic life-cycle performance adopted in this study represents an integrated approach to assessing the long-term effects and costs of prevention structures in managed torrents. PMID:26994802
Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H
2015-02-01
The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.170) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed for rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly identifying the design input variables whose variability most influences the response output parameters.
D'Souza, Adam G.; Feder, David L.
2011-10-15
We examine cluster states transformed by stochastic local operations and classical communication, as a resource for deterministic universal computation driven strictly by projective measurements. We identify circumstances under which such states in one dimension constitute resources for random-length single-qubit rotations, in one case quasideterministically (N-U-N states) and in another probabilistically (B-U-B states). In contrast to the cluster states, the N-U-N states exhibit spin correlation functions that decay exponentially with distance, while the B-U-B states can be arbitrarily locally pure. A two-dimensional square N-U-N lattice is a universal resource for quasideterministic measurement-based quantum computation. Measurements on cubic B-U-B states yield two-dimensional cluster states with bond defects, whose connectivity exceeds the percolation threshold for a critical value of the local purity.
Stochastic entrainment of a stochastic oscillator.
Wang, Guanyu; Peskin, Charles S
2015-01-01
In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs.
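The ring-oscillator-with-resetting setup can be simulated directly. The parameter values below are illustrative (and far from the paper's distinguished limit); the circular concentration R of the phases at stimulus times gives a crude entrainment measure.

```python
import cmath
import math
import random

def simulate(n_states=100, mean_period=1.0, reset_prob=0.2,
             stim_period=1.0, n_stimuli=400, seed=7):
    rng = random.Random(seed)
    rate = n_states / mean_period       # jump rate fixing the natural mean period
    state, phases = 0, []
    for _ in range(n_stimuli):
        # advance the ring over one stimulus interval via exponential waiting times
        t = rng.expovariate(rate)
        while t <= stim_period:
            state = (state + 1) % n_states
            t += rng.expovariate(rate)
        phases.append(state / n_states) # oscillator phase when the stimulus fires
        if rng.random() < reset_prob:   # stochastic reset to a particular state
            state = 0
    return phases

phases = simulate()
# circular concentration: ~0 for uniform phases, 1 for perfect entrainment
R = abs(sum(cmath.exp(2j * math.pi * p) for p in phases)) / len(phases)
```

With the stimulus period equal to the natural mean period, the resets counteract phase diffusion and the phase distribution concentrates near the reset state; detuning the stimulus period degrades R.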
Dike Strength Analysis on a Regional Scale Based On a Stochastic Subsoil Model
NASA Astrophysics Data System (ADS)
Koelewijn, A. R.; Vastenburg, E. W.
2013-12-01
Toxin-Based Therapeutic Approaches
Shapira, Assaf; Benhar, Itai
2010-01-01
Protein toxins confer a defense against predation/grazing or a superior pathogenic competence upon the producing organism. Such toxins have been perfected through evolution in poisonous animals/plants and pathogenic bacteria. Over the past five decades, much effort has been invested in studying their mechanism of action, the way they contribute to pathogenicity, and the development of antidotes that neutralize their action. In parallel, many research groups have turned to exploring the pharmaceutical potential of such toxins when they are used to efficiently impair essential cellular processes and/or damage the integrity of their target cells. The following review summarizes major advances in the field of toxin-based therapeutics and offers a comprehensive description of the mode of action of each applied toxin. PMID:22069564
NASA Technical Reports Server (NTRS)
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
Aquifer Structure Identification Using Stochastic Inversion
Harp, Dylan R; Dai, Zhenxue; Wolfsberg, Andrew V; Vrugt, Jasper A
2008-01-01
This study presents a stochastic inverse method for aquifer structure identification using sparse geophysical and hydraulic response data. The method is based on updating structure parameters from a transition probability model to iteratively modify the aquifer structure and parameter zonation. The method is extended to the adaptive parameterization of facies hydraulic parameters by including these parameters as optimization variables. The stochastic nature of the statistical structure parameters leads to nonconvex objective functions. A multi-method genetically adaptive evolutionary approach (AMALGAM-SO) was selected to perform the inversion given its search capabilities. Results are obtained as a probabilistic assessment of facies distribution based on indicator cokriging simulation of the optimized structural parameters. The method is illustrated by estimating the structure and facies hydraulic parameters of a synthetic example with a transient hydraulic response.
A graph-based N-body approximation with application to stochastic neighbor embedding.
Parviainen, Eli
2016-03-01
We propose a novel approximation technique, bubble approximation (BA), for repulsion forces in an N-body problem, where attraction has a limited range and repulsion acts between all points. These kinds of systems occur frequently in dimension reduction and graph drawing. Like tree codes, the established N-body approximation method, BA replaces several point-to-point computations by one area-to-point computation. Novelty of BA is to consider not only the magnitudes but also the directions of forces from the area. Therefore, its area-to-point approximations are applicable anywhere in the space. The joint effect of forces from inside the area is calculated analytically, assuming a homogeneous mass of points inside the area. These two features free BA from hierarchical data structures and complicated bookkeeping of interactions, which plague tree codes. Instead, BA uses a simple graph to control the computations. The graph provides a sparse matrix, which, suitably weighted, replaces the full matrix of pairwise comparisons in the N-body problem. As a concrete example, we implement a sparse-matrix version of stochastic neighbor embedding (a dimension reduction method), and demonstrate its good performance by comparisons to full-matrix method, and to three different approximate versions of the same method. PMID:26690681
Hui, Guohua; Zhang, Jianfeng; Li, Jian; Zheng, Le
2016-04-15
Quantitative and qualitative determination of sucrose from complex tastant mixtures using Cu foam electrode was investigated in this study. Cu foam was prepared and its three-dimensional (3-D) mesh structure was characterized by scanning electron microscopy (SEM). Cu foam was utilized as working electrode in three-electrode electrochemical system. Cyclic voltammetry (CV) scanning results exhibited the oxidation procedure of sucrose on Cu foam electrode. Amperometric i-t scanning results indicated that Cu foam electrode selectively responded to sucrose from four tastant mixtures with low limit of detection (LOD) of 35.34 μM, 49.85 μM, 45.89 μM, and 26.81 μM, respectively. The existence of quinine, NaCl, citric acid (CA) and their mixtures had no effect on sucrose detection. Furthermore, mixtures containing different tastants could be discriminated by non-linear double-layered cascaded series stochastic resonance (DCSSR) output signal-to-noise ratio (SNR) eigen peak parameters of CV measurement data. The proposed method provides a promising way for sweetener analysis of commercial food.
Automated 3D reconstruction of coronary artery tree based on stochastic branch and bound
NASA Astrophysics Data System (ADS)
Buhler, Patrick; Rebholz, Philipp; Hesser, Jurgen
2005-04-01
The paper discusses a new method for reconstructing vessel trees from biplane X-ray projections. The method reconstructs corresponding points in less than a second and is thus ideally suited for interventional procedures where time is essential. Biplane reconstruction is a two-fold problem: find corresponding points in both images and reconstruct the vessel segments between successive corresponding points in 3D. In this paper we solve the first problem using a new branch and bound technique based on Bayesian networks. With epipolar geometry we assign each of the vessel bifurcation/crossing/endpoints in one image a set of corresponding points in the second image. Starting with the vessel of largest diameter as root node, we successively build up a tree of all possible solutions. Branches are cut according to probabilistic conditions (branch-and-bound-based global search for the best solution). Each node is thus a possible partial tree, for which we assign a conditional probability that the assignment of corresponding points is correct. The probability is the joint probability of having the correct topology, connectivity, tree and segment shape, and characteristics of bifurcations. The respective probabilities for each bifurcation are measured from CTA data of real patients, and the probability of the node is computed via a Bayesian network. If the assigned probability is too small, the branch is pruned. Further, for performance reasons, we use A*-search, where the most probable solution is favored. All corresponding points are found in less than one second, and both topology and vessel crossings are identified correctly. This method is thus orders of magnitude faster than competing ones. This approach is therefore focused on both an automatic and robust method for 3D biplane reconstruction on the one hand and an interactive method on the other hand. Further, it can be trained on a typical set of patients in order to obtain as reliable information as possible about the 3D
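The probability-guided branch and bound with best-first expansion can be sketched on an abstract correspondence problem. The candidate-probability table, the one-to-one matching constraint, and the pruning threshold below are illustrative assumptions; the paper's actual scores come from a Bayesian network over tree topology and bifurcation features.

```python
import heapq
import math

def best_assignment(cand_probs, prune=1e-3):
    # cand_probs[i][j]: probability that bifurcation i matches candidate j.
    # Best-first (A*-like) search over partial assignments in -log space,
    # pruning branches whose joint probability falls below `prune`.
    n = len(cand_probs)
    heap = [(0.0, 0, ())]              # (-log joint prob, depth, partial assignment)
    while heap:
        neg_lp, depth, partial = heapq.heappop(heap)
        if depth == n:                 # first complete assignment popped is optimal
            return list(partial), math.exp(-neg_lp)
        for j, p in enumerate(cand_probs[depth]):
            if math.exp(-neg_lp) * p >= prune and j not in partial:
                heapq.heappush(heap,
                               (neg_lp - math.log(p), depth + 1, partial + (j,)))
    return None, 0.0                   # every branch was pruned

cand = [[0.7, 0.2, 0.1],
        [0.1, 0.6, 0.3],
        [0.2, 0.3, 0.5]]
match, prob = best_assignment(cand)    # picks the max-probability matching
```

Because each edge cost -log(p) is non-negative, the first complete assignment removed from the priority queue is guaranteed optimal, which is what keeps the search fast when the probability table is peaked.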
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
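The relay-positioning idea (multi-sample RSS measurement, exponential-moving-average smoothing, stochastic gradient ascent) can be sketched on a one-dimensional toy corridor. The log-distance path-loss model, the noise level, and all parameters are assumptions for illustration, not the paper's CERN setup or its RSO algorithm.

```python
import math
import random

rng = random.Random(3)

def rss_sample(d, tx_dbm=-30.0, path_exp=2.2, shadow_db=2.0):
    # log-distance path loss with Gaussian shadowing (assumed radio model)
    return tx_dbm - 10.0 * path_exp * math.log10(max(d, 0.1)) + rng.gauss(0.0, shadow_db)

def rss(d, k=5):
    # multi-sample RSS measurement, averaged to tame shadowing noise
    return sum(rss_sample(d) for _ in range(k)) / k

def balance_relay(span=10.0, steps=300, delta=0.5, lr=0.2, alpha=0.2):
    # maximize the weaker of the server and client links by moving the relay;
    # the gradient is a noisy finite difference, smoothed with an EMA
    x, g = 2.0, 0.0
    for _ in range(steps):
        up = min(rss(x + delta), rss(span - (x + delta)))   # weaker link moving right
        dn = min(rss(x - delta), rss(span - (x - delta)))   # weaker link moving left
        g = (1 - alpha) * g + alpha * (up - dn) / (2 * delta)
        x = min(max(x + lr * g, 0.5), span - 0.5)           # ascent step, clamped
    return x

x = balance_relay()   # server at 0, client at 10; the balanced position is mid-span
```

Maximizing the minimum of the two link budgets is one way to express the paper's RSS-balancing goal; with a symmetric path-loss model the relay settles near the midpoint despite the measurement noise.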
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-01-01
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
Lux, Slawomir A.; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin
2016-01-01
The paper reports application of a Markov-like stochastic process agent-based model and a “virtual farm” concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a “bottom-up ethological” approach and emulates behavior of the “primary IPM actors”—large cohorts of individual insects—within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect the behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the “virtual farm” approach are discussed. PMID:27602000
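The Markov-like movement of insect cohorts over a landscape mosaic can be sketched with a minimal agent-based walk. This is a hedged illustration of the general idea, not PESTonFARM itself; the cell names and transition probabilities are invented for the example.

```python
import random

def simulate_cohort(transition, start_cell, n_insects, n_steps, seed=0):
    """Markov-like movement of a cohort of insects over landscape cells.

    transition[c] maps cell c to {next_cell: probability}; each insect
    walks independently for n_steps. Returns final counts per cell.
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_insects):
        c = start_cell
        for _ in range(n_steps):
            r, acc = rng.random(), 0.0
            for nxt, p in transition[c].items():
                acc += p
                if r < acc:
                    c = nxt
                    break
        counts[c] = counts.get(c, 0) + 1
    return counts
```

Running many such realizations under different "virtual IPM actions" (i.e., modified transition probabilities) is the kind of in-silico experiment the abstract describes.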
NASA Astrophysics Data System (ADS)
Wright, D. B.; Smith, J. A.; Villarini, G.; Baeck, M. L.
2012-12-01
Conventional techniques for rainfall and flood frequency analysis in small watersheds involve a variety of assumptions regarding the spatial and temporal structure of extreme rainfall systems as well as how the resulting runoff moves through the drainage network. These techniques were developed at a time when observational and computational resources were limited. They continue to be used in practice even though their validity has not been fully examined. New observational and computational resources, such as high-resolution radar rainfall estimates and distributed hydrologic models, allow us to examine these assumptions and to develop alternative methods for estimating flood risk. We have developed a high-resolution (1 square km, 15-minute resolution) radar rainfall dataset for the 2001-2010 period using the Hydro-NEXRAD processing system, bias corrected using a dense network of 71 rain gages in the Charlotte metropolitan area. The bias-corrected radar rainfall estimates compare favorably with rain gage measurements. The radar rainfall dataset is used in a stochastic storm transposition framework to estimate the frequency of extreme rainfall for urban watersheds ranging from the point/radar-pixel scale up to 240 square km, and can be combined with the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model to perform flood frequency analysis. The results of these frequency analyses can be compared against the results of conventional methods such as the NOAA Atlas 14 precipitation frequency estimates and peak discharge estimates prepared by FEMA and the North Carolina state government.
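The resampling core of stochastic storm transposition can be sketched very simply. This is a heavily simplified illustration, not the study's actual procedure: the storm-depth catalog, the Poisson storm counts, and the Weibull plotting position are all assumed ingredients.

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's algorithm for a Poisson random variate (small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sst_rainfall_frequency(storm_depths, storms_per_year, n_years=1000, seed=1):
    """Resample a catalog of transposed-storm basin-average depths (mm).

    Each synthetic year draws a Poisson number of storms and keeps the
    annual maximum; depths are returned with empirical return periods.
    """
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_years):
        k = poisson_draw(rng, storms_per_year)
        maxima.append(max((rng.choice(storm_depths) for _ in range(k)),
                          default=0.0))
    maxima.sort(reverse=True)
    # Weibull plotting position: T = (n_years + 1) / rank
    return [(d, (n_years + 1) / (rank + 1)) for rank, d in enumerate(maxima)]
```

In the real framework each catalog entry would come from transposing an observed storm over the watershed and averaging radar rainfall over the basin footprint, rather than from a fixed list of depths.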
[Transnasal endoscopic approaches to the cranial base].
Lysoń, Tomasz; Sieśkiewicz, Andrzej; Rutkowski, Robert; Kochanowicz, Jan; Turek, Grzegorz; Rogowski, Marek; Mariak, Zenon
2013-01-01
Recent advances in surgical endoscopy have made it possible to reach nearly the whole cranial base through a transnasal approach. These 'expanded approaches' lead to the frontal sinuses, the cribriform plate and planum sphenoidale, the suprasellar space, the clivus, odontoid and atlas. By pointing the endoscope laterally, the surgeon can explore structures in the coronal plane such as the cavernous sinuses, the pyramid and Meckel cave, the sphenopalatine and subtemporal fossae, and even the middle fossa and the orbit. The authors of this contribution use most of these approaches in their endoscopic skull base surgery. The purpose of this contribution is to review the hitherto established endoscopic approaches to the skull base and to illustrate them with photographs obtained during self-performed procedures and/or cadaver studies. PMID:23487296
Lianou, Alexandra; Koutsoumanis, Konstantinos P
2011-10-01
(-1). The stochastic modeling approach developed in this study can be useful in describing and integrating the strain variability of S. enterica growth kinetic behavior in quantitative microbiology and microbial risk assessment.
NASA Astrophysics Data System (ADS)
Lin, Y. Q.; Ren, W. X.; Fang, S. E.
2011-11-01
Although most vibration-based damage detection methods perform satisfactorily on analytical or numerical structures, many of them encounter problems when applied to real-world structures under varying environments. Damage detection methods that directly extract damage features from periodically sampled dynamic time history response measurements are desirable, but relevant research and field application verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index using the forward innovation model by stochastic subspace identification of a vibrating structure, proposed in the first part, are investigated against two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. Experimental verification is focused on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to structural deterioration or state alteration. This makes it possible to detect structural damage in full-scale structures experiencing ambient excitations and varying environmental conditions.
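The flavor of an innovation-residual damage index can be illustrated with a much simpler stand-in: an AR(2) model fitted by least squares instead of the paper's stochastic subspace identification. Every name and threshold here is an assumption for the sketch.

```python
def ar2_fit(x):
    """Least-squares AR(2) fit: x[t] = a1*x[t-1] + a2*x[t-2] + e[t]."""
    # normal equations for the two regression coefficients
    s11 = sum(v * v for v in x[1:-1])
    s22 = sum(v * v for v in x[:-2])
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))
    b1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))
    b2 = sum(a * b for a, b in zip(x[2:], x[:-2]))
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

def damage_index(baseline, test):
    """Ratio of one-step innovation variance on test vs. baseline data.

    The model is identified on the baseline state; values well above 1
    flag a possible structural state change.
    """
    a1, a2 = ar2_fit(baseline)
    def rvar(x):
        r = [x[t] - a1 * x[t - 1] - a2 * x[t - 2] for t in range(2, len(x))]
        return sum(v * v for v in r) / len(r)
    return rvar(test) / rvar(baseline)
```

The paper's point, that the index should stay near 1 under temperature variation but rise under deterioration, corresponds here to residuals staying small for data from the identified state and growing when the dynamics change.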
Stochastic model for protein flexibility analysis
NASA Astrophysics Data System (ADS)
Xia, Kelin; Wei, Guo-Wei
2013-12-01
Protein flexibility is an intrinsic property and plays a fundamental role in protein functions. Computational analysis of protein flexibility is crucial to protein function prediction, macromolecular flexible docking, and rational drug design. Most current approaches for protein flexibility analysis are based on Hamiltonian mechanics. We introduce a stochastic model to study protein flexibility. The essential idea is to analyze the free induction decay of a perturbed protein structural probability, which satisfies the master equation. The transition probability matrix is constructed by using probability density estimators including monotonically decreasing radial basis functions. We show that the proposed stochastic model gives rise to some of the best predictions of Debye-Waller factors or B factors for three sets of protein data introduced in the literature.
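In the spirit of the kernel construction described (monotonically decreasing radial basis functions over the structure), a minimal flexibility estimate might look like the following. The Gaussian kernel, the scale `eta`, and the reciprocal-rigidity reading are assumptions for illustration, not the paper's exact stochastic model.

```python
import math

def flexibility_index(coords, eta=3.0):
    """Kernel-based per-atom flexibility estimate (illustrative sketch).

    Rigidity of atom i is the summed radial-basis weight to all other
    atoms; flexibility is its reciprocal, which can be rescaled for
    comparison with experimental B-factors.
    """
    flex = []
    for i, ri in enumerate(coords):
        rigidity = 0.0
        for j, rj in enumerate(coords):
            if i == j:
                continue
            d = math.dist(ri, rj)
            rigidity += math.exp(-(d / eta) ** 2)  # Gaussian radial basis
        flex.append(1.0 / rigidity)
    return flex
```

On a linear chain of atoms the two ends, having fewer close neighbours, come out more flexible than the interior, which is the qualitative pattern B-factor predictors must reproduce.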
NASA Astrophysics Data System (ADS)
Koide, T.; Kodama, T.
2015-09-01
The stochastic variational method (SVM) is the generalization of the variational approach to systems described by stochastic variables. In this paper, we investigate the applicability of SVM as an alternative field-quantization scheme by considering the complex Klein-Gordon equation. There, the Euler-Lagrange equation for the stochastic field variables leads to the functional Schrödinger equation, which can be interpreted as the Euler (ideal fluid) equation in the functional space. The present formulation is a quantization scheme based on commutable variables, so that no ambiguity arises from the ordering of operators, e.g., in the definition of Noether charges.
NASA Astrophysics Data System (ADS)
Zhong, Dongzhou; Luo, Wei; Xu, Geliang
2016-09-01
Using the dynamical properties of the polarization bistability that depends on the detuning of the injected light, we propose a novel approach to implement reliable all-optical stochastic logic gates in cascaded vertical cavity surface emitting lasers (VCSELs) with optical injection. Here, two logic inputs are encoded in the detuning of the injected light from a tunable CW laser. The logic outputs are decoded from the two orthogonal polarization lights emitted by the optically injected VCSELs. For the same logic inputs, under electro-optic modulation, we perform various logic operations (NOT, AND, NAND, XOR, XNOR, OR, NOR) in the all-optical domain by controlling the logic operation of the applied electric field. We also explore their delay storages using the mechanism of generalized chaotic synchronization. To quantify the reliabilities of these logic gates, we further demonstrate their success probabilities. Project supported by the National Natural Science Foundation of China (Grant No. 61475120) and the Innovative Projects in Guangdong Colleges and Universities, China (Grant Nos. 2014KTSCX134 and 2015KTSCX146).
NASA Astrophysics Data System (ADS)
Basharov, A. M.
2012-09-01
It is shown that the effective Hamiltonian representation, as formulated in the author's papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics is considered in a single-mode cavity without losses on the mirrors, which contains identical intracavity atoms coupled to the external vacuum electromagnetic field. For some atomic densities, the photons of the cavity mode are "locked" inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their
A subgrid based approach for morphodynamic modelling
NASA Astrophysics Data System (ADS)
Volp, N. D.; van Prooijen, B. C.; Pietrzak, J. D.; Stelling, G. S.
2016-07-01
To improve the accuracy and the efficiency of morphodynamic simulations, we present a subgrid based approach for a morphodynamic model. This approach is well suited for areas characterized by sub-critical flow, such as estuaries, coastal areas and lowland rivers. This new method uses a different grid resolution to compute the hydrodynamics and the morphodynamics. The hydrodynamic computations are carried out with a subgrid based, two-dimensional, depth-averaged model. This model uses a coarse computational grid in combination with a subgrid. The subgrid contains high resolution bathymetry and roughness information to compute volumes, friction and advection. The morphodynamic computations are carried out entirely on a high resolution grid, the bed grid. It is key to find a link between the information defined on the different grids in order to guarantee the feedback between the hydrodynamics and the morphodynamics. This link is made by using a new physics-based interpolation method. The method interpolates water levels and velocities from the coarse grid to the high resolution bed grid. The morphodynamic solution improves significantly when using the subgrid based method compared to a full coarse grid approach. The Exner equation is discretised with an upwind method based on the direction of the bed celerity. This ensures a stable solution for the Exner equation. By means of three examples, it is shown that the subgrid based approach offers a significant improvement at a minimal computational cost.
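The bed update described (Exner equation, upwind in the direction of the bed celerity) can be sketched in one dimension. This illustration assumes transport in the +x direction, so the upwind neighbour is i-1; boundary handling and the celerity-sign switch are omitted.

```python
def exner_upwind(z, qs, dx, dt, porosity=0.4):
    """1-D Exner bed-level update with upwind differencing (sketch).

    z: bed levels on the fine 'bed grid'; qs: sediment transport at the
    same nodes (assumed positive => transport in +x, bed celerity > 0).
    dz/dt = -1/(1 - porosity) * dqs/dx, discretised upwind.
    """
    n = len(z)
    znew = z[:]
    c = 1.0 / (1.0 - porosity)
    for i in range(1, n):
        znew[i] = z[i] - dt * c * (qs[i] - qs[i - 1]) / dx
    return znew
```

A spatially uniform transport field leaves the bed unchanged, while transport increasing in x erodes the bed, which is the expected qualitative behaviour.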
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-01-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses. PMID:27405788
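The plasticity rule described, probabilistic switching of a binary MTJ conductance driven by spike-timing correlation, can be caricatured as stochastic STDP. The exponential probability window and its parameters below are assumptions for illustration, not the device physics reported in the paper.

```python
import math

def stochastic_stdp_update(weight_state, dt_spike, rng,
                           p_max=0.1, tau=20.0):
    """Probabilistic binary synapse switch (stochastic-STDP sketch).

    weight_state: 0 or 1 (low/high MTJ conductance state).
    dt_spike: t_post - t_pre in ms; positive => potentiate, negative
    => depress. Switching probability decays exponentially with |dt|,
    mimicking a programming-current-dependent switching probability.
    """
    p = p_max * math.exp(-abs(dt_spike) / tau)
    if rng.random() < p:
        return 1 if dt_spike > 0 else 0
    return weight_state
```

Averaged over many pre/post spike pairs, tightly correlated activity drives the binary synapse toward the high-conductance state, giving the long-term learning behaviour the abstract describes.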
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Because it is robust to changes in 3-D scene complexity, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately, but it is time consuming. A botanical growth function can model single-tree growth, but cannot express interactions among trees. An L-system is also a function-controlled tree growth simulation model, but it requires large amounts of memory and models only static tree patterns rather than tree growth while the radiative transfer regime is being simulated. Therefore, it is more practical to use regular solids such as ellipsoids, cones, and cylinders to represent individual canopies. Considering the allelopathy phenomenon visible in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similar to a random open-forest image. Each canopy radius (rc) is then generated randomly, and the circle packing algorithm places the circle centres on the XY-plane while keeping the circles separate from each other. To model each individual tree, we employ Ishikawa's tree growth regression model to set tree parameters including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
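The circle packing step can be sketched as rejection sampling: draw a radius, draw a centre, and keep the circle only if it overlaps no previously placed canopy. This is one plausible reading of the algorithm described; the parameter names and the uniform radius distribution are assumptions.

```python
import random

def pack_canopies(extent, n_trees, r_min, r_max, seed=0, max_tries=10000):
    """Rejection-sampling circle packing for a synthetic canopy scene.

    Places up to n_trees non-overlapping circles (canopies) with radii in
    [r_min, r_max] inside a square of side `extent`, rejecting any
    placement that overlaps an existing canopy ('domain' repulsion).
    """
    rng = random.Random(seed)
    circles = []
    tries = 0
    while len(circles) < n_trees and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x, y = rng.uniform(r, extent - r), rng.uniform(r, extent - r)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (r + cr) ** 2
               for cx, cy, cr in circles):
            circles.append((x, y, r))
    return circles
```

Each accepted circle would then be replaced by a regular solid (ellipsoid, cone, or cylinder) whose dimensions follow the tree growth regression model.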
Blaskiewicz, M.
2011-01-01
Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.
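As a pointer to the theory summarized above (not stated in the abstract itself), the simplified coasting-beam cooling-rate relation from van der Meer's theory is commonly written as:

```latex
\frac{1}{\tau} \;=\; \frac{W}{N}\left[\,2g - g^{2}\,(M + U)\,\right]
```

where W is the system bandwidth, N the number of particles, g the normalized gain, M the mixing factor, and U the noise-to-signal ratio; the first term is the coherent cooling effect and the second the incoherent heating from mixing and amplifier noise.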
A quasistationary analysis of a stochastic chemical reaction: Keizer's paradox.
Vellela, Melissa; Qian, Hong
2007-07-01
For a system of biochemical reactions, it is known from the work of T.G. Kurtz [J. Appl. Prob. 8, 344 (1971)] that the chemical master equation model based on a stochastic formulation approaches the deterministic model based on the Law of Mass Action in the infinite system-size limit in finite time. The two models, however, often show distinctly different steady-state behavior. To further investigate this "paradox," a comparative study of the deterministic and stochastic models of a simple autocatalytic biochemical reaction, taken from a text by the late J. Keizer, is carried out. We compute the expected time to extinction, the true stochastic steady state, and a quasistationary probability distribution in the stochastic model. We show that the stochastic model predicts the deterministic behavior on a reasonable time scale, which can be consistently obtained from both models. The transition time to the extinction, however, grows exponentially with the system size. Mathematically, we identify that exchanging the limits of infinite system size and infinite time is problematic. The appropriate system size that can be considered sufficiently large, an important parameter in numerical computation, is also discussed.
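The exponential growth of the extinction time can be probed directly with a Gillespie simulation of a logistic birth-death chain. This is an illustrative surrogate: the rates below are a generic choice with a stable deterministic fixed point and an absorbing state at zero, not Keizer's exact reaction scheme.

```python
import random

def extinction_time(b=1.0, d=0.5, c=0.1, x0=None, seed=1):
    """Gillespie simulation of a logistic birth-death chain to absorption.

    Birth rate b*x, death rate d*x + c*x*(x-1). Deterministically x
    relaxes to x* ~ (b - d)/c, but the stochastic chain is eventually
    absorbed at x = 0; the return value is that extinction time.
    """
    rng = random.Random(seed)
    x = x0 if x0 is not None else max(1, round((b - d) / c))
    t = 0.0
    while x > 0:
        lam = b * x                      # birth propensity
        mu = d * x + c * x * (x - 1)     # death propensity
        total = lam + mu
        t += rng.expovariate(total)      # exponential waiting time
        x += 1 if rng.random() < lam / total else -1
    return t
```

Averaging this over many seeds for increasing x* shows the extinction time growing roughly exponentially with system size, which is the "paradox": on any fixed finite time scale the deterministic picture holds, yet the true stochastic steady state is extinction.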
Stochastic superparameterization in quasigeostrophic turbulence
Grooms, Ian; Majda, Andrew J.
2014-08-15
In this article we expand and develop the authors' recently proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales, resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization (SP) simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and
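The simplest of the four closures, the 'null hypothesis' closure that uncouples the eddies from the mean, can be caricatured as an Ornstein-Uhlenbeck process for an eddy amplitude. The damping and forcing values are assumptions for the sketch; only the exact-in-time OU discretization is standard.

```python
import math
import random

def ou_eddy_closure(n_steps, dt=0.01, damping=1.0, sigma=0.5, seed=0):
    """'Null hypothesis' stochastic eddy closure sketch: eddy amplitude
    as an Ornstein-Uhlenbeck process, uncoupled from the resolved mean.

    Uses the exact one-step OU update: e' = a*e + s*N(0, 1), with a and s
    chosen so the discrete process has the correct stationary variance
    sigma^2 / (2 * damping).
    """
    rng = random.Random(seed)
    a = math.exp(-damping * dt)
    s = sigma * math.sqrt((1.0 - a * a) / (2.0 * damping))
    e, path = 0.0, []
    for _ in range(n_steps):
        e = a * e + s * rng.gauss(0.0, 1.0)
        path.append(e)
    return path
```

In the full framework the statistics of such stochastic eddies feed back into the coarse-grid fluxes; here the point is only that the eddy model is a cheap stochastic process rather than a nonlinear simulation on an embedded grid.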
Tonini, Francesco; Hochmair, Hartwig H; Scheffrahn, Rudolf H; Deangelis, Donald L
2013-06-01
Invasive termites are destructive insect pests that cause billions of dollars in property damage every year. Termite species can be transported overseas by maritime vessels. However, only if the climatic conditions are suitable will the introduced species flourish. Models predicting the areas of infestation following initial introduction of an invasive species could help regulatory agencies develop successful early detection, quarantine, or eradication efforts. At present, no model has been developed to estimate the geographic spread of a termite infestation from a set of surveyed locations. In the current study, we used actual field data as a starting point, and relevant information on termite species to develop a spatially-explicit stochastic individual-based simulation to predict areas potentially infested by an invasive termite, Nasutitermes corniger (Motschulsky), in Dania Beach, FL. The Monte Carlo technique is used to assess outcome uncertainty. A set of model realizations describing potential areas of infestation were considered in a sensitivity analysis, which showed that the model results had greatest sensitivity to number of alates released from nest, alate survival, maximum pheromone attraction distance between heterosexual pairs, and mean flight distance. Results showed that the areas predicted as infested in all simulation runs of a baseline model cover the spatial extent of all locations recently discovered. The model presented in this study could be applied to any invasive termite species after proper calibration of parameters. The simulation herein can be used by regulatory authorities to define most probable quarantine and survey zones. PMID:23726049
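One ingredient of the simulation described above, the stochastic dispersal of alates from a nest, can be sketched as a small Monte Carlo experiment. This is an illustrative simplification, not the authors' model: the exponential flight-distance law, uniform flight directions, and all parameter values below are assumptions.

```python
import numpy as np

def simulate_dispersal(nest_xy, n_alates, mean_flight, survival_p, n_runs, rng):
    """Monte Carlo envelope of possible new-colony sites around one nest.

    Each surviving alate flies an exponentially distributed distance in a
    uniformly random direction (hypothetical simplification). Returns an
    (n_points, 2) array of settled positions pooled over all runs.
    """
    pts = []
    for _ in range(n_runs):
        survived = rng.random(n_alates) < survival_p
        n = int(survived.sum())
        d = rng.exponential(mean_flight, size=n)       # flight distances (m)
        theta = rng.uniform(0.0, 2 * np.pi, size=n)    # flight directions
        pts.append(nest_xy + np.column_stack((d * np.cos(theta),
                                              d * np.sin(theta))))
    return np.vstack(pts) if pts else np.empty((0, 2))

rng = np.random.default_rng(0)
sites = simulate_dispersal(np.array([0.0, 0.0]), n_alates=100,
                           mean_flight=250.0, survival_p=0.3,
                           n_runs=50, rng=rng)
# Radius containing 95% of settled positions, a crude survey-zone proxy.
radius95 = np.quantile(np.hypot(sites[:, 0], sites[:, 1]), 0.95)
```

Pooling many such runs and contouring the settled positions is one simple way to delineate the "most probable quarantine and survey zones" the abstract mentions.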
NASA Astrophysics Data System (ADS)
Landrock, Clinton K.
Falls are the leading cause of all external injuries. Outcomes of falls include the leading cause of traumatic brain injury and bone fractures, and high direct medical costs in the billions of dollars. This work focused on developing three areas of enabling component technology to be used in postural control monitoring tools targeting the mitigation of falls. The first was an analysis tool based on stochastic fractal analysis to reliably measure levels of motor control. The second focus was on thin film wearable pressure sensors capable of relaying data for the first tool. The third was new thin film advanced optics for improving phototherapy devices targeting postural control disorders. Two populations, athletes and elderly, were studied against control groups. The results of these studies clearly show that monitoring postural stability in at-risk groups can be achieved reliably, and an integrated wearable system can be envisioned for both monitoring and treatment purposes. Keywords: electro-active polymer, ionic polymer-metal composite, postural control, motor control, fall prevention, sports medicine, fractal analysis, physiological signals, wearable sensors, phototherapy, photobiomodulation, nano-optics.
Ugi-based approaches to quinoxaline libraries.
Azuaje, Jhonny; El Maatougui, Abdelaziz; García-Mera, Xerardo; Sotelo, Eddy
2014-08-11
An expedient and concise Ugi-based unified approach for the rapid assembly of quinoxaline frameworks has been developed. This convergent and versatile method uses readily available commercial reagents, does not require advanced intermediates, and exhibits excellent bond-forming efficiency, thus exemplifying the operationally simple synthesis of quinoxaline libraries.
Physics-based approach to haptic display
NASA Technical Reports Server (NTRS)
Brown, J. Michael; Colgate, J. Edward
1994-01-01
This paper addresses the implementation of complex multiple degree of freedom virtual environments for haptic display. We suggest that a physics based approach to rigid body simulation is appropriate for hand tool simulation, but that currently available simulation techniques are not sufficient to guarantee successful implementation. We discuss the desirable features of a virtual environment simulation, specifically highlighting the importance of stability guarantees.
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej
2015-01-01
Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
Network-based stochastic competitive learning approach to disambiguation in collaborative networks.
Christiano Silva, Thiago; Raphael Amancio, Diego
2013-03-01
Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns are strongly dependent on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role in a myriad of scientific contexts. In particular, we use an unsupervised technique which relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguating model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule exclusively depends on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it will force them to revisit nodes that are already dominated by them, rather than exploring rival territories. Computer simulations conducted on the networks extracted from the arXiv repository of preprint papers and also from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods. PMID:23556976
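The convex mixture of random and preferential walks described above can be sketched as follows. This is an illustrative rendering of the mechanism, not the authors' exact update rule; the mixing weight `lam` and the domination matrix `dom` are assumed names.

```python
import numpy as np

def particle_step(adj, dom, k, pos, lam, rng):
    """One move of particle k from node `pos`: a convex mixture of a purely
    random walk (exploration, topology only) and a preferential walk biased
    towards nodes that k already dominates (defense).  `adj` is a dense 0/1
    adjacency matrix; `dom[k, i]` is particle k's domination level on node i."""
    nbrs = np.flatnonzero(adj[pos])
    p_rand = np.full(len(nbrs), 1.0 / len(nbrs))        # random policy
    w = dom[k, nbrs]
    p_pref = w / w.sum() if w.sum() > 0 else p_rand     # preferential policy
    p = lam * p_pref + (1.0 - lam) * p_rand             # convex mixture
    return rng.choice(nbrs, p=p)

# Toy 4-node path graph 0-1-2-3; particle 0 dominates node 0 strongly,
# so from node 1 it tends to "defend" node 0 rather than explore node 2.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
dom = np.array([[0.9, 0.05, 0.03, 0.02]])
rng = np.random.default_rng(1)
nxt = particle_step(adj, dom, k=0, pos=1, lam=0.6, rng=rng)
```

Iterating this step for several competing particles, while raising each particle's domination level on the nodes it visits, yields the cluster-detection dynamics the abstract describes.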
Effect of irrigation on the Budyko curve: a process-based stochastic approach
NASA Astrophysics Data System (ADS)
Vico, Giulia; Destouni, Georgia
2015-04-01
Currently, 40% of food production is provided by irrigated agriculture. Irrigation ensures higher and less variable yields, but such water input alters the balance of transpiration and other losses from the soil. Thus, accounting for the impact of irrigation is crucial for the understanding of the local water balance. A probabilistic model of the soil water balance is employed to explore the effects of different irrigation strategies within the Budyko framework. Shifts in the Budyko curve are explained in a mechanistic way. At the field level and assuming unlimited irrigation water, irrigation shifts the Budyko curve upward towards the upper limit imposed by energy availability, even in dry climates. At the watershed scale and assuming that irrigation water is obtained from sources within the same watershed, the application of irrigation over a fraction of the watershed area allows a more efficient use of water resources made available through precipitation. In this case, however, mean transpiration remains upper-bounded by rainfall over the whole watershed.
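For reference, the Budyko framework relates the evaporative index E/P to the dryness index φ = PET/P. The sketch below uses the classical Budyko (1974) curve, not the authors' process-based model; irrigation, as argued above, moves points upward from this curve towards the energy limit E/P ≤ φ.

```python
import numpy as np

def budyko(phi):
    """Classical Budyko curve: evaporative index E/P as a function of the
    dryness index phi = PET/P.  Respects both the water limit (E/P <= 1)
    and the energy limit (E/P <= phi)."""
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

phi = np.linspace(0.1, 5.0, 50)
ep = budyko(phi)   # baseline (non-irrigated) evaporative index
```

Under unlimited field-scale irrigation the abstract argues E/P approaches the energy limit min(1, φ) even for large φ, i.e. the points are lifted off the curve computed here.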
Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa
2012-11-01
Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared.
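Glauber-type stochastic dynamics in a sinusoidal field can be illustrated on a much simpler system than the mixed spin-2/spin-5/2 model studied above: a spin-1/2 Ising square lattice, where a randomly chosen spin flips with probability 1/(1 + exp(ΔE/T)). All parameter values below are illustrative assumptions.

```python
import numpy as np

def glauber_sweep(s, T, h, J, rng):
    """One Glauber sweep of a 2D spin-1/2 Ising lattice (periodic boundaries)
    in external field h: each randomly chosen site flips with probability
    1 / (1 + exp(dE / T)), where dE is the energy change of the flip."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2.0 * s[i, j] * (J * nb + h)        # cost of flipping s[i, j]
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            s[i, j] *= -1
    return s

rng = np.random.default_rng(2)
L, J, T = 16, 1.0, 1.5
s = rng.choice([-1, 1], size=(L, L))
mags = []
for t in range(40):
    h = 0.5 * np.sin(2 * np.pi * t / 20)         # sinusoidal driving field
    glauber_sweep(s, T, h, J, rng)
    mags.append(s.mean())                        # dynamic magnetization
```

Tracking whether the time-averaged magnetization oscillates around zero or around a nonzero value is the standard way such dynamic phase diagrams are mapped out.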
Stochastic analysis of transport in tubes with rough walls
Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)
2006-09-01
Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.
Stochastic Vorticity and Associated Filtering Theory
Amirdjanova, A.; Kallianpur, G.
2002-12-19
The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.
Gutiérrez, Álvaro; González, Carlos; Jiménez-Leube, Javier; Zazo, Santiago; Dopico, Nelson; Raos, Ivana
2009-01-01
The improvement in the transmission range in wireless applications without the use of batteries remains a significant challenge in identification applications. In this paper, we describe a heterogeneous wireless identification network mostly powered by kinetic energy, which allows the localization of animals in open environments. The system relies on radio communications and a global positioning system. It is made up of primary and secondary nodes. Secondary nodes are kinetic-powered and take advantage of animal movements to activate the node and transmit a specific identifier, reducing the number of batteries of the system. Primary nodes are battery-powered and gather secondary-node transmitted information to provide it, along with position and time data, to a final base station in charge of the animal monitoring. The system allows tracking based on contextual information obtained from statistical data. PMID:22412344
Stochastic Optimally Tuned Range-Separated Hybrid Density Functional Theory.
Neuhauser, Daniel; Rabani, Eran; Cytter, Yael; Baer, Roi
2016-05-19
We develop a stochastic formulation of the optimally tuned range-separated hybrid density functional theory that enables significant reduction of the computational effort and scaling of the nonlocal exchange operator at the price of introducing a controllable statistical error. Our method is based on stochastic representations of the Coulomb convolution integral and of the generalized Kohn-Sham density matrix. The computational cost of the approach is similar to that of usual Kohn-Sham density functional theory, yet it provides a much more accurate description of the quasiparticle energies for the frontier orbitals. This is illustrated for a series of silicon nanocrystals up to sizes exceeding 3000 electrons. Comparison with the stochastic GW many-body perturbation technique indicates excellent agreement for the fundamental band gap energies, good agreement for the band edge quasiparticle excitations, and very low statistical errors in the total energy for large systems. The present approach has a major advantage over one-shot GW by providing a self-consistent Hamiltonian that is central for additional postprocessing, for example, in the stochastic Bethe-Salpeter approach. PMID:26651840
Galla, Tobias; Clayton, Richard H.
2016-01-01
Models that represent the mechanisms that initiate and sustain atrial fibrillation (AF) in the heart are computationally expensive to simulate and therefore only capture short time scales of a few heart beats. It is therefore difficult to embed biophysical mechanisms into both policy-level disease models, which consider populations of patients over multiple decades, and guidelines that recommend treatment strategies for patients. The aim of this study is to link these modelling paradigms using a stylised population-level model that both represents AF progression over a long time-scale and retains a description of biophysical mechanisms. We develop a non-Markovian binary switching model incorporating three different aspects of AF progression: genetic disposition, disease/age related remodelling, and AF-related remodelling. This approach allows us to simulate individual AF episodes as well as the natural progression of AF in patients over a period of decades. Model parameters are derived, where possible, from the literature, and the model development has highlighted a need for quantitative data that describe the progression of AF in population of patients. The model produces time series data of AF episodes over the lifetimes of simulated patients. These are analysed to quantitatively describe progression of AF in terms of several underlying parameters. Overall, the model has potential to link mechanisms of AF to progression, and to be used as a tool to study clinical markers of AF or as training data for AF classification algorithms. PMID:27070920
Frame-Based Approach To Database Management
NASA Astrophysics Data System (ADS)
Voros, Robert S.; Hillman, Donald J.; Decker, D. Richard; Blank, Glenn D.
1989-03-01
Practical knowledge-based systems need to reason in terms of knowledge that is already available in databases. This type of knowledge is usually represented as tables acquired from external databases and published reports. Knowledge based systems provide a means for reasoning about entities at a higher level of abstraction. What is needed in many of today's expert systems is a link between the knowledge base and external databases. One such approach is a frame-based database management system. Package Expert (PEx) designs packages for integrated circuits. The thrust of our work is to bring together diverse technologies, data and design knowledge in a coherent system. PEx uses design rules to reason about properties of chips and potential packages, including dimensions, possible materials and packaging requirements. This information is available in existing databases. PEx needs to deal with the following types of information consistently: material databases which are in several formats; technology databases, also in several formats; and parts files which contain dimensional information. It is inefficient and inelegant to have rules access the database directly. Instead, PEx uses a frame-based hierarchical knowledge management approach to databases. Frames serve as the interface between rule-based knowledge and databases. We describe PEx and the use of frames in database retrieval. We first give an overview and the design evolution of the expert system. Next, we describe the system implementation. Finally, we describe how the rules in the expert system access the databases via frames.
A stochastic model for immunotherapy of cancer
Baar, Martina; Coquille, Loren; Mayer, Hannah; Hölzel, Michael; Rogava, Meri; Tüting, Thomas; Bovier, Anton
2016-01-01
We propose an extension of a standard stochastic individual-based model in population dynamics which broadens the range of biological applications. Our primary motivation is modelling of immunotherapy of malignant tumours. In this context the different actors, T-cells, cytokines or cancer cells, are modelled as single particles (individuals) in the stochastic system. The main expansions of the model are distinguishing cancer cells by phenotype and genotype, including environment-dependent phenotypic plasticity that does not affect the genotype, taking into account the effects of therapy and introducing a competition term which lowers the reproduction rate of an individual in addition to the usual term that increases its death rate. We illustrate the new setup by using it to model various phenomena arising in immunotherapy. Our aim is twofold: on the one hand, we show that the interplay of genetic mutations and phenotypic switches on different timescales as well as the occurrence of metastability phenomena raise new mathematical challenges. On the other hand, we argue why understanding purely stochastic events (which cannot be obtained with deterministic models) may help to understand the resistance of tumours to therapeutic approaches and may have non-trivial consequences on tumour treatment protocols. This is supported through numerical simulations. PMID:27063839
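A minimal version of the stochastic individual-based machinery described above is a single-type birth-death process with a competition term, simulated with the Gillespie algorithm. This sketch shows only the competition-raises-death-rate mechanism mentioned in the abstract; all rates are illustrative assumptions.

```python
import numpy as np

def gillespie_logistic(n0, b, d, c, t_max, rng):
    """Gillespie simulation of a birth-death process with competition:
    total birth rate b*n, total death rate (d + c*n)*n, so competition
    increases each individual's death rate (one of the two competition
    mechanisms the abstract distinguishes)."""
    t, n = 0.0, n0
    times, sizes = [0.0], [n0]
    while t < t_max and n > 0:
        rb = b * n                       # total birth rate
        rd = (d + c * n) * n             # total death rate with competition
        total = rb + rd
        t += rng.exponential(1.0 / total)            # time to next event
        n += 1 if rng.random() < rb / total else -1  # birth or death
        times.append(t)
        sizes.append(n)
    return np.array(times), np.array(sizes)

rng = np.random.default_rng(3)
times, sizes = gillespie_logistic(n0=10, b=2.0, d=1.0, c=0.01,
                                  t_max=20.0, rng=rng)
# Deterministic (logistic) equilibrium would be (b - d)/c = 100 individuals.
```

The full model of the paper extends this scheme to several interacting types (tumour cells by genotype and phenotype, T-cells, cytokines) with therapy- and environment-dependent rates.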
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of interest outputs in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate, by two different methods, the uncertainty from nuclear data inputs to isotopic concentrations and decay heat. This paper shows comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also examined in this paper. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.
Stochastic switching of TiO2-based memristive devices with identical initial memory states
2014-01-01
In this work, we show that identical TiO2-based memristive devices that possess the same initial resistive states are only phenomenologically similar as their internal structures may vary significantly, which could render quite dissimilar switching dynamics. We experimentally demonstrated that the resistive switching of practical devices with similar initial states could occur at different programming stimuli cycles. We argue that similar memory states can be transcribed via numerous distinct active core states through the dissimilar reduced TiO2-x filamentary distributions. Our hypothesis was finally verified via simulated results of the memory state evolution, by taking into account dissimilar initial filamentary distribution. PMID:24994953
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Baker, A.
2013-12-01
It has been widely demonstrated that the hydraulic conductivity of an aquifer increases with a larger portion of the aquifer tested. This poses a challenge when different hydraulic conductivity measurements coexist in a field study and have to be integrated simultaneously (e.g. core analysis, slug tests and well tests). While the scaling of hydraulic conductivity can be analytically derived in multiGaussian media, there is no general methodology to simultaneously integrate hydraulic conductivity measurements taken at different scales in highly heterogeneous media. Here we address this issue in the context of multiple-point statistics simulations (MPS). In MPS, the spatial continuity is based on a training image (TI) that contains the variability, connectivity, and structural properties of the medium. The key principle of our methodology is to consider the different scales of hydraulic conductivity as joint variables which are simulated together. Based on a TI that represents the fine-scale spatial variability, we use a classical upscaling method to obtain a series of upscaled TIs that correspond to the different scales at which measurements are available. In our case, the renormalization method is used for this upscaling step, but any upscaling method could be employed. Considered together, the different scales obtained are considered a single multi-scale representation of the initial TI, in a similar fashion as the multiscale pyramids used in image processing. We then use recent MPS simulation methods that allow dealing with multivariate TIs to generate conditional realizations of the different scales together. One characteristic of these realizations is that the possible non-linear relationships between the different simulated scales are statistically similar to the relationships observed in the multiscale TI. Therefore these relationships are considered a reasonable approximation of the renormalization results that were used on the TI. Another characteristic of
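The multiscale "pyramid" of upscaled training images described above can be sketched with a simple block-averaging stand-in for the renormalization step. Geometric-mean block averaging is an assumption here (a common simple upscaling rule for conductivity), not the renormalization scheme the abstract actually uses.

```python
import numpy as np

def upscale_geometric(K, f):
    """Coarsen a 2D conductivity field by factor f using the geometric mean
    of each f-by-f block -- a simple stand-in for renormalization upscaling."""
    ny, nx = K.shape
    blocks = K.reshape(ny // f, f, nx // f, f)
    return np.exp(np.log(blocks).mean(axis=(1, 3)))

rng = np.random.default_rng(4)
K_fine = np.exp(rng.normal(0.0, 1.0, size=(64, 64)))   # lognormal fine-scale K
# Build the multiscale pyramid: fine TI plus successively upscaled TIs.
pyramid = [K_fine]
while pyramid[-1].shape[0] >= 8:
    pyramid.append(upscale_geometric(pyramid[-1], 2))
```

In the paper's workflow the levels of such a pyramid are then treated as joint variables in a multivariate MPS simulation, so that conditioning data measured at any of the represented scales can be honored simultaneously.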
NASA Technical Reports Server (NTRS)
Thompson, J. R.; Taylor, M. S.
1982-01-01
Let X be a k-dimensional random variable serving as input for a system with output Y (not necessarily of dimension k). Given X, an outcome Y or a distribution of outcomes G(Y|X) may be obtained either explicitly or implicitly. The situation is considered in which there is a real-world data set {X_j}, j = 1, ..., n, and a means of simulating an outcome Y. A method for empirical random number generation based on the sample of observations of the random variable X, without estimating the underlying density, is discussed.
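One common way to realize a sample-based generator of this kind, without fitting a density, is the smoothed bootstrap: resample an observed point with replacement and add small Gaussian jitter. This is an assumed concrete instantiation for illustration, not necessarily the method of the abstract.

```python
import numpy as np

def empirical_rng(X, n_new, h, rng):
    """Generate n_new pseudo-samples from the empirical distribution of the
    rows of X: resample rows with replacement, then add Gaussian jitter of
    bandwidth h (smoothed bootstrap).  No density is ever estimated."""
    n, k = X.shape
    idx = rng.integers(0, n, size=n_new)     # resample observed points
    return X[idx] + h * rng.standard_normal((n_new, k))

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(200, 3))      # observed k-dimensional sample
Y = empirical_rng(X, n_new=1000, h=0.1, rng=rng)
```

The generated inputs Y can then be fed through the simulator to produce outcome distributions consistent with the observed input data.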
Atchley, Adam L; Maxwell, Reed M; Navarre-Sitchler, Alexis K
2013-06-01
Increased human health risk associated with groundwater contamination from potential carbon dioxide (CO2) leakage into a potable aquifer is predicted by conducting a joint uncertainty and variability (JUV) risk assessment. The approach presented here explicitly incorporates heterogeneous flow and geochemical reactive transport in an efficient manner and is used to evaluate how differences in representation of subsurface physical heterogeneity and geochemical reactions change the calculated risk for the same hypothetical aquifer scenario where a CO2 leak induces increased lead (Pb(2+)) concentrations through dissolution of galena (PbS). A nested Monte Carlo approach was used to take Pb(2+) concentrations at a well from an ensemble of numerical reactive transport simulations (uncertainty) and sample within a population of potentially exposed individuals (variability) to calculate risk as a function of both uncertainty and variability. Pb(2+) concentrations at the well were determined with numerical reactive transport simulation ensembles using a streamline technique in a heterogeneous 3D aquifer. Three ensembles with variances of log hydraulic conductivity (σ(2)lnK) of 1, 3.61, and 16 were simulated. Under the conditions simulated, calculated risk is shown to be a function of the strength of subsurface heterogeneity, σ(2)lnK and the choice between calculating Pb(2+) concentrations in groundwater using equilibrium with galena and kinetic mineral reaction rates. Calculated risk increased with an increase in σ(2)lnK of 1 to 3.61, but decreased when σ(2)lnK was increased from 3.61 to 16 for all but the highest percentiles of uncertainty. Using a Pb(2+) concentration in equilibrium with galena under CO2 leakage conditions (PCO2 = 30 bar) resulted in lower estimated risk than the simulations where Pb(2+) concentrations were calculated using kinetic mass transfer reaction rates for galena dissolution and precipitation. This study highlights the importance of
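The nested Monte Carlo structure described above (an outer loop over uncertainty, an inner loop over population variability) can be sketched as follows. The linear dose-response slope and the intake and body-weight distributions are illustrative assumptions, not values from the study.

```python
import numpy as np

def juv_risk(conc_ensemble, n_people, rng, slope=1e-3):
    """Nested Monte Carlo: the outer loop samples uncertain well
    concentrations (e.g. one per reactive-transport realization); the inner
    loop samples inter-individual variability in water intake and body
    weight.  Returns an (uncertainty x variability) matrix of risks."""
    risks = np.empty((len(conc_ensemble), n_people))
    for i, c in enumerate(conc_ensemble):                    # uncertainty
        intake = rng.lognormal(0.7, 0.3, n_people)           # L/day (assumed)
        # Body weight, clipped away from zero to keep doses finite.
        weight = np.clip(rng.normal(70.0, 12.0, n_people), 30.0, None)
        dose = c * intake / weight                           # mg/kg/day
        risks[i] = slope * dose                              # variability
    return risks

rng = np.random.default_rng(6)
conc = rng.lognormal(0.0, 0.5, size=100)   # uncertain well concentrations
risks = juv_risk(conc, n_people=500, rng=rng)
median_risk = np.median(risks)
# Spread across rows quantifies uncertainty; spread within rows, variability.
p95_uncertainty = np.quantile(risks.mean(axis=1), 0.95)
```

Separating the two loops is what lets the JUV assessment report risk percentiles of uncertainty and variability independently, as the abstract describes.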
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is enormously expensive computationally. On the other hand, a Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
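The Zassenhaus step can be sketched numerically: truncating the product after the first commutator term, exp(A + B) ≈ exp(A) exp(B) exp(−[A, B]/2), already improves on exp(A) exp(B) alone. The small random matrices below are illustrative stand-ins, not the regulatory-network tensors of the paper.

```python
import numpy as np

def mexp(M, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def zassenhaus2(A, B):
    """Second-order Zassenhaus product: exp(A + B) ~= exp(A) exp(B) exp(-[A, B]/2)."""
    comm = A @ B - B @ A
    return mexp(A) @ mexp(B) @ mexp(-0.5 * comm)

rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((4, 4))
B = 0.05 * rng.standard_normal((4, 4))

exact = mexp(A + B)
err_first = np.linalg.norm(exact - mexp(A) @ mexp(B))    # exp(A)exp(B) alone
err_second = np.linalg.norm(exact - zassenhaus2(A, B))   # with commutator factor
```

The first-order product carries an O(||[A, B]||) error, while the commutator factor pushes the residual to third order, which is the trade-off the paper exploits for sparse regulatory interactions.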
Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data.
Munoz-Organero, Mario; Lotfi, Ahmad
2016-01-01
Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063
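A minimal sketch of low-cost characteristic-point detection on a single tri-axial stream is shown below; the threshold, refractory period, and sampling rate are assumed values for illustration, not the tuned parameters of the paper.

```python
import numpy as np

def count_steps(acc, fs=50.0, thresh=1.5, refractory=0.3):
    """Count upward threshold crossings of the acceleration magnitude,
    with a refractory period so one impact is not counted twice."""
    mag = np.linalg.norm(acc, axis=1)      # tri-axial samples -> magnitude (g)
    min_gap = int(refractory * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(mag)):
        if mag[i - 1] < thresh <= mag[i] and i - last >= min_gap:
            steps += 1
            last = i
    return steps

# synthetic stream: 1 g of gravity on the z-axis plus four brief impact spikes
fs = 50.0
n = int(4 * fs)
acc = np.zeros((n, 3))
acc[:, 2] = 1.0
for s in (0.5, 1.5, 2.5, 3.5):
    acc[int(s * fs), 2] = 2.0
```

Because only a magnitude, a comparison, and a counter per sample are needed, this kind of detector fits the on-sensor pre-processing role described in the abstract.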
NASA Astrophysics Data System (ADS)
Oware, E. K.
2015-12-01
Modeling aquifer heterogeneities (AH) is a complex, multidimensional problem that mostly requires stochastic imaging strategies for tractability. While the traditional Bayesian Markov chain Monte Carlo (McMC) provides a powerful framework to model AH, the generic McMC is computationally prohibitive and, thus, unappealing for large-scale problems. An innovative variant of the McMC scheme, which imposes a priori spatial statistical constraints on model parameter updates for improved characterization in a computationally efficient manner, is proposed. The proposed algorithm (PA) is based on Markov random field (MRF) modeling, an image processing technique that infers the global behavior of a random field from its local properties, making the MRF approach well suited for imaging AH. MRF-based modeling leverages the equivalence of the Gibbs (or Boltzmann) distribution (GD) and the MRF to identify the local properties of an MRF in terms of the easily quantifiable Gibbs energy. The PA employs a two-step approach to model the lithological structure of the aquifer and the hydraulic properties within the identified lithologies simultaneously. It performs local Gibbs energy minimizations along a random path, which requires the parameters of the GD (spatial statistics) to be specified. A PA that implicitly infers site-specific GD parameters within a Bayesian framework is also presented. The PA is illustrated with a synthetic binary facies aquifer with a lognormal heterogeneity simulated within each facies. GD parameters of 2.6, 1.2, -0.4, and -0.2 were estimated for the horizontal, vertical, NE-SW, and NW-SE directions, respectively. Most of the high hydraulic conductivity zones (facies 2) were fairly well resolved, with facies identification accuracy rates of 81%, 89%, and 90% for the inversions conditioned on concentration (R1), resistivity (R2), and the joint inversion (R3), respectively. The incorporation of the conditioning datasets improved the root mean square error (RMSE
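Local Gibbs-energy minimization along a random path can be sketched for a binary facies field. The single isotropic interaction parameter `beta` below is a deliberate simplification of the direction-dependent GD parameters estimated in the paper, and the field is a toy example rather than an aquifer model.

```python
import numpy as np

def local_energy(field, i, j, beta):
    """Gibbs energy of site (i, j): each of the 4 neighbors contributes -beta
    if it agrees with the site's facies label and +beta if it disagrees."""
    n, m = field.shape
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < m:
            e -= beta * (1.0 if field[a, b] == field[i, j] else -1.0)
    return e

def _energy_if(field, i, j, v, beta):
    """Local energy at (i, j) if its label were set to v (field restored after)."""
    old = field[i, j]
    field[i, j] = v
    e = local_energy(field, i, j, beta)
    field[i, j] = old
    return e

def icm_sweep(field, beta=1.0, seed=0):
    """One sweep of local Gibbs-energy minimization along a random site path
    (iterated conditional modes) over a binary facies field."""
    rng = np.random.default_rng(seed)
    sites = [(i, j) for i in range(field.shape[0]) for j in range(field.shape[1])]
    rng.shuffle(sites)
    for i, j in sites:
        field[i, j] = min((0, 1), key=lambda v: _energy_if(field, i, j, v, beta))
    return field

# demo: a two-facies field with one mislabeled pixel
field = np.zeros((8, 8), dtype=int)
field[:, 4:] = 1          # facies 1 on the right half
field[3, 1] = 1           # one noisy pixel inside facies 0
restored = icm_sweep(field, beta=1.0)
```

One sweep restores the mislabeled pixel because its four neighbors all carry the opposite facies, which is exactly the "infer global behavior from local properties" mechanism the MRF formulation relies on.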
Systems Engineering Interfaces: A Model Based Approach
NASA Technical Reports Server (NTRS)
Fosse, Elyse; Delp, Christopher
2013-01-01
Currently: Ops Rev develops and maintains a framework that includes interface-specific language, patterns, and Viewpoints, and implements the framework to design MOS 2.0 and its 5 Mission Services; the implementation de-couples interfaces and instances of interaction. Future: a Mission MOSE implements the approach and uses the model based artifacts for reviews, and the framework extends further into the ground data layers and provides a unified methodology.
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan
2010-01-01
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
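The initialization/restart trade-off can be sketched with a toy stochastic local search. This uses uniform-at-random initialization and a toy agreement objective, not the Viterbi-based initialization or an actual Bayesian-network MPE computation; restart count, flip budget, and noise level are illustrative.

```python
import random

def stochastic_local_search(score, n_vars, restarts=10, flips=100,
                            noise=0.1, seed=0):
    """Stochastic local search over binary assignments: each restart draws a
    fresh uniform-at-random initial assignment, then mixes greedy bit flips
    with occasional random (noisy) flips; the best assignment seen anywhere
    across all restarts is returned."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        x = [rng.randint(0, 1) for _ in range(n_vars)]
        for _ in range(flips):
            if rng.random() < noise:
                i = rng.randrange(n_vars)                 # random-walk move
            else:
                def flipped_score(j):
                    x[j] ^= 1
                    s = score(x)
                    x[j] ^= 1
                    return s
                i = max(range(n_vars), key=flipped_score)  # greedy move
            x[i] ^= 1
            s = score(x)
            if s > best_score:
                best, best_score = list(x), s
    return best, best_score

# toy objective: agreement with a hidden target assignment (optimum = 10)
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
score = lambda x: sum(a == b for a, b in zip(x, target))
```

In the paper's setting, `score` would be the log-probability of an explanation in the BN, and the uniform initializer would be replaced by the Viterbi-based one; tuning `restarts` against `flips` is the optimization the authors report orders-of-magnitude gains from.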
Change detection methods for distinction task of stochastic textures based on nonparametric method
NASA Astrophysics Data System (ADS)
Sultanov, Albert K.
2016-03-01
This article describes the use of a nonparametric method for detecting multivariate changes in random processes for image processing, aimed at finding the borders of irregular phenomena against terrain. For the task of detecting a change and estimating the change point in a sequential setting, test statistics based on the values of sample characteristic functions are proposed. The resulting criterion has a predetermined asymptotic significance level over a wide range of alternatives. An algorithm of texture segmentation for the two-dimensional case is also proposed, given as a sequence of processing operations on the columns and rows of test-statistic values obtained while scanning images. Test results are presented.
2012-02-24
GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
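The sampling-based estimate (reconstruction = sample mean, uncertainty = sample variance) can be sketched in one dimension with a random-walk Metropolis sampler. The toy target below combines a normal likelihood with a hard non-negativity constraint, echoing the prior described above; the numbers are illustrative, not spot-profile data.

```python
import math
import random

def metropolis(logpost, x0, n=20000, step=0.5, seed=0):
    """Random-walk Metropolis chain; returns the sample mean (reconstruction)
    and sample variance (uncertainty estimate) of the drawn samples."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = logpost(y)
        if math.log(rng.random()) < lq - lp:   # accept with prob min(1, q/p)
            x, lp = y, lq
        samples.append(x)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

# toy posterior: normal likelihood centered at 2.0 with a non-negativity prior
def logpost(x):
    if x < 0:
        return float("-inf")                    # prior mass zero below 0
    return -0.5 * ((x - 2.0) / 0.3) ** 2
```

The real problem is high-dimensional (a 2-D spot image), so each "x" would be a whole image and the smoothness prior would couple neighboring pixels, but the accept/reject mechanics and the mean/variance summaries are the same.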
Computationally tractable stochastic image modeling based on symmetric Markov mesh random fields.
Yousefi, Siamak; Kehtarnavaz, Nasser; Cao, Yan
2013-06-01
In this paper, the properties of a new class of causal Markov random fields, named symmetric Markov mesh random field, are initially discussed. It is shown that the symmetric Markov mesh random fields from the upper corners are equivalent to the symmetric Markov mesh random fields from the lower corners. Based on this new random field, a symmetric, corner-independent, and isotropic image model is then derived which incorporates the dependency of a pixel on all its neighbors. The introduced image model comprises the product of several local 1D density and 2D joint density functions of pixels in an image thus making it computationally tractable and practically feasible by allowing the use of histogram and joint histogram approximations to estimate the model parameters. An image restoration application is also presented to confirm the effectiveness of the model developed. The experimental results demonstrate that this new model provides an improved tool for image modeling purposes compared to the conventional Markov random field models.
The development of a stochastic physiologically-based pharmacokinetic model for lead.
Beck, B D; Mattuck, R L; Bowers, T S; Cohen, J T; O'Flaherty, E
2001-07-01
This presentation describes the development of a prototype Monte Carlo module for the physiologically-based pharmacokinetic (PBPK) model for lead, created by Dr Ellen O'Flaherty. The module uses distributions for the following: exposure parameters (soil and dust concentrations, daily soil and dust ingestion rate, water lead concentration, water ingestion rate, air lead concentration, inhalation rate and dietary lead intake); absorption parameters; and key pharmacokinetic parameters (red blood cell binding capacity and half saturation concentration). Distributions can be specified as time-invariant or can change with age. Monte Carlo model predicted blood levels were calibrated to empirically measured blood lead levels for children living in Midvale, Utah (a milling/smelting community). The calibrated model was then evaluated using blood lead data from Palmerton, Pennsylvania (a town with a former smelter) and Sandy, Utah (a town with a former smelter and slag piles). Our initial evaluation using distributions for exposure parameters showed that the model accurately predicted geometric mean (GM) blood lead levels for Palmerton and Sandy and slightly over-predicted the GSD. Consideration of uncertainty in red blood cell parameters substantially inflated the GM. Future model development needs to address the correlation among parameters and the use of parameters for long-term exposure derived from short-term studies.
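The structure of such a Monte Carlo exposure module can be sketched as below. Every distribution parameter and the linear intake-to-blood slope are hypothetical placeholders; the actual module feeds the sampled exposure parameters through the O'Flaherty PBPK equations rather than a slope factor.

```python
import math
import random

def blood_lead_distribution(n=20000, seed=0):
    """Heavily simplified Monte Carlo exposure module: sample exposure
    parameters from hypothetical lognormal distributions, sum the intake
    pathways, and map total intake to blood lead with an assumed slope.
    Returns the geometric mean (GM) and geometric standard deviation (GSD)."""
    rng = random.Random(seed)
    log_blood = []
    for _ in range(n):
        soil = rng.lognormvariate(math.log(200.0), 0.5)    # ug Pb/g soil (assumed)
        soil_ing = rng.lognormvariate(math.log(0.1), 0.4)  # g soil/day (assumed)
        diet = rng.lognormvariate(math.log(3.0), 0.3)      # ug Pb/day (assumed)
        intake = soil * soil_ing + diet                    # ug Pb/day
        log_blood.append(math.log(0.4 * intake))           # 0.4 ug/dL per ug/day (assumed)
    mu = sum(log_blood) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in log_blood) / n)
    return math.exp(mu), math.exp(sd)

gm, gsd = blood_lead_distribution()
```

Summarizing the output as GM and GSD mirrors how the calibration against the Midvale blood lead data is described; age-dependent distributions would replace the fixed parameters above in the real module.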
A stochastic model for tropical cyclone tracks based on Reanalysis data and GCM output
NASA Astrophysics Data System (ADS)
Ito, K.; Nakano, S.; Ueno, G.
2014-12-01
In the present study, we aim to express the probability distribution of tropical cyclone (TC) trajectories estimated on the basis of GCM output. TC tracks are mainly controlled by the atmospheric circulation, such as the trade winds and the Westerlies, and are also driven northward by the beta effect. Tracks calculated with trajectory analysis should thus correspond to the movement of TCs due to the atmospheric circulation. Comparing the result of trajectory analysis of reanalysis data with the Best Track (BT) of TCs in the present climate, the structure of the trajectories appears similar to the BT. However, there is a significant problem in calculating a trajectory in the reanalysis wind field, because the reanalysis data contain many rotational elements, including the TCs themselves. We assume that a TC moves along the steering current and that the rotations do not greatly influence its direction of motion. We design a state-space model based on trajectory analysis and introduce an adjustment parameter for the moving vector. Here, a simple track generation model is developed. This model makes it possible to obtain probability distributions of calculated TC tracks by fitting to the BT using data assimilation. This work was conducted under the framework of the "Development of Basic Technology for Risk Information on Climate Change" supported by the SOUSEI Program of the Ministry of Education, Culture, Sports, Science, and Technology.
Optimization of observation plan based on the stochastic characteristics of the geodetic network
NASA Astrophysics Data System (ADS)
Pachelski, Wojciech; Postek, Paweł
2016-06-01
Optimal design of geodetic networks is a basic element of many engineering projects, and an observation plan is the concluding part of the process. Each observation within the network has, through adjustment, a different contribution to and impact on the values and accuracy characteristics of the unknowns. The problem of optimal design can be solved by means of computer simulation. This paper presents a new simulation method based on sequential estimation of individual observations, in a step-by-step manner, by means of the so-called filtering equations. The algorithm aims at satisfying different accuracy criteria according to various interpretations of the covariance matrix. A further optimization criterion is the amount of effort, defined as the minimum number of observations required. A numerical example of a 2-D network illustrates the effectiveness of the presented method. The results show a 66% decrease in the number of observations with respect to the non-optimized observation plan, while still satisfying the assumed accuracy.
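The filtering-equation idea, adding observations one at a time and updating the unknowns' covariance without re-solving the whole adjustment, can be sketched as follows. The diffuse prior, the candidate design rows, and the trace criterion are illustrative choices, not the paper's network geometry or its full set of accuracy criteria.

```python
import numpy as np

def add_observation(P, a, sigma2):
    """Sequential (filtering) update: covariance of the unknowns after adding
    one observation with design (coefficient) row `a` and variance `sigma2`."""
    Pa = P @ a
    gain = Pa / (a @ Pa + sigma2)
    return P - np.outer(gain, Pa)

# diffuse prior covariance on two unknown point coordinates
P = 100.0 * np.eye(2)

# candidate observations (design rows) the planner may choose from
candidates = [np.array([1.0, 0.0]),
              np.array([0.0, 1.0]),
              np.array([1.0, 1.0])]

# step-by-step: at each step, add the candidate that minimizes the trace
# (total variance) of the updated covariance matrix
for _ in range(2):
    P = min((add_observation(P, a, 1.0) for a in candidates), key=np.trace)
```

Stopping as soon as the covariance satisfies the accuracy criterion yields the minimum-effort plan; other interpretations of the covariance matrix (e.g. its largest eigenvalue instead of the trace) plug in as alternative `key` functions.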
Creep life prediction based on stochastic model of microstructurally short crack growth
NASA Technical Reports Server (NTRS)
Kitamura, Takayuki; Ohtani, Ryuichi
1988-01-01
A nondimensional model of microstructurally short crack growth in creep is developed based on a detailed observation of the creep fracture process of 304 stainless steel. In order to deal with the scatter of small crack growth rate data caused by microstructural inhomogeneity, a random variable technique is used in the model. A cumulative probability of the crack length at an arbitrary time, G(bar a, bar t), and that of the time when a crack reaches an arbitrary length, F(bar t, bar a), are obtained numerically by means of a Monte Carlo method. G(bar a, bar t), and F(bar t, bar a) are the probabilities for a single crack. However, multiple cracks generally initiate on the surface of a smooth specimen from the early stage of creep life to the final stage. Taking into account the multiple crack initiations, the actual crack length distribution observed on the surface of a specimen is predicted by the combination of probabilities for a single crack. The prediction shows a fairly good agreement with the experimental result for creep of 304 stainless steel at 923 K. The probability of creep life is obtained from an assumption that creep fracture takes place when the longest crack reaches a critical length. The observed and predicted scatter of the life is fairly small for the specimens tested.
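The Monte Carlo estimate of G(a, t) for a single crack can be sketched with a random-variable growth law. The exponential growth form, the lognormal scatter, and all constants below are illustrative placeholders, not the nondimensional 304-steel model of the paper.

```python
import math
import random

def G(a_target, t, trials=20000, a0=0.01, sigma=0.5, seed=0):
    """Monte Carlo estimate of G(a, t): the probability that a single crack is
    still shorter than `a_target` at time `t`. Each trial draws one random
    growth exponent r (lognormal scatter standing in for microstructural
    inhomogeneity) and grows the crack as a(t) = a0 * exp(r * t)."""
    rng = random.Random(seed)
    hits = sum(a0 * math.exp(rng.lognormvariate(0.0, sigma) * t) < a_target
               for _ in range(trials))
    return hits / trials
```

Evaluating G at increasing crack lengths traces out the cumulative distribution for one crack; combining independent copies of it (one per initiated crack, with the longest crack governing failure) gives the specimen-level life distribution described in the abstract.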
Kheirollahi, Hooshang; Matin, Behzad Karami; Mahboubi, Mohammad; Alavijeh, Mehdi Mirzaei
2015-01-01
This article develops a model of congestion, based on a relaxed combination of inputs, in stochastic data envelopment analysis (SDEA) with chance constrained programming approaches. Classic data envelopment analysis models with deterministic data have been used by many authors to identify congestion and estimate its levels; however, data envelopment analysis with stochastic data has rarely been used to identify congestion. This article uses chance constrained programming approaches to replace the stochastic models with "deterministic equivalents". This substitution leads to non-linear problems that must be solved. Finally, the proposed method based on a relaxed combination of inputs is used to identify input congestion in six Iranian hospitals, each with one input and two outputs, over the period 2009 to 2012.
NASA Astrophysics Data System (ADS)
Vervatis, V.; Testut, C. E.; De Mey, P.; Ayoub, N.; Chanut, J.; Quattrocchi, G.
2016-04-01
A twin-experiment is carried out introducing elements of an Ensemble Kalman Filter (EnKF), to assess and correct ocean uncertainties in a high-resolution Bay of Biscay configuration. Initially, an ensemble of 102 members is performed by applying stochastic modeling of the wind forcing. The target of this step is to simulate the envelope of possible realizations and to explore the robustness of the method at building ensemble covariances. Our second step includes the integration of the ensemble-based error estimates into a data assimilative system adopting a 4D Ensemble Optimal Interpolation (4DEnOI) approach. In the twin-experiment context, synthetic observations are simulated from a perturbed member not used in the subsequent analyses, satisfying the condition of an unbiased probability distribution function against the ensemble by performing a rank histogram. We evaluate the assimilation performance on short-term predictability focusing on the ensemble size, the observational network, and the enrichment of the ensemble by inexpensive time-lagged techniques. The results show that variations in performance are linked to intrinsic oceanic processes, such as the spring shoaling of the thermocline, in combination with external forcing modulated by river runoffs and time-variable wind patterns, constantly reshaping the error regimes. Ensemble covariances are able to capture high-frequency processes associated with coastal density fronts, slope currents and upwelling events near the Armorican and Galician shelf break. Further improvement is gained when enriching model covariances by including pattern phase errors, with the help of time-neighbor states augmenting the ensemble spread.
Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu
2016-01-01
Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamic. Results of the numerical experiments are exhibited as pseudo colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long run equilibrium is asymptotically stable, neutrally stable or unstable according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor. PMID:27346701
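The state process behind such a model can be sketched as a finite-population imitation walk over the simplex of strategy counts. The pairwise imitation rule, population size, step count, and noise level below are illustrative simplifications, not the paper's QBD transition rates or payoff matrix.

```python
import random

def rsp_imitation_walk(n_pop=60, steps=50000, noise=0.05, seed=0):
    """Finite-population Rock-Scissors-Paper imitation walk: each step a
    random focal player meets a random opponent and copies the opponent's
    strategy if it beats its own; with probability `noise` the focal player
    mutates to a uniformly random strategy instead. Returns the visit counts
    of each population state (n_rock, n_paper, n_scissors)."""
    beats = {0: 2, 1: 0, 2: 1}       # rock>scissors, paper>rock, scissors>paper
    rng = random.Random(seed)
    counts = [n_pop // 3] * 3        # start at the balanced state
    visits = {}
    for _ in range(steps):
        focal = rng.choices(range(3), weights=counts)[0]
        if rng.random() < noise:
            new = rng.randrange(3)   # noisy (mutation) move
        else:
            opponent = rng.choices(range(3), weights=counts)[0]
            new = opponent if beats[opponent] == focal else focal
        counts[focal] -= 1
        counts[new] += 1
        state = tuple(counts)
        visits[state] = visits.get(state, 0) + 1
    return visits

visits = rsp_imitation_walk()
```

Normalizing `visits` by the number of steps gives an empirical stand-in for the limiting distribution, which is what the ternary heat maps visualize; each step changes one count by ±1, matching the birth-death structure of the QBD formulation.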
Matched filter based iterative adaptive approach
NASA Astrophysics Data System (ADS)
Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William
2016-05-01
Matched-filter sidelobes arising from diversified LPI waveform design, and sensor resolution, are two important considerations in radars and in active sensors generally. Matched-filter sidelobes can mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments/sectors. Improvement in both factors depends, in part, on the transmitted waveform and consequently on the pulse compression technique; an adaptive pulse compression algorithm that mitigates these limitations is therefore desirable. A new matched-filter-based Iterative Adaptive Approach, MF-IAA, has been developed as an extension of the traditional Iterative Adaptive Approach (IAA). MF-IAA takes the matched-filter output as its input; the motivation is to enable the Iterative Adaptive Approach without disrupting the processing chain of the traditional matched filter. Like IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares spectral identification algorithm. This work focuses on the implementation of MF-IAA. Its feasibility is studied using a realistic airborne radar simulator as well as actual measured airborne radar data, and its performance is measured with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Both sidelobe reduction and super-resolution enhancement are validated, and the robustness of MF-IAA with respect to different LPI waveforms and SNR levels is demonstrated.
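The weighted-least-squares iteration behind IAA can be sketched for the simplest case, line-spectrum estimation from a single snapshot. This is a generic IAA sketch, not the authors' MF-IAA radar implementation; note that the iteration is initialised from the matched-filter (periodogram) output, which mirrors MF-IAA's use of the matched filter as its input stage. The small diagonal loading term is an assumption added for numerical stability.

```python
import numpy as np

def iaa_spectrum(y, freqs, n_iter=10):
    """Minimal IAA: iterative, weighted-least-squares spectral estimate.

    y     : complex snapshot of N samples
    freqs : grid of normalised frequencies (cycles/sample)
    Returns the power estimate on the grid.
    """
    N = len(y)
    n = np.arange(N)
    A = np.exp(2j * np.pi * np.outer(n, freqs))        # steering matrix, N x K
    p = np.abs(A.conj().T @ y) ** 2 / N**2             # matched-filter initialisation
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T                       # covariance model A diag(p) A^H
        Rinv = np.linalg.inv(R + 1e-9 * np.eye(N))     # loading assumed for stability
        AR = Rinv @ A                                  # R^{-1} a_k, all k at once
        num = AR.conj().T @ y                          # a_k^H R^{-1} y
        den = np.real(np.sum(A.conj() * AR, axis=0))   # a_k^H R^{-1} a_k
        p = np.abs(num / den) ** 2                     # updated power estimate
    return p

# single tone at 0.2 cycles/sample
t = np.arange(32)
y = np.exp(2j * np.pi * 0.2 * t)
grid = np.linspace(0.0, 0.5, 101)
p = iaa_spectrum(y, grid)
```

The iteration sharpens the initial matched-filter spectrum, which is how IAA-type methods suppress sidelobes and improve resolution without user-set parameters.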
Electrochemical Approaches to Aptamer-Based Sensing
NASA Astrophysics Data System (ADS)
Xiao, Yi; Plaxco, Kevin W.
Motivated by the potential convenience of electronic detection, a wide range of electrochemical, aptamer-based sensors have been reported since the first was described only in 2005. Although many of these are simply electrochemical, aptamer-based equivalents of traditional immunochemical approaches (e.g., sandwich and competition assays employing electroactive signaling moieties), others exploit the unusual physical properties of aptamers, properties that render them uniquely well suited for application to impedance and folding-based electrochemical sensors. In particular, the ability of electrode-bound aptamers to undergo reversible, binding-induced folding provides a robust, reagentless means of transducing target binding into an electronic signal that is largely impervious to nonspecific signals arising from contaminants. This capability enables the direct detection of specific proteins at physiologically relevant, picomolar concentrations in blood serum and other complex, contaminant-ridden sample matrices.
A Kalman-Filter-Based Approach to Combining Independent Earth-Orientation Series
NASA Technical Reports Server (NTRS)
Gross, Richard S.; Eubanks, T. M.; Steppe, J. A.; Freedman, A. P.; Dickey, J. O.; Runge, T. F.
1998-01-01
An approach, based upon the use of a Kalman filter, that is currently employed at the Jet Propulsion Laboratory (JPL) for combining independent measurements of the Earth's orientation is presented. Since changes in the Earth's orientation can be described as a randomly excited stochastic process, the uncertainty in our knowledge of the Earth's orientation grows rapidly in the absence of measurements. The Kalman-filter methodology allows for an objective accounting of this uncertainty growth, thereby facilitating the intercomparison of measurements taken at different epochs (not necessarily uniformly spaced in time) and with different precisions. As an example of this approach to combining Earth-orientation series, a description is given of a combination, SPACE95, that has been generated recently at JPL.
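The mechanism described above, uncertainty growing between measurements and shrinking at each update, can be sketched with a scalar random-walk Kalman filter. This is an illustrative sketch, not JPL's actual SPACE95 procedure; the function name, the random-walk process-noise parameter `q`, and the data are assumptions.

```python
import numpy as np

def combine_series(times, values, sigmas, q=1.0):
    """Scalar Kalman-filter sketch for combining independent series.

    The state (e.g. one Earth-orientation parameter) is modelled as a
    random walk: between measurements the error variance grows as
    q * dt, so uncertainty accumulates objectively in data gaps.
    Measurements with different precisions (sigmas) are merged in
    time order.  Returns a list of (time, estimate, variance).
    """
    order = np.argsort(times)
    t_s, z_s, s_s = times[order], values[order], sigmas[order]
    x, P, t_prev = z_s[0], s_s[0] ** 2, t_s[0]      # initialise from first point
    est = [(t_prev, x, P)]
    for t, z, s in zip(t_s[1:], z_s[1:], s_s[1:]):
        P += q * (t - t_prev)                       # predict: random-walk growth
        K = P / (P + s**2)                          # Kalman gain
        x += K * (z - x)                            # update with measurement
        P *= (1.0 - K)                              # reduced posterior variance
        t_prev = t
        est.append((t, x, P))
    return est

# two interleaved series with different precisions (hypothetical data)
times = np.array([0.0, 0.1, 0.2, 5.0])
values = np.array([1.0, 1.2, 0.9, 1.5])
sigmas = np.array([0.5, 0.3, 0.3, 0.5])
est = combine_series(times, values, sigmas, q=0.2)
```

Because the gain weights each measurement by its precision, closely spaced measurements yield a combined variance smaller than either input, while long gaps (as between the last two epochs) let the predicted variance grow again.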
Stochastic inversion by ray continuation
Haas, A.; Viallix
1989-05-01
Conventional tomographic inversion consists in minimizing the residuals between measured and modelled traveltimes. The process tends to be unstable, and additional constraints are required to stabilize it. The stochastic formulation generalizes the technique and sets it on firmer theoretical bases. Stochastic Inversion by Ray Continuation (SIRC) is a probabilistic approach that takes a priori geological information into account and uses probability distributions to characterize data correlations and errors, making it possible to tie uncertainties to the results. The estimated parameters are interval velocities and the B-spline coefficients used to represent smoothed interfaces. Ray tracing is done by a continuation technique between source and receiver: the ray coordinates are computed from one path to the next by solving a linear system derived from Fermat's principle. The main advantages are fast computations and accurate traveltimes and derivatives. The seismic traces are gathered in CMPs. For a particular CMP, several reflecting elements are characterized by their time gradients, measured on the stacked section, and related to a mean emergence direction. The program's capabilities are tested on a synthetic example as well as on a field example. The strategy consists in inverting the parameters for one layer, then for the next one down. An inversion step is divided into two parts: first the parameters for the layer concerned are inverted while the parameters for the upper layers remain fixed; then all the parameters are reinverted. The velocity-depth section computed by the program, together with the corresponding errors, can be used directly for interpretation, as an initial model for depth migration, or in the complete inversion program under development.
Hybrid stochastic simulations of intracellular reaction-diffusion systems
Kalantzis, Georgios
2009-01-01
With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a novel hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed a dynamic partitioning strategy using fractional propensities, so that high-frequency processes are simulated mostly with deterministic rate-based equations and low-frequency processes mostly with the exact stochastic algorithm of Gillespie. In this way we preserve the stochastic behavior of cellular pathways while remaining able to apply the method to large populations of molecules. In this article we describe this hybrid algorithmic approach and demonstrate its accuracy and efficiency, compared with the Gillespie algorithm, for two different systems. First, a model of intracellular viral kinet
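The exact stochastic algorithm of Gillespie, which the hybrid method above uses for low-frequency reactions and as its accuracy benchmark, can be sketched for a well-mixed system. This is the standard SSA, not the authors' hybrid partitioning code; the birth-death example and rate constants are assumptions for illustration.

```python
import math
import random

def gillespie(x0, rates, stoich, t_end, seed=0):
    """Gillespie's exact stochastic simulation algorithm (SSA).

    x0     : initial copy numbers (list of ints)
    rates  : propensity functions, each mapping state -> a_j
    stoich : state-change vectors, one per reaction
    Returns (t, state) once t >= t_end or all propensities vanish.
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:
            break                                    # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0      # exponential waiting time
        u, j, acc = rng.random() * a0, 0, a[0]
        while acc < u:                               # choose reaction j with prob a_j / a0
            j += 1
            acc += a[j]
        x = [xi + si for xi, si in zip(x, stoich[j])]  # apply state change
    return t, x

# birth-death example: 0 -> X at rate 10, X -> 0 at rate 0.5 * x
t, x = gillespie([0],
                 [lambda s: 10.0, lambda s: 0.5 * s[0]],
                 [[+1], [-1]],
                 t_end=50.0)
```

Because every reaction event is simulated individually, the cost grows with the total propensity, which is exactly what motivates hybrid schemes that hand high-frequency channels over to rate equations.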