Solving Stochastic Flexible Flow Shop Scheduling Problems with a Decomposition-Based Approach
NASA Astrophysics Data System (ADS)
Wang, K.; Choi, S. H.
2010-06-01
Real manufacturing environments are dynamic and subject to many uncertainties. Research on production scheduling under uncertainty has recently received much attention. Although various approaches have been developed for scheduling under uncertainty, the problem remains difficult for any single approach because of its inherent complexity. This chapter describes a decomposition-based approach (DBA) for makespan minimisation of a flexible flow shop (FFS) scheduling problem with stochastic processing times. The DBA decomposes an FFS into several machine clusters which can be solved more easily by different approaches. A neighbouring K-means clustering algorithm is developed to first group the machines of an FFS into an appropriate number of machine clusters, based on a weighted cluster validity index. A back-propagation network (BPN) is then adopted to assign either the Shortest Processing Time (SPT) algorithm or the Genetic Algorithm (GA) to generate a sub-schedule for each machine cluster. After machine grouping and approach assignment, an overall schedule is generated by integrating the sub-schedules of the machine clusters. Computational results reveal that the DBA is superior to SPT and GA alone for FFS scheduling under stochastic processing times, and that it can easily be adapted to schedule FFSs under other uncertainties.
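As an illustration of one of the two sub-schedulers, here is a minimal sketch of the SPT dispatching rule on identical parallel machines; job names and processing times are invented, and the actual DBA applies SPT per machine cluster rather than to the whole shop:

```python
def spt_makespan(jobs, n_machines):
    """Assign jobs to identical parallel machines in Shortest
    Processing Time (SPT) order and return the resulting makespan."""
    finish = [0.0] * n_machines              # completion time per machine
    for job in sorted(jobs, key=jobs.get):   # SPT dispatching order
        m = finish.index(min(finish))        # earliest-available machine
        finish[m] += jobs[job]
    return max(finish)

# Hypothetical jobs with expected processing times.
jobs = {"J1": 4.0, "J2": 2.0, "J3": 6.0, "J4": 3.0}
print(spt_makespan(jobs, n_machines=2))  # → 9.0
```

Under stochastic processing times, the times above would be expected values; SPT is cheap but myopic, which is why the DBA reserves the GA for the clusters where it pays off.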
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097
NASA Astrophysics Data System (ADS)
Xu, Mingdong; Wu, Fan; Leung, Henry
2009-09-01
Based on the stochastic delay differential equation (SDDE) modeling of neural networks, we propose an effective signal transmission approach along the neurons in such a network. Utilizing the linear relationship between the delay time and the variance of the SDDE system output, the transmitting side encodes a message as a modulation of the delay time and the receiving end decodes the message by tracking the delay time, which is equivalent to estimating the variance of the received signal. This signal transmission approach turns out to follow the principle of the spread spectrum technique used in wireless and wireline wideband communications but in the analog domain rather than digital. We hope the proposed method might help to explain some activities in biological systems. The idea can further be extended to engineering applications. The error performance of the communication scheme is also evaluated here.
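As a toy illustration of the decoding principle only (not the authors' SDDE system), bits can be encoded as bursts of zero-mean noise whose variance carries the message, with the receiver thresholding the sample variance; every name and parameter here is invented:

```python
import random

def encode(bits, n=2000, var0=1.0, var1=4.0, seed=1):
    """Map each bit to a burst of zero-mean Gaussian noise whose
    variance encodes the bit (a stand-in for the linear delay-to-
    variance map of the SDDE system)."""
    rng = random.Random(seed)
    bursts = []
    for b in bits:
        sd = (var1 if b else var0) ** 0.5
        bursts.append([rng.gauss(0.0, sd) for _ in range(n)])
    return bursts

def decode(bursts, threshold=2.5):
    """Recover each bit by estimating the sample variance of its burst."""
    bits = []
    for burst in bursts:
        var = sum(x * x for x in burst) / len(burst)  # zero-mean estimator
        bits.append(1 if var > threshold else 0)
    return bits

msg = [1, 0, 1, 1, 0]
recovered = decode(encode(msg))
```

With 2000 samples per bit, the variance estimator's spread is far smaller than the gap between the two levels, so the recovery is reliable — mirroring the tracking of delay time via received-signal variance described above.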
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks for ensuring the correct operation of systems. For stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and its models. Conversely, Bayesian belief-update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines a look-ahead Rao-Blackwellized Particle Filter (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach the RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.
Holistic irrigation water management approach based on stochastic soil water dynamics
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Recognizing the persistent gap between fundamental unsaturated-zone transport processes and practical soil and water management, owing to the limited effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function that addresses the water productivity and profitability aspects of irrigation management by optimizing the irrigation policy. Using an optimization-simulation method, the model includes an integrated eco-hydrological simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the unsaturated and saturated soil zones is treated semi-analytically in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement, and soil water balance components assuming a fixed shallow water table level; in the second step, a numerical Newton-Raphson procedure is employed to update the water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can help bridge the gap between vadose-zone theory and water management practice. Besides precisely assessing the most influential processes at the growing-season time scale, the developed model can be applied to large-scale systems such as irrigation districts and agricultural catchments.
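The PSO step can be sketched in isolation. Below is a minimal, generic particle swarm minimizer for a one-dimensional objective — not the paper's coupled optimization-simulation model; all parameter values are common illustrative defaults:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization for a 1-D objective `f`
    on the interval `bounds`; returns the best position found."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                      # personal best positions
    gbest = min(pbest, key=f)         # global best position
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)   # keep inside bounds
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda t: (t - 3.0) ** 2, bounds=(-10.0, 10.0))
```

In the study above, `f` would be the (negated) economic objective evaluated by the eco-hydrological simulator, which is exactly why a derivative-free method like PSO suits the non-convex program.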
Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach
Kaya, Yavuz; Safak, Erdal
2008-07-08
Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.
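The random-vibration step rests on the relation S_x(w) = |H(w)|^2 S_g(w) for the equivalent linear oscillator. Here is a minimal numeric check of that building block for a flat two-sided input PSD — this is only the linear kernel, not the full bilinear hysteretic analysis, and all numbers are illustrative:

```python
import math

def mean_square_response(S0, wn, zeta, w_max=200.0, n=200_000):
    """Mean-square response of a linear SDOF oscillator: integrate
    |H(w)|^2 * S0 over frequency by the midpoint rule (two-sided flat
    PSD S0).  Equivalent linearization reduces the bilinear hysteretic
    isolator to such an effective oscillator."""
    dw = w_max / n
    total = 0.0
    for k in range(n):
        w = (k + 0.5) * dw
        H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2.0 * zeta * wn * w) ** 2)
        total += H2 * S0 * dw
    return 2.0 * total           # |H|^2 is even in w: double the half-line

sigma2 = mean_square_response(S0=1.0, wn=5.0, zeta=0.05)
exact = math.pi * 1.0 / (2.0 * 0.05 * 5.0**3)   # pi*S0/(2*zeta*wn^3)
```

The closed-form value `pi*S0/(2*zeta*wn^3)` is the textbook white-noise result; the numerical integral reproduces it, which is the same machinery used above with the seismological PSDF in place of the flat `S0`.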
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) or continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself, and the second is the definition of the entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom, in or out of thermodynamic equilibrium, and how the macroscopic laws are derived from the stochastic dynamics. These studies include quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions. PMID:25974471
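The second assumption can be made concrete for a master equation via the standard formula sigma = (1/2) sum_ij (W_ij p_j - W_ji p_i) ln(W_ij p_j / (W_ji p_i)), which is non-negative and vanishes exactly at detailed balance. A small sketch with arbitrary rates:

```python
import math

def entropy_production_rate(W, p):
    """Entropy production rate for a master equation with transition
    rates W[i][j] (from state j to state i) and probabilities p:
        sigma = (1/2) sum_{i,j} (W_ij p_j - W_ji p_i)
                              * ln(W_ij p_j / (W_ji p_i)).
    Each term is of the form (a-b)*ln(a/b) >= 0, and all terms vanish
    at detailed balance (thermodynamic equilibrium)."""
    sigma = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and W[i][j] > 0 and W[j][i] > 0:
                a, b = W[i][j] * p[j], W[j][i] * p[i]
                sigma += 0.5 * (a - b) * math.log(a / b)
    return sigma

# Two-state example (rates invented): the stationary state satisfies
# detailed balance, so entropy production vanishes there.
W = [[0.0, 2.0], [1.0, 0.0]]      # W[1][0]: rate 0->1;  W[0][1]: rate 1->0
p_stat = [2.0 / 3.0, 1.0 / 3.0]   # stationary distribution
```

For any non-stationary distribution, e.g. `[0.5, 0.5]`, the same formula returns a strictly positive value, illustrating the monotone approach to equilibrium discussed above.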
NASA Astrophysics Data System (ADS)
Panu, U. S.; Ng, W.; Rasmussen, P. F.
2009-12-01
The modeling of weather states (i.e., precipitation occurrences) is critical when the historical record is not long enough for the desired analysis. Stochastic models of the precipitation occurrence process (e.g., Markov chains and the Alternating Renewal Process (ARP)) generally assume short-term temporal dependency between neighboring states while implying long-term independence (randomness) of states in precipitation records. Existing temporal-dependency models for generating precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by runs of identical states within segments (e.g., the persistence of homogeneous states within the dry/wet-spell lengths of an ARP). Modeling variable segment lengths and states can be an arduous task, and a flexible modeling approach is required to preserve the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for identifying frequently occurring genome segments in DNA sequences. These genome segments delineate biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. In hydrology, a meaningful "word" can be taken to be a segment of precipitation occurrences comprising wet or dry states. Such flexibility would provide a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using a Geometric distribution, the second-order Markov chain model, and the Dictionary approach, were assessed to evaluate their efficacy for generating daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal-dependency in
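One of the three compared models, the second-order Markov chain of wet/dry states, can be sketched as follows; the transition probabilities are invented for illustration:

```python
import random

def generate_occurrence(P, n, init=(0, 0), seed=42):
    """Generate a daily wet(1)/dry(0) occurrence sequence from a
    second-order Markov chain: P[(s2, s1)] is the probability that
    today is wet given the state two days ago (s2) and yesterday (s1)."""
    rng = random.Random(seed)
    seq = list(init)
    for _ in range(n - 2):
        p_wet = P[(seq[-2], seq[-1])]
        seq.append(1 if rng.random() < p_wet else 0)
    return seq

# Hypothetical probabilities: wet spells are persistent, dry spells long.
P = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.7}
seq = generate_occurrence(P, n=365)
```

The fixed two-day memory here is exactly the restriction the Dictionary approach relaxes: its "words" can span wet/dry segments of variable length rather than a fixed-order lookback.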
NASA Astrophysics Data System (ADS)
Wang, Y. Y.; Huang, G. H.; Wang, S.; Li, W.; Guan, P. B.
2016-08-01
In this study, a risk-based interactive multi-stage stochastic programming (RIMSP) approach is proposed by incorporating the fractile criterion method and chance-constrained programming within a multi-stage decision-making framework. RIMSP is able to deal with dual uncertainties, expressed as random boundary intervals, that exist in the objective function and constraints. Moreover, RIMSP is capable of reflecting the dynamics of uncertainties, as well as the trade-off between the total net benefit and the associated risk. A water allocation problem is used to illustrate the applicability of the proposed methodology. A set of decision alternatives with different combinations of risk levels applied to the objective function and constraints can be generated for planning the water resources allocation system. The results can help decision makers examine potential interactions between risks related to the stochastic objective function and constraints. Furthermore, a number of solutions can be obtained under different water policy scenarios, which are useful for decision makers to formulate an appropriate policy under uncertainty. The performance of RIMSP is analyzed and compared with an inexact multi-stage stochastic programming (IMSP) method. The results of the comparison experiment indicate that RIMSP provides more robust water management alternatives with lower system risk than IMSP.
Bernard, Kévin; Tarabalka, Yuliya; Angulo, Jesús; Chanussot, Jocelyn; Benediktsson, Jón Atli
2012-04-01
In this paper, a new method for supervised hyperspectral data classification is proposed. In particular, the notion of stochastic minimum spanning forest (MSF) is introduced. For a given hyperspectral image, a pixelwise classification is first performed. From this classification map, M marker maps are generated by randomly selecting pixels and labeling them as markers for the construction of MSFs. The next step consists in building an MSF from each of the M marker maps. Finally, all the M realizations are aggregated with a maximum vote decision rule in order to build the final classification map. The proposed method is tested on three different data sets of hyperspectral airborne images with different resolutions and contexts. The influences of the number of markers and of the number of realizations M on the results are investigated in experiments. The performance of the proposed method is compared to several classification techniques (both pixelwise and spectral-spatial) using standard quantitative criteria and visual qualitative evaluation. PMID:22086502
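The final aggregation step — a pixelwise maximum-vote over the M classification maps — can be sketched as follows (toy labels, not hyperspectral data):

```python
from collections import Counter

def majority_vote(maps):
    """Aggregate M classification maps (lists of per-pixel labels)
    by a maximum-vote rule, as done over the M stochastic-MSF
    realizations."""
    n_pixels = len(maps[0])
    final = []
    for p in range(n_pixels):
        votes = Counter(m[p] for m in maps)        # label frequencies
        final.append(votes.most_common(1)[0][0])   # winning label
    return final

# Three hypothetical 4-pixel classification maps.
maps = [
    [1, 2, 3, 1],
    [1, 2, 1, 1],
    [2, 2, 3, 1],
]
print(majority_vote(maps))  # → [1, 2, 3, 1]
```

The vote smooths out the randomness injected by the marker selection, which is the point of aggregating multiple MSF realizations.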
Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system
NASA Astrophysics Data System (ADS)
Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng
2016-03-01
Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction.
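For reference, the exact Gillespie baseline against which such accelerated schemes are judged can be sketched for a population of independent two-state channels; the rate values are illustrative, not Hodgkin-Huxley gating rates:

```python
import math
import random

def gillespie_two_state(k_open, k_close, n_channels, t_end, seed=7):
    """Exact Gillespie simulation of N independent two-state ion
    channels (closed <-> open); returns the open-channel count
    over time."""
    rng = random.Random(seed)
    t, n_open = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a_open = k_open * (n_channels - n_open)   # closed -> open propensity
        a_close = k_close * n_open                # open -> closed propensity
        a_total = a_open + a_close
        # Exponential waiting time to the next reaction event.
        t += -math.log(1.0 - rng.random()) / a_total
        if rng.random() * a_total < a_open:
            n_open += 1
        else:
            n_open -= 1
        times.append(t)
        counts.append(n_open)
    return times, counts

times, counts = gillespie_two_state(k_open=1.0, k_close=1.0,
                                    n_channels=100, t_end=50.0)
```

Every channel event is simulated individually, so the cost grows with the channel count — the bottleneck that Langevin approximations and the generating-function algorithms above are designed to avoid.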
Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system.
Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng
2016-01-01
Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction. PMID:26940002
NASA Astrophysics Data System (ADS)
Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.
2012-04-01
In arid and semi-arid regions where water is scarce, farmers rely heavily on irrigation to grow crops and produce agricultural commodities. The variable and often severely limited water supply poses a serious challenge for farmers and demands sophisticated irrigation strategies that allow efficient management of the available water resources. The general aim is to increase water productivity (WP), and one strategy to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is to define soil-water-status-specific threshold values (in either soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance, yet thresholds are often chosen by trial and error and are thus unreliable. Hence, for CDI to be effective, systematic investigations are needed to derive reliable threshold values that account for different CDI strategies. In this contribution, a method is presented that uses a simulation-based stochastic approach to estimate threshold values with high reliability. The approach consists of a weather generator that lends statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under a limited water supply, and a crop model that simulates plant growth and water consumption. The study focuses on soil-tension threshold values for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based ones lies in their universal, soil-type-independent applicability. The investigated CDI strategies comprised schedules with constant threshold values, crop-development-stage-dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as reference. The obtained threshold values were then tested in field
Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography
2013-01-01
Background The conventional strain-based algorithm has been widely utilized in clinical practice, but it can provide only relative information about tissue stiffness. Quantitative tissue stiffness, however, would be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state-space representation, the reconstruction of elasticity can be accomplished through a filtering identification process that recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are estimated simultaneously through an extended Kalman filter (EKF). Results In the experiments, the accuracy and robustness of this filtering framework are first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from an elastography phantom and from patients using an ultrasound system. Quantitative analysis verifies that the strain fields estimated by our filtering strategy are closer to the ground truth, and the distribution of Young's modulus is also well estimated. The effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint. We address the model
Stochastic resonance: A residence time approach
Gammaitoni, L. |; Marchesoni, F. |; Menichella Saetta, E.; Santucci, S.
1996-06-01
The Stochastic Resonance phenomenon is described as a synchronization process between periodic signals and the random response in bistable systems. The residence-time approach is discussed as a useful tool for characterizing hidden periodicities. © 1996 American Institute of Physics.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models, and specialized numerical methods for parameter estimation in stochastic models have therefore been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least-squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a chemical-master-equation-based technique with the least-squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates per approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of such an algorithm leads to only minor performance differences. PMID:26826353
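The naive baseline can be sketched for the immigration-death model: simulate one stochastic realization exactly, then fit the death rate by matching the deterministic ODE mean. The grid search below is a crude stand-in for a real least-squares optimizer, and all parameter values are illustrative:

```python
import math
import random

def simulate_id(alpha, delta, t_grid, seed):
    """One exact (Gillespie) realization of an immigration-death
    process with immigration rate alpha and per-capita death rate
    delta, sampled on the fixed time grid `t_grid` (N(0) = 0)."""
    rng = random.Random(seed)
    t, n, k, out = 0.0, 0, 0, []
    while k < len(t_grid):
        a = alpha + delta * n                       # total event rate
        t_next = t - math.log(1.0 - rng.random()) / a
        while k < len(t_grid) and t_grid[k] < t_next:
            out.append(n)                           # state holds until t_next
            k += 1
        if rng.random() * a < alpha:
            n += 1                                  # immigration
        else:
            n -= 1                                  # death
        t = t_next
    return out

def lsq_fit_delta(data, alpha, t_grid, grid):
    """Naive least squares: pick the delta whose deterministic mean
    (alpha/delta)*(1 - exp(-delta*t)) best matches the trace."""
    def cost(d):
        return sum((x - (alpha / d) * (1.0 - math.exp(-d * t))) ** 2
                   for t, x in zip(t_grid, data))
    return min(grid, key=cost)

t_grid = [0.1 * i for i in range(1, 101)]            # t in (0, 10]
data = simulate_id(alpha=10.0, delta=1.0, t_grid=t_grid, seed=3)
d_hat = lsq_fit_delta(data, alpha=10.0, t_grid=t_grid,
                      grid=[0.1 * j for j in range(5, 31)])
```

This treats the stochastic realization as if it were a noisy ODE trajectory — precisely the simplification whose adequacy the comparison above evaluates against the MSS, Bayesian, and master-equation methods.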
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that had previously been analyzed only by approximation techniques.
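The complete-enumeration baseline that conditional factoring improves upon can be sketched for a toy network with discrete edge-length distributions; the network, edge names, and probabilities below are invented:

```python
from itertools import product

def shortest_path_distribution(edges, paths):
    """Exact distribution of the shortest source-sink path length when
    edge lengths are independent discrete random variables.
    `edges[e]` is a list of (length, probability) pairs; `paths` lists
    each source-sink path as a tuple of edge names.  This enumerates
    every joint edge realization; the factoring algorithm prunes this
    exponential search by decomposing the network."""
    names = list(edges)
    dist = {}
    for combo in product(*(edges[n] for n in names)):
        prob, val = 1.0, {}
        for name, (length, p) in zip(names, combo):
            prob *= p
            val[name] = length
        sp = min(sum(val[e] for e in path) for path in paths)
        dist[sp] = dist.get(sp, 0.0) + prob
    return dist

# Two paths from source to sink: edge a directly, or b then c.
edges = {
    "a": [(2, 0.5), (5, 0.5)],
    "b": [(1, 1.0)],
    "c": [(1, 0.5), (6, 0.5)],
}
paths = [("a",), ("b", "c")]
dist = shortest_path_distribution(edges, paths)
print(dist)  # → {2: 0.75, 5: 0.25}
```

Enumeration visits every combination of edge realizations, which grows exponentially with the number of random edges — the motivation for the factoring decomposition described above.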
Variational approach to stochastic nonlinear problems
Phythian, R.; Curtis, W.D.
1986-03-01
A variational principle is formulated which enables the mean value and higher moments of the solution of a stochastic nonlinear differential equation to be expressed as stationary values of certain quantities. Approximations are generated by using suitable trial functions in this variational principle, and some of these are investigated numerically for the case of a Bernoulli oscillator driven by white noise. Comparison with exact data available for this system shows that the variational approach to such problems can be quite effective.
Fuzzy stochastic elements method. Spectral approach
NASA Astrophysics Data System (ADS)
Sniady, Pawel; Mazur-Sniady, Krystyna; Sieniawska, Roza; Zukowski, Stanislaw
2013-05-01
We study a complex dynamic problem concerning a structure with uncertain parameters subjected to stochastic excitation. Formulating such a problem introduces fuzzy random variables for the parameters of the structure and fuzzy stochastic processes for the load process. The uncertainty has two sources, namely the randomness of structural parameters (such as geometric characteristics, material and damping properties, and the load process) and the imprecision of the theoretical model together with incomplete information or uncertain data. All of these have a great influence on the response of the structure. In analyzing such problems, we describe the random variability using probability theory and the imprecision using fuzzy sets. Because it is difficult to find an analytic expression for the inverse of the stochastic operator in the stochastic differential equation, a number of approximate methods have been proposed in the literature that can be combined with the finite element method. To evaluate the effects of excitation in the frequency domain we use the spectral density function. Spectral analysis is widely used in the stochastic dynamics of linear systems under stationary random excitation; the concept of the evolutionary spectral density is used in the case of non-stationary random excitation. We solve the considered problem using the fuzzy stochastic finite element method. The solution is based on the idea of a fuzzy random frequency response vector for stationary input excitation and a transient fuzzy random frequency response vector for the fuzzy non-stationary one. We use these vectors in the context of spectral analysis to determine the influence of structural uncertainty on the fuzzy random response of the structure. We study a linear system with random parameters subjected to two particular cases of stochastic excitation in the frequency domain. The first one
NASA Astrophysics Data System (ADS)
Haruna, T.; Nakajima, K.
2013-06-01
The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
A stochastic approach to robust broadband structural control
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.
1992-01-01
Viewgraphs on a stochastic approach to robust broadband structural control are presented. Topics covered include: travelling wave model; dereverberated mobility model; computation of dereverberated mobility; power flow; impedance matching; stochastic systems; control problem; control of stochastic systems; using cost functional; Bernoulli-Euler beam example; compensator design; 'power' dual variables; dereverberation of complex structure; and dereverberated transfer function.
Bieda, Bogusław
2014-05-15
The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data for the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the software Crystal Ball® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total 2005 production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM), and blast furnace gas collected from MSP was analyzed and used for MC simulation of the LCI model. To describe the random nature of all the main products considered in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can serve as a first step toward a full LCA analysis in the steel industry. It is further concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied throughout the steel industry. The results obtained can help practitioners and decision-makers in steel production management. PMID:24290145
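The MC propagation idea can be sketched generically in place of the spreadsheet tool; the production figures below are invented placeholders, not MSP data, while the normal-distribution assumption and the 10,000-trial count follow the study:

```python
import random
import statistics

def mc_inventory(production, cv=0.05, trials=10_000, seed=2005):
    """Monte Carlo propagation of uncertainty through an LCI total:
    each product's annual output is drawn from a normal distribution
    centered on its reported value, with coefficient of variation
    `cv` (an assumed value)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.gauss(v, cv * v) for v in production.values()))
    return statistics.mean(totals), statistics.stdev(totals)

# Hypothetical annual outputs in tonnes (NOT the 2005 MSP figures).
production = {"steel": 5.0e6, "coke": 1.2e6,
              "pig_iron": 4.0e6, "sinter": 6.5e6}
mean_total, sd_total = mc_inventory(production)
```

The resulting sample of totals is exactly what CB summarizes as frequency charts and statistical reports; here the mean and standard deviation stand in for those outputs.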
Multiscale stochastic approach for phase screens synthesis.
Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea
2011-07-20
Simulating the turbulence effect on ground telescope observations is of fundamental importance for the design and test of suitable control algorithms for adaptive optics systems. In this paper we propose a multiscale approach for efficiently synthesizing turbulent phases at very high resolution. First, the turbulence is simulated at low resolution, taking advantage of a previously developed method for generating phase screens [J. Opt. Soc. Am. A 25, 515 (2008)]. Then, high-resolution phase screens are obtained as the output of a multiscale linear stochastic system. The multiscale approach significantly improves the computational efficiency of turbulence simulation with respect to recently developed methods [Opt. Express 14, 988 (2006)] [J. Opt. Soc. Am. A 25, 515 (2008)] [J. Opt. Soc. Am. A 25, 463 (2008)]. Furthermore, the proposed procedure ensures good accuracy in reproducing the statistical characteristics of the turbulent phase. PMID:21772400
Optimality of collective choices: a stochastic approach.
Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L
2003-09-01
Amplifying communication is characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different food sources is developed to account for the fluctuations inherent in the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection results from trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is maximal, whatever the difference between the two sources might be. In terms of colony size, large colonies focus their activity on one source more easily. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, rather than a small group of individuals laying a larger trail amount. These properties, which stem from the stochasticity of the recruitment process, can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251
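The trail-modulation mechanism can be sketched with a sequential choice model. The quadratic choice function below is a standard form in ant-foraging models, but the parameters and overall setup are illustrative, not the authors' exact model:

```python
import random

def recruit(n_ants, q1, q2, k=20.0, seed=11):
    """Sequential trail-laying toward two food sources.  Each ant
    chooses source 1 with probability
        (k + x1)**2 / ((k + x1)**2 + (k + x2)**2),
    where x1, x2 are current trail amounts (k sets the intrinsic
    attractiveness of an unmarked path), then reinforces its choice
    with the source-dependent pheromone quantity q1 or q2.
    Returns the fraction of ants that chose source 1."""
    rng = random.Random(seed)
    x1 = x2 = 0.0
    n1 = 0
    for _ in range(n_ants):
        p1 = (k + x1) ** 2 / ((k + x1) ** 2 + (k + x2) ** 2)
        if rng.random() < p1:
            x1 += q1            # source 1 chosen and reinforced
            n1 += 1
        else:
            x2 += q2            # source 2 chosen and reinforced
    return n1 / n_ants

share_rich = recruit(n_ants=1000, q1=3.0, q2=1.0)  # source 1 is richer
```

Because reinforcement feeds back into the choice probability, early random fluctuations can lock the colony onto one source — the amplification whose optimal pheromone quantity the study above characterizes.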
NASA Astrophysics Data System (ADS)
Oprisan, Sorinel Adrian
2001-11-01
There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the degree of organization of a two-dimensional environment. We proved, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of the degree of aggregation. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also describe the intellectual heritage that has guided our research, as well as possible future developments.
Stochastic realization approach to the efficient simulation of phase screens.
Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea
2008-02-01
The phase screen method is a well-established approach to take into account the effects of atmospheric turbulence in astronomical seeing. This is of key importance in designing adaptive optics for new-generation telescopes, in particular in view of applications such as exoplanet detection or long-exposure spectroscopy. We present an innovative approach to simulate turbulent phase that is based on stochastic realization theory. The method shows appealing properties in terms of both accuracy in reconstructing the structure function and compactness of the representation. PMID:18246185
Geometrically consistent approach to stochastic DBI inflation
Lorenz, Larissa; Martin, Jerome; Yokoyama, Jun'ichi
2010-07-15
Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.
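The Langevin dynamics with absorbing walls can be sketched with a toy Euler-Maruyama integrator; the constant drift and noise amplitudes below replace the paper's DBI drift and Hubble-rate noise and are purely illustrative, as are the wall positions.

```python
import numpy as np

def langevin_absorbing(phi0=0.5, walls=(0.0, 1.0), drift=-0.2, noise=0.2,
                       dt=1e-3, n_steps=2000, n_paths=5000, seed=0):
    """Euler-Maruyama paths of dphi = drift*phi*dt + noise*dW, killed at
    absorbing walls in field space (a toy stand-in for the geometric
    restrictions discussed in the paper)."""
    rng = np.random.default_rng(seed)
    phi = np.full(n_paths, phi0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        phi[alive] += drift * phi[alive] * dt + noise * dW[alive]
        alive &= (phi > walls[0]) & (phi < walls[1])   # absorb at boundaries
    return phi[alive]                                   # surviving field values

surviving = langevin_absorbing()
```

A histogram of the surviving values approximates the PDF with absorbing boundary conditions; the paper instead obtains it analytically by the method of images.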
Stochastic approach to flat direction during inflation
Kawasaki, Masahiro; Takesako, Tomohiro E-mail: takesako@icrr.u-tokyo.ac.jp
2012-08-01
We revisit the time evolution of a system of flat and non-flat directions during inflation. In order to take quantum noises into account, we employ the stochastic formalism and solve the coupled Langevin equations numerically. We focus on a class of models in which a tree-level Hubble-induced mass is not generated. Although the non-flat directions can in principle block the growth of the flat direction's variance, the blocking effects are suppressed by the effective masses of the non-flat directions. We find that the fate of the flat direction during inflation is determined by one-loop radiative corrections and non-renormalizable terms, as usually considered, if we remove the zero-point fluctuation from the noise terms.
Symmetries of stochastic differential equations: A geometric approach
NASA Astrophysics Data System (ADS)
De Vecchi, Francesco C.; Morando, Paola; Ugolini, Stefania
2016-06-01
A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two-dimensional symmetric ordinary differential equation and to the case of two-dimensional Brownian motion.
Two Different Approaches to Nonzero-Sum Stochastic Differential Games
Rainer, Catherine
2007-06-15
We make the link between two approaches to Nash equilibria for nonzero-sum stochastic differential games: the first one using backward stochastic differential equations and the second one using strategies with delay. We prove that, when both exist, the two notions of Nash equilibria coincide.
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Takayabu, Izuru; Dairaku, Koji; Kusaka, Hiroyuki; Nishimori, Motoki; Sakurai, Gen; Ishizaki, Noriko N.; Adachi, Sachiho A.; Semenov, Mikhail A.
2012-06-01
This study proposes a stochastic weather generator (WG)-based bootstrap approach to provide probabilistic climate change information on mean precipitation as well as extremes, which applies a WG (i.e., LARS-WG) to daily precipitation under present-day and future climate conditions derived from dynamical and statistical downscaling models. Additionally, the study intercompares the precipitation change scenarios derived from the multimodel ensemble for Japan, focusing on five precipitation indices (mean precipitation, MEA; number of wet days, FRE; mean precipitation amount per wet day, INT; maximum number of consecutive dry days, CDD; and 90th percentile value of daily precipitation amount on wet days, Q90). Three regional climate models (RCMs: NHRCM, NRAMS and TWRF) are nested into the high-resolution atmosphere-ocean coupled general circulation model (MIROC3.2HI AOGCM) for the A1B emission scenario. LARS-WG is validated and used to generate 2000 years of daily precipitation from sets of grid-specific parameters derived from the 20-year simulations of the RCMs and a statistical downscaling model (SDM: CDFDM). Then 100 samples of 20-year continuous precipitation series are resampled and mean values of the precipitation indices are computed, which represents the randomness inherent in daily precipitation data. Based on these samples, the probabilities of change in the indices and the joint occurrence probability of extremes (CDD and Q90) are computed. High probabilities are found for increases in heavy precipitation amount in spring and summer and elongated consecutive dry days in winter over Japan in the period 2081-2100, relative to 1981-2000. The joint probability increases in most areas throughout the year, suggesting a higher potential risk of droughts and excess water-related disasters (e.g., floods) in a 20-year period in the future. The proposed approach offers a more flexible way of estimating probabilities of multiple types of precipitation extremes.
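The block-bootstrap step can be sketched as follows; the exponential daily amounts stand in for actual LARS-WG output, only the MEA index is computed, and all sizes mirror the study's 2000-year series and 100 resamples.

```python
import numpy as np

def index_samples(daily, years, block_years=20, n_samples=100, rng=None):
    """Resample n_samples blocks of block_years whole years from a long
    synthetic daily series; return the mean-precipitation (MEA) index of each."""
    rng = rng or np.random.default_rng(0)
    per_year = daily.reshape(years, -1)          # (years, days-per-year)
    out = []
    for _ in range(n_samples):
        pick = rng.choice(years, size=block_years, replace=True)
        out.append(per_year[pick].mean())
    return np.array(out)

rng = np.random.default_rng(1)
# Hypothetical WG output: exponential daily amounts, future slightly wetter.
present = rng.exponential(5.0, size=2000 * 365)
future = rng.exponential(5.5, size=2000 * 365)
# Probability that a random future 20-year MEA exceeds a random present one.
p_change = (index_samples(future, 2000, rng=rng) >
            index_samples(present, 2000, rng=rng)).mean()
```

The same resampling applied to FRE, INT, CDD, and Q90 would give the full set of change probabilities described in the abstract.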
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.
2012-05-01
Estimation of the three-dimensional (3-D) distribution of hydrologic properties and related uncertainty is key to improved predictions of hydrologic processes in the subsurface. However, it is difficult to gain high-quality and high-density hydrologic information from the subsurface. In this regard, a promising strategy is to use high-resolution geophysical data (that are relatively sensitive to variations of a hydrologic parameter of interest) to supplement direct hydrologic information from measurements in wells (e.g., logs, vertical profiles) and then generate stochastic simulations of the distribution of the hydrologic property conditioned on the hydrologic and geophysical data. In this study we develop and apply this strategy for a 3-D field experiment in the heterogeneous aquifer at the Boise Hydrogeophysical Research Site and we evaluate how much benefit the geophysical data provide. We run high-resolution 3-D conditional simulations of porosity with both simulated-annealing-based and Bayesian sequential approaches using information from multiple intersecting crosshole ground-penetrating radar (GPR) velocity tomograms and neutron porosity logs. The benefit of using GPR data is assessed by investigating their ability, when included in conditional simulation, to predict porosity log data withheld from the simulation. Results show that the use of crosshole GPR data can significantly improve the estimation of porosity spatial distribution and reduce associated uncertainty compared to using only well log measurements for the estimation. The amount of benefit depends primarily on the strength of the petrophysical relation between the GPR and porosity data, the variability of this relation throughout the investigated site, and lateral structural continuity at the site.
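The conditioning idea can be sketched in one dimension with a Gaussian conditional simulation; the exponential covariance and the log positions and values below are hypothetical, and the GPR-derived soft data of the study are omitted.

```python
import numpy as np

def conditional_simulation(x, obs_idx, obs_val, corr_len=5.0, sill=1.0,
                           n_real=4, seed=0):
    """Gaussian conditional simulation on a 1-D grid with an exponential
    covariance, conditioning on point data (a stand-in for porosity logs)."""
    rng = np.random.default_rng(seed)
    C = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    o = np.asarray(obs_idx)
    u = np.setdiff1d(np.arange(len(x)), o)
    Coo, Cuo, Cuu = C[np.ix_(o, o)], C[np.ix_(u, o)], C[np.ix_(u, u)]
    W = Cuo @ np.linalg.inv(Coo)
    mean_u = W @ obs_val                  # simple kriging mean at unsampled nodes
    cov_u = Cuu - W @ Cuo.T               # conditional covariance
    L = np.linalg.cholesky(cov_u + 1e-10 * np.eye(len(u)))
    sims = np.empty((n_real, len(x)))
    sims[:, o] = obs_val                  # realizations honor the data exactly
    sims[:, u] = mean_u + (L @ rng.standard_normal((len(u), n_real))).T
    return sims

x = np.arange(50.0)
sims = conditional_simulation(x, obs_idx=[5, 25, 40],
                              obs_val=np.array([0.2, -0.1, 0.3]))
```

Withholding one of the conditioning points and checking how well the realizations predict it mimics, in miniature, the cross-validation used in the study.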
NASA Astrophysics Data System (ADS)
Coppola, Antonio; Comegna, Alessandro; Dragonetti, Giovanna; Lamaddalena, Nicola; Zdruli, Pandi
2013-04-01
Modelling approaches have been developed at small space scales. Their extension to the applicative macroscale of the regional model is not a simple task, mainly because of the heterogeneity of vadose zone properties, as well as the non-linearity of hydrological processes. Besides, one of the problems in applying distributed models is that the spatial and temporal scales of the data to be used as model inputs vary over a wide range and are not always consistent with the model structure. Under these conditions, a strictly deterministic response to questions about the fate of a pollutant in the soil is impossible. At best, one may answer "this is the average behaviour within this uncertainty band". Consequently, the extension of these equations to account for regional-scale processes requires that the uncertainties of the outputs be taken into account if the pollution vulnerability maps that may be drawn are to be used as agricultural management tools. A map generated without a corresponding map of associated uncertainties has no real utility. The stochastic stream-tube approach is frequently used to model water flux and solute transport through the vadose zone at applicative scales. This approach considers the field soil as an ensemble of parallel and statistically independent tubes, assuming only vertical flow. The stream-tube approach is generally used in a probabilistic framework. Each stream tube defines local flow properties that are assumed to vary randomly between the different stream tubes. Thus, the approach allows average water and solute behaviour to be described, along with the associated uncertainty bands. These stream tubes are usually considered to have parameters that are vertically homogeneous. This would be justified by the large difference between the horizontal and vertical extent of the spatial applicative scale. Vertical variability is generally overlooked. Obviously, all the model outputs are conditioned by this assumption.
The latter, in turn, is more dictated by
NASA Astrophysics Data System (ADS)
Zhang, Sumei; Wang, Lihe
2013-07-01
This study proposes a pricing model that allows for stochastic interest rates and stochastic volatility in the double exponential jump-diffusion setting. The characteristic function of the proposed model is then derived. Fast numerical solutions for European call and put option pricing, based on the characteristic function and the fast Fourier transform (FFT) technique, are developed. Simulations show that our numerical technique is accurate, fast and easy to implement, and that the proposed model is suitable for modeling long-term real-market changes. The model and the proposed option pricing method are useful for empirical analysis of asset returns and risk management in firms.
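Characteristic-function option pricing of this kind can be sketched with a Gil-Pelaez inversion; for verifiability, the Black-Scholes characteristic function is used below in place of the paper's double exponential jump-diffusion model with stochastic rates and volatility, and the FFT acceleration is replaced by direct quadrature.

```python
import numpy as np

def cf_bs(u, s0, r, sigma, T):
    """Black-Scholes characteristic function of log S_T (a placeholder; the
    paper's model would supply its own characteristic function here)."""
    return np.exp(1j * u * (np.log(s0) + (r - 0.5 * sigma**2) * T)
                  - 0.5 * sigma**2 * u**2 * T)

def call_from_cf(cf, s0, K, r, T, n=20000, umax=200.0):
    """European call by Gil-Pelaez inversion: C = S0*P1 - K*exp(-rT)*P2."""
    du = umax / n
    u = (np.arange(n) + 0.5) * du          # midpoint rule avoids u = 0
    lnK = np.log(K)
    phi2 = cf(u)                           # measure for P2 = P(S_T > K)
    phi1 = cf(u - 1j) / cf(-1j)            # share-measure change for P1
    P2 = 0.5 + np.sum(np.real(np.exp(-1j * u * lnK) * phi2 / (1j * u))) * du / np.pi
    P1 = 0.5 + np.sum(np.real(np.exp(-1j * u * lnK) * phi1 / (1j * u))) * du / np.pi
    return s0 * P1 - K * np.exp(-r * T) * P2

# At-the-money test case: the Black-Scholes price is about 10.45.
price = call_from_cf(lambda u: cf_bs(u, 100.0, 0.05, 0.2, 1.0),
                     100.0, 100.0, 0.05, 1.0)
```

Swapping in the model's characteristic function and evaluating the inversion on an FFT grid recovers the fast pricing scheme described in the abstract.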
Stochastic approach to the molecular counting problem in superresolution microscopy
Rollins, Geoffrey C.; Shin, Jae Yen; Bustamante, Carlos; Pressé, Steve
2015-01-01
Superresolution imaging methods—now widely used to characterize biological structures below the diffraction limit—are poised to reveal in quantitative detail the stoichiometry of protein complexes in living cells. In practice, the photophysical properties of the fluorophores used as tags in superresolution methods have posed a severe theoretical challenge toward achieving this goal. Here we develop a stochastic approach to enumerate fluorophores in a diffraction-limited area measured by superresolution microscopy. The method is a generalization of aggregated Markov methods developed in the ion channel literature for studying gating dynamics. We show that the method accurately and precisely enumerates fluorophores in simulated data while simultaneously determining the kinetic rates that govern the stochastic photophysics of the fluorophores to improve the prediction’s accuracy. This stochastic method overcomes several critical limitations of temporal thresholding methods. PMID:25535361
Langevin equation approach to reactor noise analysis: stochastic transport equation
Akcasu, A. Z.; Stolle, A. M.
1993-01-01
The application of the Langevin equation method to the study of fluctuations in the space- and velocity-dependent neutron density as well as in the detector outputs in nuclear reactors is presented. In this case, the Langevin equation is the stochastic linear neutron transport equation with a space- and velocity-dependent random neutron source, often referred to as the noise equivalent source (NES). The power spectral densities (PSDs) of the NESs in the transport equation, as well as in the accompanying detection rate equations, are obtained, and the cross- and auto-power spectral densities of the outputs of pairs of detectors are explicitly calculated. The transport-level expression for the R(ω) ratio measured in the ²⁵²Cf source-driven noise analysis method is also derived. Finally, the implementation of the Langevin equation approach at different levels of approximation is discussed, and the stochastic one-speed transport and one-group P₁ equations are derived by first integrating the stochastic transport equation over speed and then eliminating the angular dependence by a spherical harmonics expansion. By taking the large transport rate limit in the P₁ description, the stochastic diffusion equation is obtained as well as the PSD of the NES in it. This procedure also leads directly to the stochastic Fick's law.
Computational approaches to stochastic systems in physics and biology
NASA Astrophysics Data System (ADS)
Jeraldo Maldonado, Patricio Rodrigo
In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples its dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the
A probabilistic graphical model based stochastic input model construction
Wan, Jiang; Zabaras, Nicholas
2014-09-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.
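The polynomial chaos step can be sketched for a single standard normal germ; the lognormal test function is chosen because its probabilists'-Hermite coefficients are known in closed form (exp(1/2)/k!), which makes the sketch checkable.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order):
    """Probabilists'-Hermite polynomial chaos coefficients of f(xi), xi ~ N(0,1):
    c_k = E[f(xi) He_k(xi)] / k!, evaluated by Gauss-Hermite quadrature."""
    x, w = hermegauss(order + 10)
    w = w / w.sum()                        # normalize to standard-normal weights
    c = np.empty(order + 1)
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                     # coefficient vector selecting He_k
        c[k] = np.sum(w * f(x) * hermeval(x, basis)) / math.factorial(k)
    return c

# Lognormal test case exp(xi): exact coefficients are exp(1/2)/k!.
c = pce_coeffs(np.exp, 8)
```

In the paper's setting the germ would be the set of reduced random variables, with the dependence structure among them supplied by the learned graphical model rather than assumed independent.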
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
Ma, Xiao; Dong, Jin; Djouadi, Seddik M; Nutaro, James J; Kuruganti, Teja
2015-01-01
The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by Semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency by utilizing the proposed control approach.
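Dropping the linear and probabilistic constraints, the quadratic core of such a controller reduces to a discrete-time LQR, which can be sketched as follows; the one-state zone-temperature model and all numbers are hypothetical, and the SDP machinery of the paper is omitted.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation
    (the paper's cSLQC adds linear and probabilistic constraints via SDP; this
    sketch keeps only the unconstrained quadratic core)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical first-order zone-temperature model: x[k+1] = a*x[k] + b*u[k] + w[k]
A = np.array([[0.95]])
B = np.array([[0.1]])
Q = np.array([[1.0]])      # penalize temperature deviation from setpoint
R = np.array([[0.1]])      # penalize HVAC actuation (power use)
K = dlqr(A, B, Q, R)
A_cl = A - B @ K           # closed-loop dynamics under u = -K x
```

Adding the chance constraint on comfort bounds is what turns this into the semidefinite program described in the abstract.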
Unstable infinite nuclear matter in stochastic mean field approach
Colonna, M.; Chomaz, P. (Laboratorio Nazionale del Sud, Viale Andrea Doria, Catania)
1994-04-01
In this article, we consider a semiclassical stochastic mean-field approach. In the case of unstable infinite nuclear matter, we calculate the characteristic time of the exponential growth of fluctuations and the diffusion coefficients associated with the unstable modes, in the framework of the Boltzmann-Langevin theory. These two quantities are essential to describe the dynamics of fluctuations and instabilities since, in the unstable regions, the evolution of the system will be dominated by the amplification of fluctuations. In order to make realistic 3D calculations feasible, we suggest replacing the complicated Boltzmann-Langevin theory with a simpler stochastic mean-field approach corresponding to a standard Boltzmann evolution, complemented by a simple noise chosen to reproduce the dynamics of the most unstable modes. Finally, we explain how to approximately implement this method by simply tuning the noise associated with the use of a finite number of test particles in Boltzmann-like calculations.
Martingale Approach to Stochastic Control with Discretionary Stopping
Karatzas, Ioannis; Zamfirescu, Ingrid-Mona
2006-03-15
We develop a martingale approach for continuous-time stochastic control with discretionary stopping. The relevant Dynamic Programming Equation and Maximum Principle are presented. Necessary and sufficient conditions are provided for the optimality of a control strategy; these are analogues of the 'equalization' and 'thriftiness' conditions introduced by Dubins and Savage (1976) in a related, discrete-time context. The existence of a thrifty control strategy is established.
Implications of a stochastic approach to air-quality regulations
Witten, A.J.; Kornegay, F.C.; Hunsaker, D.B. Jr.; Long, E.C. Jr.; Sharp, R.D.; Walsh, P.J.; Zeighami, E.A.; Gordon, J.S.; Lin, W.L.
1982-09-01
This study explores the viability of a stochastic approach to air quality regulations. The stochastic approach considered here is one which incorporates the variability which exists in sulfur dioxide (SO₂) emissions from coal-fired power plants. Emission variability arises from a combination of many factors including variability in the composition of as-received coal such as sulfur content, moisture content, ash content, and heating value, as well as variability which is introduced in power plant operations. The stochastic approach as conceived in this study addresses variability by taking the SO₂ emission rate to be a random variable with specified statistics. Given the statistical description of the emission rate and known meteorological conditions, it is possible to predict the probability of a facility exceeding a specified emission limit or violating an established air quality standard. This study also investigates the implications of accounting for emissions variability by allowing compliance to be interpreted as an allowable probability of occurrence of given events. For example, compliance with an emission limit could be defined as the probability of exceeding a specified emission value, such as 1.2 lb SO₂/MMBtu, being less than 1%. In contrast, compliance is currently taken to mean that this limit shall never be exceeded, i.e., no exceedance probability is allowed. The focus of this study is on the economic benefits offered to facilities through the greater flexibility of the stochastic approach as compared with possible changes in air quality and health effects which could result.
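The exceedance-probability calculation can be sketched under a lognormal assumption for the emission rate; the mean and coefficient of variation below are illustrative, not from the study.

```python
import math

def p_exceed_lognormal(limit, mean, cv):
    """P(X > limit) for a lognormal emission rate X with the given arithmetic
    mean and coefficient of variation (an assumed distributional form)."""
    s2 = math.log(1.0 + cv**2)             # variance of log X
    mu = math.log(mean) - 0.5 * s2         # mean of log X
    z = (math.log(limit) - mu) / math.sqrt(s2)
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # standard normal tail

# e.g. mean 0.9 lb SO2/MMBtu with 30% variability, against a 1.2 lb/MMBtu limit
p = p_exceed_lognormal(1.2, 0.9, 0.3)
```

A regulator adopting the stochastic interpretation would compare this probability against an allowed exceedance frequency (say 1%) instead of requiring that the limit never be exceeded.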
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
NASA Astrophysics Data System (ADS)
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach.
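The simulated-annealing fitting step can be sketched for a single two-dimensional fracture; the full scheme of the paper perturbs entire networks and adds similarity and clustering moves, which are omitted here, and the synthetic point cloud below is hypothetical.

```python
import numpy as np

def seg_dist(pts, a, b):
    """Euclidean distance from each point to the segment a-b."""
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(pts - (a + t[:, None] * ab), axis=1)

def anneal_segment(pts, n_iter=3000, T0=1.0, step=0.1, seed=0):
    """Simulated annealing of one 2-D 'fracture' (line segment) fitted to a
    point cloud by minimizing the summed point-to-segment distance."""
    rng = np.random.default_rng(seed)
    a, b = pts.min(0), pts.max(0)          # initial guess: bounding-box diagonal
    cost = seg_dist(pts, a, b).sum()
    best = (a.copy(), b.copy(), cost)
    for i in range(n_iter):
        T = T0 * (1.0 - i / n_iter) + 1e-9          # linear cooling schedule
        a2, b2 = a + rng.normal(0, step, 2), b + rng.normal(0, step, 2)
        c2 = seg_dist(pts, a2, b2).sum()
        if c2 < cost or rng.random() < np.exp((cost - c2) / T):
            a, b, cost = a2, b2, c2                 # Metropolis acceptance
            if cost < best[2]:
                best = (a.copy(), b.copy(), cost)
    return best

rng = np.random.default_rng(1)
t = rng.random(200)
pts = np.stack([4.0 * t, 2.0 * t], axis=1) + rng.normal(0, 0.05, (200, 2))
a_fit, b_fit, cost_fit = anneal_segment(pts)
```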
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. An inverse problem solved with optimization usually brings about the issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
A new approach to the group analysis of one-dimensional stochastic differential equations
NASA Astrophysics Data System (ADS)
Abdullin, M. A.; Meleshko, S. V.; Nasyrov, F. S.
2014-03-01
Stochastic evolution equations are investigated using a new approach to the group analysis of stochastic differential equations. It is shown that the proposed approach reduces the problem of group analysis for this type of equation to the same problem of group analysis for evolution equations of a special form without stochastic integrals.
Webster, Clayton G; Gunzburger, Max D
2013-01-01
We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can both be deterministic or stochastic. Given an objective we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions), and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point, deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates, for the fully discrete problems, will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical
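The collocation idea can be sketched on a scalar model problem; a Gauss-Hermite rule in one random dimension stands in for the sparse grid, and the adjoint-based optimization loop is omitted. The decay model and its parameters are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_mean(u0=1.0, T=1.0, mu=1.0, sigma=0.3, n_nodes=12):
    """Stochastic collocation for u' = -k u with k ~ N(mu, sigma^2): solve the
    deterministic problem at each Gauss-Hermite node, then weight the results.
    Each node is an independent deterministic solve, hence the decoupling."""
    x, w = hermegauss(n_nodes)
    w = w / w.sum()                        # standard-normal quadrature weights
    k = mu + sigma * x                     # collocation points in parameter space
    u_T = u0 * np.exp(-k * T)              # deterministic solution at each node
    return np.sum(w * u_T)                 # quadrature estimate of E[u(T)]

mean_uT = collocation_mean()
# Exact value for comparison: E[u(T)] = u0 * exp(-mu*T + 0.5*sigma^2*T^2)
```

In the paper, the deterministic solve at each collocation point is a PDE-constrained adjoint computation, and a sparse grid keeps the number of points manageable in higher dimensions.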
NASA Astrophysics Data System (ADS)
Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris
2016-04-01
Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e., wind speed and solar radiation, respectively), which is an essential component of both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to the road and electricity grid network), environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental and financial criteria. The final outcome is a set of GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system extended to the regional scale.
Barkhausen discontinuities and hysteresis of ferromagnetics: New stochastic approach
Vengrinovich, Valeriy
2014-02-18
The magnetization of a ferromagnetic material is considered as a periodically inhomogeneous Markov process. The theory accommodates both statistically independent and correlated Barkhausen discontinuities. The model, based on the theory of chain evolution-type processes, assumes that the domain structure of a ferromagnet passes successively through the stages of linear growth, exponential acceleration, and domain annihilation to zero density at magnetic saturation. Solving the stochastic Kolmogorov differential equation enables calculation of the hysteresis loop.
A Stochastic Differential Equation Approach To Multiphase Flow In Porous Media
NASA Astrophysics Data System (ADS)
Dean, D.; Russell, T.
2003-12-01
The motivation for using stochastic differential equations in multiphase flow systems stems from our work in developing an upscaling methodology for single phase flow. The long term goals of this project include: I. Extending this work to a nonlinear upscaling methodology II. Developing a macro-scale stochastic theory of multiphase flow and transport that accounts for micro-scale heterogeneities and interfaces. In this talk, we present a stochastic differential equation approach to multiphase flow, a typical example of which is flow in the unsaturated domain. Specifically, a two phase problem is studied which consists of a wetting phase and a non-wetting phase. The approach given results in a nonlinear stochastic differential equation describing the position of the non-wetting phase fluid particle. Our fundamental assumption is that the flow of fluid particles is described by a stochastic process and that the positions of the fluid particles over time are governed by the law of the process. It is this law which we seek to determine. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase which in turn depends on the position of the fluid particles in the experimental domain. The concept of a fluid particle is central to the development of the model described in this talk. Expressions for both saturation and volumetric fraction are developed using the fluid particle concept. Darcy's law and the continuity equation are then used to derive a Fokker-Planck equation using these expressions. The Ito calculus is then applied to derive a stochastic differential equation for the non-wetting phase. This equation has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDE's are typically used to model dispersion
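The nonlinear SDE described above, with state-dependent drift and diffusion, is the kind of equation one would integrate with an Euler-Maruyama scheme. The sketch below uses stand-in linear coefficients (an Ornstein-Uhlenbeck process, not the paper's saturation-dependent multiphase coefficients) purely to illustrate the scheme for dX = a(X) dt + b(X) dW:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW (Ito sense),
    vectorized over an ensemble of paths."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Illustrative stand-in coefficients: drift a(x) = -0.5 x, diffusion b(x) = 0.3.
# 5000 paths started at x = 1, integrated to T = 5 with dt = 1e-3.
rng = np.random.default_rng(1)
x_final = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3, np.ones(5000), 1e-3, 5000, rng)
```

For this linear test case the ensemble mean and variance at time T are known in closed form (exp(-0.5 T) and 0.09(1 - exp(-T)) respectively), which gives a simple correctness check before moving to nonlinear coefficients.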
Approaching complexity by stochastic methods: From biological systems to turbulence
NASA Astrophysics Data System (ADS)
Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2011-09-01
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
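The reconstruction step (i) above is commonly implemented by estimating the first two Kramers-Moyal coefficients, the drift D1(x) and diffusion D2(x), from binned conditional moments of the increments. A bare-bones sketch on synthetic Ornstein-Uhlenbeck data (all settings illustrative, not from the review):

```python
import numpy as np

def kramers_moyal(x, dt, n_bins=40):
    """Estimate drift D1(x) and diffusion D2(x) of a scalar time series
    from binned conditional moments of the increments."""
    dx, x0 = np.diff(x), x[:-1]
    edges = np.linspace(x0.min(), x0.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x0, edges) - 1, 0, n_bins - 1)
    d1 = np.full(n_bins, np.nan)
    d2 = np.full(n_bins, np.nan)
    counts = np.zeros(n_bins, dtype=int)
    for b in range(n_bins):
        sel = dx[idx == b]
        counts[b] = sel.size
        if sel.size >= 100:            # skip poorly sampled bins
            d1[b] = sel.mean() / dt
            d2[b] = (sel ** 2).mean() / (2.0 * dt)
    return centers, d1, d2, counts

# Synthetic data from dX = -X dt + sqrt(2) dW, so the truth is D1(x) = -x, D2 = 1.
rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
noise = np.sqrt(2.0 * dt) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - dt) + noise[i]
centers, d1, d2, counts = kramers_moyal(x, dt)
```

Real data require the additional checks the review discusses, in particular verifying the Markov property above the Markov-Einstein scale before interpreting the estimated coefficients.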
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
Stochastic thermodynamics of reactive systems: An extended local equilibrium approach
NASA Astrophysics Data System (ADS)
De Decker, Yannick; Derivaux, Jean-François; Nicolis, Grégoire
2016-04-01
The recently developed extended local equilibrium approach to stochastic thermodynamics is applied to reactive systems. The properties of the fluctuating entropy and entropy production are analyzed for general linear and for prototypical nonlinear kinetic processes. It is shown that nonlinear kinetics typically induces deviations of the mean entropy production from its value in the deterministic (mean-field) limit. The probability distributions around the mean are derived and shown to differ qualitatively in thermodynamic equilibrium, under nonequilibrium conditions, and in the vicinity of criticalities associated with the onset of multistability. In each case large-deviation-type properties are shown to hold. The results are compared with those of alternative approaches developed in the literature.
Stochastic control approaches for sensor management in search and exploitation
NASA Astrophysics Data System (ADS)
Hitchings, Darin Chester
new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of a RH controller for sensor allocation in the presence of moving objects We also consider an adaptive Search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve this problem. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor that partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on quality of supervisory control.
Majorana approach to the stochastic theory of line shapes
NASA Astrophysics Data System (ADS)
Komijani, Yashar; Coleman, Piers
2016-08-01
Motivated by recent Mössbauer experiments on strongly correlated mixed-valence systems, we revisit the Kubo-Anderson stochastic theory of spectral line shapes. Using a Majorana representation for the nuclear spin we demonstrate how to recast the classic line-shape theory in a field-theoretic and diagrammatic language. We show that the leading contribution to the self-energy can reproduce most of the observed line-shape features including splitting and line-shape narrowing, while the vertex and the self-consistency corrections can be systematically included in the calculation. This approach permits us to predict the line shape produced by an arbitrary bulk charge fluctuation spectrum providing a model-independent way to extract the local charge fluctuation spectrum of the surrounding medium. We also derive an inverse formula to extract the charge fluctuation from the measured line shape.
ENISI SDE: A New Web-Based Tool for Modeling Stochastic Processes.
Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep
2015-01-01
Modeling and simulation approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool with stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) a discussion of SDE as a generic approach for stochastic modeling in computational biology; (2) the development of ENISI SDE, a web-based user-friendly SDE modeling tool that closely resembles regular ODE-based modeling; (3) the application of ENISI SDE through a use case for studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differentiation ODE model has been published [8] and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model, and shows the effectiveness of SDE as a stochastic modeling approach in biology in general and immunology in particular, as well as the power of ENISI SDE. PMID:26357217
A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects
NASA Astrophysics Data System (ADS)
Samin, Adib J.
2016-05-01
In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
Text Classification Using ESC-Based Stochastic Decision Lists.
ERIC Educational Resources Information Center
Li, Hang; Yamanishi, Kenji
2002-01-01
Proposes a new method of text classification using stochastic decision lists, ordered sequences of IF-THEN-ELSE rules. The method can be viewed as a rule-based method for text classification having advantages of readability and refinability of acquired knowledge. Advantages of rule-based methods over non-rule-based ones are empirically verified.…
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of the hydraulic conductivity (lnK) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
Zhijie Xu
2014-07-01
We present a new stochastic analysis for steady and transient one-dimensional heat conduction problems based on the homogenization approach. Thermal conductivity is assumed to be a random field K consisting of a total of N random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution (through the homogenized temperature) and the configurational contribution (through the random conductivity variables). The configurational contribution is shown to be proportional to the local gradient of the homogenized temperature. Large uncertainty of the T field was found at locations with large gradients of the homogenized temperature, due to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method, and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.
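The decomposition described above can be checked with a small Monte Carlo experiment. For the steady 1D problem with i.i.d. cell conductivities, the cells act as thermal resistors in series, so the effective (homogenized) conductivity of each realization is the harmonic mean of the cell values. The distribution below is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, n_cells = 2000, 50

# i.i.d. lognormal cell conductivities (illustrative choice of distribution)
K = rng.lognormal(mean=0.0, sigma=0.3, size=(n_real, n_cells))

# Steady 1D conduction across a unit slab, T(0) = 1, T(1) = 0, equal cell widths.
# Effective conductivity per realization = harmonic mean of the cell values.
K_eff = n_cells / (1.0 / K).sum(axis=1)

# Temperature at the right edge of cell j: T_j = 1 - K_eff * (partial thermal resistance)
cum_resist = np.cumsum(1.0 / K, axis=1) / n_cells
T = 1.0 - K_eff[:, None] * cum_resist

T_mean, T_var = T.mean(axis=0), T.var(axis=0)
```

Consistent with the analysis, the mean temperature profile is close to the linear homogenized solution, while the variance vanishes at the fixed boundaries and peaks in the interior.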
Calculation of a double reactive azeotrope using stochastic optimization approaches
NASA Astrophysics Data System (ADS)
Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus
2013-02-01
A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises from real world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (of industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
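Of the two metaheuristics, the Luus-Jaakola adaptive random search is the simpler to sketch: sample uniformly in a box around the incumbent and contract the box after each outer pass. The example below recasts root-finding as minimization of the squared residual norm on a stand-in two-equation system (hypothetical, not the MTBE equilibrium model; all settings illustrative):

```python
import numpy as np

def luus_jaakola(f, lo, hi, n_outer=100, n_inner=40, shrink=0.9, seed=0):
    """Luus-Jaakola adaptive random search: sample uniformly in a box
    around the incumbent and contract the box after every outer pass."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = f(x)
    r = (hi - lo) / 2.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = np.clip(x + rng.uniform(-r, r), lo, hi)
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
        r *= shrink
    return x, fx

# Stand-in system g(x, y) = 0 with two roots at (1/sqrt(2), 1/sqrt(2)) and its mirror.
def g(v):
    x, y = v
    return np.array([x ** 2 + y ** 2 - 1.0, x - y])

residual = lambda v: float(np.sum(g(v) ** 2))
x_star, f_star = luus_jaakola(residual, [-2.0, -2.0], [2.0, 2.0])
```

Because the system has multiple roots, a multi-root strategy such as the paper mentions would rerun the search from different seeds or deflate already-found solutions.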
Bogen, K T
2007-05-11
A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual-MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding-factor approach, similar to that used in reference dose procedures for classic toxicity endpoints, can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically to the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage
Stochastic Coloured Petrinet Based Healthcare Infrastructure Interdependency Model
NASA Astrophysics Data System (ADS)
Nukavarapu, Nivedita; Durbha, Surya
2016-06-01
The Healthcare Critical Infrastructure (HCI) protects all sectors of society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or manmade disaster. However, for continuity of operations and service delivery, HCI depends on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply System. During a mass-casualty event caused by disasters such as floods, a major challenge for the HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue the HCI should be disaster prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and provide an opportunity for all stakeholders to work together in a coordinated response to a disaster. This paper presents interdependencies related to HCI based on a Stochastic Coloured Petri Net (SCPN) modelling and simulation approach, given a flood scenario as the disaster disrupting infrastructure nodes. The entire model is integrated with a geographic information based decision support system to visualize the dynamic behaviour of the interdependency of the Healthcare and related CI network in a geographically based environment.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.
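For overdamped Langevin dynamics $\dot{x} = f(x) + \sqrt{2D}\,\xi(t)$ with Gaussian white noise $\xi$, path probability functionals of the kind discussed above are often summarized by the Onsager-Machlup form (a standard textbook expression, quoted here as context rather than as this paper's own notation):

```latex
P[x(\cdot)] \;\propto\; \exp\!\left[-\frac{1}{4D}\int_0^T \bigl(\dot{x}(t) - f\bigl(x(t)\bigr)\bigr)^2\,\mathrm{d}t\right]
```

up to a discretization-dependent Jacobian term $\tfrac{1}{2}\int_0^T f'(x(t))\,\mathrm{d}t$; the probability of remaining inside a small tube around a reference path is then governed by this action evaluated along that path.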
An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments
NASA Astrophysics Data System (ADS)
Kang, Yuncheol; Prabhu, Vittal
The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately, the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
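The MDP formulation can be made concrete with a small numerical sketch. Everything below is hypothetical (states, transition probabilities, and rewards are invented for illustration, not taken from the paper): states are implementation-fidelity levels, actions are "maintain" vs. "invest in coaching", and value iteration recovers the optimal policy:

```python
import numpy as np

# Hypothetical 3-state, 2-action model (all numbers invented for illustration):
# states: 0 = low, 1 = medium, 2 = high implementation fidelity
# actions: 0 = maintain, 1 = invest in coaching
P = np.array([  # P[a, s, s']: transition probabilities under each action
    [[0.70, 0.20, 0.10], [0.30, 0.50, 0.20], [0.10, 0.30, 0.60]],
    [[0.40, 0.40, 0.20], [0.10, 0.50, 0.40], [0.05, 0.15, 0.80]],
])
R = np.array([  # R[s, a]: outcome benefit minus action cost
    [0.0, -1.0],
    [2.0, 1.0],
    [5.0, 4.0],
])
gamma = 0.9  # discount factor

# Value iteration: iterate the Bellman optimality operator to a fixed point
V = np.zeros(3)
for _ in range(500):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=1)  # best action in each state
```

The same machinery extends to the implementation setting by enlarging the state to include observed program outcomes, at the cost of the usual curse of dimensionality.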
Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Carlen, Eric Anders
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.
A stochastic modeling methodology based on weighted Wiener chaos and Malliavin calculus
Wan, Xiaoliang; Rozovskii, Boris; Karniadakis, George Em
2009-01-01
In many stochastic partial differential equations (SPDEs) involving random coefficients, modeling the randomness by spatial white noise may lead to ill-posed problems. Here we consider an elliptic problem with spatial Gaussian coefficients and present a methodology that resolves this issue. It is based on stochastic convolution implemented via generalized Malliavin operators in conjunction with weighted Wiener spaces that ensure the ellipticity condition. We present theoretical and numerical results that demonstrate the fast convergence of the method in the proper norm. Our approach is general and can be extended to other SPDEs and other types of multiplicative noise. PMID:19666498
Bieda, Bogusław
2013-01-01
The paper is concerned with the application and benefits of Monte Carlo (MC) simulation, proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was used to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in MATLAB, is the foundation for simulating and estimating the thickness of the landfill's compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of a frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data used include available published figures as well as data concerning Mittal Steel Poland (MSP) S.A. in Kraków, Poland. The paper discusses the results and shows that the presented approach is applicable to the compacted clay liner thickness design of any MSW landfill. PMID:23194922
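The core of such an analysis — propagating input uncertainty through a transit-time expression — does not require a spreadsheet add-in. The sketch below reproduces the idea in plain Python for the advective transit time t = TH · POROS / (HC · HG); the input distributions are illustrative placeholders, not the MSP site data or the paper's Z2 expression:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo trials

# Hypothetical input distributions (illustrative values, NOT the site data)
HC = rng.lognormal(mean=np.log(1e-9), sigma=0.5, size=n)   # hydraulic conductivity, m/s
HG = rng.uniform(0.2, 0.4, size=n)                         # hydraulic gradient, -
POROS = rng.normal(0.4, 0.03, size=n)                      # porosity, -
TH = rng.normal(1.0, 0.05, size=n)                         # liner thickness, m

# Advective transit time through the liner, converted to years
seconds_per_year = 365.25 * 24 * 3600.0
t_years = TH * POROS / (HC * HG) / seconds_per_year

# Percentile summary, analogous to the Crystal Ball forecast report
p5, p50, p95 = np.percentile(t_years, [5, 50, 95])
```

A tornado-style sensitivity ranking can then be obtained by correlating each sampled input with the simulated transit times.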
NASA Astrophysics Data System (ADS)
Lajus, D. L.; Sukhotin, A. A.
1998-06-01
One of the most effective techniques for evaluating stress is the analysis of developmental stability, measured by stochastic variation based particularly on fluctuating asymmetry, i.e. the variance of random deviations from perfect bilateral symmetry. However, the application of morphological methods is only possible when an organism lives under the test conditions during a significant part of its ontogenesis. In contrast to morphological characters, behavior can change very fast. Consequently, methods based on behavioural characters may have advantages over more traditional approaches. In this study we describe a technique for assessing stochastic variation using not morphological but behavioural characters. To measure the stochastic variation of a behavioural response, we assessed the stability of the isolation reaction of the blue mussel Mytilus edulis under regular changes of salinity. With increasing temperature from +12°C to +20°C, the stochastic variation of the isolation reaction increased, which is a common response to changing environmental conditions. In this way, we have developed a method for assessing the stochastic variation of behavioural responses in molluscs. This method may find a wide range of applications, because it does not require keeping animals under the test conditions for a long time.
Revisiting the Cape Cod Bacteria Injection Experiment Using a Stochastic Modeling Approach
Maxwell, R M; Welty, C; Harvey, R W
2006-11-22
Bromide and resting-cell bacteria tracer tests carried out in a sand and gravel aquifer at the USGS Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach and Lagrangian particle tracking numerical methods. Bacteria transport was strongly coupled to colloid filtration through functional dependence of local-scale colloid transport parameters on hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Information on geostatistical characterization of the hydraulic conductivity (K) field from a nearby plot was utilized as input that was unavailable when the original analysis was carried out. A finite difference model for groundwater flow and a particle-tracking model of conservative solute transport were calibrated to the bromide-tracer breakthrough data using the aforementioned geostatistical parameters. An optimization routine was utilized to adjust the mean and variance of the lnK field over 100 realizations such that a best fit of a simulated, average bromide breakthrough curve was achieved. Once the optimal bromide fit was accomplished (based on adjusting the lnK statistical parameters in unconditional simulations), a stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of the mean bacteria breakthrough data were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech [1] equation for estimating single collector efficiency were compared to those using the Rajagopalan and Tien [2] model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions, with the Rajagopalan and Tien model yielding approximately a 30% lower peak concentration and less tailing than the Tufenkji and Elimelech formulation. Simulations using a distribution
NASA Astrophysics Data System (ADS)
Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.
1988-10-01
Essential to the solution of ill-posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted: many of the available constraints take the form of deterministic rules rather than probability distributions, and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation, and a regularization technique based on the classical autologistic transfer function, that allow us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32×32 array of processing elements configured in a Single-Instruction, Multiple-Data stream architecture.
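The Langevin-equation site update at the heart of this approach can be illustrated with a small sketch (my own toy code, not the authors' massively parallel implementation; the function and variable names are invented): one Euler-Maruyama step of overdamped Langevin dynamics, whose stationary law is the target Gibbs density.

```python
import math
import random

def langevin_step(x, grad_log_p, step, rng):
    """One Euler-Maruyama step of overdamped Langevin dynamics,
    dx = grad log p(x) dt + sqrt(2) dW, whose stationary law is p."""
    return [xi + step * g + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
            for xi, g in zip(x, grad_log_p(x))]

# Toy run: with grad log p(x) = -x the chain samples a standard Gaussian.
rng = random.Random(1)
x, samples = [0.0], []
for i in range(20000):
    x = langevin_step(x, lambda v: [-vi for vi in v], 0.05, rng)
    if i >= 10000:  # discard burn-in
        samples.append(x[0])
```

Iterating `langevin_step` with the score of any smooth target density yields approximate samples from that density; the Gaussian toy run above converges to roughly zero mean and unit variance.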
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
Evolving Stochastic Learning Algorithm based on Tsallis entropic index
NASA Astrophysics Data System (ADS)
Anastasiadis, A. D.; Magoulas, G. D.
2006-03-01
In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than using the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.
Wavelet-expansion-based stochastic response of chain-like MDOF structures
NASA Astrophysics Data System (ADS)
Kong, Fan; Li, Jie
2015-12-01
This paper presents a wavelet-expansion-based approach for response determination of a chain-like multi-degree-of-freedom (MDOF) structure subject to full non-stationary stochastic excitations. Specifically, the generalized harmonic wavelet (GHW) is first utilized as the expansion basis to solve the dynamic equation of structures via the Galerkin treatment. In this way, a linear matrix relationship between the deterministic response and excitation can be derived. Further, considering the GHW-based representation of the stochastic processes, a time-varying power spectrum density (PSD) relationship on a certain wavelet scale or frequency band between the excitation and response is derived. Finally, pertinent numerical simulations, including deterministic dynamic analysis and Monte Carlo simulations of both the response PSD and the story-drift-based reliability, are utilized to validate the proposed approach.
Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.
2014-08-01
In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. To this end, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower-triangular systems of algebraic equations which can be solved directly by forward substitution. The rate of convergence of the proposed method is also analysed and shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method by some examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pulse functions method.
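The forward substitution that solves the lower-triangular systems mentioned above can be sketched as follows (an illustrative stand-alone routine, not the authors' code):

```python
def forward_substitution(L, b):
    """Solve L x = b where L is lower triangular with nonzero diagonal."""
    x = []
    for i, bi in enumerate(b):
        # Subtract the contribution of the already-computed unknowns.
        s = sum(L[i][j] * x[j] for j in range(i))
        x.append((bi - s) / L[i][i])
    return x

# Example: [[2, 0], [1, 3]] x = [4, 5] has the solution x = [2, 1].
```

Each unknown is obtained in one pass, which is why such systems cost only O(n^2) operations to solve.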
NASA Astrophysics Data System (ADS)
Pool, M.; Carrera, J.; Alcolea, A.; Bocanegra, E. M.
2015-12-01
Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in the scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for regional aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of (i) model fits to head and concentration data (available for nearly a century), (ii) geological plausibility of the estimated T fields, and (iii) their ability to predict transport. We also address the impact of conditioning the estimated fields on T data coming from either pumping tests interpreted with the Theis method or specific capacity values from step-drawdown tests. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels and low-transmissivity regions associated with quartzite outcrops) and yield better fits to calibration data than the much simpler geology-based deterministic model, which cannot properly address model structure uncertainty. However, the latter demonstrates much greater robustness for predicting seawater intrusion and for incorporating concentrations as calibration data. We attribute the poor performance, and underestimated uncertainty, of the stochastic simulations to estimation bias introduced by model errors. Qualitative geological information is extremely rich in identifying large-scale variability patterns, which are identified by stochastic models only in data-rich areas, and should be explicitly included in the calibration process.
Exploring stochasticity and imprecise knowledge based on linear inequality constraints.
Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf
2016-09-01
This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge. PMID:26746217
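A minimal Hit-and-Run sampler over a polytope {x : Ax <= b}, as used above to capture parameter uncertainty, might look like this (my own sketch with a uniform jump rule, not the authors' implementation; all names are invented):

```python
import math
import random

def hit_and_run(A, b, x0, n_samples, rng):
    """Sample the polytope {x : A x <= b} by Hit-and-Run.
    x0 must be a strictly interior starting point."""
    x = list(x0)
    out = []
    for _ in range(n_samples):
        # Draw a random direction on the unit sphere.
        d = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # Chord through x along d: feasible t satisfies t * (A d)_i <= slack_i.
        lo, hi = -math.inf, math.inf
        for row, bi in zip(A, b):
            ad = sum(r * di for r, di in zip(row, d))
            slack = bi - sum(r * xi for r, xi in zip(row, x))
            if ad > 1e-12:
                hi = min(hi, slack / ad)
            elif ad < -1e-12:
                lo = max(lo, slack / ad)
        # Jump to a uniformly chosen point on the feasible chord.
        t = rng.uniform(lo, hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        out.append(list(x))
    return out

# Demo: the unit square 0 <= x, y <= 1 written as A x <= b.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
pts = hit_and_run(A, b, [0.5, 0.5], 200, random.Random(0))
```

Every iterate stays inside the polytope by construction, which is what makes the method attractive for sampling parameter sets constrained by linear inequalities.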
Inversion of Robin coefficient by a spectral stochastic finite element approach
Jin, Bangti; Zou, Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. PMID:24996205
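The threshold structure of the optimal policy can be reproduced in a stripped-down model (a toy backward induction of my own devising that ignores the changing female condition; the names and numbers are invented):

```python
def optimal_thresholds(horizon, qualities, cost):
    """Backward induction for a toy sequential mate-choice model: each
    period the female meets one male whose quality is drawn uniformly
    from `qualities`; mating ends the search, rejecting costs `cost` and
    continues. Returns per-period acceptance thresholds and the expected
    reward at the start of the season."""
    V = 0.0                      # value after the season: no mate, zero reward
    thresholds = []
    for _ in range(horizon):
        thr = V - cost           # accept a male iff his quality >= thr
        V = sum(max(q, thr) for q in qualities) / len(qualities)
        thresholds.append(thr)
    thresholds.reverse()         # thresholds[t] applies to encounter t
    return thresholds, V

# With qualities 1..3, cost 0.5 and two encounters, the female is
# selective on the first encounter and takes any male on the last.
thrs, value = optimal_thresholds(2, [1, 2, 3], 0.5)
```

The acceptance threshold is exactly the continuation value minus the search cost, mirroring the quality-threshold result stated in the abstract.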
Robust synthetic biology design: stochastic game theory approach
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-01-01
Motivation: Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic of synthetic biology. Results: A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi–Sugeno (T–S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. Availability: http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf Contact: bschen@ee.nthu.edu.tw Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19435742
A Markov model based analysis of stochastic biochemical systems.
Ghosh, Preetam; Ghosh, Samik; Basu, Kalyan; Das, Sajal K
2007-01-01
The molecular networks regulating basic physiological processes in a cell are generally converted into rate equations, treating the numbers of biochemical molecules as deterministic variables. At steady state these rate equations give a set of differential equations that are solved using numerical methods. However, the stochastic cellular environment motivates us to propose a mathematical framework for analyzing such biochemical molecular networks. Stochastic simulators that solve a system of differential equations include this stochasticity in the model, but suffer from simulation stiffness and require huge computational overheads. This paper describes a new Markov chain based model to simulate such complex biological systems with reduced computation and memory overheads. The central idea is to transform the continuous-domain chemical master equation (CME) based method into a discrete domain of molecular states with corresponding state transition probabilities and times. Our methodology allows the basic optimization schemes devised for the CME and can also be extended to reduce the computational and memory overheads appreciably at the cost of accuracy. The simulation results for the standard Enzyme-Kinetics and Transcriptional Regulatory systems show promising correspondence with CME based methods and point to the efficacy of our scheme. PMID:17951818
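For context, the continuous-time stochastic treatment that such discrete-state models approximate exactly is Gillespie's stochastic simulation algorithm; a minimal birth-death sketch (my own illustration with invented rate values, not the paper's framework):

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_max, rng):
    """Exact SSA for the network 0 -> X (rate k_birth), X -> 0 (rate k_death * x)."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        # Exponential waiting time to the next reaction event.
        t += -math.log(1.0 - rng.random()) / a_total
        # Choose which reaction fires, proportionally to its propensity.
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        path.append((t, x))
    return path

path = gillespie_birth_death(5.0, 0.5, 0, 50.0, random.Random(3))
```

The molecule count performs a random walk whose stationary mean is k_birth / k_death; the cost of simulating every single event is exactly the overhead the abstract's coarser state-transition model seeks to reduce.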
a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
A stochastic approach to the hadron spectrum. III
Aron, J.C.
1986-12-01
The connection with the quarks of the stochastic model proposed in the two preceding papers is studied; the slopes of the baryon trajectories are calculated with reference to the quarks. Suggestions are made for the interpretation of the model (quadratic or linear addition of the contributions to the mass, dependence of the decay on the quantum numbers of the hadrons involved, etc.) and concerning its link with the quarkonium model, which describes the mesons with charm or beauty. The controversial question of the "subquantum level" is examined.
A fast and scalable recurrent neural network based on stochastic meta descent.
Liu, Zhenzhen; Elhanany, Itamar
2008-09-01
This brief presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links. This yields a reduced storage and computational complexity of O(N^2). Stochastic meta descent (SMD), an adaptive step size scheme for stochastic gradient-descent problems, is employed as a means of incorporating curvature information in order to substantially accelerate the learning process. We also introduce a clustered version of our algorithm to further improve its scalability attributes. Despite the dramatic reduction in resource requirements, it is shown through simulation results that the approach outperforms regular RTRL by almost an order of magnitude. Moreover, the scheme lends itself to parallel hardware realization by virtue of the localized property that is inherent to the learning framework. PMID:18779096
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Cartwright, Keigh
2014-10-01
To have a high degree of confidence in simulations, one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method, and how it impacts code verification and validation. A technique is developed to determine the fully converged solution, with error bounds, from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g. Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
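The extrapolation idea behind such error estimation can be sketched for a single convergence parameter (illustrative code under an assumed pure power-law error model, not the talk's multi-parameter bootstrap regression; all names are invented):

```python
import math

def observed_order(f_h, f_h2, f_h4):
    """Estimate the convergence rate p from solutions on grids h, h/2, h/4,
    assuming the error behaves like C * h**p."""
    return math.log2(abs((f_h - f_h2) / (f_h2 - f_h4)))

def richardson(f_h, f_h2, p):
    """Richardson-extrapolated value from solutions at steps h and h/2."""
    return f_h2 + (f_h2 - f_h) / (2.0 ** p - 1.0)

# Synthetic data with true value 1 and a pure O(h^2) error: f(h) = 1 + h^2.
f1, f2, f4 = 1.0 + 0.4 ** 2, 1.0 + 0.2 ** 2, 1.0 + 0.1 ** 2
```

On this synthetic data the recovered order is 2 and the extrapolated value equals the true solution; on stochastic PIC output the same fit must additionally account for statistical noise, which is where the bootstrap enters.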
Integral-based event triggering controller design for stochastic LTI systems via convex optimisation
NASA Astrophysics Data System (ADS)
Mousavi, S. H.; Marquez, H. J.
2016-07-01
The presence of measurement noise in the event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise and effectively reduces the flow of communication between plant and controller, and also improves output performance. Using a Lyapunov approach, stability in the mean square sense is proved. A simulated example illustrates the properties of our approach.
Time-Frequency Approach for Stochastic Signal Detection
Ghosh, Ripul; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2011-10-20
The detection of events in a stochastic signal has been a subject of great interest. One of the oldest signal processing techniques, the Fourier transform, captures the frequency content of a signal, but it cannot resolve the exact onset of changes in frequency; all temporal information is contained in the phase of the transform. The spectrogram, on the other hand, is better able to resolve the temporal evolution of frequency content, but trades off time resolution against frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of non-stationary signals. The Wigner-Ville Distribution (WVD) is the most prominent quadratic time-frequency signal representation and is used for analysing frequency variations in signals. The WVD allows instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper, through simulations, describes the way time-frequency models are applied for the detection of events in a stochastic signal.
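The spectrogram's ability to localize a frequency change, as discussed above, can be demonstrated with a bare-bones windowed DFT (a rectangular-window sketch of my own; a real analysis would use smoother windows or the WVD itself):

```python
import cmath
import math

def spectrogram(signal, win, hop):
    """Magnitude spectrogram from a plain windowed DFT (rectangular window).
    Returns one list of |X[k]|, k = 0..win//2, per frame."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        frame = []
        for k in range(win // 2 + 1):
            coeff = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win))
            frame.append(abs(coeff))
        frames.append(frame)
    return frames

# A test signal whose frequency jumps from bin 2 to bin 5 half-way through.
sig = ([math.sin(2 * math.pi * 2 * n / 16) for n in range(16)]
       + [math.sin(2 * math.pi * 5 * n / 16) for n in range(16)])
frames = spectrogram(sig, 16, 16)
```

The dominant bin moves between the two frames, locating the frequency change to within one window length, which is exactly the time-frequency trade-off the abstract describes.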
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
A stochastic control approach to Slotted-ALOHA random access protocol
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio
2013-12-01
ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a grade of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
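The per-slot success probability that such an optimisation trades off against fairness can be written down directly for the symmetric case (a standard textbook expression, not the paper's stochastic control formulation):

```python
def success_probability(n_nodes, p):
    """Per-slot success probability of slotted ALOHA with n_nodes backlogged
    nodes, each transmitting independently with probability p: a slot
    succeeds iff exactly one node transmits."""
    return n_nodes * p * (1.0 - p) ** (n_nodes - 1)

# The efficiency-maximising transmission probability is p = 1/n,
# which a grid search over p recovers for n = 10 nodes.
grid = [i / 100.0 for i in range(1, 100)]
best = max(grid, key=lambda p: success_probability(10, p))
```

As n grows, the maximal success probability (1 - 1/n)^(n-1) tends to 1/e, the classic slotted-ALOHA throughput ceiling.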
NASA Astrophysics Data System (ADS)
Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.
2011-12-01
A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.
Stochastic queueing-theory approach to human dynamics
NASA Astrophysics Data System (ADS)
Walraevens, Joris; Demoor, Thomas; Maertens, Tom; Bruneel, Herwig
2012-02-01
Recently, numerous studies have shown that human dynamics cannot be described accurately by exponential laws. For instance, Barabási [Nature 435, 207 (2005)] demonstrates that waiting times of tasks to be performed by a human are more suitably modeled by power laws. He presumes that these power laws are caused by a priority selection mechanism among the tasks. Priority models are well-developed in queueing theory (e.g., for telecommunication applications), and this paper demonstrates the (quasi-)immediate applicability of such a stochastic priority model to human dynamics. By calculating generating functions and by studying them in their dominant singularity, we prove that nonexponential tails result naturally. Contrary to popular belief, however, these are not necessarily triggered by the priority selection mechanism.
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
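The log-normal closure property used above (if the rate constant k is log-normal, then the incubation time, proportional to 1/k, is log-normal too) is easy to check numerically (a quick sketch with invented parameter values, not the paper's fitted BSE parameters):

```python
import math
import random

def incubation_times(mu, sigma, n, rng):
    """Draw incubation times under the assumption that the rate constant k
    is log-normal(mu, sigma); the time ~ 1/k is then log-normal(-mu, sigma)."""
    return [1.0 / rng.lognormvariate(mu, sigma) for _ in range(n)]

times = incubation_times(2.0, 0.5, 5000, random.Random(7))
```

The logs of the sampled times cluster around -mu with spread sigma, confirming that inverting a log-normal variable simply flips the sign of its location parameter.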
Two-state approach to stochastic hair bundle dynamics
NASA Astrophysics Data System (ADS)
Clausznitzer, Diana; Lindner, Benjamin; Jülicher, Frank; Martin, Pascal
2008-04-01
Hair cells perform the mechanoelectrical transduction of sound signals in the auditory and vestibular systems of vertebrates. The part of the hair cell essential for this transduction is the so-called hair bundle. In vitro experiments on hair cells from the sacculus of the American bullfrog have shown that the hair bundle comprises active elements capable of producing periodic deflections like a relaxation oscillator. Recently, a continuous nonlinear stochastic model of the hair bundle motion [Nadrowski et al., Proc. Natl. Acad. Sci. U.S.A. 101, 12195 (2004)] has been shown to reproduce the experimental data faithfully in stochastic simulations. Here, we demonstrate that a binary filtering of the hair bundle's deflection (experimental data and continuous hair bundle model) does not significantly change the spectral statistics of the spontaneous as well as the periodically driven hair bundle motion. We map the continuous hair bundle model to the FitzHugh-Nagumo model of neural excitability and discuss the bifurcations between different regimes of the system in terms of the latter model. Linearizing the nullclines and assuming perfect time-scale separation between the variables, we can map the FitzHugh-Nagumo system to a simple two-state model in which each of the states corresponds to the two possible values of the binary-filtered hair bundle trajectory. For the two-state model, analytical expressions for the power spectrum and the susceptibility can be calculated [Lindner and Schimansky-Geier, Phys. Rev. E 61, 6103 (2000)] and show the same features as seen in the experimental data as well as in simulations of the continuous hair bundle model.
Zunino, L; Soriano, M C; Rosso, O A
2012-10-01
In this paper we introduce a multiscale symbolic information-theory approach for discriminating nonlinear deterministic and stochastic dynamics from time series associated with complex systems. More precisely, we show that the multiscale complexity-entropy causality plane is a useful representation space to identify the range of scales at which deterministic or noisy behaviors dominate the system's dynamics. Numerical simulations obtained from the well-known and widely used Mackey-Glass oscillator operating in a high-dimensional chaotic regime were used as test beds. The effect of an increased amount of observational white noise was carefully examined. The results obtained were contrasted with those derived from correlated stochastic processes and continuous stochastic limit cycles. Finally, several experimental and natural time series were analyzed in order to show the applicability of this scale-dependent symbolic approach in practical situations. PMID:23214666
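A core ingredient of the complexity-entropy causality plane used above is the Bandt-Pompe permutation entropy, which can be sketched in a few lines (single-scale only; my own illustrative code, not the authors'):

```python
import math
from collections import Counter

def permutation_entropy(series, order):
    """Normalized Bandt-Pompe permutation entropy: 0 for a fully
    predictable ordering structure, 1 for a maximally irregular one."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: the argsort of the window.
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))
```

A monotone series exhibits a single ordinal pattern (entropy 0), while a perfectly alternating one uses both order-2 patterns equally (entropy 1); the multiscale variant applies the same statistic to coarse-grained copies of the series.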
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays.
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-01
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited for a small number of reactions, saving computation time and without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models. PMID:25296793
Wavelet-Variance-Based Estimation for Composite Stochastic Processes.
Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia
2013-09-01
This article presents a new estimation method for the parameters of a time series model. We consider here composite Gaussian processes that are the sum of independent Gaussian processes which, in turn, explain an important aspect of the time series, as is the case in engineering and natural sciences. The proposed estimation method offers an alternative to classical estimation based on the likelihood, that is straightforward to implement and often the only feasible estimation method with complex models. The estimator furnishes results as the optimization of a criterion based on a standardized distance between the sample wavelet variances (WV) estimates and the model-based WV. Indeed, the WV provides a decomposition of the variance process through different scales, so that they contain the information about different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study to compare our estimator to the MLE and the LSE with different models. We also set sufficient conditions on composite models for our estimator to be consistent, that are easy to verify. We use the new estimator to estimate the stochastic error's parameters of the sum of three first order Gauss-Markov processes by means of a sample of over 800,000 issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online. PMID:24174689
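The Haar flavour of the wavelet variance can be sketched as follows (one common convention, chosen by me for illustration; the paper's estimator matches such sample WV against model-based WV across several scales):

```python
import math
import random

def haar_wavelet_variance(series, scale):
    """Empirical Haar wavelet variance at a given scale, taken here as the
    mean square of scaled differences of adjacent block averages (one
    common convention; for white noise of unit variance it equals 1/scale)."""
    coeffs = []
    for i in range(len(series) - 2 * scale + 1):
        m1 = sum(series[i:i + scale]) / scale
        m2 = sum(series[i + scale:i + 2 * scale]) / scale
        coeffs.append((m1 - m2) / math.sqrt(2.0))
    return sum(c * c for c in coeffs) / len(coeffs)

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(4000)]
wv1 = haar_wavelet_variance(white, 1)   # theory: near 1.0 for unit white noise
wv2 = haar_wavelet_variance(white, 2)   # theory: near 0.5
```

The scale-dependent decay of the WV is what identifies the noise type: white noise decays like 1/scale, whereas a Gauss-Markov component flattens out at scales beyond its correlation time.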
Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging.
Sansalone, Vittorio; Gagliardi, Davide; Desceliers, Christophe; Bousson, Valérie; Laredo, Jean-Denis; Peyrin, Françoise; Haïat, Guillaume; Naili, Salah
2016-02-01
Accurate and reliable assessment of bone quality requires predictive methods which could probe bone microstructure and provide information on bone mechanical properties. Multiscale modelling and simulation represent a fast and powerful way to predict bone mechanical properties based on experimental information on bone microstructure as obtained through X-ray-based methods. However, technical limitations of experimental devices used to inspect bone microstructure may produce blurry data, especially in in vivo conditions. Uncertainties affecting the experimental data (input) may undermine the reliability of the results predicted by the model (output). Since input data are uncertain, deterministic approaches are limited and new modelling paradigms are required. In this paper, a novel stochastic multiscale model is developed to estimate the elastic properties of bone while taking into account uncertainties in bone composition. Effective elastic properties of cortical bone tissue were computed using a multiscale model based on continuum micromechanics. Volume fractions of bone components (collagen, mineral, and water) were considered as random variables whose probabilistic description was built using the maximum entropy principle. The relevance of this approach was demonstrated by analysing a human bone sample taken from the inferior femoral neck. The sample was imaged using synchrotron radiation micro-computed tomography. 3-D distributions of Haversian porosity and tissue mineral density extracted from these images supplied the experimental information needed to build the stochastic models of the volume fractions. Thus, the stochastic multiscale model provided reliable statistical information (such as mean values and confidence intervals) on bone elastic properties at the tissue scale. Moreover, the existence of a simpler "nominal model", accounting for the main features of the stochastic model, was investigated. It was shown that such a model does exist, and its relevance
Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.
2009-12-01
We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method for long-period (LP) signals and tremor based on their amplitudes. The deterministic approach of the waveform inversion is useful for constraining the source mechanism and location, but is basically applicable only to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism explained the VLP signals better than an isotropic mechanism. These results indicate that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may thus be categorized as a stochastic approach based on the nature of scattered waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis
A stochastic process approach of the drake equation parameters
NASA Astrophysics Data System (ADS)
Glade, Nicolas; Ballet, Pascal; Bastien, Olivier
2012-04-01
The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step to quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate for the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that provides both a temporal structure to the Drake equation (i.e. introduces time into the Drake formula in order to obtain something like N(t)) and a first standard error measure.
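A minimal illustration of giving the Drake equation temporal structure (a toy model, not the authors' construction): civilisations emerge as a Poisson process and remain detectable for an exponentially distributed lifetime, so N(t) becomes a random quantity with both a mean and a spread. All rates are illustrative, not astrophysical estimates.

```python
import random

def drake_N(lam=0.5, mu=0.1, t_end=200.0, seed=0):
    """Toy temporal Drake sketch: civilisations emerge as a Poisson process
    (rate lam per unit time) and stay detectable for an Exp(mu) lifetime;
    returns N(t_end), the number still communicating at t_end."""
    random.seed(seed)
    t, alive = 0.0, 0
    while True:
        t += random.expovariate(lam)               # next emergence time
        if t > t_end:
            break
        if t + random.expovariate(mu) > t_end:     # lifetime reaches t_end
            alive += 1
    return alive

samples = [drake_N(seed=s) for s in range(500)]
n_mean = sum(samples) / len(samples)   # stationary mean is lam / mu = 5
```

Repeating the simulation yields not only a mean N(t) but also its dispersion, i.e. exactly the standard-error information the algebraic Drake equation cannot supply.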
A stochastic optimization approach for integrated urban water resource planning.
Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X
2013-01-01
Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Planning of urban water resources thus requires not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change. PMID:23552255
A Stochastic Approach to Noise Modeling for Barometric Altimeters
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-01-01
The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, the effects of which are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving-average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving-average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions. PMID:24253189
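The three-component noise model can be sketched as follows (all parameter values are illustrative, not the paper's fitted estimates). The simulation adds a slow drift, an AR(1) realisation of the Gauss-Markov term, and white noise; the GM correlation parameter is then recovered from the lag-2 to lag-1 autocovariance ratio, which is insensitive to the white-noise floor.

```python
import random

def simulate_altimeter_noise(n=50000, beta=0.99, sigma_gm=0.05, sigma_w=0.1, seed=3):
    """Three-component noise sketch: slow deterministic drift +
    first-order Gauss-Markov (AR(1)) term + wideband white noise."""
    random.seed(seed)
    gm, out = 0.0, []
    for k in range(n):
        drift = 1e-5 * k                                  # slow trend
        gm = beta * gm + random.gauss(0.0, sigma_gm)      # GM component
        out.append(drift + gm + random.gauss(0.0, sigma_w))
    return out

x = simulate_altimeter_noise()
n = len(x)
x = [v - 1e-5 * k for k, v in enumerate(x)]               # remove known drift
m = sum(x) / n

def autocov(k):
    return sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / (n - k)

# For AR(1) plus white noise, the white noise only affects lag 0,
# so the lag-2 / lag-1 autocovariance ratio recovers beta.
beta_hat = autocov(2) / autocov(1)
```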
Coalescence avalanches in 2D emulsions: a stochastic approach
NASA Astrophysics Data System (ADS)
Masila, Danny Raj; Rengaswamy, Raghunathan
2015-11-01
One coalescence event in a 2D concentrated emulsion can trigger an avalanche resulting in the rapid destabilization of the drop assembly. The sensitive dependence of this phenomenon on various factors, including surfactant concentration and the viscosities of the fluid phases, makes the avalanching problem appear probabilistic. We propose a stochastic framework, which utilizes a probability function to describe local coalescence events, to study the dynamics of coalescence avalanches. A function that accounts for the local coalescence mechanism is used to fit experimentally measured probability data (taken from the literature). A continuation parameter is introduced along with this function to account for the effect of system properties on the avalanche dynamics. Our analysis reveals that this behavior is a result of the inherent autocatalytic nature of the process. We find that the avalanche dynamics shows critical behavior where two outcomes are favored: no avalanche, and large avalanches that lead to destabilization. We study the effect of system size and fluid properties on the avalanche dynamics. A sharp transition from non-autocatalytic (stable emulsions) to autocatalytic (unstable) behavior is observed as parameters are varied.
Stochastically optimized monocular vision-based navigation and guidance
NASA Astrophysics Data System (ADS)
Watanabe, Yoko
-effort guidance (MEG) law for multiple target tracking is applied for a guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well known that the separation principle between estimation and control does not hold. In other words, vision-based estimation performance depends strongly on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law to achieve a given mission under the condition of using EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort, subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement, taken one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as the n-step-ahead (nSA) optimization for an arbitrary number n. Furthermore, the nSA suboptimal guidance law is extended to the p%-ahead suboptimal guidance by changing the value of n at each time step depending on the current time.
The nSA (including the OSA) and
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed by treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e. market demand). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach can be implemented to find the condition under which chaos makes the system uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
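The chaotic-threshold idea can be illustrated on the logistic map, a standard textbook stand-in (not the industrial model of the paper): the largest Lyapunov exponent changes sign at the threshold beyond which the dynamics becomes effectively unpredictable and hence uncontrollable.

```python
import math

def lyapunov_logistic(r, n=5000, x0=0.3):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x);
    a positive value marks the chaotic (uncontrollable) regime."""
    x, acc = x0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit
    return acc / n

# Below the chaotic threshold the exponent is negative; above it, positive.
print(lyapunov_logistic(3.2) < 0 < lyapunov_logistic(3.9))  # -> True
```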
Zhang, Jinjing; Zhang, Tao
2015-02-15
The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing with the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost is reduced from O(N²) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal produced by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm.
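Stochastic resonance itself is easy to demonstrate with a toy threshold detector (this is not the PSRSE method, which tunes the resonance parameters via spectral entropy): a subthreshold sinusoid is invisible without noise, best recovered at a moderate noise level, and washed out again when the noise is too strong. All amplitudes are illustrative.

```python
import math
import random

def threshold_sr(noise_sd, n=20000, amp=0.4, thresh=1.0, seed=7):
    """Threshold stochastic-resonance sketch: a subthreshold sinusoid
    (amp < thresh) triggers the detector only with the help of noise.
    Returns the correlation between detector output and clean signal."""
    random.seed(seed)
    sig = [amp * math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
    out = [1.0 if s + random.gauss(0.0, noise_sd) > thresh else 0.0 for s in sig]
    ms, mo = sum(sig) / n, sum(out) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sig, out)) / n
    vs = sum((s - ms) ** 2 for s in sig) / n
    vo = sum((o - mo) ** 2 for o in out) / n
    return cov / math.sqrt(vs * vo) if vo > 0 else 0.0

# Moderate noise recovers the subthreshold signal; too little or too much does not.
print(threshold_sr(0.4) > threshold_sr(0.05) and threshold_sr(0.4) > threshold_sr(5.0))
```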
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. The first is the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this allowed us to identify the most relevant variables conditioning the presence of wildfire and to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
NASA Astrophysics Data System (ADS)
Guiaş, Flavius
2009-01-01
We present a stochastic approach for the simulation of coagulation-diffusion dynamics in the gelation regime. The method couples the mass flow algorithm for coagulation processes with a stochastic variant of the diffusion-velocity method in a discretized framework. The stochastic processes are simulated according to an optimized implementation of the principle of grouping the possible events. A full simulation of a particle system driven by coagulation-diffusion dynamics is performed with a high degree of accuracy, allowing a qualitative and quantitative analysis of the behaviour of the system. The advantage of the method becomes especially evident in the gelation regime, where computations usually become very time-consuming.
Image-based histologic grade estimation using stochastic geometry analysis
NASA Astrophysics Data System (ADS)
Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.
2011-03-01
Background: The low reproducibility of histologic grading of breast carcinoma, due to its subjectivity, has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation, with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures the properties distinguishing the various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and the a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
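For one-dimensional shape distributions, the Earth Mover's Distance used in such studies reduces to a simple closed form, the L1 distance between cumulative histograms; a minimal sketch (assuming equal-length histograms with equal total mass):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass: it reduces to the L1 distance between the cumulative sums."""
    total, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi
        total += abs(cum)
    return total

# Shifting all mass one bin to the right costs exactly one unit of work.
print(emd_1d([1, 0, 0], [0, 1, 0]))  # -> 1.0
```

A KNN classifier can then use `emd_1d` as its distance when comparing shape distributions against graded reference cases.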
Reliability-based design optimization under stationary stochastic process loads
NASA Astrophysics Data System (ADS)
Hu, Zhen; Du, Xiaoping
2016-08-01
Time-dependent reliability-based design ensures the satisfaction of reliability requirements for a given period of time, but with a high computational cost. This work improves the computational efficiency by extending the sequential optimization and reliability analysis (SORA) method to time-dependent problems with both stationary stochastic process loads and random variables. The challenge of the extension is the identification of the most probable point (MPP) associated with time-dependent reliability targets. Since a direct relationship between the MPP and reliability target does not exist, this work defines the concept of equivalent MPP, which is identified by the extreme value analysis and the inverse saddlepoint approximation. With the equivalent MPP, the time-dependent reliability-based design optimization is decomposed into two decoupled loops: deterministic design optimization and reliability analysis, and both are performed sequentially. Two numerical examples are used to show the efficiency of the proposed method.
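The extreme-value view of time-dependent reliability can be sketched with a Monte Carlo toy (not the SORA/MPP machinery of the paper): failure over the design life occurs iff the running maximum of the stationary load process exceeds the design resistance, so increasing the resistance lowers the time-dependent failure probability. The AR(1) load surrogate and all numbers are illustrative.

```python
import math
import random

def pf_time_dependent(resistance, n_mc=4000, n_t=200, seed=2):
    """Extreme-value sketch of time-dependent reliability: failure over
    the design life occurs iff the maximum of the discretised stationary
    load process exceeds the (deterministic) design resistance."""
    random.seed(seed)
    fails = 0
    for _ in range(n_mc):
        s, peak = 0.0, -math.inf
        for _ in range(n_t):
            # AR(1) surrogate for a stationary stochastic process load.
            s = 0.9 * s + random.gauss(0.0, math.sqrt(1 - 0.9 ** 2))
            peak = max(peak, s)
        fails += peak > resistance
    return fails / n_mc
```

In a decoupled design loop, a deterministic optimiser would adjust the resistance (the design variable) while a reliability analysis like this one checks the target in a separate, sequential step.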
NASA Astrophysics Data System (ADS)
Chang, Ching-Min; Yeh, Hund-Der
2009-01-01
This paper describes a stochastic analysis of steady state flow in a bounded, partially saturated heterogeneous porous medium subject to distributed infiltration. The presence of boundary conditions leads to non-uniformity in the mean unsaturated flow, which in turn causes non-stationarity in the statistics of velocity fields. Motivated by this, our aim is to investigate the impact of boundary conditions on the behavior of field-scale unsaturated flow. Within the framework of spectral theory based on Fourier-Stieltjes representations for the perturbed quantities, the general expressions for the pressure head variance, variance of log unsaturated hydraulic conductivity and variance of the specific discharge are presented in the wave number domain. Closed-form expressions are developed for the simplified case of statistical isotropy of the log hydraulic conductivity field with a constant soil pore-size distribution parameter. These expressions allow us to investigate the impact of the boundary conditions, namely the vertical infiltration from the soil surface and a prescribed pressure head at a certain depth below the soil surface. It is found that the boundary conditions are critical in predicting uncertainty in bounded unsaturated flow. Our analytical expression for the pressure head variance in a one-dimensional, heterogeneous flow domain, developed using a nonstationary spectral representation approach [Li S-G, McLaughlin D. A nonstationary spectral method for solving stochastic groundwater problems: unconditional analysis. Water Resour Res 1991;27(7):1589-605; Li S-G, McLaughlin D. Using the nonstationary spectral method to analyze flow through heterogeneous trending media. Water Resour Res 1995; 31(3):541-51], is precisely equivalent to the published result of Lu et al. [Lu Z, Zhang D. Analytical solutions to steady state unsaturated flow in layered, randomly heterogeneous soils via Kirchhoff transformation. Adv Water Resour 2004;27:775-84].
Linking agent-based models and stochastic models of financial markets.
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene
2012-05-29
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
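The herding mechanism behind the fat tails can be caricatured in a few lines (a toy in the spirit of the model, not the authors' calibrated version): when a whole group of traders sharing a strategy trades in the same direction, the return at that step scales with the group size, and power-law-distributed group sizes make returns far more leptokurtic than a Gaussian. The tail exponent is illustrative.

```python
import random

def herding_returns(n=20000, alpha=1.5, seed=5):
    """Toy herding model: at each step one trader group of Pareto-like
    size trades in the same direction, so the return is size * (+/-1)."""
    random.seed(seed)
    return [random.random() ** (-1.0 / alpha) * random.choice((-1.0, 1.0))
            for _ in range(n)]

r = herding_returns()
m = sum(r) / len(r)
var = sum((v - m) ** 2 for v in r) / len(r)
kurtosis = sum((v - m) ** 4 for v in r) / len(r) / var ** 2
# A Gaussian has kurtosis 3; shared strategies push it far higher.
```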
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. The
A stochastic approach to uncertainty quantification in residual moveout analysis
NASA Astrophysics Data System (ADS)
Johng-Ay, T.; Landa, E.; Dossou-Gbété, S.; Bordes, L.
2015-06-01
Oil and gas exploration and production usually rely on the interpretation of a single seismic image, which is obtained from observed data. However, the statistical nature of seismic data and the various approximations and assumptions are sources of uncertainties which may corrupt the evaluation of parameters. Quantifying these uncertainties is a major issue, as it supports decisions that have important social and commercial implications. Residual moveout analysis, an important step in seismic data processing, is usually performed with a deterministic approach. In this paper we discuss a Bayesian approach to the uncertainty analysis.
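A Bayesian treatment of this kind is typically implemented with Markov chain Monte Carlo. The sketch below uses a generic random-walk Metropolis sampler on a hypothetical one-parameter moveout likelihood (a stand-in, since the paper's likelihood is not reproduced here); the posterior sample yields both an estimate and its uncertainty, instead of the single deterministic pick.

```python
import math
import random

def metropolis(loglike, x0, n=4000, step=0.1, seed=4):
    """Generic random-walk Metropolis sampler: returns a chain that,
    after burn-in, is a sample from the posterior of the parameter."""
    random.seed(seed)
    x, lx, chain = x0, loglike(x0), []
    for _ in range(n):
        y = x + random.gauss(0.0, step)
        ly = loglike(y)
        if math.log(random.random()) < ly - lx:   # accept/reject step
            x, lx = y, ly
        chain.append(x)
    return chain

# Hypothetical one-parameter "residual moveout" log-likelihood, centred at 0.3.
loglike = lambda v: -0.5 * ((v - 0.3) / 0.05) ** 2
post = metropolis(loglike, 0.0)[1000:]            # discard burn-in
post_mean = sum(post) / len(post)
```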
Ultrafast dynamics of finite Hubbard clusters: A stochastic mean-field approach
NASA Astrophysics Data System (ADS)
Lacroix, Denis; Hermanns, S.; Hinz, C. M.; Bonitz, M.
2014-09-01
Finite lattice models are a prototype for interacting quantum systems and capture essential properties of condensed matter systems. With the dramatic progress in ultracold atoms in optical lattices, finite fermionic Hubbard systems have become directly accessible in experiments, including their ultrafast dynamics far from equilibrium. Here, we present a theoretical approach that is able to treat these dynamics in any dimension and fully includes inhomogeneity effects. The method consists of the stochastic sampling of mean-field trajectories and, for not too large two-body interaction strengths, is found to be much more accurate than time-dependent mean-field theory at the same order of numerical cost. Furthermore, it can well compete with recent nonequilibrium Green function approaches using the second-order Born approximation, which are of substantially larger complexity. The performance of the stochastic mean-field approach is demonstrated for Hubbard clusters with up to 512 particles in one, two, and three dimensions.
A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments
Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro
2009-01-01
Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase in which the system learns the received signal strength (RSS) at each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors can determine their positions using WLAN technology inside a floor of a building. Our goal is to shorten the training phase in an indoor environment, but without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided. PMID:22412334
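The deductive half of such a hybrid can be sketched with a log-distance path-loss model and a nearest-neighbour search in signal space; the inductive half (learning RSS corrections during a short training phase) is omitted here. Access-point positions, the calibration constants `p0` and `n`, and the candidate grid are all illustrative.

```python
import math

def rss_pathloss(d, p0=-40.0, n=2.5):
    """Deductive part: log-distance path-loss model. p0 (RSS at 1 m) and
    path-loss exponent n are illustrative calibration constants."""
    return p0 - 10.0 * n * math.log10(max(d, 0.1))

def locate(rss, aps, grid):
    """Pick the grid point whose predicted RSS vector best matches the
    measured one (nearest neighbour in signal space)."""
    def err(pt):
        return sum((rss[i] - rss_pathloss(math.dist(pt, ap))) ** 2
                   for i, ap in enumerate(aps))
    return min(grid, key=err)

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]           # access points
grid = [(x, y) for x in range(11) for y in range(11)]  # candidate positions
true_pos = (4.0, 6.0)
meas = [rss_pathloss(math.dist(true_pos, ap)) for ap in aps]
print(locate(meas, aps, grid))  # -> (4, 6)
```

In the hybrid scheme, a few training measurements would refine `p0` and `n` (or add per-cell offsets), replacing the long fingerprinting phase of purely inductive methods.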
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357
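The multimember evolution strategy at the core of such a hybrid optimizer can be illustrated with a minimal sketch. The code below is a generic (mu+lambda) evolution strategy with a fixed mutation step, minimizing a stand-in sphere objective rather than the paper's light-scattering residual; all names and parameter values are illustrative assumptions, and the locally weighted regression refinement of the HEOA is omitted.

```python
import random

def evolution_strategy(objective, dim, mu=5, lam=20, sigma=0.3,
                       generations=100, seed=0):
    """Minimal (mu+lam) evolution strategy with a fixed mutation step."""
    rng = random.Random(seed)
    parents = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Each offspring is a Gaussian perturbation of a random parent
        offspring = [[x + rng.gauss(0.0, sigma) for x in rng.choice(parents)]
                     for _ in range(lam)]
        pool = parents + offspring
        pool.sort(key=objective)   # plus-selection: best of parents and offspring
        parents = pool[:mu]
    return parents[0]

# Sphere function standing in for the residual between a measured and a
# modelled scattering pattern
best = evolution_strategy(lambda v: sum(x * x for x in v), dim=3)
```

Plus-selection keeps the best-ever candidate, so the objective value of `best` is non-increasing over generations.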
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
On the Performance of Stochastic Model-Based Image Segmentation
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Sewchand, Wilfred
1989-11-01
A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a Finite Normal Mixture. The technique utilizes an information-theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over/under-estimate the number of region images? What is the probability of errors in the segmentation of this technique? This paper addresses these two problems and is a continuation of [1,2].
A constrained approach to multiscale stochastic simulation of chemically reacting systems
NASA Astrophysics Data System (ADS)
Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek
2011-09-01
Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems.
Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations
NASA Astrophysics Data System (ADS)
Junaid, Ali Khan; Muhammad, Asif Zahoor Raja; Ijaz Mansoor, Qureshi
2011-02-01
We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by a pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike with other numerical techniques, while maintaining comparable accuracy. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected essential gains in execution speed.
A Stochastic Approach For Extending The Dimensionality Of Observed Datasets
NASA Technical Reports Server (NTRS)
Varnai, Tamas
2002-01-01
This paper addresses the problem that in many cases, observations cannot provide complete fields of the measured quantities, because they yield data only along a single cross-section through the examined fields. The paper describes a new Fourier-adjustment technique that allows existing fractal models to build realistic surroundings to the measured cross-sections. This new approach allows more representative calculations of cloud radiative processes and may be used in other areas as well.
A stochastic model is described that allows transfer of information from general circulation models to precipitation gauge locations using a weather state classification scheme. The weather states, which are based on the present and previous day's sea level pressure, are related stoch...
Scott, Bobby, R., Ph.D.
2003-06-27
OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on
A wavelet-based computational method for solving stochastic Itô–Volterra integral equations
Mohammadi, Fakhrodin
2015-10-01
This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix is applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method, some numerical examples are included.
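For intuition about the class of equations being solved, the sketch below discretizes a stochastic Itô–Volterra integral equation X(t) = x0 + ∫₀ᵗ k1(s,t)X(s)ds + ∫₀ᵗ k2(s,t)X(s)dB(s) with a plain left-point (Itô) rule. This is a generic Euler–Maruyama-style scheme, not the Chebyshev wavelet operational-matrix method of the paper; the constant kernels and all names are illustrative assumptions.

```python
import random

def ito_volterra_euler(x0, k1, k2, T=1.0, n=200, seed=0):
    """Left-point (Ito) discretization of
    X(t) = x0 + int_0^t k1(s,t) X(s) ds + int_0^t k2(s,t) X(s) dB(s)."""
    rng = random.Random(seed)
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    # Brownian increments dB_j ~ N(0, dt)
    dB = [rng.gauss(0.0, dt ** 0.5) for _ in range(n)]
    x = [x0]
    for i in range(1, n + 1):
        drift = sum(k1(t[j], t[i]) * x[j] * dt for j in range(i))
        diff = sum(k2(t[j], t[i]) * x[j] * dB[j] for j in range(i))
        x.append(x0 + drift + diff)
    return t, x

# Toy constant kernels: the equation then reduces to the linear SDE
# dX = 0.5 X dt + 0.1 X dB, whose solution stays positive
ts, xs = ito_volterra_euler(1.0, lambda s, t: 0.5, lambda s, t: 0.1)
```

Because the kernels depend on the current time t as well as the integration variable s, each step re-evaluates the whole history, giving the O(n²) cost that motivates operational-matrix methods.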
Pacini, Simone
2014-01-01
Mesenchymal stromal cells (MSCs) have enormous intrinsic clinical value due to their multi-lineage differentiation capacity, support of hemopoiesis, immunoregulation and growth factors/cytokines secretion. MSCs have thus been the object of extensive research for decades. After completion of many pre-clinical and clinical trials, MSC-based therapy is now facing a challenging phase. Several clinical trials have reported moderate, non-durable benefits, which caused initial enthusiasm to wane, and indicated an urgent need to optimize the efficacy of MSC-based therapeutic platforms. Recent investigations suggest the presence of multiple in vivo MSC ancestors in a wide range of tissues, which contribute to the heterogeneity of the starting material for the expansion of MSCs. This variability in the MSC culture-initiating cell population, together with the different types of enrichment/isolation and cultivation protocols applied, are hampering progress in the definition of MSC-based therapies. International regulatory statements require a precise risk/benefit analysis, ensuring the safety and efficacy of treatments. GMP validation allows for quality certification, but the prediction of a clinical outcome after MSC-based therapy is correlated not only to the possible morbidity derived from the cell production process, but also to the biology of the MSCs themselves, which is highly sensitive to unpredictable fluctuations of isolation and culture conditions. Risk exposure and efficacy of MSC-based therapies should be evaluated by pre-clinical studies, but the batch-to-batch variability of the final medicinal product could significantly limit the predictability of these studies. The future success of MSC-based therapies could lie not only in rational optimization of therapeutic strategies, but also in a stochastic approach during the assessment of benefit and risk factors. PMID:25364757
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions. PMID:19268387
NASA Technical Reports Server (NTRS)
Lua, Yuan J.; Liu, Wing K.; Belytschko, Ted
1992-01-01
A stochastic damage model for predicting the rupture of a brittle multiphase material is developed, based on the microcrack-macrocrack interaction. The model, which incorporates uncertainties in locations, orientations, and numbers of microcracks, characterizes damage by microcracking and fracture by macrocracking. A parametric study is carried out to investigate the change of the stress intensity at the macrocrack tip by the configuration of microcracks. The inherent statistical distribution of the fracture toughness arising from the intrinsic random nature of microcracks is explored using a statistical approach. For this purpose, a computer simulation model is introduced, which incorporates a statistical characterization of geometrical parameters of a random microcrack array.
Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.
Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A
2007-12-01
By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifier gaps are found in the transition between the continuous processes and their associated noises. PMID:18233821
Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe
2014-01-01
There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm. PMID:24752131
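The Gillespie algorithm contrasted above can be sketched for a simple birth-death process; the rates and the interpretation as tumour cell proliferation versus removal are illustrative stand-ins, not taken from the three case studies in the paper.

```python
import random

def gillespie_birth_death(n0, birth, death, t_max, seed=0):
    """Gillespie SSA for a birth-death process:
    N -> N+1 at rate birth*N, N -> N-1 at rate death*N."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_max and n > 0:
        total = (birth + death) * n          # total propensity
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() < birth * n / total: # pick next reaction by propensity
            n += 1
        else:
            n -= 1
        history.append((t, n))
    return history

# Toy rates: slight excess death, standing in for immune-mediated removal
traj = gillespie_birth_death(n0=50, birth=1.0, death=1.1, t_max=5.0)
```

Note the state is only a population count: this is precisely the memorylessness that the paper identifies as the gap between Gillespie realizations and agents with individual histories.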
On a stochastic approach to a code performance estimation
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.
2016-06-01
The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from random variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, using 3D surface plots produced by the function surf. The idea can be used for estimating program efficiency.
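A finite-normal-mixture fit of the kind described can be sketched with a plain two-component EM algorithm in one dimension. This is only a generic stand-in on synthetic "timing difference" data; the method of moving separation of mixtures used by the authors is more elaborate, and all names and parameters here are illustrative.

```python
import math, random

def em_gmm2(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(data), max(data)]          # crude spread-out initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted moments
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-9
    return w, mu, var

# Synthetic timing differences: a fast mode near 0 and a slow mode near 5
rng = random.Random(0)
data = ([rng.gauss(0.0, 0.5) for _ in range(200)]
        + [rng.gauss(5.0, 0.5) for _ in range(200)])
w, mu, var = em_gmm2(data)
```

The fitted component means separate the two timing regimes, which is the kind of structure a mixture model exposes in profiler overhead data.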
Runoff modelling using radar data and flow measurements in a stochastic state space approach.
Krämer, S; Grum, M; Verworn, H R; Redder, A
2005-01-01
In urban drainage the estimation of runoff with the help of models is a complex task. This is in part due to the fact that rainfall, the most important input to urban drainage modelling, is highly uncertain. Added to the uncertainty of rainfall is the complexity of performing accurate flow measurements. In terms of deterministic modelling techniques these are needed for calibration and evaluation of the applied model. Therefore, the uncertainties of rainfall and flow measurements have a severe impact on the model parameters and results. To overcome these problems a new methodology has been developed which is based on simple rain plane and runoff models that are incorporated into a stochastic state space model approach. The state estimation is done by using the extended Kalman filter in combination with a maximum likelihood criterion and an off-line optimization routine. This paper presents the results of this new methodology with respect to the combined consideration of uncertainties in distributed rainfall derived from radar data and uncertainties in measured flows in an urban catchment within the Emscher river basin, Germany. PMID:16248174
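The state-estimation step can be illustrated with a scalar Kalman filter. The paper uses the extended Kalman filter on a coupled rain-plane/runoff state space, so the sketch below is only the linear, one-dimensional analogue: a random-walk state observed through noisy "flow" measurements, with invented noise variances.

```python
def kalman_1d(observations, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_t = x_{t-1} + w,
    observed as y_t = x_t + v (process variance q, measurement variance r)."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        p = p + q                  # predict: state unchanged, uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # update with the innovation y - x
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy flow readings fluctuating around a true level of about 1.0
est = kalman_1d([1.1, 0.9, 1.2, 1.0, 0.95])
```

The gain k balances trust in the model against trust in the measurement, which is how the approach absorbs both rainfall and flow-measurement uncertainty.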
Robustness and security assessment of image watermarking techniques by a stochastic approach
NASA Astrophysics Data System (ADS)
Conotter, V.; Boato, G.; Fontanari, C.; De Natale, F. G. B.
2009-02-01
In this paper we propose to evaluate both robustness and security of digital image watermarking techniques by considering the perceptual quality of un-marked images in terms of weighted PSNR. The proposed tool is based on genetic algorithms and is suitable for researchers to evaluate the robustness performance of developed watermarking methods. Given a combination of selected attacks, the proposed framework looks for a fine parameterization of them ensuring a perceptual quality of the un-marked image lower than a given threshold. Correspondingly, a novel metric for robustness assessment is introduced. On the other hand, this tool is also useful in scenarios where an attacker tries to remove the watermark to overcome copyright issues. Security assessment is provided by a stochastic search for the minimum degradation that needs to be introduced in order to obtain an un-marked version of the image as close as possible to the given one. Experimental results show the effectiveness of the proposed approach.
Kolevatov, R. S.; Boreskov, K. G.
2013-04-15
We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections, taking all Pomeron loops into account, are obtained.
Relative frequencies of constrained events in stochastic processes: An analytical approach
NASA Astrophysics Data System (ADS)
Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10⁴). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches
NASA Astrophysics Data System (ADS)
Egging, Rudolf Gerardus
This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however, using BD did not result in
Stochastic path integral approach to continuous quadrature measurement of a single fluorescing qubit
NASA Astrophysics Data System (ADS)
Jordan, Andrew N.; Chantasri, Areeya; Huard, Benjamin
I will present a theory of continuous quantum measurement for a superconducting qubit undergoing fluorescent energy relaxation. The fluorescence of the qubit is detected via a phase-preserving heterodyne measurement, giving the cavity mode quadrature signals as two continuous qubit readout results. By using the stochastic path integral approach to the measurement physics, we obtain the most likely fluorescence paths between chosen boundary conditions on the state, and compute approximate correlation functions between all stochastic variables via diagrammatic perturbation theory. Of particular interest are most-likely paths describing increasing energy during the fluorescence. Comparison to Monte Carlo numerical simulation and experiment will be discussed. This work was supported by US Army Research Office Grants No. W911NF-09-0-01417 and No. W911NF-15-1-0496, by NSF Grant DMR-1506081, by John Templeton Foundation Grant ID 58558, and by the DPSTT Project Thailand.
NASA Astrophysics Data System (ADS)
Maiti, Sumit Kumar; Roy, Sankar Kumar
2016-05-01
In this paper, a Multi-Choice Stochastic Bi-Level Programming Problem (MCSBLPP) is considered where all the parameters of the constraints follow normal distributions. The cost coefficients of the objective functions are multi-choice types. At first, all the probabilistic constraints are transformed into deterministic constraints using a stochastic programming approach. Further, a general transformation technique with the help of binary variables is used to transform the multi-choice type cost coefficients of the objective functions of the Decision Makers (DMs). Then the transformed problem is considered as a deterministic multi-choice bi-level programming problem. Finally, a numerical example is presented to illustrate the usefulness of the paper.
Non-perturbative approach for curvature perturbations in stochastic δ N formalism
Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro E-mail: kawasaki@icrr.u-tokyo.ac.jp
2014-10-01
In our previous paper [1], we proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in the inflationary universe using the stochastic approach. Since this algorithm does not need a perturbative expansion with respect to the inflaton fields on super-horizon scales, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or their non-Gaussianities are sizable, the perturbative expansion may break down, but our algorithm still enables us to calculate the curvature perturbations. We apply it to two well-known inflation models, chaotic and hybrid inflation, in this paper. Especially for hybrid inflation, where the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.
Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain, Monte Carlo Approach
Ramirez, A; Nitao, J; Hanley, W; Aines, R; Glaser, R; Sengupta, S; Dyer, K; Hickling, T; Daily, W
2004-09-21
We describe a stochastic inversion method for mapping subsurface regions where the electrical resistivity is changing. The technique combines prior information, electrical resistance data and forward models to produce subsurface resistivity models that are most consistent with all available data. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. Attractive features include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate and, (2) allow alternative model estimates to be identified, compared and ranked. Methods that monitor convergence and summarize important trends of the posterior distribution are introduced. Results from a physical model test and a field experiment were used to assess performance. The stochastic inversions presented provide useful estimates of the most probable location, shape, and volume of the changing region, and the most likely resistivity change. The proposed method is computationally expensive, requiring the use of extensive computational resources to make its application practical.
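The Metropolis step at the heart of such a Bayesian sampler can be sketched in a few lines. The target below is a toy one-parameter log-posterior standing in for the resistivity-change model; the function names, proposal width, and burn-in length are all illustrative assumptions, not details from the paper.

```python
import math, random

def metropolis(log_post, x0, steps=5000, prop_sd=0.5, seed=0):
    """Random-walk Metropolis sampler over a single parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, prop_sd)
        lp_cand = log_post(cand)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy standard-normal posterior for a "resistivity change" parameter,
# started deliberately far from the mode
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0)
mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

The retained chain approximates the posterior, so uncertainty measures and alternative model rankings of the kind the abstract describes fall out of simple summaries of the samples.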
Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.
2009-01-01
Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc., are often the basis for many systems biology models. Low dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.
Stochastic EM-based TFBS motif discovery with MITSU
Kilpatrick, Alastair M.; Ward, Bruce; Aitken, Stuart
2014-01-01
Motivation: The Expectation–Maximization (EM) algorithm has been successfully applied to the problem of transcription factor binding site (TFBS) motif discovery and underlies the most widely used motif discovery algorithms. In the wider field of probabilistic modelling, the stochastic EM (sEM) algorithm has been used to overcome some of the limitations of the EM algorithm; however, the application of sEM to motif discovery has not been fully explored. Results: We present MITSU (Motif discovery by ITerative Sampling and Updating), a novel algorithm for motif discovery, which combines sEM with an improved approximation to the likelihood function, which is unconstrained with regard to the distribution of motif occurrences within the input dataset. The algorithm is evaluated quantitatively on realistic synthetic data and several collections of characterized prokaryotic TFBS motifs and shown to outperform EM and an alternative sEM-based algorithm, particularly in terms of site-level positive predictive value. Availability and implementation: Java executable available for download at http://www.sourceforge.net/p/mitsu-motif/, supported on Linux/OS X. Contact: a.m.kilpatrick@sms.ed.ac.uk PMID:24931999
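The stochastic-EM idea at the core of such an algorithm can be caricatured in a few lines: instead of taking expectations over motif positions (classical EM), each iteration samples one motif start per sequence from the posterior implied by the current position weight matrix (PWM), then re-estimates the PWM from those samples. This toy does not reproduce MITSU's improved likelihood approximation or its handling of motif-occurrence distributions; the motif, widths and data below are all invented for illustration.

```python
import random

ALPHABET = "ACGT"
W = 3  # motif width (toy value)

def sample_positions(seqs, pwm, rng, bg=0.25):
    """Stochastic E-step: sample one motif start per sequence from the
    posterior over positions implied by the current PWM."""
    positions = []
    for s in seqs:
        weights = []
        for i in range(len(s) - W + 1):
            w = 1.0
            for j in range(W):
                w *= pwm[j][s[i + j]] / bg  # likelihood ratio vs background
            weights.append(w)
        positions.append(rng.choices(range(len(weights)), weights=weights)[0])
    return positions

def update_pwm(seqs, positions, pseudo=0.5):
    """M-step: re-estimate the PWM from the sampled motif occurrences."""
    pwm = []
    for j in range(W):
        counts = {a: pseudo for a in ALPHABET}
        for s, p in zip(seqs, positions):
            counts[s[p + j]] += 1
        total = sum(counts.values())
        pwm.append({a: counts[a] / total for a in ALPHABET})
    return pwm

rng = random.Random(7)
# Synthetic data: plant the motif "TAG" in random background sequences.
seqs = []
for _ in range(30):
    s = [rng.choice(ALPHABET) for _ in range(12)]
    p = rng.randrange(12 - W + 1)
    s[p:p + W] = "TAG"
    seqs.append("".join(s))

pwm = [{a: 0.25 for a in ALPHABET} for _ in range(W)]  # uniform start
for _ in range(25):  # sEM iterations
    pwm = update_pwm(seqs, sample_positions(seqs, pwm, rng))
```

The sampling step is what distinguishes sEM from EM: the injected randomness lets the algorithm escape the local optima that trap deterministic EM runs.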
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time, and they aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
NASA Astrophysics Data System (ADS)
Ohkubo, Jun
2015-10-01
An alternative application of duality relations of stochastic processes is demonstrated. Although conventional usages of the duality relations need analytical solutions for the dual processes, here I employ numerical solutions of the dual processes and investigate the usefulness. As a demonstration, estimation problems of hidden variables in stochastic differential equations are discussed. Employing algebraic probability theory, a little complicated birth-death process is derived from the stochastic differential equations, and an estimation method based on the ensemble Kalman filter is proposed. As a result, the possibility for making faster computational algorithms based on the duality concepts is shown.
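The ensemble Kalman filter update that such an estimation method builds on can be sketched for a directly observed scalar state. This is a generic, hedged illustration of the EnKF analysis step with perturbed observations, not the paper's duality-based construction; all numbers are invented.

```python
import random

def enkf_update(ensemble, y_obs, obs_sd, rng):
    """One ensemble Kalman filter analysis step for a scalar state observed
    directly (observation operator H = identity), using perturbed
    observations so the analysis ensemble has the right spread."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_sd ** 2)  # scalar Kalman gain
    return [x + gain * (y_obs + rng.gauss(0.0, obs_sd) - x) for x in ensemble]

rng = random.Random(2)
truth = 5.0
# Forecast ensemble deliberately biased away from the truth.
forecast = [rng.gauss(0.0, 1.0) for _ in range(200)]
analysis = enkf_update(forecast, truth + 0.1, obs_sd=0.5, rng=rng)
analysis_mean = sum(analysis) / len(analysis)
```

Because the forecast variance dominates the observation variance here, the gain is close to one and the analysis mean moves most of the way from the biased forecast toward the observation.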
A new approach to the assessment of stochastic errors of radio source position catalogues
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-10-01
Assessing the external stochastic errors of radio source position catalogues derived from VLBI observations is important for tasks such as estimating the quality of the catalogues and their weighting during combination. One of the widely used methods to estimate these errors is the three-cornered-hat technique, which can be extended to the N-cornered-hat technique. A critical point of this method is how to properly account for the correlations between the compared catalogues. We present a new approach to solving this problem that is suitable for simultaneous investigations of several catalogues. To compute the correlation between two catalogues A and B, the differences between these catalogues and a third arbitrary catalogue C are computed. Then the correlation between these differences is considered as an estimate of the correlation between catalogues A and B. The average value of these estimates over all catalogues C is taken as a final estimate of the target correlation. In this way, an exhaustive search of all possible combinations allows one to compute the paired correlations between all catalogues. As an additional refinement of the method, we introduce the concept of a weighted correlation coefficient. This technique was applied to nine recently published radio source position catalogues. We found large systematic differences between catalogues that significantly impact the determination of their stochastic errors. Finally, we estimated the stochastic errors of the nine catalogues.
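The averaging scheme described above (corr(A, B) estimated from the differences of A and B with every third catalogue C) translates directly into code. The unweighted version is sketched below on synthetic position offsets; the paper's weighted correlation coefficient is not reproduced here.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def paired_correlation(catalogues, a, b):
    """Estimate corr(A, B) as the average, over every other catalogue C,
    of the correlation between the difference series A - C and B - C."""
    estimates = []
    for c, cat_c in enumerate(catalogues):
        if c in (a, b):
            continue
        diff_a = [x - z for x, z in zip(catalogues[a], cat_c)]
        diff_b = [y - z for y, z in zip(catalogues[b], cat_c)]
        estimates.append(pearson(diff_a, diff_b))
    return sum(estimates) / len(estimates)

rng = random.Random(5)
truth = [rng.gauss(0.0, 1.0) for _ in range(300)]  # "true" source positions
# Five synthetic catalogues = truth + independent noise per catalogue.
cats = [[t + rng.gauss(0.0, 0.3) for t in truth] for _ in range(5)]
rho_ab = paired_correlation(cats, 0, 1)
```

Note that even with fully independent catalogue errors, the shared "- C" term alone induces a correlation of roughly 0.5 between the two difference series; handling this kind of artefact carefully is precisely why accounting for inter-catalogue correlations is the critical point of the N-cornered-hat method.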
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has attracted increasing attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide supports in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
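A much-simplified 1D sketch of the underlying gPC regression setting is given below: a Legendre basis in a uniform random input, with coefficients obtained by (lightly ridge-regularised) least squares on the normal equations. The paper's fully Bayesian treatment — model averaging, median probability model and inclusion probabilities — is not reproduced; the model function, order and sample counts are illustrative assumptions.

```python
import math
import random

def legendre(n, x):
    """Legendre polynomials P_0..P_n at x via the three-term recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    vals = [1.0, x]
    for k in range(1, n):
        vals.append(((2 * k + 1) * x * vals[k] - k * vals[k - 1]) / (k + 1))
    return vals[:n + 1]

def solve(A, b):
    """Dense linear solve via Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

model = lambda xi: math.exp(0.3 * xi)  # stand-in for the expensive solver
P = 5                                   # gPC order
rng = random.Random(4)
xis = [rng.uniform(-1.0, 1.0) for _ in range(12)]  # random input samples
ys = [model(xi) for xi in xis]
design = [legendre(P, xi) for xi in xis]

# Ridge-regularised normal equations: (A^T A + lam I) c = A^T y.
lam = 1e-8
AtA = [[sum(row[i] * row[j] for row in design) + (lam if i == j else 0.0)
        for j in range(P + 1)] for i in range(P + 1)]
Aty = [sum(row[i] * y for row, y in zip(design, ys)) for i in range(P + 1)]
coeffs = solve(AtA, Aty)

def gpc_eval(xi):
    """Evaluate the fitted gPC surrogate at a new random input value."""
    return sum(c * p for c, p in zip(coeffs, legendre(P, xi)))
```

The over-fitting scenario the abstract describes arises when the number of basis terms exceeds the number of samples; the Bayesian regularization the authors propose is a principled replacement for the ad-hoc ridge penalty used in this sketch.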
A probabilistic graphical model approach to stochastic multiscale partial differential equations
Wan, Jiang; Zabaras, Nicholas; Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853
2013-10-01
We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and the model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give an explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using the sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
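The reformulation in terms of independent standardized Poisson processes can be made concrete with the modified next-reaction method on a toy birth-death model: each reaction channel is driven by its own unit-rate Poisson stream, which is exactly the separation of noise sources that enables channel-wise variance decomposition. This sketch illustrates only the driving-noise reformulation, not the paper's Sobol-Hoeffding analysis.

```python
import math
import random

def birth_death(birth, death, x0, t_end, rng):
    """Simulate a birth-death process with the modified next-reaction method:
    each channel k keeps an internal time T[k] and the next firing time P[k]
    of its own independent unit-rate Poisson process."""
    x, t = x0, 0.0
    T = [0.0, 0.0]                                     # internal channel times
    P = [rng.expovariate(1.0), rng.expovariate(1.0)]   # next firing times
    while True:
        a = [birth, death * x]                         # channel propensities
        dt = [(P[k] - T[k]) / a[k] if a[k] > 0 else math.inf for k in (0, 1)]
        k = 0 if dt[0] <= dt[1] else 1                 # next channel to fire
        if t + dt[k] > t_end:
            return x
        t += dt[k]
        for j in (0, 1):
            T[j] += a[j] * dt[k]                       # advance internal times
        x += 1 if k == 0 else -1                       # apply stoichiometry
        P[k] += rng.expovariate(1.0)                   # draw next firing time
    return x

rng = random.Random(11)
finals = [birth_death(5.0, 1.0, 5, 10.0, rng) for _ in range(200)]
mean_x = sum(finals) / len(finals)  # stationary mean is birth/death = 5
```

Because the two unit-rate Poisson streams are independent, one can freeze or resample each stream separately across replicated runs, which is the mechanism behind attributing portions of the solution variance to individual channels and their interactions.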
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated based on its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, estimation of stability indicators of mappings specified by the model, and estimation of the degree of coincidence of the effects of internal and external shocks on macroeconomic indicators on the basis of the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
NASA Astrophysics Data System (ADS)
Foster, T.; Butler, A. P.; McIntyre, N.
2012-12-01
Increasing water demands from growing populations, coupled with changing water availability due, for example, to climate change, are likely to increase water scarcity. Agriculture will be exposed to risk because reliable water supplies are an essential input to crop production. Assessing the efficiency of agricultural adaptation options requires a sound understanding of the relationship between crop growth and water application. However, most water resource planning models quantify agricultural water demand using highly simplified, temporally lumped estimated crop-water production functions (CWPFs). Such CWPFs fail to capture the biophysical complexities in crop-water relations and mischaracterise farmers' ability to respond to water scarcity. Application of these models in policy analyses will be ineffective and may lead to unsustainable water policies. Crop simulation models provide an alternative means of defining the complex nature of the CWPF. Here we develop a daily water-limited crop model for this purpose. The model is based on the approach used in the FAO's AquaCrop model, balancing biophysical and computational complexities. We further develop the model by incorporating improved simulation routines to calculate the distribution of water through the soil profile. Consequently we obtain a more realistic representation of the soil water balance, with concurrent improvements in the prediction of water-limited yield. We introduce a methodology to utilise this model for the generation of stochastic crop-water production functions (SCWPFs). This is achieved by running the model iteratively with both time series of climatic data and variable quantities of irrigation water, employing a realistic rule-based approach to farm irrigation scheduling. This methodology improves the representation of potential crop yields, capturing both the variable effects of water deficits on crop yield and the stochastic nature of the CWPF due to climatic variability. Application to
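The SCWPF-generation procedure described above — running a crop model repeatedly over a climate ensemble at varying irrigation amounts — can be caricatured with a deliberately crude yield response. The linear-to-plateau function and every number below are invented for illustration and bear no relation to AquaCrop's actual routines.

```python
import random

def yield_response(irrigation, rain, y_max=10.0, w_crit=500.0):
    """Toy water-limited yield: linear in total seasonal water supply up to
    a critical amount, then capped at the potential yield."""
    water = irrigation + rain
    return y_max * min(1.0, water / w_crit)

rng = random.Random(8)
# Climate ensemble: seasonal rainfall totals drawn from a truncated normal.
seasons = [max(0.0, rng.gauss(300.0, 80.0)) for _ in range(500)]

# Stochastic crop-water production function: for each irrigation depth,
# the distribution of yields across climate realisations.
scwpf = {}
for irr in (0.0, 100.0, 200.0, 300.0):
    yields = [yield_response(irr, r) for r in seasons]
    scwpf[irr] = sum(yields) / len(yields)  # here: just the ensemble mean
```

Keeping the whole yield distribution per irrigation level, rather than only the mean, is what makes the production function "stochastic" and lets a planner trade expected yield against yield risk.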
Atzori, A S; Tedeschi, L O; Cannas, A
2013-05-01
The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21
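The ranking-index construction described above (principal components of the inputs, then a regression of profitability on the component scores, with the fitted values used to rank farms) can be sketched on a hypothetical two-variable toy. The variables, coefficients and sample size are invented; with only two standardized inputs, the eigenvectors of the 2x2 correlation matrix are always (1, 1)/sqrt(2) and (1, -1)/sqrt(2), which keeps the PCA step trivial.

```python
import math
import random

def standardize(v):
    """Center to zero mean and scale to unit (population) variance."""
    n = len(v)
    m = sum(v) / n
    s = math.sqrt(sum((x - m) ** 2 for x in v) / n)
    return [(x - m) / s for x in v]

rng = random.Random(9)
n = 150
# Hypothetical farm inputs and an IOFC-like profitability response.
milk = [rng.gauss(30.0, 5.0) for _ in range(n)]
feed = [rng.gauss(10.0, 2.0) for _ in range(n)]
iofc = [2.0 * m - 1.5 * f + rng.gauss(0.0, 1.0) for m, f in zip(milk, feed)]

z1, z2 = standardize(milk), standardize(feed)
# Principal component scores (exactly orthogonal for standardized inputs).
pc1 = [(a + b) / math.sqrt(2) for a, b in zip(z1, z2)]
pc2 = [(a - b) / math.sqrt(2) for a, b in zip(z1, z2)]

def regress(y, scores):
    """OLS of y on zero-mean, mutually orthogonal PC scores: each slope is
    just cov(score, y) / var(score)."""
    beta0 = sum(y) / len(y)
    betas = [sum(s * yi for s, yi in zip(sc, y)) / sum(s * s for s in sc)
             for sc in scores]
    return beta0, betas

beta0, betas = regress(iofc, [pc1, pc2])
# Ranking index: fitted profitability per farm, used to rank the farms.
index = [beta0 + betas[0] * p1 + betas[1] * p2 for p1, p2 in zip(pc1, pc2)]
ranking = sorted(range(n), key=lambda i: index[i], reverse=True)
```

In the paper the same two steps are carried out on the synthetic database and the resulting eigenvectors and coefficients are then applied to the original 135 farms; the toy collapses both stages into one dataset.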
NASA Astrophysics Data System (ADS)
Strauss, R. D.; Potgieter, M. S.; Boezio, M.; de Simone, N.; di Felice, V.; Kopp, A.; Büsching, I.
2012-08-01
Using a newly developed 5D cosmic ray modulation model, we study the modulation of galactic protons and anti-protons inside the heliosphere. This is done for different heliospheric magnetic field polarity cycles, which, in combination with drifts, lead to charge-sign dependent cosmic ray transport. Computed energy spectra and intensity ratios for the different cosmic ray populations are shown and discussed. Modelling results are extensively compared to recent observations made by the PAMELA space-borne particle detector. Using a stochastic transport approach, we also show pseudo-particle traces, illustrating the principle behind charge-sign dependent modulation.
Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach
Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam
2014-01-01
The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non
Acceleration of stochastic seismic inversion in OpenCL-based heterogeneous platforms
NASA Astrophysics Data System (ADS)
Ferreirinha, Tomás; Nunes, Rúben; Azevedo, Leonardo; Soares, Amílcar; Pratas, Frederico; Tomás, Pedro; Roma, Nuno
2015-05-01
Seismic inversion is an established approach to model the geophysical characteristics of oil and gas reservoirs, and is one of the bases of the decision-making process in the oil & gas exploration industry. However, the required accuracy levels can only be attained by processing significant amounts of data, often leading to long execution times. To overcome this issue and to allow the development of larger and higher-resolution elastic models of the subsurface, a novel parallelization approach is herein proposed, targeting the exploitation of GPU-based heterogeneous systems through a unified OpenCL programming framework, to accelerate a state-of-the-art Stochastic Seismic Amplitude versus Offset Inversion algorithm. To increase the parallelization opportunities while ensuring model fidelity, the proposed approach is based on a careful and selective relaxation of some spatial dependencies. Furthermore, to take into consideration the heterogeneity of modern computing systems, usually composed of several different accelerating devices, multi-device parallelization strategies are also proposed. When executed on a dual-GPU system, the proposed approach reduces the execution time by up to 30 times, without compromising the quality of the obtained models.
Kryvohuz, Maksym; Mukamel, Shaul
2015-06-01
Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells. PMID:26049450
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and the relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and the non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and the River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which the parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
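The selection stage at the heart of NSGA-II — sorting candidate parameter sets into successive Pareto fronts under the two objectives (relative bias and relative RMSE, both minimized) — can be sketched in simplified form. Crowding distance and the genetic operators of the full algorithm are omitted, and the objective values below are invented.

```python
def dominates(p, q):
    """p dominates q (minimisation): no worse in every objective and
    strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Split candidate solutions into successive Pareto fronts, the first
    stage of NSGA-II's selection. Returns lists of indices into `points`."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (relative bias, relative RMSE) pairs for candidate hybrid models.
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
fronts = non_dominated_sort(objs)  # front 0 holds the Pareto-optimal trio
```

In the full algorithm, survivors are taken front by front, with crowding distance breaking ties within the last admitted front; the first front returned here is the Pareto set from which the final hybrid model parameters would be chosen.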
Consentaneous Agent-Based and Stochastic Model of the Financial Markets
Gontis, Vygintas; Kononovicius, Aleksejus
2014-01-01
We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic (agent-based) and macroscopic (phenomenological) modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of the agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and the log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short-term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364
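The herding mechanism underlying such models can be sketched with the classic two-state Kirman setup (a simplification of the paper's three-state model): an agent switches opinion either spontaneously or by imitating a randomly met agent from the other group. All parameter values below are illustrative assumptions.

```python
import random

def kirman_herding(n_agents=100, eps=0.1, h=1.0, steps=5000, seed=3):
    """Two-state herding dynamics: an agent switches state spontaneously
    (rate eps) or by imitating a randomly met agent in the other state
    (rate h times that state's population fraction). Returns the path of
    the fraction of agents in state 1."""
    rng = random.Random(seed)
    k = n_agents // 2  # number of agents currently in state 1
    path = []
    for _ in range(steps):
        x = k / n_agents
        p_up = (1 - x) * (eps + h * x)       # some state-0 agent flips to 1
        p_down = x * (eps + h * (1 - x))     # some state-1 agent flips to 0
        if rng.random() * (p_up + p_down) < p_up:
            k += 1
        else:
            k -= 1
        path.append(k / n_agents)
    return path

path = kirman_herding()
```

Fluctuations of this population fraction are the endogenous noise source; in the macroscopic limit the dynamics reduce to a stochastic differential equation for x, which is the kind of agent-to-SDE bridge the abstract describes.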
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1, 14 and 40 random dimensions.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
A stochastic approach for quantifying immigrant integration: the Spanish test case
NASA Astrophysics Data System (ADS)
Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia
2014-10-01
We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous-time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
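The diffusive-versus-ballistic distinction above is a scaling statement: the mean-squared displacement grows as t¹ for diffusion and t² for ballistic motion. A minimal sketch (synthetic trajectories, not the Spanish data) recovers the exponent from a log-log fit:

```python
import numpy as np

def msd_exponent(paths):
    """Estimate alpha in MSD(t) ~ t**alpha from an ensemble of 1D
    trajectories (rows = walkers, columns = time steps)."""
    n_steps = paths.shape[1]
    t = np.arange(1, n_steps + 1)
    msd = (paths ** 2).mean(axis=0)              # ensemble mean-squared displacement
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

rng = np.random.default_rng(0)
# Diffusive walkers: i.i.d. Gaussian increments, displacement ~ sqrt(t)
diffusive = np.cumsum(rng.standard_normal((500, 1000)), axis=1)
# Ballistic walkers: constant velocity +-1, displacement ~ t
velocity = np.sign(rng.standard_normal((500, 1)))
ballistic = np.cumsum(velocity * np.ones((500, 1000)), axis=1)
```

For the diffusive ensemble the fitted exponent is close to 1, for the ballistic ensemble close to 2, mirroring the social versus economic quantifier behavior reported in the abstract.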
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo
2012-11-01
Stochastic model updating must be considered for quantifying the uncertainties inherent in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions over all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
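The decomposition idea — surrogate plus Monte Carlo, one deterministic inversion per response sample — can be sketched with a toy one-parameter problem. Everything here is invented for illustration: the "FE model" is a closed-form frequency-like response, the surrogate is a cubic polynomial, and the inversion is a bisection on the surrogate.

```python
import numpy as np

def expensive_fe_model(k):
    """Stand-in for a costly finite-element response (e.g. a natural
    frequency as a function of a stiffness parameter k)."""
    return np.sqrt(k) / (2 * np.pi)

# 1) Build a polynomial response surface from a few FE evaluations.
k_train = np.linspace(1e4, 4e4, 10)
surrogate = np.poly1d(np.polyfit(k_train, expensive_fe_model(k_train), deg=3))

# 2) Each Monte Carlo response sample defines one deterministic inverse
#    problem, solved cheaply on the surrogate by bisection.
def invert(y_target, lo=1e4, hi=4e4, tol=1e-6):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if surrogate(mid) < y_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
k_true = 2.5e4
y_samples = expensive_fe_model(k_true) * (1 + 0.01 * rng.standard_normal(500))
k_est = np.array([invert(y) for y in y_samples])
# k_est.mean() and k_est.var() are the statistical estimates of the parameter
```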
Brownian-motion based simulation of stochastic reaction-diffusion systems for affinity based sensors
NASA Astrophysics Data System (ADS)
Tulzer, Gerhard; Heitzinger, Clemens
2016-04-01
In this work, we develop a 2D algorithm for stochastic reaction-diffusion systems describing the binding and unbinding of target molecules at the surfaces of affinity-based sensors. In particular, we simulate the detection of DNA oligomers using silicon-nanowire field-effect biosensors. Since these devices are uniform along the nanowire, two dimensions are sufficient to capture the kinetic effects. The model combines a stochastic ordinary differential equation for the binding and unbinding of target molecules with a diffusion equation for their transport in the liquid. A Brownian-motion-based algorithm simulates the diffusion process, which is linked to a stochastic-simulation algorithm for association at and dissociation from the surface. The simulation data show that the shape of the cross section of the sensor yields areas with significantly different target-molecule coverage. Different initial conditions are investigated as well in order to aid rational sensor design. A comparison of the association/hybridization behavior for different receptor densities allows optimization of the functionalization setup depending on the target-molecule density.
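The coupling described above — Brownian transport in the bulk linked to stochastic association/dissociation at the sensor surface — can be sketched as follows. This is a simplified toy, not the paper's algorithm: geometry, rate constants, and the per-step binding rule are invented for illustration.

```python
import numpy as np

def simulate_sensor(n_mol=200, n_steps=2000, D=1.0, dt=1e-3,
                    p_bind=0.5, k_off=5.0, box=1.0, seed=0):
    """2D Brownian walkers in a unit box with a reactive surface at y = 0.
    A free molecule touching the surface associates with probability p_bind;
    a bound molecule dissociates with per-step probability k_off * dt."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n_mol, 2))
    bound = np.zeros(n_mol, dtype=bool)
    sigma = np.sqrt(2 * D * dt)                       # Brownian step scale
    for _ in range(n_steps):
        free = ~bound
        pos[free] += sigma * rng.standard_normal((free.sum(), 2))
        pos = np.clip(pos, [0.0, -np.inf], [box, box])  # walls except y = 0
        hit = free & (pos[:, 1] <= 0.0)
        newly = hit & (rng.random(n_mol) < p_bind)
        bound |= newly
        pos[hit & ~newly, 1] *= -1.0                  # reflect unbound hits
        release = bound & (rng.random(n_mol) < k_off * dt)
        bound &= ~release
        pos[release, 1] = 1e-6                        # released: re-enter bulk
    return bound

coverage = simulate_sensor()
```

Tracking `bound` per surface region rather than globally would expose the coverage differences across the cross-section that the abstract reports.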
Li, Xiao; Ji, Guanghua; Zhang, Hui
2015-02-15
We use the stochastic Cahn–Hilliard equation to simulate the phase transitions of macromolecular microsphere composite (MMC) hydrogels under a random disturbance. Based on the Flory–Huggins lattice model and the Boltzmann entropy theorem, we develop a reticular free energy suited to the network structure of MMC hydrogels. Taking the random factor into account, with the time-dependent Ginzburg–Landau (TDGL) mesoscopic simulation method, we set up a stochastic Cahn–Hilliard equation, designated herein as the MMC-TDGL equation. The stochastic term in the equation is constructed appropriately to satisfy the fluctuation-dissipation theorem and is discretized on a spatial grid for the simulation. A semi-implicit difference scheme is adopted to numerically solve the MMC-TDGL equation. Some numerical experiments are performed with different parameters. The results are consistent with the physical phenomenon, which verifies that the stochastic term is simulated well.
Wallace, Chris; Cutler, Antony J; Pontikos, Nikolas; Pekalski, Marcin L; Burren, Oliver S; Cooper, Jason D; García, Arcadio Rubio; Ferreira, Ricardo C; Guo, Hui; Walker, Neil M; Smyth, Deborah J; Rich, Stephen S; Onengut-Gumuscu, Suna; Sawcer, Stephen J; Ban, Maria; Richardson, Sylvia; Todd, John A; Wicker, Linda S
2015-01-01
Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell-specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D-associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≃ 0.3) and our two-SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two-variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data. PMID:26106896
A binomial stochastic kinetic approach to the Michaelis-Menten mechanism
NASA Astrophysics Data System (ADS)
Lente, Gábor
2013-05-01
This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables for calculating probability values from binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
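The two-step recipe — integrate the deterministic rate law, then use its output as the success probability of a binomial distribution over molecule counts — can be sketched directly. The rate constants and copy numbers below are invented for the sketch; the binomial form follows the principle described in the abstract, not the paper's exact formulas.

```python
import numpy as np
from math import comb

def mm_deterministic(s0, e0, k_cat, km, t_end, dt=1e-3):
    """Integrate the deterministic Michaelis-Menten rate law
    dS/dt = -k_cat * E0 * S / (Km + S) by forward Euler."""
    s = float(s0)
    for _ in range(int(t_end / dt)):
        s -= dt * k_cat * e0 * s / (km + s)
    return s

def product_pmf(s0, s_t):
    """Binomial approximation: each of the s0 substrate molecules has been
    converted independently with probability p = (s0 - s_t) / s0."""
    p = (s0 - s_t) / s0
    return np.array([comb(s0, n) * p**n * (1 - p)**(s0 - n) for n in range(s0 + 1)])

s_t = mm_deterministic(s0=30, e0=1, k_cat=1.0, km=10.0, t_end=5.0)
pmf = product_pmf(30, s_t)      # probability of n product molecules at t_end
```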
A wavelet approach for development and application of a stochastic parameter simulation system
NASA Astrophysics Data System (ADS)
Miron, Adrian
2001-07-01
In this research a Stochastic Parameter Simulation System (SPSS) computer program employing wavelet techniques was developed. The SPSS was designed to fulfill two key functional requirements: (1) To be able to analyze any steady state plant signal, decompose it into its deterministic and stochastic components, and then reconstruct a new, simulated signal that possesses exactly the same statistical noise characteristics as the actual signal; and (2) To be able to filter out the principal serially-correlated, deterministic components from the analyzed signal so that the remaining stochastic signal can be analyzed with signal validation tools that are designed for signals drawn from independent random distributions. The results obtained using SPSS were compared to those obtained using the Argonne National Laboratory Reactor Parameter Simulation System (RPSS) which uses a Fourier transform methodology to achieve the same objectives. RPSS and SPSS results were compared for three sets of stationary signals, representing sensor readings independently recorded at three nuclear power plants. For all of the recorded signals, the wavelet technique provided a better approximation of the original signal than the Fourier procedure. For each signal, many wavelet-based decompositions were found by the SPSS methodology, all of which produced white and normally distributed signal residuals. In most cases, the Fourier-based analysis failed to completely eliminate the original signal serial-correlation from the residuals. The reconstructed signals produced by SPSS are also statistically closer to the original signal than the RPSS reconstructed signal. Another phase of the research demonstrated that SPSS could be used to enhance the reliability of the Multivariate Sensor Estimation Technique (MSET). MSET uses the Sequential Probability Ratio Test (SPRT) for its fault detection algorithm. By eliminating the MSET residual serial-correlation in the MSET training phase, the SPRT user
NASA Astrophysics Data System (ADS)
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and its assessment is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear that DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
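The selected model — a first-order Markov chain for wet/dry occurrence with a two-component mixed exponential for wet-day depths — is simple enough to sketch in full. The transition probabilities, mixture weight, and component means below are placeholders, not fitted West African values.

```python
import numpy as np

def markov_mixed_exp_rain(n_days, p_wet_after_dry=0.3, p_wet_after_wet=0.6,
                          w=0.7, mean1=2.0, mean2=15.0, seed=0):
    """Daily rainfall generator: first-order Markov occurrence chain plus a
    mixed exponential amount model (depths in mm; two exponential
    components with mixing weight w)."""
    rng = np.random.default_rng(seed)
    rain = np.zeros(n_days)
    wet = False
    for d in range(n_days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p
        if wet:
            mean = mean1 if rng.random() < w else mean2
            rain[d] = rng.exponential(mean)
    return rain

series = markov_mixed_exp_rain(365)
```

Feeding such synthetic series into a tank water-balance model is the usual route to assessing storage sizes like the 200 L drum mentioned above.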
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group in the time-invariant situation. PMID:22629352
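A stochastic frontier model of the kind used above composes a symmetric noise term with a one-sided inefficiency term. A minimal sketch of the normal/half-normal log-likelihood (the simpler of the two distributional assumptions; the truncated-normal case adds a location parameter) on synthetic data, with all coefficients invented:

```python
import numpy as np
from scipy.stats import norm, halfnorm

def sf_loglik(params, y, X):
    """Log-likelihood of the normal/half-normal stochastic frontier model
    y = X @ beta + v - u,  v ~ N(0, s_v^2),  u ~ |N(0, s_u^2)|."""
    beta, s_v, s_u = params[:-2], params[-2], params[-1]
    if s_v <= 0 or s_u <= 0:
        return -np.inf
    eps = y - X @ beta
    sigma = np.hypot(s_v, s_u)
    lam = s_u / s_v
    return np.sum(np.log(2) - np.log(sigma)
                  + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))

# Synthetic frontier data: log-output = 1.0 + 0.8 * log-input + v - u
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(400), rng.uniform(0, 1, 400)])
u = halfnorm.rvs(scale=0.5, size=400, random_state=1)   # inefficiency
v = rng.normal(0, 0.2, 400)                             # noise
y = X @ np.array([1.0, 0.8]) + v - u
true_params = np.array([1.0, 0.8, 0.2, 0.5])
```

Maximizing `sf_loglik` (e.g. with `scipy.optimize.minimize` on the negated function) recovers the frontier coefficients and the two variance parameters.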
NASA Astrophysics Data System (ADS)
Sari, Mehmet; Ghasemi, Ebrahim; Ataei, Mohammad
2014-03-01
Backbreak is an undesirable side effect of bench blasting operations in open pit mines. A large number of parameters affect backbreak, including controllable parameters (such as blast design parameters and explosive characteristics) and uncontrollable parameters (such as rock and discontinuity properties). The complexity of the backbreak phenomenon and the uncertainty in terms of the impact of various parameters make its prediction very difficult. The aim of this paper is to determine the suitability of the stochastic modeling approach for the prediction of backbreak and to assess the influence of controllable parameters on the phenomenon. To achieve this, a database containing actual measured backbreak occurrences and the major effective controllable parameters on backbreak (i.e., burden, spacing, stemming length, powder factor, and geometric stiffness ratio) was created from 175 blasting events in the Sungun copper mine, Iran. From this database, first, a new site-specific empirical equation for predicting backbreak was developed using multiple regression analysis. Then, the backbreak phenomenon was simulated by the Monte Carlo (MC) method. The results reveal that stochastic modeling is a good means of modeling and evaluating the effects of the variability of blasting parameters on backbreak. Thus, the developed model is suitable for practical use in the Sungun copper mine. Finally, a sensitivity analysis showed that stemming length is the most important parameter in controlling backbreak.
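The regression-plus-Monte-Carlo workflow above amounts to propagating input-parameter distributions through a fitted empirical equation. A minimal sketch follows; the linear form and every coefficient and distribution below are placeholders, since the site-specific fitted equation is not reproduced in the abstract.

```python
import numpy as np

def backbreak(burden, spacing, stemming, powder_factor, stiffness):
    """Illustrative placeholder for a site-specific regression equation
    (coefficients are invented, not the Sungun mine fit)."""
    return (0.5 * burden + 0.3 * spacing + 0.4 * stemming
            - 2.0 * powder_factor - 0.8 * stiffness)

# Monte Carlo: sample the controllable blast-design parameters from assumed
# distributions and propagate them through the regression model.
rng = np.random.default_rng(0)
n = 10_000
bb = backbreak(
    burden=rng.normal(4.0, 0.3, n),
    spacing=rng.normal(5.0, 0.4, n),
    stemming=rng.normal(3.5, 0.3, n),
    powder_factor=rng.normal(0.9, 0.1, n),
    stiffness=rng.normal(1.5, 0.2, n),
)
p90 = np.percentile(bb, 90)   # e.g. a design-exceedance level for backbreak
```

A sensitivity analysis of the kind reported can be obtained by perturbing one input distribution at a time and comparing the spread of `bb`.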
A stochastic context free grammar based framework for analysis of protein sequences
Dyrka, Witold; Nebel, Jean-Christophe
2009-01-01
Background In the last decade, there have been many applications of formal language theory in bioinformatics such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of relationship between amino acids have mainly limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. However, these grammars, like other state of the art methods, cannot cover any higher-order dependencies such as nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences where grammars are induced using a genetic algorithm. Results This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight in their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. Results show the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free Grammar based framework has been
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent in the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the nonlinear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.
2009-12-01
Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools decreases as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very sparse, limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherent in the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying the uncertainty in the reduced probability distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on flow and transport, using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and
The impact of trade costs on rare earth exports: a stochastic frontier estimation approach.
Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.
2013-09-01
The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs incurred by China and the 'implicit beyond the border' costs of China's trading partners. Results indicate that the significance level of the independent variables changes dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.
NASA Astrophysics Data System (ADS)
Vardaci, E.; Nadtochy, P. N.; Di Nitto, A.; Brondi, A.; La Rana, G.; Moro, R.; Rath, P. K.; Ashaduzzaman, M.; Kozulin, E. M.; Knyazheva, G. N.; Itkis, I. M.; Cinausero, M.; Prete, G.; Fabris, D.; Montagnoli, G.; Gelli, N.
2015-09-01
The system of intermediate fissility 132Ce has been studied experimentally and theoretically to investigate the dissipation properties of nuclear matter. Cross sections of fusion-fission and evaporation-residue channels together with light charged particle multiplicities in both channels, their spectra, light charged particle-evaporation residue angular correlations, and mass-energy distribution of fission fragments have been measured. Theoretical analysis has been performed using a multidimensional stochastic approach coupled with a Hauser-Feshbach treatment of particle evaporation. The main conclusions are that the full one-body shape-dependent dissipation mechanism allows the reproduction of the full set of experimental data and that after a time τd=5 ×10-21 s from the equilibrium configuration of the compound nucleus, fission decay can occur in a time that can span several orders of magnitude.
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and the degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
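The robustness measure above is, in essence, the prior mass of parameter space for which the network behaves as an oscillator. A minimal sketch of that idea on a deterministic, linearized two-gene activator-repressor motif (a much cruder criterion than the paper's stochastic-dynamics evidence; the Jacobian form and uniform priors are invented for the sketch):

```python
import numpy as np

def oscillates(a, b, c, d1, d2):
    """Linearized two-gene activator-repressor motif: classify the parameter
    set as oscillatory if the Jacobian has complex eigenvalues whose real
    part is non-negative (sustained or growing oscillation)."""
    J = np.array([[a - d1, -b],
                  [c, -d2]])
    tr, det = np.trace(J), np.linalg.det(J)
    return tr**2 < 4 * det and tr >= 0   # complex eigenvalues, Re >= 0

def robustness(n_samples=20_000, seed=0):
    """Monte Carlo estimate of the prior probability of oscillation —
    a crude stand-in for the Bayesian model evidence used as the
    robustness measure in the paper."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        a, b, c, d1, d2 = rng.uniform(0, 2, 5)
        hits += oscillates(a, b, c, d1, d2)
    return hits / n_samples
```

Comparing this fraction across alternative network topologies is the simplest version of the model-ranking exercise the abstract describes.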
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew
2016-08-01
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in efficiently capturing different hydration states and providing quantitatively good estimates of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule, cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators.
Woods, Mae L; Leon, Miriam; Perez-Carrasco, Ruben; Barnes, Chris P
2016-06-17
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and the degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which nothing is gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
NASA Astrophysics Data System (ADS)
Vrettas, M. D.; Fung, I. Y.
2014-12-01
The degree of carbon climate feedback by terrestrial ecosystems is intimately tied to the availability of moisture for photosynthesis, transpiration, and decomposition. The vertical distribution of subsurface moisture and its accessibility for evapotranspiration is a key determinant of the fate of ecosystems and their feedback on the climate system. A time series of five years of high-frequency (every 30 min) observations of the water table at a research site in Northern California shows that water tables 18 meters below the surface can respond in less than 8 hours to the first winter rains, suggesting very fast flow through micro-pores and fractured bedrock. The elevated water level then recedes, not quite as quickly as it rose, contributing to down-slope flow and stream flow. The governing equation of our model is the well-known Richards equation, a nonlinear PDE derived by applying the continuity requirement to Darcy's law. The most crucial parameter of this PDE is the hydraulic conductivity K(θ), which describes the speed at which water can move in the subsurface. We specify a saturation profile as a function of depth (i.e., Ksat(z)) and allow K(θ) to vary not only with the soil moisture saturation but also to include a stochastic component which mimics the effects of fracture flow and other naturally occurring heterogeneity that is evident in the subsurface. A large number of Monte Carlo simulations are performed in order to identify optimal settings for the new model, as well as to analyze the results of this new approach on the available data. Initial findings from this exploratory work are encouraging, and the next steps include testing this new stochastic approach on data from other sites and applying ensemble-based data assimilation algorithms to estimate model parameters from the available measurements.
NASA Astrophysics Data System (ADS)
Braun, Jean; Deal, Eric; Andermann, Christoff
2015-04-01
The influence of climate on surface processes, and consequently on landscape evolution, is undeniably important; despite this, many fluvial landscape evolution models do not integrate an accurate or physically based parameterisation of precipitation, the climatic forcing most important for fluvial processes. This is likely due to two major challenges. The first is the difficulty of moving from the hourly, daily, and monthly timescales most relevant to precipitation to the millennial timescales used in landscape evolution modelling. To confront this challenge, we adopt the approach of Tucker and Bras (2000) and Lague (2005), and upscale precipitation with a statistical parameterisation accounting for mean precipitation as well as short-term (daily) variability. This technique is key to capturing and quantifying the importance of rare, extreme events. The second challenge stems from the fact that erosion rates are proportional not to precipitation but rather to discharge, which results from a complex convolution of the regional precipitation patterns with the landscape. To address this second obstacle, we present work that investigates the relationship between a stochastic description of precipitation and one of discharge, linking general patterns of precipitation and discharge rather than attempting to establish a deterministic relationship. To achieve this, we model the effect of precipitation variability on runoff variability, and compare associated precipitation and discharge measurements from a range of climatic regimes and spatial scales in the Himalayas. Using the results of this work, we integrate the statistical parameterisation of precipitation into a landscape evolution model, allowing us to explore the effect of realistic precipitation patterns, specifically precipitation variability, on the evolution of relief and topography. References Bras, R. L., & Tucker, G. E. (2000). A stochastic approach to modeling the role of rainfall variability in
Hahl, Sayuri K.; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio
2014-10-01
This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed. PMID:25124024
Stochastic Modeling of Usage Patterns in a Web-Based Information System.
ERIC Educational Resources Information Center
Chen, Hui-Min; Cooper, Michael D.
2002-01-01
Uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns, both in rates and in probabilities, in a Web-based information system. Describes search sessions from transaction logs of the University of California's MELVYL library catalog system and discusses sequential dependency. (Author/LRW)
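The semi-Markov ingredients mentioned in this abstract, state transition probabilities and holding times, can be estimated directly from session logs. A minimal stdlib-only Python sketch with hypothetical session data and state names (the MELVYL states and transaction logs themselves are not reproduced here):

```python
from collections import defaultdict

def transition_stats(sessions):
    """Estimate the two ingredients of a semi-Markov chain from search
    sessions: state transition probabilities and mean holding times.

    Each session is a list of (state, seconds_in_state) pairs.
    """
    counts = defaultdict(lambda: defaultdict(int))   # counts[s][t] = # of s -> t
    holding = defaultdict(list)                      # holding[s] = times spent in s
    for session in sessions:
        for (s, dt), (t, _) in zip(session, session[1:]):
            counts[s][t] += 1
            holding[s].append(dt)
    probs = {s: {t: n / sum(nxt.values()) for t, n in nxt.items()}
             for s, nxt in counts.items()}
    mean_hold = {s: sum(v) / len(v) for s, v in holding.items()}
    return probs, mean_hold

# Hypothetical sessions (state names and timings invented for illustration).
sessions = [
    [("index", 20), ("display", 45), ("index", 10), ("quit", 0)],
    [("index", 30), ("display", 60), ("quit", 0)],
]
probs, mean_hold = transition_stats(sessions)
```

Dividing each mean holding time into the transition probabilities would give the transition rates discussed in the abstract.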
Binomial distribution based τ-leap accelerated stochastic simulation
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2005-01-01
Recently, Gillespie introduced the τ-leap approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial distribution τ-leap algorithm (abbreviated as BD-τ method). This method combines the bounded nature of the binomial distribution variable with the limiting reactant and constrained firing concepts to avoid negative populations encountered in the original τ-leap method of Gillespie for large time increments, and thus conserve mass. Simulations using prototype reaction networks show that the BD-τ method is more accurate than the original method for comparable coarse-graining in time.
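The core idea of the BD-τ method, bounding the number of firings per leap by the limiting-reactant population so that counts can never go negative, can be sketched for a single irreversible reaction A + B → C. This is an illustrative simplification, not the authors' full algorithm; the rate constant, populations, and leap size are hypothetical:

```python
import random

def bd_tau_leap(a, b, c, rate, tau, steps, rng):
    """Binomial tau-leap sketch for the irreversible reaction A + B -> C.

    Each leap draws the number of firings from a binomial distribution
    bounded by the limiting-reactant population, so populations stay
    non-negative and mass is conserved.
    """
    def binomial(n, p):
        # Bernoulli-trial binomial sampler (stdlib-only; fine for a sketch)
        return sum(rng.random() < p for _ in range(n))

    for _ in range(steps):
        n_max = min(a, b)                        # limiting reactant bounds firings
        if n_max == 0:
            break
        p = min(1.0, rate * a * b * tau / n_max)  # expected firings / bound
        k = binomial(n_max, p)                   # k <= n_max by construction
        a, b, c = a - k, b - k, c + k
    return a, b, c

a, b, c = bd_tau_leap(a=50, b=40, c=0, rate=0.01, tau=0.1,
                      steps=200, rng=random.Random(1))
```

A Poisson draw in place of the binomial could exceed min(a, b) for large τ, which is exactly the negative-population failure the BD-τ method avoids.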
NASA Astrophysics Data System (ADS)
Gomez Martínez, S. P.; da Silva, L. F. P.; Madriz Aguilar, J. E.; Bellini, M.
2007-08-01
We develop a stochastic approach to study gravitational waves produced during the inflationary epoch in the presence of a decaying cosmological parameter, on a 5D geometrical background which is Riemann flat. We find that the squared tensor metric fluctuations depend strongly on the cosmological parameter Λ(t), and we finally illustrate the formalism with an example of a decaying Λ(t).
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and sources of uncertainty. Single CPT samplings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated in the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi
Hu, Yan; Wen, Jing-Ya; Li, Xiao-Li; Wang, Da-Zhou; Li, Yu
2013-10-15
A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. Then, the probability of violating environmental guideline (Pv) can be determined by comparison between the modeled concentrations and the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, simulated from 1985 to 2020. Four scenarios were analyzed, including "residential land" and "industrial land" environmental guidelines under "strict" and "loose" strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission in soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in the dynamic multimedia integrated environmental risk assessment and the contaminated site management. PMID:23995555
A new stochastic approach for the simulation of agglomeration between colloidal particles.
Henry, Christophe; Minier, Jean-Pierre; Pozorski, Jacek; Lefèvre, Grégory
2013-11-12
This paper presents a stochastic approach for the simulation of particle agglomeration, which is addressed as a two-step process: first, particles are transported by the flow toward each other (collision step) and, second, short-ranged particle-particle interactions lead either to the formation of an agglomerate or prevent it (adhesion step). Particle collisions are treated in the framework of Lagrangian approaches where the motions of a large number of particles are explicitly tracked. The key idea to detect collisions is to account for the whole continuous relative trajectory of particle pairs within each time step and not only the initial and final relative distances between two possible colliding partners at the beginning and at the end of the time steps. The present paper is thus the continuation of a previous work (Mohaupt M., Minier, J.-P., Tanière, A. A new approach for the detection of particle interactions for large-inertia and colloidal particles in a turbulent flow, Int. J. Multiphase Flow, 2011, 37, 746-755) and is devoted to an extension of the approach to the treatment of particle agglomeration. For that purpose, the attachment step is modeled using the DLVO theory (Derjaguin and Landau, Verwey and Overbeek) which describes particle-particle interactions as the sum of van der Waals and electrostatic forces. The attachment step is coupled with the collision step using a common energy balance approach, where particles are assumed to agglomerate only if their relative kinetic energy is high enough to overcome the maximum repulsive interaction energy between particles. Numerical results obtained with this model are shown to compare well with available experimental data on agglomeration. These promising results assert the applicability of the present modeling approach over a whole range of particle sizes (even nanoscopic) and solution conditions (both attractive and repulsive cases). PMID:24111685
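The energy-balance coupling of the collision and adhesion steps can be illustrated with a minimal sketch: a simplified sphere-sphere DLVO profile (van der Waals attraction plus screened electrostatic repulsion in the Derjaguin approximation), whose maximum serves as the barrier that the pair's relative kinetic energy must exceed. All parameter values below are illustrative placeholders, not taken from the paper:

```python
import math

def dlvo_energy(h, hamaker, radius, kappa, phi0, eps=7.08e-10):
    """Simplified sphere-sphere DLVO energy (J) at surface separation h (m):
    van der Waals attraction plus screened electrostatic repulsion in the
    Derjaguin approximation; eps defaults to the permittivity of water."""
    vdw = -hamaker * radius / (12.0 * h)
    elec = 2.0 * math.pi * eps * radius * phi0 ** 2 * math.exp(-kappa * h)
    return vdw + elec

def agglomerates(kinetic_energy, barrier):
    """Energy-balance criterion: a colliding pair sticks only if its
    relative kinetic energy exceeds the maximum repulsive energy."""
    return kinetic_energy >= barrier

# Hypothetical 100 nm particle in water; Hamaker constant, Debye length
# (1/kappa), and surface potential are illustrative placeholders.
separations = [i * 1e-10 for i in range(1, 500)]   # 0.1 nm to ~50 nm
barrier = max(dlvo_energy(h, hamaker=1e-20, radius=1e-7,
                          kappa=1e8, phi0=0.025) for h in separations)
```

With these repulsive (positive-barrier) conditions, slow pairs bounce apart while sufficiently energetic pairs attach, the attractive case corresponds to a vanishing barrier.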
NASA Astrophysics Data System (ADS)
Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
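The paper validates its closed-form results against Monte Carlo simulation; the kind of cross-check involved can be sketched in the simpler constant-volatility Black-Scholes limit (not the stochastic-volatility model treated in the paper):

```python
import math
import random

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * n(d1) - k * math.exp(-r * t) * n(d2)

def mc_call(s0, k, r, sigma, t, n_paths, rng):
    """Monte Carlo price: sample terminal prices under geometric Brownian
    motion and average the discounted payoff."""
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths

analytic = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)   # approx. 10.45
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000, random.Random(0))
```

The Monte Carlo estimate converges to the analytical value as the number of paths grows, which is the agreement the authors report for their stochastic-volatility formulas.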
Stochastic modeling of rainfall
Guttorp, P.
1996-12-31
We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.
Stochastic dynamics of charge fluctuations in dusty plasma: A non-Markovian approach
Asgari, H.; Muniandy, S. V.; Wong, C. S.
2011-08-15
Dust particles in typical laboratory plasma become charged largely by collecting electrons and/or ions. Most theoretical studies in dusty plasma assume that the grain charge remains constant even though it fluctuates due to the discrete nature of the charge. The rates of ion and electron absorption depend on the grain charge and hence on its temporal evolution. Stochastic charging models based on the standard Langevin equation assume that the underlying process is Markovian. In this work, the memory effect in dust charging dynamics is incorporated using the fractional calculus formalism. The resulting fractional Langevin equation is solved to obtain the amplitude and correlation function of the dust charge fluctuation. It is shown that the effects of ion-neutral collisions can be interpreted in a phenomenological sense through the nonlocal fractional-order derivative.
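The standard (memoryless) Langevin charging model that this work generalizes can be sketched as an Ornstein-Uhlenbeck process for the charge fluctuation; the fractional Langevin equation of the paper replaces the ordinary time derivative with a nonlocal fractional-order one, changing the decay of the correlation function. Parameter values below are illustrative:

```python
import math
import random

def langevin_charge(theta, d, dt, n, rng):
    """Euler-Maruyama simulation of the Markovian Langevin charging model
    dq = -theta * q * dt + sqrt(2 * d) * dW for the charge fluctuation q.
    Its stationary variance is d / theta and its autocorrelation decays
    exponentially; the fractional model modifies both."""
    q, path = 0.0, []
    for _ in range(n):
        q += -theta * q * dt + math.sqrt(2.0 * d * dt) * rng.gauss(0.0, 1.0)
        path.append(q)
    return path

path = langevin_charge(theta=1.0, d=0.5, dt=0.01, n=200_000,
                       rng=random.Random(2))
variance = sum(q * q for q in path) / len(path)   # near d / theta = 0.5
```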
Stochastic approach to correlations beyond the mean field with the Skyrme interaction
NASA Astrophysics Data System (ADS)
Fukuoka, Y.; Nakatsukasa, T.; Funaki, Y.; Yabana, K.
2012-10-01
Large-scale calculation based on the multi-configuration Skyrme density functional theory is performed for the light N = Z even-even nucleus ¹²C. Stochastic procedures and the imaginary-time evolution are utilized to prepare many Slater determinants. Each state is projected on eigenstates of parity and angular momentum. Then, performing the configuration mixing calculation with the Skyrme Hamiltonian, we obtain low-lying energy eigenstates and their explicit wave functions. The generated wave functions are completely free from any assumption or symmetry restriction. Excitation spectra and transition probabilities are well reproduced, not only for the ground-state band, but also for negative-parity excited states and the Hoyle state.
NASA Astrophysics Data System (ADS)
Dong, Cong; Huang, Guohe; Tan, Qian; Cai, Yanpeng
2014-03-01
Water resources are fundamental for the support of regional development. Effective planning can facilitate sustainable management of water resources to balance socioeconomic development and water conservation. In this research, coupled planning of water resources and agricultural land use was undertaken through the development of an inexact-stochastic programming approach. This inexact modeling approach integrated interval linear programming and chance-constrained programming methods. It was employed to successfully tackle uncertainty in the form of interval numbers and probabilistic distributions existing in water resource systems. It was then applied to a typical regional water resource system to demonstrate its applicability and validity by generating efficient system solutions. Based on the process of model formulation and result analysis, the developed model could be used to help identify optimal water resource utilization patterns and the corresponding agricultural land-use schemes in three sub-regions. Furthermore, a number of decision alternatives were generated under multiple water-supply conditions, which could help decision makers identify desired management policies.
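The chance-constrained ingredient of such an inexact-stochastic model has a standard deterministic equivalent when the uncertain right-hand side is normally distributed. A minimal sketch with hypothetical water-supply numbers (not from the study):

```python
from statistics import NormalDist

def deterministic_rhs(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(demand <= supply) >= alpha with supply ~ Normal(mu, sigma):
    it reduces to the linear constraint
    demand <= mu + sigma * Phi^{-1}(1 - alpha)."""
    return mu + sigma * NormalDist().inv_cdf(1.0 - alpha)

# Hypothetical numbers: mean available water 100 (million m^3), sd 10,
# and a 95% reliability requirement; the usable supply tightens to ~83.55.
rhs = deterministic_rhs(mu=100.0, sigma=10.0, alpha=0.95)
```

The transformed right-hand side can then be fed to an ordinary (or interval) linear program, which is how chance-constrained programming keeps the stochastic model tractable.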
Stochastic investigation of two-dimensional cross sections of rocks based on the climacogram
NASA Astrophysics Data System (ADS)
Kalamioti, Anna; Dimitriadis, Panayiotis; Tzouka, Katerina; Lerias, Eleutherios; Koutsoyiannis, Demetris
2016-04-01
The statistical properties of soil and rock formations are essential for the characterization of the porous medium geological structure as well as for the prediction of its transport properties in groundwater modelling. We investigate two-dimensional cross sections of rocks in terms of the stochastic structure of their morphology, quantified by the climacogram (i.e., the variance of the averaged process vs. scale). The analysis is based on both microscale and macroscale data, specifically on Scanning Electron Microscope (SEM) pictures and field photos, respectively. We identify and quantify the stochastic properties with emphasis on the large-scale type of decay (exponential or power-type, the latter also known as Hurst-Kolmogorov behaviour). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
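The climacogram itself is straightforward to compute: average the series over non-overlapping windows of scale k, then take the variance of the averages. A stdlib-only Python sketch on synthetic white noise, for which γ(k) ≈ γ(1)/k, whereas Hurst-Kolmogorov behaviour would show a slower, power-law decay:

```python
import random

def climacogram(x, scales):
    """Climacogram gamma(k): variance of the process averaged over
    non-overlapping windows of k samples, for each scale k."""
    gamma = {}
    for k in scales:
        m = len(x) // k
        means = [sum(x[i * k:(i + 1) * k]) / k for i in range(m)]
        mu = sum(means) / m
        gamma[k] = sum((v - mu) ** 2 for v in means) / m
    return gamma

rng = random.Random(3)
white = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
g = climacogram(white, scales=[1, 10, 100])
# White noise: g[k] ~ g[1] / k. A Hurst-Kolmogorov process would instead
# decay as a power law k**(2H - 2) with H > 0.5, i.e. more slowly.
```

Fitting the log-log slope of γ(k) at large k is one way to distinguish exponential-type from power-type (Hurst-Kolmogorov) decay in image-derived porosity series.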
Chen, Bor-Sen; Chang, Yu-Te; Wang, Yu-Chao
2008-02-01
Molecular noises in gene networks come from intrinsic fluctuations, transmitted noise from upstream genes, and the global noise affecting all genes. Knowledge of molecular noise filtering in gene networks is crucial to understanding the signal processing in gene networks and to designing noise-tolerant gene circuits for synthetic biology. A nonlinear stochastic dynamic model is proposed to describe a gene network under intrinsic molecular fluctuations and extrinsic molecular noises. The stochastic molecular-noise-processing scheme of gene regulatory networks for attenuating these molecular noises is investigated from the nonlinear robust stabilization and filtering perspective. In order to improve the robust stability and noise filtering, a robust gene circuit design for gene networks is proposed based on the nonlinear robust H-infinity stochastic stabilization and filtering scheme, which requires solving a nonlinear Hamilton-Jacobi inequality. However, to avoid solving these complicated nonlinear stabilization and filtering problems, a fuzzy approximation method is employed to interpolate several linear stochastic gene networks at different operation points via fuzzy bases to approximate the nonlinear stochastic gene network. In this situation, the linear matrix inequality technique can be employed to simplify the gene circuit design problem, improving the robust stability and molecular-noise-filtering ability of gene networks against intrinsic molecular fluctuations and extrinsic molecular noises. PMID:18270080
Beam Based Measurements for Stochastic Cooling Systems at Fermilab
Lebedev, V.A.; Pasquinelli, R.J.; Werkema, S.J.; /Fermilab
2007-09-13
Improvement of antiproton stacking rates has been pursued for the last twenty years at Fermilab. The last twelve months have been dedicated to improving the computer model of the Stacktail system. The production of antiprotons encompasses the use of the entire accelerator chain with the exception of the Tevatron. In the Antiproton Source, two storage rings, the Debuncher and the Accumulator, are responsible for the accumulation of antiprotons in quantities that can exceed 2 × 10¹², but more routinely, stacks of 5 × 10¹¹ antiprotons are accumulated before being transferred to the Recycler ring. Since the beginning of this recent enterprise, peak accumulation rates have increased from 2 × 10¹¹ to greater than 2.3 × 10¹¹ antiprotons per hour. A goal of 3 × 10¹¹ per hour has been established. Improvements to the stochastic cooling systems are but a part of this current effort. This paper will discuss Stacktail system measurements and experienced system limitations.
A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach
NASA Astrophysics Data System (ADS)
Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent
2016-04-01
The stochastic mean-field (SMF) approach makes it possible to treat correlations beyond the mean field using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders while retaining only specific terms of the general BBGKY hierarchy. The use of the simplified BBGKY hierarchy is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak coupling regime, to gain physical insight into effects beyond the mean field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak and strong coupling regimes. In the strong coupling regime, it can only be used for short-time evolution. In that case, it gives information on the evolution time scale close to a saddle point associated with a quantum phase transition. For long-time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and that only the full SMF with initial sampling leads to reasonable results.
NASA Astrophysics Data System (ADS)
Hatfield, C.; Shao, N.
2005-05-01
At the level of the watershed, the riverine habitat represents a spatially distributed yet interconnected landscape element. The spatial organization of the riverine landscape determines the distribution and extent of habitats, and the interconnectivity influences how species access riverine habitat elements. A central question is which characteristics of a species affect its performance in the context of a river network, and how. We used a spatially explicit, stochastic simulation modeling approach to explore how interconnectivity and complexity of the stream network potentially interact with life history traits in determining riparian plant species persistence and abundance. We varied life history traits and stream network complexity in a factorial design. For each factorial combination, a new species was introduced to an established riparian community. We evaluated the new species and the community responses using various metrics, including rate of spread and abundance. Interaction strengths varied between different life history traits depending on network complexity, but the persistence and success of a new species were determined by the full combination of life history traits, not by any single trait or small subset of traits. This work underscores the need to better understand life histories using multiple pathways of investigation, including models, field studies, and experimental approaches.
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data. PMID:27165151
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectories according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in the biological, physical and robotic sciences. PMID:26909173
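The core modelling choice described above, drawing the next heading from a probability distribution over directions rather than summing influence vectors, can be sketched in a few lines. This is an illustrative toy, not the authors' calibrated zebrafish model: the candidate-direction grid, the attractiveness scores and the sharpness parameter `beta` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_heading(candidate_angles, attractiveness, beta=2.0, rng=rng):
    """Pick a new heading by sampling a probability distribution over
    candidate directions (instead of summing influence vectors).
    'attractiveness' scores each direction, e.g. from the perceived
    positions of neighbours and walls; beta sharpens the choice."""
    w = np.exp(beta * (attractiveness - attractiveness.max()))  # stable softmax
    p = w / w.sum()
    return rng.choice(candidate_angles, p=p)

# Toy usage: 36 candidate headings, one strongly attractive direction (pi/2).
angles = np.linspace(-np.pi, np.pi, 36, endpoint=False)
scores = -np.abs(angles - np.pi / 2)          # prefer headings near pi/2
headings = np.array([sample_heading(angles, scores) for _ in range(1000)])
```

Sampled headings cluster around the attractive direction but retain the stochastic spread that vector summation would average away.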
Hybrid approaches for multiple-species stochastic reaction–diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
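A minimal sketch of the coupling idea for the simplest case, pure diffusion in 1-D: integer particle counts hop randomly in the stochastic subdomain, a discretised heat equation evolves the deterministic subdomain, and mass crosses the interface only in whole particles (a fractional buffer accumulates the deterministic-to-stochastic flux), so the total particle number is conserved exactly. All rates and domain sizes are illustrative; the paper's scheme additionally handles reactions, multiple species and moving interfaces, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

NS, ND = 10, 10                # stochastic / deterministic site counts
hop = 0.1                      # per-step hop probability = diffusion number
stoch = np.full(NS, 50)        # integer particle counts
det = np.full(ND, 50.0)        # real-valued mass
buf = 0.0                      # fractional deterministic->stochastic flux

def step(stoch, det, buf):
    new_s = np.zeros_like(stoch)
    into_det = 0
    for i, n in enumerate(stoch):
        # each particle hops left / right / stays (reflecting left wall)
        l, r, stay = rng.multinomial(n, [hop, hop, 1 - 2 * hop])
        new_s[i] += stay
        new_s[max(i - 1, 0)] += l
        if i + 1 < NS:
            new_s[i + 1] += r
        else:
            into_det += r                    # whole particles cross interface
    # deterministic discrete diffusion, reflecting right wall; the leftmost
    # deterministic site leaks flux hop*det[0] toward the stochastic region
    lap = np.zeros_like(det)
    lap[1:-1] = det[:-2] - 2 * det[1:-1] + det[2:]
    lap[0] = -2 * det[0] + det[1]
    lap[-1] = det[-2] - det[-1]
    out_left = hop * det[0]
    new_d = det + hop * lap
    new_d[0] += into_det                     # arriving whole particles
    buf += out_left                          # accumulate fractional flux
    k = int(buf)                             # release whole particles only
    buf -= k
    new_s[-1] += k
    return new_s, new_d, buf

total0 = stoch.sum() + det.sum() + buf
for _ in range(200):
    stoch, det, buf = step(stoch, det, buf)
```

Because every exchange is either an integer particle or tracked in the buffer, the total mass is conserved to floating-point accuracy, mirroring the particle-conservation property of the published scheme.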
High-order distance-based multiview stochastic learning in image classification.
Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng
2014-12-01
How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications such as content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to utilize them efficiently is a challenge. Simply concatenating the features of different views into one long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL, obtaining the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification. PMID:25415948
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
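For contrast with the moment-equation variant, the MC-based EnKF analysis step that the abstract refers to can be sketched as follows; the two-state toy system, the diffuse prior ensemble and the noise levels are assumptions for illustration, not the groundwater setup of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R, rng=rng):
    """One stochastic-EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: observation operator; R: observation-error covariance."""
    n, Ne = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (Ne - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=Ne).T             # perturbed observations
    return X + K @ (Y - H @ X)

# Toy: true state (1.0, -0.5); only the first component is observed.
true = np.array([1.0, -0.5])
X = rng.normal(0.0, 1.0, size=(2, 500))             # diffuse prior ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
y = H @ true                                        # noise-free toy observation
Xa = enkf_update(X, y, H, R)
```

With a large ensemble the analysis mean of the observed component is pulled close to the observation; with small ensembles the sampled gain degrades, which is the filter-inbreeding problem the moment-equation approach is designed to avoid.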
Growth of aerosols in Titan's atmosphere and related time scales - A stochastic approach
NASA Astrophysics Data System (ADS)
Rannou, P.; Cabane, M.; Chassefiere, E.
1993-05-01
The evolution of Titan's aerosols is studied from their production altitude down to the ground using a stochastic approach. A background aerosol distribution is assumed, obtained from previous Eulerian modelling, and the evolution of a 'tagged' particle, released near the formation altitude, is followed by simulating in a random way its growth through coagulation with particles of the background distribution. The two distinct growth stages proposed by Cabane et al. (1992) to explain the formation of monomers and subsequent aggregates are confirmed. The first stage may be divided into two parts. First, within roughly one terrestrial day, particles grow mainly through collisions with larger particles. They reach the size of a monomer after typically one to five such collisions. Second, within a few terrestrial days to roughly one terrestrial month, particles evolve mainly by collisions with continuously created small particles and acquire their compact spherical structure. In the second stage, whose duration is roughly 30 terrestrial years, or one Titan seasonal cycle, particles grow by cluster-cluster aggregation during their fall through the atmosphere and reach, at low stratospheric levels, a typical radius of 0.4-0.5 micron.
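The tagged-particle idea, following one particle's random coagulation history against a fixed background, can be sketched as a simple waiting-time Monte Carlo. The kernel, background number density and stopping volume below are illustrative placeholders, not Titan atmosphere values.

```python
import numpy as np

rng = np.random.default_rng(3)

def grow_tagged(v0, v_background, n_background, kernel, v_stop, rng=rng):
    """Follow one tagged particle of volume v0 until it reaches v_stop.
    Collisions arrive as a Poisson process with rate
    kernel(v_tagged, v_background) * n_background; each collision merges
    the background particle's volume into the tagged one."""
    t, v = 0.0, v0
    while v < v_stop:
        rate = kernel(v, v_background) * n_background
        t += rng.exponential(1.0 / rate)   # waiting time to next collision
        v += v_background                  # volumes add on coagulation
    return t, v

# Toy Brownian-like kernel: larger targets sweep up material faster.
kernel = lambda v1, v2: 1e-9 * (v1 ** (1 / 3) + v2 ** (1 / 3)) ** 2
t, v = grow_tagged(v0=1.0, v_background=1.0, n_background=1e6,
                   kernel=kernel, v_stop=100.0)
```

Repeating this over many tagged particles yields the distribution of growth time-scales, which is how the stochastic approach recovers the stage durations quoted in the abstract.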
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights: > A new fractional programming (SLFP) method is developed for waste management. > SLFP can solve ratio optimization problems associated with random inputs. > A case study of waste flow allocation demonstrates its applicability. > SLFP helps compare objectives of two aspects and reflect system efficiency. > This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
Evolutionary dynamics of imatinib-treated leukemic cells by stochastic approach
NASA Astrophysics Data System (ADS)
Pizzolato, Nicola; Valenti, Davide; Adorno, Dominique; Spagnolo, Bernardo
2009-09-01
The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a statistical approach. Cancer progression is explored by applying a Monte Carlo method to simulate the stochastic behavior of cell reproduction and death in a population of blood cells which can experience genetic mutations. In CML front line therapy is represented by the tyrosine kinase inhibitor imatinib which strongly affects the reproduction of leukemic cells only. In this work, we analyze the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. Several scenarios of the evolutionary dynamics of imatinib-treated leukemic cells are described as a consequence of the efficacy of the different modelled therapies. We show how the patient response to the therapy changes when a high value of the mutation rate from healthy to cancerous cells is present. Our results are in agreement with clinical observations. Unfortunately, development of resistance to imatinib is observed in a fraction of patients, whose blood cells are characterized by an increasing number of genetic alterations. We find that the occurrence of resistance to the therapy can be related to a progressive increase of deleterious mutations.
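The Monte Carlo machinery described above, stochastic reproduction and death of cell populations with rare mutation and a therapy that selectively lowers the leukemic birth rate, can be sketched with a Gillespie-style loop. All rates, population sizes and the drug-effect factor are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(mu, drug_effect, steps=30000, rng=rng):
    """Toy birth/death simulation of healthy (H) and leukemic (C) cells.
    Healthy divisions mutate to leukemic with probability mu; therapy
    multiplies the leukemic birth rate by drug_effect (< 1 when active)."""
    H, C = 1000, 20
    birth, death = 1.0, 0.9
    for _ in range(steps):
        rates = np.array([birth * H, death * H,
                          drug_effect * birth * C, death * C])
        total = rates.sum()
        if total == 0:
            break
        event = rng.choice(4, p=rates / total)
        if event == 0:                 # healthy division (may mutate)
            if rng.random() < mu:
                C += 1
            else:
                H += 1
        elif event == 1:               # healthy death
            H -= 1
        elif event == 2:               # leukemic division
            C += 1
        else:                          # leukemic death
            C -= 1
    return H, C

H_treated, C_treated = simulate(mu=1e-4, drug_effect=0.3)
H_untreated, C_untreated = simulate(mu=1e-4, drug_effect=1.3)
```

When therapy pushes the leukemic net growth rate below zero the clone shrinks toward extinction, re-seeded only by rare mutations; without effective therapy it expands, the qualitative contrast the study explores.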
NASA Astrophysics Data System (ADS)
Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.
2015-03-01
In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits, and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came, and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of two traveling waves provided by the JPC can work as a which-path information eraser. Using a stochastic master equation approach, we demonstrate the probabilistic production of heralded entangled states and show that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of the driving fields. Additionally, we derive measurement rates and measurement optimization strategies, and discuss the effects of finite amplification gain, cavity losses, and qubit relaxation and dephasing. Work supported by IARPA, ARO and NSF.
Localized dynamic kinetic-energy-based models for stochastic coherent adaptive large eddy simulation
NASA Astrophysics Data System (ADS)
De Stefano, Giuliano; Vasilyev, Oleg V.; Goldstein, Daniel E.
2008-04-01
Stochastic coherent adaptive large eddy simulation (SCALES) is an extension of the large eddy simulation approach in which a wavelet filter-based dynamic grid adaptation strategy is employed to solve for the most "energetic" coherent structures in a turbulent field while modeling the effect of the less energetic background flow. In order to take full advantage of the ability of the method in simulating complex flows, the use of localized subgrid-scale models is required. In this paper, new local dynamic one-equation subgrid-scale models based on both eddy-viscosity and non-eddy-viscosity assumptions are proposed for SCALES. The models involve the definition of an additional field variable that represents the kinetic energy associated with the unresolved motions. This way, the energy transfer between resolved and residual flow structures is explicitly taken into account by the modeling procedure without an equilibrium assumption, as in the classical Smagorinsky approach. The wavelet-filtered incompressible Navier-Stokes equations for the velocity field, along with the additional evolution equation for the subgrid-scale kinetic energy variable, are numerically solved by means of the dynamically adaptive wavelet collocation solver. The proposed models are tested for freely decaying homogeneous turbulence at Reλ=72. It is shown that the SCALES results, obtained with less than 0.5% of the total nonadaptive computational nodes, closely match reference data from direct numerical simulation. In contrast to classical large eddy simulation, where the energetic small scales are poorly simulated, the agreement holds not only in terms of global statistical quantities but also in terms of spectral distribution of energy and, more importantly, enstrophy all the way down to the dissipative scales.
For airborne toxic particles, the stochastic intake (SI) paradigm involves relatively low numbers of particles that are presented for inhalation. Each person at risk may inhale a different number of particles, including zero particles. For such exposure scenarios, probabilistic d...
Random Walk-Based Solution to Triple Level Stochastic Point Location Problem.
Jiang, Wen; Huang, De-Shuang; Li, Shenghong
2016-06-01
This paper considers the stochastic point location (SPL) problem as a learning mechanism trying to locate a point on a real line via interaction with a random environment. In contrast to the stochastic environments in the literature, which confine the learning mechanism to moving in two directions, i.e., left or right, this paper introduces a general triple-level stochastic environment which not only tells the learning mechanism to go left or right, but can also inform it to stay unmoved. As we prove in this paper, the environment reported in the previous literature is a special case of the triple-level environment. A new learning algorithm, named the random walk-based triple-level learning algorithm, is proposed to locate an unknown point under this new type of environment. To examine the performance of this algorithm, we divided triple-level SPL problems into four distinct scenarios according to the properties of the unknown point and the stochastic environment, and proved that even under a triple-level nonstationary environment, with the convergence condition unsatisfied for some time (situations rarely considered in existing SPL work), the proposed learning algorithm still works properly, whether the unknown point is static or evolving with time. Extensive experiments validate our theoretical analyses and demonstrate that the proposed learning algorithms are quite effective and efficient. PMID:26168455
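The interaction loop of a triple-level SPL learner is easy to sketch: at each step the environment answers "left", "right", or "stay", truthfully with some probability p, and the learner takes one step accordingly. The resolution, error model (uniform lies over the other two answers) and step counts below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def triple_level_spl(target, p=0.8, step=0.01, iters=5000, rng=rng):
    """Random-walk learner on [0, 1] in a triple-level stochastic
    environment: each query is answered 'left', 'right', or 'stay'
    truthfully with probability p; otherwise the environment lies,
    uniformly over the other two answers."""
    x = 0.5
    for _ in range(iters):
        if abs(x - target) <= step / 2:
            truth = "stay"
        elif x < target:
            truth = "right"
        else:
            truth = "left"
        if rng.random() < p:
            ans = truth
        else:
            ans = rng.choice([a for a in ("left", "right", "stay") if a != truth])
        if ans == "right":
            x = min(1.0, x + step)
        elif ans == "left":
            x = max(0.0, x - step)     # 'stay' leaves x unchanged
    return x

x_hat = triple_level_spl(target=0.731)
```

As long as the environment is informative (p > 1/2 here), the drift toward the target dominates the lies and the walk hovers near the unknown point.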
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids. PMID:25532191
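The scenario-generation step, fitting a cumulative distribution through quantile points recovered from a list of prediction intervals and then sampling it by Monte Carlo, can be sketched as follows. The quantile levels and wind-power values are invented for illustration, and a piecewise-linear inverse CDF stands in for whatever empirical fit is used in practice.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical quantiles of hour-ahead wind power (MW): e.g. a 90% PI
# yields the 5% and 95% quantiles, a 50% PI the 25% and 75% quantiles.
levels    = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
quantiles = np.array([12.0, 20.0, 26.0, 33.0, 45.0])   # assumed values

def generate_scenarios(levels, quantiles, n, rng=rng):
    """Monte Carlo scenario generation by inverse transform: sample
    uniforms and map them through a piecewise-linear inverse CDF
    interpolated between the known quantile points."""
    u = rng.uniform(levels[0], levels[-1], size=n)  # stay in the known range
    return np.interp(u, levels, quantiles)

scen = generate_scenarios(levels, quantiles, n=5000)
```

The resulting scenario set can then be fed (after reduction) into the stochastic SCUC model; restricting the uniforms to the known quantile range avoids extrapolating the tails, a deliberate simplification here.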
Control of confidence domains in the problem of stochastic attractors synthesis
Bashkirtseva, Irina
2015-03-10
A nonlinear stochastic control system is considered. We discuss the problem of the synthesis of stochastic attractors and suggest a constructive approach based on the design of the stochastic sensitivity and the corresponding confidence domains. Details of this approach are demonstrated for the problem of the control of confidence ellipses near an equilibrium. An example of the control of the stochastic Van der Pol equation is presented.
NASA Astrophysics Data System (ADS)
Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene
2016-04-01
Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile on the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution are accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are approximated as moment magnitudes on the fault plane based on empirical scaling laws with the ensemble producing a magnitude range of 7.8 - 9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth and magnitude dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant
Quantification of Hepatitis C Virus Cell-to-Cell Spread Using a Stochastic Modeling Approach
Martin, Danyelle N.; Perelson, Alan S.; Dahari, Harel
2015-01-01
ABSTRACT It has been proposed that viral cell-to-cell transmission plays a role in establishing and maintaining chronic infections. Thus, understanding the mechanisms and kinetics of cell-to-cell spread is fundamental to elucidating the dynamics of infection and may provide insight into factors that determine chronicity. Because hepatitis C virus (HCV) spreads from cell to cell and has a chronicity rate of up to 80% in exposed individuals, we examined the dynamics of HCV cell-to-cell spread in vitro and quantified the effect of inhibiting individual host factors. Using a multidisciplinary approach, we performed HCV spread assays and assessed the appropriateness of different stochastic models for describing HCV focus expansion. To evaluate the effect of blocking specific host cell factors on HCV cell-to-cell transmission, assays were performed in the presence of blocking antibodies and/or small-molecule inhibitors targeting different cellular HCV entry factors. In all experiments, HCV-positive cells were identified by immunohistochemical staining and the number of HCV-positive cells per focus was assessed to determine focus size. We found that HCV focus expansion can best be explained by mathematical models assuming focus size-dependent growth. Consistent with previous reports suggesting that some factors impact HCV cell-to-cell spread to different extents, modeling results estimate a hierarchy of efficacies for blocking HCV cell-to-cell spread when targeting different host factors (e.g., CLDN1 > NPC1L1 > TfR1). This approach can be adapted to describe focus expansion dynamics under a variety of experimental conditions as a means to quantify cell-to-cell transmission and assess the impact of cellular factors, viral factors, and antivirals. IMPORTANCE The ability of viruses to efficiently spread by direct cell-to-cell transmission is thought to play an important role in the establishment and maintenance of viral persistence. As such, elucidating the dynamics of cell
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as two-stage stochastic programming. As the main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customers' demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of distribution centers. In the second stage, the decisions are the production quantities and the volumes transported between plants and customers.
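Latin Hypercube Sampling, the first ingredient of the scenario-generation algorithm, is easy to sketch: each coordinate's range is split into n equal-probability strata and every stratum is hit exactly once. Imposing correlation between demands (e.g. by rank reordering) and the subsequent scenario reduction are omitted; this shows only the basic stratified draw.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d, rng=rng):
    """Basic Latin Hypercube Sample of n points in [0, 1)^d: for every
    dimension, the n equal-probability strata are each hit exactly once
    (a random permutation of strata plus uniform jitter within each)."""
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1)
         + rng.random((d, n))) / n
    return u.T                      # shape (n, d)

pts = latin_hypercube(100, 2)
```

Mapping each column through a marginal inverse CDF turns these stratified uniforms into demand scenarios with the desired distributions.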
NASA Astrophysics Data System (ADS)
Kabamba, P. T.; Meerkov, S. M.; Ossareh, H. R.
2015-01-01
This paper considers feedback systems with asymmetric (i.e., non-odd-function) nonlinear actuators and sensors. While the stability of such systems can be investigated using the theory of absolute stability and its extensions, the current paper provides a method for their performance analysis, i.e., reference tracking and disturbance rejection. As in the case of symmetric nonlinearities considered in earlier work, the development is based on the method of stochastic linearisation (which is akin to the describing function method, but intended to study general properties of the dynamics rather than periodic regimes). Unlike the symmetric case, however, the nonlinearities considered here must be approximated not only by a quasilinear gain, but by a quasilinear bias as well. This paper derives transcendental equations for the quasilinear gain and bias, provides necessary and sufficient conditions for the existence of their solutions, and, using simulations, investigates the accuracy of these solutions as a tool for predicting the quality of reference tracking and disturbance rejection. The method developed is then applied to the performance analysis of specific systems, and the effect of asymmetry on their behaviour is investigated. In addition, this method is used to justify the recently discovered phenomenon of noise-induced loss of tracking in feedback systems with PI controllers, anti-windup, and sensor noise.
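The central computation, a quasilinear gain and a quasilinear bias for an asymmetric nonlinearity driven by a Gaussian signal, can be sketched numerically. For x ~ N(m, sigma^2), stochastic linearisation approximates f(x) by M + N(x - m), with bias M = E[f(x)] and gain N = E[f(x)(x - m)]/sigma^2; the asymmetric saturation limits below are illustrative, and quadrature replaces the paper's closed-form transcendental equations.

```python
import numpy as np

def quasilinear(f, m, sigma, nodes=64):
    """Stochastic linearisation of a (possibly asymmetric) nonlinearity:
    for x ~ N(m, sigma^2), approximate f(x) ~= M + N * (x - m) with
    bias M = E[f(x)] and gain N = E[f(x)(x - m)] / sigma^2.
    Expectations are computed by Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite_e.hermegauss(nodes)  # nodes for N(0, 1)
    w = w / w.sum()                                   # normalise weights
    x = m + sigma * t
    M = np.sum(w * f(x))
    N = np.sum(w * f(x) * (x - m)) / sigma**2
    return M, N

# Asymmetric saturation: an actuator that saturates at -0.5 and +2.0.
f = lambda x: np.clip(x, -0.5, 2.0)
M, N = quasilinear(f, m=0.0, sigma=1.0)
```

For a zero-mean unit-variance input, the asymmetry shows up as a nonzero bias M (here positive, since the negative side saturates earlier), which is exactly the term absent in the symmetric theory.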
Ultra-fast data-mining hardware architecture based on stochastic computing.
Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L
2015-01-01
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic pattern-recognition analysis of huge database sets. The proposed technique consists of a parallel hardware architecture that performs a similarity search of data against different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening of huge databases by a factor of 7 compared to a conventional digital implementation using the same hardware area. PMID:25955274
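The principle behind pulse-based stochastic-logic blocks is that a value in [0, 1] is encoded as the probability of a 1 in a random pulse stream, so a single AND gate multiplies two values. The sketch below simulates only this encoding/multiplication idea, not the paper's similarity-search architecture; the stream length and input values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

def to_stream(p, n, rng=rng):
    """Encode a value p in [0, 1] as a random pulse stream of length n
    whose fraction of 1-bits is p on average (unipolar stochastic code)."""
    return rng.random(n) < p

n = 100_000
a, b = to_stream(0.8, n), to_stream(0.6, n)
prod = a & b                  # one AND gate multiplies the encoded values
estimate = prod.mean()        # converges to 0.8 * 0.6 as n grows
```

Accuracy scales with stream length rather than gate count, which is why such blocks trade precision for drastically smaller hardware area.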
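The core primitive behind stochastic computing, as used in the architecture above, is bit-stream arithmetic: a probability is encoded as a Bernoulli pulse stream, and a single AND gate multiplies the probabilities of two independent streams. A minimal software sketch of that primitive (stream length and input probabilities are illustrative, not the paper's hardware design):

```python
import random

def to_stream(p, n, rng):
    """Encode a probability p as a Bernoulli pulse (bit) stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    """A single AND gate multiplies the probabilities of independent streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    """Recover the encoded probability as the fraction of 1-pulses."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)
prod = decode(sc_multiply(a, b))
print(round(prod, 2))  # close to 0.8 * 0.5 = 0.40
```

Because multiplication reduces to one logic gate per stream pair, similarity scores against many pre-stored categories can be accumulated in parallel with a very small hardware footprint, which is the source of the reported speed-up.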
Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.
1993-01-01
Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
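A rough illustration of the phenomenological stochastic approach described above: pit initiation modelled as a Poisson process in time with a constant rate, followed by deterministic growth. The rate, the linear growth law, and all parameter values below are assumptions for the sketch; the actual models tie these parameters to environmental variables:

```python
import random

def simulate_pitting(init_rate, growth_rate, t_end, dt, rng):
    """Monte Carlo sketch: pits initiate as a Poisson process in time
    (init_rate in pits per unit time) and then grow linearly at growth_rate.
    Returns the depth of every pit at time t_end."""
    initiation_times = []
    steps = int(t_end / dt)
    for k in range(steps):
        if rng.random() < init_rate * dt:   # pit initiation event in this step
            initiation_times.append(k * dt)
    # depth of each pit at t_end under constant growth
    return [growth_rate * (t_end - t0) for t0 in initiation_times]

rng = random.Random(1)
depths = simulate_pitting(init_rate=0.5, growth_rate=0.1, t_end=100.0, dt=0.01, rng=rng)
print(len(depths))  # around init_rate * t_end = 50 pits on average
```

Coupling the environment to such a model amounts to making `init_rate` and `growth_rate` functions of variables like temperature or chloride concentration, which is the role of the phenomenological equations mentioned in the abstract.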
Desynchronization of stochastically synchronized chemical oscillators
Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh; Showalter, Kenneth (E-mail: kshowalt@wvu.edu); Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan
2015-12-15
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
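Synchrony in an oscillator population, as studied above, is commonly quantified by the Kuramoto order parameter R = |⟨e^{iθ}⟩|: R near 1 means phase coherence, and a desynchronizing perturbation drives it down. A minimal sketch of that measure (the perturbation here is random phase scatter, not the optimal waveform designed from phase response curves in the paper):

```python
import cmath
import random

def order_parameter(phases):
    """Kuramoto order parameter R in [0, 1]; R near 1 means synchronized."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

rng = random.Random(0)
# A synchronized population: phases tightly clustered around 0.
sync = [0.1 * rng.gauss(0, 1) for _ in range(200)]
print(round(order_parameter(sync), 2))       # near 1

# A desynchronizing perturbation spreads the phases around the circle.
perturbed = [p + rng.uniform(-3.0, 3.0) for p in sync]
print(round(order_parameter(perturbed), 2))  # much smaller
```

In the experiments above, the drop in this phase coherence after a designed perturbation is what quantifies the effectiveness of the desynchronizing waveform.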
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
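An interpolation scheme that preserves irregularity, as called for above, can be sketched with random midpoint displacement, the classic construction of fractional-Brownian-like profiles between two measured values. The roughness parameter and end values below are illustrative, not the scheme of the abstract itself:

```python
import random

def fractal_interpolate(x0, x1, y0, y1, levels, h, rng):
    """Random midpoint displacement between two measured values.
    h in (0, 1) is a Hurst-like roughness parameter: the displacement
    scale shrinks by 2**(-h) at each refinement level, so the result
    passes through the data but stays irregular at every scale."""
    pts = {x0: y0, x1: y1}
    segments = [(x0, x1)]
    sigma = abs(y1 - y0) / 2
    for _ in range(levels):
        refined = []
        for a, b in segments:
            m = (a + b) / 2
            pts[m] = (pts[a] + pts[b]) / 2 + rng.gauss(0, sigma)
            refined += [(a, m), (m, b)]
        segments = refined
        sigma *= 2 ** (-h)   # roughness decays with scale
    xs = sorted(pts)
    return xs, [pts[x] for x in xs]

rng = random.Random(42)
xs, ys = fractal_interpolate(0.0, 1.0, 0.20, 0.35, levels=5, h=0.7, rng=rng)
print(len(xs))  # 2**5 + 1 = 33 interpolated points
```

Unlike kriging or splines, which smooth between observations, this construction honors the data while reproducing the rough spatial variability typical of porosity and hydraulic conductivity fields.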
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
Baars, Woutijn J.; Tinney, Charles E.
2014-05-15
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
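The POD step in the method above can be computed directly from a singular value decomposition of the snapshot matrix: the spatial modes are the left singular vectors, and the squared singular values give the energy captured by each mode. A minimal sketch with synthetic snapshots (the two-structure field below is assumed for illustration, not the jet data of the paper):

```python
import numpy as np

# Synthetic snapshot matrix: each column is one snapshot of a field dominated
# by one spatial structure plus a weaker second one.
rng = np.random.default_rng(0)
n_space, n_snap = 64, 20
grid = np.linspace(0, np.pi, n_space)
X = (np.outer(np.sin(grid), rng.normal(size=n_snap))
     + 0.1 * np.outer(np.sin(2 * grid), rng.normal(size=n_snap)))

# POD: modes are the left singular vectors of the snapshot matrix;
# squared singular values give the energy fraction per mode.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)
coeffs = U[:, :2].T @ X   # time coefficients of the two leading modes
print(energy[:2])         # the first mode carries most of the energy
```

The time coefficients `coeffs` are the low-dimensional signals to which the higher-order spectral (Volterra-kernel) estimation is then applied, which is how the method sidesteps the single-input/output restriction.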
NASA Astrophysics Data System (ADS)
Maerker, Michael; Bolus, Michael
2014-05-01
We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between the site locations and environmental indices. In order to assess the relations between the Neanderthal sites and the environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify the model that best fits the relationship between the attribute set (the predictor variables, i.e. the environmental variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the models for geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristic (ROC) curves for each Neanderthal class, both for training and test data. In a ROC curve, the sensitivity is plotted over the false positive rate (1-specificity) for all possible cut-off points. The quality of a ROC curve is quantified by the area under the curve. The dependent (target) variable in this study is the location of Neanderthal sites, described by latitude and longitude. The information on site locations was collected from the literature and from our own research. All sites were checked for accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show a wide correspondence in the selection of the most important predictor variables, generally showing less
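The model evaluation above rests on the area under the ROC curve, which equals the probability that a randomly chosen positive example outscores a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with made-up scores and labels:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive outscores a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
perfect = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # separates classes completely
mixed = [0.9, 0.3, 0.7, 0.8, 0.2, 0.1]    # some positives score below negatives
print(roc_auc(perfect, labels))  # 1.0
print(roc_auc(mixed, labels))
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect separation, which is why it serves as a single-number ranking of the TREENET, MAXENT and SVM models.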
Pinning distributed synchronization of stochastic dynamical networks: a mixed optimization approach.
Tang, Yang; Gao, Huijun; Lu, Jianquan; Kurths, Jürgen
2014-10-01
This paper is concerned with the problem of pinning synchronization of nonlinear dynamical networks with multiple stochastic disturbances. Two kinds of pinning schemes are considered: 1) pinned nodes are fixed along the time evolution and 2) pinned nodes are switched from time to time according to a set of Bernoulli stochastic variables. Using Lyapunov function methods and stochastic analysis techniques, several easily verifiable criteria are derived for the problem of pinning distributed synchronization. For the case of fixed pinned nodes, a novel mixed optimization method is developed to select the pinned nodes and find feasible solutions, which is composed of a traditional convex optimization method and a constraint optimization evolutionary algorithm. For the case of switching pinning scheme, upper bounds of the convergence rate and the mean control gain are obtained theoretically. Simulation examples are provided to show the advantages of our proposed optimization method over previous ones and verify the effectiveness of the obtained results. PMID:25291734
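The first pinning scheme above (a fixed set of pinned nodes) can be illustrated on a simple diffusively coupled network: feedback injected at only a few nodes drives every node to the target state through the coupling. The ring topology, gains, and zero target below are assumptions for the sketch, not the paper's stochastic setting or its optimization method:

```python
import numpy as np

n = 10
A = np.zeros((n, n))
for i in range(n):                        # undirected ring coupling graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
pinned = np.zeros(n)
pinned[:3] = 1.0                          # scheme 1: a fixed set of pinned nodes

rng = np.random.default_rng(0)
x = rng.normal(size=n)                    # initial node states; target is s = 0
c, k, dt = 1.0, 5.0, 0.01                 # coupling strength, pinning gain, step
for _ in range(5000):
    # diffusive coupling plus pinning feedback applied only at pinned nodes
    x = x + dt * (-c * (L @ x) - k * pinned * x)
print(np.abs(x).max())                    # all states driven near the target
```

Even the seven unpinned nodes converge, because the Laplacian coupling propagates the control action through the network; which nodes to pin for fastest convergence is exactly the selection problem the paper's mixed optimization addresses.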
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the time evolution of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open source program code is provided on
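The structure of the idea above, deterministic mean-field kinetics made stochastic by adding Langevin noise to each update, can be sketched on a toy 1-D lattice. The relaxation rule, noise amplitude, and profile below are illustrative assumptions, not the SKMF rate equations:

```python
import random

def skmf_like_step(c, dt, noise_amp, rng):
    """One explicit step on a 1-D ring: each site relaxes toward the average
    of its two neighbours (a mean-field-like exchange) plus additive Langevin
    noise. Illustrative of the SKMF idea only."""
    n = len(c)
    return [
        c[i] + dt * ((c[(i - 1) % n] + c[(i + 1) % n]) / 2 - c[i])
        + noise_amp * rng.gauss(0, 1)
        for i in range(n)
    ]

rng = random.Random(0)
c = [1.0 if i < 8 else 0.0 for i in range(16)]   # sharp concentration step
for _ in range(500):
    c = skmf_like_step(c, dt=0.2, noise_amp=0.005, rng=rng)
print(sum(c) / len(c))  # deterministic part conserves the mean, near 0.5
```

With `noise_amp = 0` this reduces to a purely deterministic mean-field relaxation; the noise term is what restores KMC-like fluctuations at a fraction of the cost of sampling individual jump events.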
NASA Astrophysics Data System (ADS)
Bianchini, Ilaria; Argiento, Raffaele; Auricchio, Ferdinando; Lanzarone, Ettore
2015-09-01
The great influence of uncertainties on the behavior of physical systems has always drawn attention to the importance of a stochastic approach to engineering problems. Accordingly, in this paper, we address the problem of solving a Finite Element analysis in the presence of uncertain parameters. We consider an approach in which several solutions of the problem are obtained in correspondence with samples of the parameters, and propose a novel non-intrusive method, which exploits functional principal component analysis, to keep the computational effort acceptable. Indeed, the proposed approach allows constructing an optimal basis of the solution space and projecting the full Finite Element problem into a smaller space spanned by this basis. Solving the problem in this reduced space is computationally convenient, and very good approximations are nevertheless obtained, as confirmed by an upper bound on the error between the full Finite Element solution and the reduced one. Finally, we assess the applicability of the proposed approach through different test cases, obtaining satisfactory results.
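The projection idea above can be sketched in a few lines: solve the full problem at a few parameter samples, extract an optimal basis from the snapshots, and solve new parameter values in the reduced space. The 1-D stiffness matrix and the way the uncertain parameter enters are assumptions for the sketch, not the paper's Finite Element model:

```python
import numpy as np

n = 50
K0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy stiffness matrix
f = np.ones(n)                                          # load vector

def A(mu):
    """Parameter-dependent system matrix (illustrative form)."""
    return K0 + mu * np.eye(n)

# Snapshots: full solutions at a few sampled parameter values.
snapshots = np.column_stack(
    [np.linalg.solve(A(mu), f) for mu in (0.5, 1.0, 2.0, 4.0)])

# Optimal basis of the solution space via SVD of the snapshots.
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :3]

# Galerkin projection: solve a 3x3 system instead of a 50x50 one.
mu_new = 1.5
u_full = np.linalg.solve(A(mu_new), f)
u_red = V @ np.linalg.solve(V.T @ A(mu_new) @ V, V.T @ f)
rel_err = np.linalg.norm(u_full - u_red) / np.linalg.norm(u_full)
print(rel_err)  # small: the reduced basis captures the solution manifold
```

Because solutions vary smoothly with the parameter, a handful of snapshots spans them well, which is what makes Monte Carlo sampling over many parameter values affordable in the reduced space.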
Comparison of Two Statistical Approaches to a Solution of the Stochastic Radiative Transfer Equation
NASA Astrophysics Data System (ADS)
Kirnos, I. V.; Tarasenkov, M. V.; Belov, V. V.
2016-04-01
The method of direct simulation of photon trajectories in a stochastic medium is compared with the method of closed equations suggested by G. A. Titov. A comparison is performed for a model of the stochastic medium in the form of a cloud field of constant thickness comprising rectangular clouds whose boundaries are determined by a stationary Poisson flow of points. It is demonstrated that the difference between the calculated results can reach 20-30%; however, in some cases (for some sets of initial data) the difference is limited to 5%, irrespective of the cloud cover index.
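Cloud boundaries generated by a stationary Poisson flow of points, as in the model above, imply alternating cloud and gap segments with exponential lengths, so the expected cloud cover is mean_cloud / (mean_cloud + mean_gap). A sketch of sampling such a field (segment means are illustrative; this is only the geometry, not Titov's closed equations):

```python
import random

def simulated_cover(mean_cloud, mean_gap, total_length, rng):
    """Fraction of a 1-D transect covered by clouds, with alternating
    cloud/gap segments of exponential length (two-state Poisson model)."""
    pos, covered, in_cloud = 0.0, 0.0, True
    while pos < total_length:
        mean = mean_cloud if in_cloud else mean_gap
        seg = min(rng.expovariate(1.0 / mean), total_length - pos)
        if in_cloud:
            covered += seg
        pos += seg
        in_cloud = not in_cloud
    return covered / total_length

rng = random.Random(3)
cover = simulated_cover(mean_cloud=1.0, mean_gap=1.0, total_length=10_000.0, rng=rng)
print(cover)  # near mean_cloud / (mean_cloud + mean_gap) = 0.5
```

Direct simulation traces photons through many such realizations, whereas the closed-equation method averages over the cloud statistics analytically, which is where the two approaches can diverge.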
NASA Astrophysics Data System (ADS)
Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.
2014-12-01
We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way prior statistical moment estimates (PSME) (required to build the Kalman gain matrix) are obtained. In the first approach, the Monte Carlo method is employed to compute PSME of the variables and parameters; we denote this approach by EnKFMC. In the second approach, PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach by EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach by IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner, estimating Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods to estimate Y and h in steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate, located at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% of predictive coverage, although both EnKF approaches were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute value error. Coupling the EnKF methods with kriging to estimate Y reduces to one fourth the CPU time required for data
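The common core of the EnKFMC and EnKFME variants above is the analysis step, in which the Kalman gain is built from prior statistical moments and used to pull the ensemble toward the observations. A minimal stochastic-EnKF sketch with moments estimated from the ensemble itself, as in the Monte Carlo variant (the two-component state and observation setup are illustrative):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step: prior moments (cross- and observation
    covariances) are estimated from the ensemble and form the Kalman gain."""
    n_state, n_ens = ensemble.shape
    Hx = obs_op @ ensemble                       # observed part of each member
    x_mean = ensemble.mean(axis=1, keepdims=True)
    Hx_mean = Hx.mean(axis=1, keepdims=True)
    Pxy = (ensemble - x_mean) @ (Hx - Hx_mean).T / (n_ens - 1)
    Pyy = (Hx - Hx_mean) @ (Hx - Hx_mean).T / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain from ensemble moments
    # perturbed observations, one per member
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_ens))
    return ensemble + K @ (perturbed - Hx)

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0]])                       # observe only the first component
ens = rng.normal(0.0, 1.0, (2, 200))             # prior ensemble centered at 0
post = enkf_update(ens, H @ truth, H, obs_var=0.01, rng=rng)
print(post.mean(axis=1))  # first component pulled toward the observed 1.0
```

The EnKFME variant replaces the sample moments `Pxy` and `Pyy` with moments obtained from the nonlocal moment equations; the gain computation and update are otherwise the same.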
NASA Astrophysics Data System (ADS)
Maxwell, Reed M.; Kollet, Stefan J.
2008-05-01
The impact of three-dimensional subsurface heterogeneity in the saturated hydraulic conductivity on hillslope runoff generated by excess infiltration (so-called Hortonian runoff) is examined. A fully coupled, parallel subsurface-overland flow model is used to simulate runoff from an idealized hillslope. Ensembles of correlated, Gaussian random fields of saturated hydraulic conductivity are used to create uncertainty in spatial structure. A large number of cases are simulated in a parametric manner with the variance of the hydraulic conductivity varied over orders of magnitude. These cases include rainfall rates above, equal to, and below the geometric mean of the hydraulic conductivity distribution. These cases are also compared to theoretical representations of runoff production based on simple assumptions regarding (1) the rainfall rate and the value of hydraulic conductivity in the surface cell, using a spatially-indiscriminant approach; and (2) a percolation-theory type approach to incorporate so-called runon. Simulations to test the ergodicity of hydraulic conductivity on hillslope runoff are also performed. Results show that three-dimensional stochastic representations of the subsurface hydraulic conductivity can create shallow perching, which has an important effect on runoff behavior that is different than previous two-dimensional analyses. The simple theories are shown to be very poor predictors of the fraction of saturated area that might run off due to excess infiltration. It is also shown that ergodicity is reached only for a large number of integral scales (~30) and not achieved for cases where the rainfall rate is less than the geometric mean of the saturated hydraulic conductivity.
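The first simple theory above, the spatially-indiscriminant approach, predicts that a cell produces runoff whenever rainfall exceeds its saturated hydraulic conductivity, with K drawn from a lognormal distribution. A sketch of that estimate (an uncorrelated field; the spatial correlation, runon, and 3-D perching effects that the abstract shows to matter are deliberately ignored here):

```python
import math
import random

def infiltration_excess_fraction(rain, geo_mean_k, sigma_lnk, n_cells, rng):
    """Fraction of cells producing Hortonian runoff under the simple
    'spatially-indiscriminant' theory: runoff wherever rain > K, with
    ln K ~ Normal(ln(geo_mean_k), sigma_lnk**2), cells independent."""
    runoff = sum(
        1 for _ in range(n_cells)
        if rain > geo_mean_k * math.exp(sigma_lnk * rng.gauss(0, 1))
    )
    return runoff / n_cells

rng = random.Random(0)
# Rainfall equal to the geometric mean of K: half the cells produce runoff
# on average, regardless of the variance of ln K.
frac = infiltration_excess_fraction(rain=1.0, geo_mean_k=1.0, sigma_lnk=1.0,
                                    n_cells=50_000, rng=rng)
print(frac)
```

The abstract's finding is precisely that this kind of cell-by-cell estimate is a poor predictor once three-dimensional correlation structure, runon, and shallow perching are resolved by the coupled model.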