Science.gov

Sample records for stochastic approach based

  1. Solving Stochastic Flexible Flow Shop Scheduling Problems with a Decomposition-Based Approach

    NASA Astrophysics Data System (ADS)

    Wang, K.; Choi, S. H.

    2010-06-01

    Real manufacturing is dynamic and tends to suffer from many uncertainties. Research on production scheduling under uncertainty has recently received much attention. Although various approaches have been developed for scheduling under uncertainty, this problem remains difficult to tackle by any single approach because of its inherent difficulties. This chapter describes a decomposition-based approach (DBA) for makespan minimisation of a flexible flow shop (FFS) scheduling problem with stochastic processing times. The DBA decomposes an FFS into several machine clusters which can be solved more easily by different approaches. A neighbouring K-means clustering algorithm is developed to first group the machines of an FFS into an appropriate number of machine clusters, based on a weighted cluster validity index. A back propagation network (BPN) is then adopted to assign either the Shortest Processing Time (SPT) Algorithm or the Genetic Algorithm (GA) to generate a sub-schedule for each machine cluster. After machine grouping and approach assignment, an overall schedule is generated by integrating the sub-schedules of the machine clusters. Computational results reveal that the DBA is superior to SPT and GA alone for FFS scheduling under stochastic processing times, and that it can easily be adapted to schedule FFSs under other uncertainties.
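
    As a toy illustration of the SPT dispatching rule named above (our own sketch, not the authors' code; the single-machine setting and all names are assumptions), sequencing jobs by expected processing time minimizes expected total flow time on one machine of a cluster:

    ```python
    # Illustrative sketch only: SPT sequencing for one machine of a cluster,
    # with stochastic processing times summarized by their expected values.
    def spt_order(expected_times):
        """Indices of jobs sorted by Shortest (expected) Processing Time."""
        return sorted(range(len(expected_times)), key=expected_times.__getitem__)

    def total_flow_time(order, times):
        """Sum of completion times for a given job sequence on one machine."""
        t, total = 0.0, 0.0
        for j in order:
            t += times[j]
            total += t
        return total

    expected = [4.0, 2.5, 7.0, 1.0]      # assumed mean processing times
    order = spt_order(expected)          # -> [3, 1, 0, 2]
    print(order, total_flow_time(order, expected))
    ```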

  2. Biochemical simulations: stochastic, approximate stochastic and hybrid approaches

    PubMed Central

    2009-01-01

    Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097
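
    As a concrete instance of the exact stochastic methods this review covers, a minimal Gillespie direct-method simulation of an assumed toy birth-death system might look as follows (a sketch, not code from the article):

    ```python
    import math
    import random

    def gillespie_birth_death(k_birth, k_death, x0, t_end):
        """Direct-method SSA for a toy birth-death system:
        * -> X with propensity k_birth;  X -> * with propensity k_death * x."""
        t, x = 0.0, x0
        history = [(t, x)]
        while t < t_end:
            a1, a2 = k_birth, k_death * x
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
            x += 1 if random.random() * a0 < a1 else -1  # pick reaction by propensity
            history.append((t, x))
        return history

    print(gillespie_birth_death(k_birth=5.0, k_death=0.1, x0=0, t_end=10.0)[-1])
    ```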

  3. A biologically motivated signal transmission approach based on stochastic delay differential equation

    NASA Astrophysics Data System (ADS)

    Xu, Mingdong; Wu, Fan; Leung, Henry

    2009-09-01

    Based on stochastic delay differential equation (SDDE) modeling of neural networks, we propose an effective signal transmission approach along the neurons in such a network. Utilizing the linear relationship between the delay time and the variance of the SDDE system output, the transmitting side encodes a message as a modulation of the delay time, and the receiving end decodes the message by tracking the delay time, which is equivalent to estimating the variance of the received signal. This signal transmission approach turns out to follow the principle of the spread spectrum technique used in wireless and wireline wideband communications, but in the analog domain rather than the digital one. We hope the proposed method might help to explain some activities in biological systems, and the idea can further be extended to engineering applications. The error performance of the communication scheme is also evaluated here.
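
    Since decoding reduces to variance estimation, the receiver can be sketched in a few lines; here var = a*tau + b is an assumed calibration standing in for the paper's linear delay-variance relationship, and all names are illustrative:

    ```python
    # Hypothetical decoder sketch: recover the encoded delay tau from the
    # sample variance of a received window, given assumed calibration
    # constants a and b such that variance ~= a * tau + b.
    def estimate_delay(window, a, b):
        n = len(window)
        mean = sum(window) / n
        var = sum((s - mean) ** 2 for s in window) / (n - 1)  # sample variance
        return (var - b) / a
    ```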

  4. Combining Particle Filters and Consistency-Based Approaches for Monitoring and Diagnosis of Stochastic Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

    2004-01-01

    Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and its models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach, RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.
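
    For background, one step of a generic bootstrap particle filter is sketched below; this is plain particle filtering, not the paper's look-ahead RBPF, which additionally tracks discrete modes and Rao-Blackwellizes the continuous state:

    ```python
    import random

    def pf_step(particles, weights, transition, likelihood, obs):
        """One bootstrap particle-filter step: propagate, reweight, resample.
        Generic sketch only; transition and likelihood are assumed callables."""
        particles = [transition(p) for p in particles]        # sample dynamics
        weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
        total = sum(weights)
        if total == 0.0:                                      # degenerate case
            weights = [1.0 / len(particles)] * len(particles)
        else:
            weights = [w / total for w in weights]
        # multinomial resampling back to uniform weights
        particles = random.choices(particles, weights=weights, k=len(particles))
        return particles, [1.0 / len(particles)] * len(particles)
    ```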

  5. Holistic irrigation water management approach based on stochastic soil water dynamics

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Recognizing the essential gap between fundamental unsaturated-zone transport processes and soil and water management, due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function that addresses water productivity and profitability aspects of irrigation management by optimizing the irrigation policy. Using an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the unsaturated and saturated soil zones is treated semi-analytically in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components, assuming a fixed shallow water table level; in the second step, a numerical Newton-Raphson procedure is employed to update the water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing-season time scale, one can use the developed model in large-scale systems such as irrigation districts and agricultural catchments. Accordingly …

  6. Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach

    SciTech Connect

    Kaya, Yavuz; Safak, Erdal

    2008-07-08

    Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.
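
    The bilinear hysteretic force-deformation law for LRB isolators can be written compactly; the sketch below (illustrative parameters and names, not the paper's) clips an elastic predictor to the post-yield bounding lines:

    ```python
    # Toy bilinear hysteresis with kinematic hardening: initial stiffness k1,
    # post-yield stiffness k2, yield force fy. Parameter values are illustrative.
    def bilinear_force(x, x_prev, f_prev, k1=10.0, k2=1.0, fy=1.5):
        f_trial = f_prev + k1 * (x - x_prev)    # elastic predictor
        q = fy * (1.0 - k2 / k1)                # offset of the bounding lines
        upper, lower = k2 * x + q, k2 * x - q   # post-yield bounds
        return min(max(f_trial, lower), upper)  # clip to the yield bounds
    ```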

  7. Stochastic approach to equilibrium and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Tomé, Tânia; de Oliveira, Mário J.

    2015-04-01

    We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
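
    In the master-equation setting, the two entropy assumptions can be written explicitly; the following is the standard form they correspond to (our notation, a hedged reconstruction rather than the authors' exact equations):

    ```latex
    % Master equation, Gibbs entropy, and non-negative entropy production rate
    \frac{dP_i}{dt} = \sum_j \bigl( W_{ij} P_j - W_{ji} P_i \bigr), \qquad
    S = -k_B \sum_i P_i \ln P_i, \qquad
    \Pi = \frac{k_B}{2} \sum_{i,j} \bigl( W_{ij} P_j - W_{ji} P_i \bigr)
          \ln \frac{W_{ij} P_j}{W_{ji} P_i} \;\geq\; 0,
    ```

    with Π = 0 exactly when detailed balance W_ij P_j = W_ji P_i holds, i.e., in thermodynamic equilibrium.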

  8. Stochastic approach to equilibrium and nonequilibrium thermodynamics.

    PubMed

    Tomé, Tânia; de Oliveira, Mário J

    2015-04-01

    We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions. PMID:25974471

  9. Stochastic Modeling based on Dictionary Approach for the Generation of Daily Precipitation Occurrences

    NASA Astrophysics Data System (ADS)

    Panu, U. S.; Ng, W.; Rasmussen, P. F.

    2009-12-01

    The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov chains and the Alternating Renewal Process (ARP)) of precipitation occurrence processes generally assume short-term temporal dependency between neighboring states while implying long-term independency (randomness) of states in precipitation records. Existing temporal-dependency models for the generation of precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by the reigning states in segments (e.g., persistence of homogeneous states within the dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required to preserve the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrence comprising wet or dry states. Such flexibility provides a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using a geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal dependency in …
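
    Of the three benchmark models, the second-order Markov chain is the simplest to sketch; the transition probabilities below are illustrative placeholders, not fitted values:

    ```python
    import random

    def generate_occurrences(p_wet, n_days, seed_state=(0, 0)):
        """Second-order Markov chain for daily wet(1)/dry(0) occurrences.
        p_wet[(s2, s1)] = P(wet today | state two days ago s2, yesterday s1)."""
        s2, s1 = seed_state
        seq = []
        for _ in range(n_days):
            s = 1 if random.random() < p_wet[(s2, s1)] else 0
            seq.append(s)
            s2, s1 = s1, s
        return seq

    # Assumed example probabilities, not values estimated from any record:
    p_wet = {(0, 0): 0.15, (0, 1): 0.45, (1, 0): 0.25, (1, 1): 0.65}
    print(generate_occurrences(p_wet, 30))
    ```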

  10. A risk-based interactive multi-stage stochastic programming approach for water resources planning under dual uncertainties

    NASA Astrophysics Data System (ADS)

    Wang, Y. Y.; Huang, G. H.; Wang, S.; Li, W.; Guan, P. B.

    2016-08-01

    In this study, a risk-based interactive multi-stage stochastic programming (RIMSP) approach is proposed by incorporating the fractile criterion method and chance-constrained programming within a multi-stage decision-making framework. RIMSP is able to deal with dual uncertainties expressed as random boundary intervals that exist in the objective function and constraints. Moreover, RIMSP is capable of reflecting the dynamics of uncertainties, as well as the trade-off between the total net benefit and the associated risk. A water allocation problem is used to illustrate the applicability of the proposed methodology. A set of decision alternatives with different combinations of risk levels applied to the objective function and constraints can be generated for planning the water resources allocation system. The results can help decision makers examine potential interactions between risks related to the stochastic objective function and constraints. Furthermore, a number of solutions can be obtained under different water policy scenarios, which are useful for decision makers to formulate an appropriate policy under uncertainty. The performance of RIMSP is analyzed and compared with an inexact multi-stage stochastic programming (IMSP) method. Results of the comparison experiment indicate that RIMSP is able to provide more robust water management alternatives with lower system risk than IMSP.

  11. Spectral-spatial classification of hyperspectral data based on a stochastic minimum spanning forest approach.

    PubMed

    Bernard, Kévin; Tarabalka, Yuliya; Angulo, Jesús; Chanussot, Jocelyn; Benediktsson, Jón Atli

    2012-04-01

    In this paper, a new method for supervised hyperspectral data classification is proposed. In particular, the notion of stochastic minimum spanning forest (MSF) is introduced. For a given hyperspectral image, a pixelwise classification is first performed. From this classification map, M marker maps are generated by randomly selecting pixels and labeling them as markers for the construction of MSFs. The next step consists in building an MSF from each of the M marker maps. Finally, all the M realizations are aggregated with a maximum vote decision rule in order to build the final classification map. The proposed method is tested on three different data sets of hyperspectral airborne images with different resolutions and contexts. The influences of the number of markers and of the number of realizations M on the results are investigated in experiments. The performance of the proposed method is compared to several classification techniques (both pixelwise and spectral-spatial) using standard quantitative criteria and visual qualitative evaluation. PMID:22086502
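
    The final aggregation step is a pixelwise maximum vote over the M realizations; a minimal sketch of just that step (toy labels and an assumed flat data layout, not the authors' implementation):

    ```python
    from collections import Counter

    def aggregate_maps(classification_maps):
        """Maximum-vote aggregation of M per-realization classification maps
        (each a list of class labels, one per pixel)."""
        return [Counter(pixel_labels).most_common(1)[0][0]
                for pixel_labels in zip(*classification_maps)]

    maps = [[1, 2, 2], [1, 2, 1], [3, 2, 1]]  # M=3 toy realizations, 3 pixels
    print(aggregate_maps(maps))               # -> [1, 2, 1]
    ```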

  12. Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system.

    PubMed

    Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng

    2016-01-01

    Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction. PMID:26940002

  13. Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system

    PubMed Central

    Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng

    2016-01-01

    Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction. PMID:26940002

  14. Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system

    NASA Astrophysics Data System (ADS)

    Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng

    2016-03-01

    Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potential propagating along axons or other types of noisy signal transduction.

  15. A stochastic simulation-optimization approach for estimating highly reliable soil tension threshold values in sensor-based deficit irrigation

    NASA Astrophysics Data System (ADS)

    Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.

    2012-04-01

    In arid and semi-arid regions where water is scarce, farmers rely heavily on irrigation to grow crops and produce agricultural commodities. The variable and often severely limited water supply poses a serious challenge for farmers and demands sophisticated irrigation strategies that allow efficient management of the available water resources. The general aim is to increase water productivity (WP), and one strategy to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is to define soil-water-status-specific threshold values (in either soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance, and yet thresholds are often chosen by trial and error and are thus unreliable. Hence, for CDI to be effective, systematic investigations are needed to derive reliable threshold values that account for different CDI strategies. In this contribution, a method is presented that uses a simulation-based stochastic approach for estimating threshold values with high reliability. The approach consists of a weather generator offering statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under limited water supply, and a crop model for simulating plant growth and water consumption. The study focuses on soil tension threshold values for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based ones lies in their universal, soil-type-independent applicability. The investigated CDI strategies comprised schedules of constant threshold values, crop-development-stage-dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as a reference. The obtained threshold values were then tested in field …

  16. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography

    PubMed Central

    2013-01-01

    Background The conventional strain-based algorithm has been widely utilized in clinical practice. However, it can only provide relative information on tissue stiffness, whereas exact information on tissue stiffness should be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimation of the kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young’s modulus are computed simultaneously through an extended Kalman filter (EKF). Results In the following experiments the accuracy and robustness of this filtering framework are first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from an elastography phantom and patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young’s modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint. We address the model …
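
    The filtering core is a standard extended Kalman filter measurement update; the sketch below is the generic form (h and H_jac are assumed names standing in for the ultrasound observation model and its Jacobian, not the authors' implementation):

    ```python
    import numpy as np

    def ekf_update(x, P, z, h, H_jac, R):
        """Generic EKF measurement update (MMSE estimate)."""
        H = H_jac(x)                          # linearize about the estimate
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ (z - h(x))            # state correction
        P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
        return x_new, P_new
    ```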

  17. Stochastic resonance: A residence time approach

    SciTech Connect

    Gammaitoni, L.; Marchesoni, F.; Menichella Saetta, E.; Santucci, S.

    1996-06-01

    The Stochastic Resonance phenomenon is described as a synchronization process between periodic signals and the random response in bistable systems. The residence time approach as a useful tool in characterizing hidden periodicities is discussed. © 1996 American Institute of Physics.

  18. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a technique based on the chemical master equation with the least squares approach for parameter estimation in models of ordinary differential equations (ODEs). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353
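
    The "naive approach" benchmarked here amounts to least squares against the deterministic ODE solution; a sketch for the immigration-death model, with assumed parameter names and start values (the study's actual setup may differ):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def ode_solution(t, k_in, k_out, x_start):
        # closed-form solution of x' = k_in - k_out * x
        return k_in / k_out + (x_start - k_in / k_out) * np.exp(-k_out * t)

    def fit_ls(t, data, x_start):
        """Fit (k_in, k_out) to one noisy stochastic realization `data`."""
        residual = lambda p: ode_solution(t, p[0], p[1], x_start) - data
        return least_squares(residual, x0=[1.0, 0.1],
                             bounds=([0.0, 0.0], [np.inf, np.inf])).x
    ```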

  19. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.

  20. Variational approach to stochastic nonlinear problems

    SciTech Connect

    Phythian, R.; Curtis, W.D.

    1986-03-01

    A variational principle is formulated which enables the mean value and higher moments of the solution of a stochastic nonlinear differential equation to be expressed as stationary values of certain quantities. Approximations are generated by using suitable trial functions in this variational principle, and some of these are investigated numerically for the case of a Bernoulli oscillator driven by white noise. Comparison with exact data available for this system shows that the variational approach to such problems can be quite effective.

  21. Fuzzy stochastic elements method. Spectral approach

    NASA Astrophysics Data System (ADS)

    Sniady, Pawel; Mazur-Sniady, Krystyna; Sieniawska, Roza; Zukowski, Stanislaw

    2013-05-01

    We study a complex dynamic problem concerning a structure with uncertain parameters subjected to stochastic excitation. The formulation of such a problem introduces fuzzy random variables for the parameters of the structure and fuzzy stochastic processes for the load process. The uncertainty has two sources: the randomness of structural parameters such as geometric characteristics, material and damping properties, and the load process; and the imprecision of the theoretical model, incomplete information, or uncertain data. All of these have a great influence on the response of the structure. In analyzing such problems we describe the random variability using probability theory and the imprecision using fuzzy sets. Because it is difficult to find an analytic expression for the inversion of the stochastic operator in the stochastic differential equation, a number of approximate methods have been proposed in the literature which can be connected to the finite element method. To evaluate the effects of excitation in the frequency domain we use the spectral density function. Spectral analysis is widely used in the stochastic dynamics of linear systems under stationary random excitation; the concept of the evolutionary spectral density is used in the case of non-stationary random excitation. We solve the considered problem using the fuzzy stochastic finite element method. The solution is based on the idea of a fuzzy random frequency response vector for stationary input excitation and a transient fuzzy random frequency response vector for the fuzzy non-stationary one. We use the fuzzy random frequency response vector and the transient fuzzy random frequency response vector in the context of spectral analysis in order to determine the influence of structural uncertainty on the fuzzy random response of the structure. We study a linear system with random parameters subjected to two particular cases of stochastic excitation in the frequency domain. The first one …

  22. Permutation approach to finite-alphabet stationary stochastic processes based on the duality between values and orderings

    NASA Astrophysics Data System (ADS)

    Haruna, T.; Nakajima, K.

    2013-06-01

    The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
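
    For orientation, plain permutation entropy (Bandt-Pompe) is the simplest member of the family of permutation analogues discussed here; this sketch counts ordinal patterns and does not implement the modified partition that accounts for ties:

    ```python
    from math import log

    def permutation_entropy(series, order=3):
        """Shannon entropy (nats) of ordinal patterns of the given order."""
        counts = {}
        for i in range(len(series) - order + 1):
            pattern = tuple(sorted(range(order), key=lambda k: series[i + k]))
            counts[pattern] = counts.get(pattern, 0) + 1
        n = sum(counts.values())
        return -sum(c / n * log(c / n) for c in counts.values())

    print(permutation_entropy([4, 7, 9, 10, 6, 11, 3], order=3))
    ```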

  23. A stochastic approach to robust broadband structural control

    NASA Technical Reports Server (NTRS)

    Macmartin, Douglas G.; Hall, Steven R.

    1992-01-01

    Viewgraphs on a stochastic approach to robust broadband structural control are presented. Topics covered include: travelling wave model; dereverberated mobility model; computation of dereverberated mobility; power flow; impedance matching; stochastic systems; control problem; control of stochastic systems; using cost functional; Bernoulli-Euler beam example; compensator design; 'power' dual variables; dereverberation of complex structure; and dereverberated transfer function.

  24. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the software CrystalBall® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total 2005 production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected from MSP, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as a first step toward a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel plant. The results obtained from this study can help practitioners and decision-makers in steel production management. PMID:24290145
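
    The CB workflow described above is, at its core, repeated sampling from per-flow normal distributions; a minimal stand-in with placeholder means and standard deviations (illustrative values only, not the study's MSP data):

    ```python
    import random
    import statistics

    # Placeholder (mean, standard deviation) pairs for two annual flows.
    flows = {"steel_t": (5.4e6, 2.7e5), "coke_t": (1.1e6, 5.5e4)}

    N_TRIALS = 10_000
    for name, (mu, sigma) in flows.items():
        draws = [random.gauss(mu, sigma) for _ in range(N_TRIALS)]
        print(name, round(statistics.mean(draws)), round(statistics.stdev(draws)))
    ```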

  25. Multiscale stochastic approach for phase screens synthesis.

    PubMed

    Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea

    2011-07-20

    Simulating the turbulence effect on ground telescope observations is of fundamental importance for the design and test of suitable control algorithms for adaptive optics systems. In this paper we propose a multiscale approach for efficiently synthesizing turbulent phases at very high resolution. First, the turbulence is simulated at low resolution, taking advantage of a previously developed method for generating phase screens [J. Opt. Soc. Am. A 25, 515 (2008)]. Then, high-resolution phase screens are obtained as the output of a multiscale linear stochastic system. The multiscale approach significantly improves the computational efficiency of turbulence simulation with respect to recently developed methods [Opt. Express 14, 988 (2006)] [J. Opt. Soc. Am. A 25, 515 (2008)] [J. Opt. Soc. Am. A 25, 463 (2008)]. Furthermore, the proposed procedure ensures good accuracy in reproducing the statistical characteristics of the turbulent phase. PMID:21772400

  26. Optimality of collective choices: a stochastic approach.

    PubMed

    Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L

    2003-09-01

    Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different food sources is developed to account for the fluctuations inherent in the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is at its maximum, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a higher trail amount. These properties, due to the stochasticity of the recruitment process, can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251

  27. Theoretical approach on microscopic bases of stochastic functional self-organization: quantitative measures of the organizational degree of the environment

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel Adrian

    2001-11-01

    There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We prove, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of the aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also describe the intellectual heritage that has guided our research, as well as possible future developments.

  28. Stochastic realization approach to the efficient simulation of phase screens.

    PubMed

    Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea

    2008-02-01

    The phase screen method is a well-established approach to take into account the effects of atmospheric turbulence in astronomical seeing. This is of key importance in designing adaptive optics for new-generation telescopes, in particular in view of applications such as exoplanet detection or long-exposure spectroscopy. We present an innovative approach to simulate turbulent phase that is based on stochastic realization theory. The method shows appealing properties in terms of both accuracy in reconstructing the structure function and compactness of the representation. PMID:18246185

  29. Geometrically consistent approach to stochastic DBI inflation

    SciTech Connect

    Lorenz, Larissa; Martin, Jerome; Yokoyama, Jun'ichi

    2010-07-15

    Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.

  30. Stochastic approach to flat direction during inflation

    SciTech Connect

    Kawasaki, Masahiro; Takesako, Tomohiro

    2012-08-01

    We revisit the time evolution of a system of flat and non-flat directions during inflation. To take quantum noise into account, we base our analysis on the stochastic formalism and solve the coupled Langevin equations numerically. We focus on a class of models in which a tree-level Hubble-induced mass is not generated. Although the non-flat directions can in principle block the growth of the flat direction's variance, the blocking effects are suppressed by the effective masses of the non-flat directions. We find that the fate of the flat direction during inflation is determined by one-loop radiative corrections and non-renormalizable terms, as usually considered, if we remove the zero-point fluctuation from the noise terms.

  31. Symmetries of stochastic differential equations: A geometric approach

    NASA Astrophysics Data System (ADS)

    De Vecchi, Francesco C.; Morando, Paola; Ugolini, Stefania

    2016-06-01

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two-dimensional symmetric ordinary differential equation and to the case of two-dimensional Brownian motion.

  32. Two Different Approaches to Nonzero-Sum Stochastic Differential Games

    SciTech Connect

    Rainer, Catherine

    2007-06-15

    We make the link between two approaches to Nash equilibria for nonzero-sum stochastic differential games: the first one using backward stochastic differential equations and the second one using strategies with delay. We prove that, when both exist, the two notions of Nash equilibria coincide.

  33. Future change of daily precipitation indices in Japan: A stochastic weather generator-based bootstrap approach to provide probabilistic climate information

    NASA Astrophysics Data System (ADS)

    Iizumi, Toshichika; Takayabu, Izuru; Dairaku, Koji; Kusaka, Hiroyuki; Nishimori, Motoki; Sakurai, Gen; Ishizaki, Noriko N.; Adachi, Sachiho A.; Semenov, Mikhail A.

    2012-06-01

    This study proposes a stochastic weather generator (WG)-based bootstrap approach to provide probabilistic climate change information on mean precipitation as well as extremes, which applies a WG (i.e., LARS-WG) to daily precipitation under present-day and future climate conditions derived from dynamical and statistical downscaling models. Additionally, the study intercompares the precipitation change scenarios derived from the multimodel ensemble for Japan, focusing on five precipitation indices (mean precipitation, MEA; number of wet days, FRE; mean precipitation amount per wet day, INT; maximum number of consecutive dry days, CDD; and 90th percentile value of daily precipitation amount on wet days, Q90). Three regional climate models (RCMs: NHRCM, NRAMS and TWRF) are nested into the high-resolution atmosphere-ocean coupled general circulation model (MIROC3.2HI AOGCM) for the A1B emission scenario. LARS-WG is validated and used to generate 2000 years of daily precipitation from sets of grid-specific parameters derived from the 20-year simulations of the RCMs and a statistical downscaling model (SDM: CDFDM). Then 100 samples of 20-year continuous precipitation series are resampled, and mean values of the precipitation indices are computed, representing the randomness inherent in daily precipitation data. Based on these samples, the probabilities of change in the indices and the joint occurrence probability of extremes (CDD and Q90) are computed. High probabilities are found for increases in heavy precipitation amount in spring and summer and elongated consecutive dry days in winter over Japan in the period 2081-2100, relative to 1981-2000. The joint probability increases in most areas throughout the year, suggesting higher potential risk of droughts and excess water-related disasters (e.g., floods) in a 20-year period in the future. The proposed approach offers a more flexible way of estimating probabilities of multiple types of precipitation extremes.
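
    The bootstrap step can be sketched directly: from the 2000 generated years, draw 100 continuous 20-year blocks and compute per-block index means (annual_index is an assumed input array of annual index values, e.g. MEA):

    ```python
    import random

    def bootstrap_index_means(annual_index, n_samples=100, block=20):
        """Resample continuous 20-year blocks from a WG-generated series of
        annual index values and return the per-sample means."""
        means = []
        for _ in range(n_samples):
            start = random.randrange(len(annual_index) - block + 1)
            sample = annual_index[start:start + block]
            means.append(sum(sample) / block)
        return means
    ```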

  34. Three-dimensional stochastic estimation of porosity distribution: Benefits of using ground-penetrating radar velocity tomograms in simulated-annealing-based or Bayesian sequential simulation approaches

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.

    2012-05-01

    Estimation of the three-dimensional (3-D) distribution of hydrologic properties and related uncertainty is key for improved predictions of hydrologic processes in the subsurface. However, it is difficult to gain high-quality and high-density hydrologic information from the subsurface. In this regard a promising strategy is to use high-resolution geophysical data (that are relatively sensitive to variations of a hydrologic parameter of interest) to supplement direct hydrologic information from measurements in wells (e.g., logs, vertical profiles) and then generate stochastic simulations of the distribution of the hydrologic property conditioned on the hydrologic and geophysical data. In this study we develop and apply this strategy for a 3-D field experiment in the heterogeneous aquifer at the Boise Hydrogeophysical Research Site, and we evaluate how much benefit the geophysical data provide. We run high-resolution 3-D conditional simulations of porosity with both simulated-annealing-based and Bayesian sequential approaches using information from multiple intersecting crosshole ground-penetrating radar (GPR) velocity tomograms and neutron porosity logs. The benefit of using GPR data is assessed by investigating their ability, when included in conditional simulation, to predict porosity log data withheld from the simulation. Results show that the use of crosshole GPR data can significantly improve the estimation of the porosity spatial distribution and reduce the associated uncertainty compared to using only well log measurements. The amount of benefit depends primarily on the strength of the petrophysical relation between the GPR and porosity data, the variability of this relation throughout the investigated site, and lateral structural continuity at the site.

  35. Evaluating the role of soil variability on groundwater pollution and recharge at regional scale by integrating a process-based vadose zone model in a stochastic approach

    NASA Astrophysics Data System (ADS)

    Coppola, Antonio; Comegna, Alessandro; Dragonetti, Giovanna; Lamaddalena, Nicola; Zdruli, Pandi

    2013-04-01

    …modelling approaches have been developed at small spatial scales. Their extension to the applicative macroscale of a regional model is not a simple task, mainly because of the heterogeneity of vadose zone properties and the non-linearity of hydrological processes. Moreover, one of the problems in applying distributed models is that the spatial and temporal scales of the input data vary over a wide range and are not always consistent with the model structure. Under these conditions, a strictly deterministic response to questions about the fate of a pollutant in the soil is impossible. At best, one may answer "this is the average behaviour within this uncertainty band". Consequently, the extension of these equations to account for regional-scale processes requires that the uncertainties of the outputs be taken into account if the pollution vulnerability maps that may be drawn are to be used as agricultural management tools. A map generated without a corresponding map of associated uncertainties has no real utility. The stochastic stream tube approach is frequently used to model water flux and solute transport through the vadose zone at applicative scales. This approach considers the field soil as an ensemble of parallel and statistically independent tubes, assuming only vertical flow, and is generally used in a probabilistic framework. Each stream tube defines local flow properties that are assumed to vary randomly between the different stream tubes. Thus, the approach allows average water and solute behaviour to be described, along with the associated uncertainty bands. These stream tubes are usually considered to have parameters that are vertically homogeneous, which would be justified by the large difference between the horizontal and vertical extent of the spatial applicative scale. Vertical variability is generally overlooked. Obviously, all the model outputs are conditioned by this assumption. The latter, in turn, is more dictated by …

  36. A fast numerical approach to option pricing with stochastic interest rate, stochastic volatility and double jumps

    NASA Astrophysics Data System (ADS)

    Zhang, Sumei; Wang, Lihe

    2013-07-01

    This study proposes a pricing model that allows for stochastic interest rates and stochastic volatility in the double exponential jump-diffusion setting. The characteristic function of the proposed model is then derived. Fast numerical solutions for European call and put option pricing, based on the characteristic function and the fast Fourier transform (FFT) technique, are developed. Simulations show that our numerical technique is accurate, fast and easy to implement, and that the proposed model is suitable for modeling long-time real-market changes. The model and the proposed option pricing method are useful for empirical analysis of asset returns and risk management in firms.

  37. Stochastic approach to the molecular counting problem in superresolution microscopy

    PubMed Central

    Rollins, Geoffrey C.; Shin, Jae Yen; Bustamante, Carlos; Pressé, Steve

    2015-01-01

    Superresolution imaging methods—now widely used to characterize biological structures below the diffraction limit—are poised to reveal in quantitative detail the stoichiometry of protein complexes in living cells. In practice, the photophysical properties of the fluorophores used as tags in superresolution methods have posed a severe theoretical challenge toward achieving this goal. Here we develop a stochastic approach to enumerate fluorophores in a diffraction-limited area measured by superresolution microscopy. The method is a generalization of aggregated Markov methods developed in the ion channel literature for studying gating dynamics. We show that the method accurately and precisely enumerates fluorophores in simulated data while simultaneously determining the kinetic rates that govern the stochastic photophysics of the fluorophores to improve the prediction’s accuracy. This stochastic method overcomes several critical limitations of temporal thresholding methods. PMID:25535361

  38. Langevin equation approach to reactor noise analysis: stochastic transport equation

    SciTech Connect

    Akcasu, A.Z.; Stolle, A.M.

    1993-01-01

    The application of the Langevin equation method to the study of fluctuations in the space- and velocity-dependent neutron density as well as in the detector outputs in nuclear reactors is presented. In this case, the Langevin equation is the stochastic linear neutron transport equation with a space- and velocity-dependent random neutron source, often referred to as the noise equivalent source (NES). The power spectral densities (PSDs) of the NESs in the transport equation, as well as in the accompanying detection rate equations, are obtained, and the cross- and auto-power spectral densities of the outputs of pairs of detectors are explicitly calculated. The transport-level expression for the R(ω) ratio measured in the ²⁵²Cf source-driven noise analysis method is also derived. Finally, the implementation of the Langevin equation approach at different levels of approximation is discussed, and the stochastic one-speed transport and one-group P₁ equations are derived by first integrating the stochastic transport equation over speed and then eliminating the angular dependence by a spherical harmonics expansion. By taking the large transport rate limit in the P₁ description, the stochastic diffusion equation is obtained, as well as the PSD of the NES in it. This procedure also leads directly to the stochastic Fick's law.

  39. Computational approaches to stochastic systems in physics and biology

    NASA Astrophysics Data System (ADS)

    Jeraldo Maldonado, Patricio Rodrigo

    In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids which incorporates a conservative and a dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both the conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples their dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the …

  40. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed into a Bayesian network structure learning problem. • Examples are given for flows in random media.
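
    The polynomial chaos representation invoked above has the generic form below (our notation; in the paper the construction is additionally conditioned on the learned dependence structure):

    ```latex
    % Generic polynomial chaos expansion of a random field in terms of
    % orthogonal polynomials \Psi_k of i.i.d. standard random variables \xi
    u(x,\omega) \;\approx\; \sum_{k=0}^{K} u_k(x)\, \Psi_k\bigl(\xi(\omega)\bigr),
    \qquad
    \mathbb{E}\bigl[\Psi_j \Psi_k\bigr] = \delta_{jk}\,\mathbb{E}\bigl[\Psi_k^2\bigr].
    ```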

  2. Unstable infinite nuclear matter in stochastic mean field approach

    SciTech Connect

    Colonna, M.; Chomaz, P. (Laboratorio Nazionale del Sud, Viale Andrea Doria, Catania)

    1994-04-01

    In this article, we consider a semiclassical stochastic mean-field approach. In the case of unstable infinite nuclear matter, we calculate the characteristic time of the exponential growth of fluctuations and the diffusion coefficients associated with the unstable modes, in the framework of the Boltzmann-Langevin theory. These two quantities are essential to describe the dynamics of fluctuations and instabilities since, in the unstable regions, the evolution of the system will be dominated by the amplification of fluctuations. In order to make realistic 3D calculations feasible, we suggest replacing the complicated Boltzmann-Langevin theory with a simpler stochastic mean-field approach corresponding to a standard Boltzmann evolution, complemented by a simple noise chosen to reproduce the dynamics of the most unstable modes. Finally we explain how to approximately implement this method by simply tuning the noise associated with the use of a finite number of test particles in Boltzmann-like calculations.

  3. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    SciTech Connect

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M; Nutaro, James J; Kuruganti, Teja

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
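    The following sketch illustrates, under strong simplifying assumptions, how a Gaussian probabilistic constraint reduces to a deterministic tightened constraint, which is the kind of reduction that makes the cSLQC problem tractable; it uses a scalar one-step model with made-up numbers rather than the paper's SDP formulation.

```python
# One-step thermal model x+ = a*x + b*u + w, w ~ N(0, sigma^2), with a
# comfort chance constraint P(x+ <= x_max) >= 1 - eps.  For Gaussian noise
# this is equivalent to the tightened deterministic constraint
#   a*x + b*u <= x_max - sigma * z_eps.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

a, b, sigma = 0.9, 0.5, 0.2          # illustrative dynamics and noise level
x, x_max, eps = 24.0, 25.0, 0.05     # current temp, comfort limit, risk level
z_eps = norm.ppf(1 - eps)            # safety-margin multiplier

def cost(u):                         # quadratic energy + comfort tracking cost
    x_next = a * x + b * u
    return u**2 + (x_next - 22.0)**2

# tightened upper bound on u implied by the chance constraint
u_bound = (x_max - sigma * z_eps - a * x) / b
res = minimize_scalar(cost, bounds=(-10.0, u_bound), method="bounded")
print(f"optimal control u = {res.x:.3f}, tightened upper bound = {u_bound:.3f}")
```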

  4. Martingale Approach to Stochastic Control with Discretionary Stopping

    SciTech Connect

    Karatzas, Ioannis; Zamfirescu, Ingrid-Mona

    2006-03-15

    We develop a martingale approach for continuous-time stochastic control with discretionary stopping. The relevant Dynamic Programming Equation and Maximum Principle are presented. Necessary and sufficient conditions are provided for the optimality of a control strategy; these are analogues of the 'equalization' and 'thriftiness' conditions introduced by Dubins and Savage (1976) in a related, discrete-time context. The existence of a thrifty control strategy is established.

  5. Implications of a stochastic approach to air-quality regulations

    SciTech Connect

    Witten, A.J.; Kornegay, F.C.; Hunsaker, D.B. Jr.; Long, E.C. Jr.; Sharp, R.D.; Walsh, P.J.; Zeighami, E.A.; Gordon, J.S.; Lin, W.L.

    1982-09-01

    This study explores the viability of a stochastic approach to air quality regulations. The stochastic approach considered here is one which incorporates the variability which exists in sulfur dioxide (SO2) emissions from coal-fired power plants. Emission variability arises from a combination of many factors including variability in the composition of as-received coal such as sulfur content, moisture content, ash content, and heating value, as well as variability which is introduced in power plant operations. The stochastic approach as conceived in this study addresses variability by taking the SO2 emission rate to be a random variable with specified statistics. Given the statistical description of the emission rate and known meteorological conditions, it is possible to predict the probability of a facility exceeding a specified emission limit or violating an established air quality standard. This study also investigates the implications of accounting for emissions variability by allowing compliance to be interpreted as an allowable probability of occurrence of given events. For example, compliance with an emission limit could be defined as the probability of exceeding a specified emission value, such as 1.2 lbs SO2/MMBtu, being less than 1%. In contrast, compliance is currently taken to mean that this limit shall never be exceeded, i.e., no exceedance probability is allowed. The focus of this study is on the economic benefits offered to facilities through the greater flexibility of the stochastic approach as compared with possible changes in air quality and health effects which could result.
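    A minimal sketch of the exceedance-probability computation described above: the SO2 emission rate is treated as a random variable (here lognormal, with parameters of our own choosing, not the study's) and the probability of exceeding the 1.2 lbs SO2/MMBtu limit is estimated by Monte Carlo.

```python
# Monte Carlo estimate of P(emission rate > limit) for a lognormal rate.
import numpy as np

rng = np.random.default_rng(42)
mean, cv = 1.0, 0.15                      # mean rate and coefficient of variation
sigma2 = np.log(1 + cv**2)                # lognormal parameters matching mean/CV
mu = np.log(mean) - sigma2 / 2
rates = rng.lognormal(mu, np.sqrt(sigma2), size=100_000)

p_exceed = (rates > 1.2).mean()
print(f"P(rate > 1.2 lbs/MMBtu) ~ {p_exceed:.4f}")   # compliant if below 1%
```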

  6. Stochastic model updating utilizing Bayesian approach and Gaussian process model

    NASA Astrophysics Data System (ADS)

    Wan, Hua-Ping; Ren, Wei-Xin

    2016-03-01

    Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problems solved with optimization usually bring about the issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the IUQ problem in a straightforward manner, which enables it to avoid the aforementioned issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
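    A bare-bones sketch of the calibration loop, with a toy polynomial surrogate standing in for the Gaussian process model and plain random-walk Metropolis standing in for the DRAM sampler; all quantities are illustrative.

```python
# Bayesian calibration of one "stiffness" parameter against noisy responses,
# using a cheap surrogate in place of a high-resolution FEM.
import numpy as np

rng = np.random.default_rng(1)
surrogate = lambda k: 2.0 * np.sqrt(k)        # toy frequency-vs-stiffness map
k_true = 4.0
y_obs = surrogate(k_true) + rng.normal(0, 0.05, size=20)  # noisy "measurements"

def log_post(k):                              # flat prior on k > 0
    if k <= 0:
        return -np.inf
    return -0.5 * np.sum((y_obs - surrogate(k))**2) / 0.05**2

k, chain = 1.0, []
lp = log_post(k)
for _ in range(20_000):                       # random-walk Metropolis
    k_prop = k + rng.normal(0, 0.2)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

post = np.array(chain[5000:])                 # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (true {k_true})")
```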

  7. A Spatial Clustering Approach for Stochastic Fracture Network Modelling

    NASA Astrophysics Data System (ADS)

    Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.

    2014-07-01

    Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
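    The core fitting step, minimizing the sum of distances from seismic points to the fracture model, can be sketched with simulated annealing on a single 2D fracture (a line segment); this is an illustration of the idea under our own toy data, not the authors' full modification scheme.

```python
# Simulated annealing fit of one line-segment "fracture" to a point cloud.
import numpy as np

rng = np.random.default_rng(7)
t = rng.uniform(size=200)
pts = np.column_stack([t, 0.5 + 0.3 * t]) + rng.normal(0, 0.02, (200, 2))

def seg_dist(p, a, b):
    """Distances from points p (n,2) to the segment with endpoints a, b."""
    ab, ap = b - a, p - a
    s = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)   # clamped projection parameter
    return np.linalg.norm(ap - np.outer(s, ab), axis=1)

def energy(seg):
    return seg_dist(pts, seg[:2], seg[2:]).sum()  # sum of point-to-segment distances

seg = rng.uniform(0, 1, size=4)                  # (ax, ay, bx, by)
E, T = energy(seg), 1.0
for _ in range(20_000):                          # annealing loop
    cand = seg + rng.normal(0, 0.05, size=4)
    Ec = energy(cand)
    if Ec < E or rng.uniform() < np.exp(-(Ec - E) / T):
        seg, E = cand, Ec
    T *= 0.9997                                  # geometric cooling schedule
print("fitted segment endpoints:", seg.round(3), "misfit:", round(E, 3))
```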

  8. A new approach to the group analysis of one-dimensional stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Abdullin, M. A.; Meleshko, S. V.; Nasyrov, F. S.

    2014-03-01

    Stochastic evolution equations are investigated using a new approach to the group analysis of stochastic differential equations. It is shown that the proposed approach reduces the problem of group analysis for this type of equations to the same problem of group analysis for evolution equations of special form without stochastic integrals.

  9. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems are described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  10. GIS-based approach for optimal siting and sizing of renewables considering techno-environmental constraints and the stochastic nature of meteorological inputs

    NASA Astrophysics Data System (ADS)

    Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2016-04-01

    Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to road and electricity grid networks, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended at the regional scale.

  11. Barkhausen discontinuities and hysteresis of ferromagnetics: New stochastic approach

    SciTech Connect

    Vengrinovich, Valeriy

    2014-02-18

    The magnetization of a ferromagnetic material is considered as a periodically inhomogeneous Markov process. The theory assumes both statistically independent and correlated Barkhausen discontinuities. The model, based on chain evolution-type process theory, assumes that the domain structure of a ferromagnet passes successively through the stages of linear growth, exponential acceleration and domain annihilation to zero density at magnetic saturation. The solution of the stochastic differential Kolmogorov equation enables calculation of the hysteresis loop.

  12. A Stochastic Differential Equation Approach To Multiphase Flow In Porous Media

    NASA Astrophysics Data System (ADS)

    Dean, D.; Russell, T.

    2003-12-01

    The motivation for using stochastic differential equations in multiphase flow systems stems from our work in developing an upscaling methodology for single phase flow. The long-term goals of this project include: I. extending this work to a nonlinear upscaling methodology; II. developing a macro-scale stochastic theory of multiphase flow and transport that accounts for micro-scale heterogeneities and interfaces. In this talk, we present a stochastic differential equation approach to multiphase flow, a typical example of which is flow in the unsaturated domain. Specifically, a two phase problem is studied which consists of a wetting phase and a non-wetting phase. The approach yields a nonlinear stochastic differential equation describing the position of the non-wetting phase fluid particle. Our fundamental assumption is that the flow of fluid particles is described by a stochastic process and that the positions of the fluid particles over time are governed by the law of the process. It is this law which we seek to determine. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the position of the fluid particles in the experimental domain. The concept of a fluid particle is central to the development of the model described in this talk. Expressions for both saturation and volumetric fraction are developed using the fluid particle concept. Darcy's law and the continuity equation are then used to derive a Fokker-Planck equation using these expressions. The Ito calculus is then applied to derive a stochastic differential equation for the non-wetting phase. This equation has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDEs are typically used to model dispersion
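    A generic Euler-Maruyama sketch of integrating an Ito SDE with state-dependent drift and diffusion, the structure described above for the non-wetting phase particle position; the coefficients below are placeholders, not the saturation-dependent ones of the talk.

```python
# Euler-Maruyama integration of dX = mu(X) dt + sigma(X) dW over many paths.
import numpy as np

rng = np.random.default_rng(3)
mu = lambda x: 1.0 - 0.5 * x                 # illustrative drift
sigma = lambda x: 0.3 * np.sqrt(1 + x**2)    # illustrative diffusion

dt, n_steps, n_paths = 1e-3, 5000, 2000
x = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # Wiener increments
    x += mu(x) * dt + sigma(x) * dW

print(f"E[X_T] ~ {x.mean():.3f}, Var[X_T] ~ {x.var():.3f} at T = {dt * n_steps}")
```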

  13. Approaching complexity by stochastic methods: From biological systems to turbulence

    NASA Astrophysics Data System (ADS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-09-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
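    Reconstruction step (i) can be sketched in a few lines: estimate the drift and diffusion coefficients of a Langevin equation from a time series via conditional moments of the increments (the first two Kramers-Moyal coefficients), here on synthetic Ornstein-Uhlenbeck data with known true values.

```python
# Estimate D1(x) and D2(x) from data; for the OU process below, D1(x) = -x
# and D2 = 0.5.
import numpy as np

rng = np.random.default_rng(11)
dt, n = 1e-2, 500_000
x = np.zeros(n)
for i in range(1, n):                        # synthetic OU time series
    x[i] = x[i-1] - x[i-1] * dt + np.sqrt(2 * 0.5 * dt) * rng.normal()

bins = np.linspace(-2, 2, 21)
idx = np.digitize(x[:-1], bins)
dx = np.diff(x)
for b in range(5, 16):                       # central bins only
    sel = idx == b
    if sel.sum() < 100:
        continue
    xc = 0.5 * (bins[b-1] + bins[b])
    D1 = dx[sel].mean() / dt                 # drift estimate
    D2 = (dx[sel]**2).mean() / (2 * dt)      # diffusion estimate
    print(f"x={xc:+.1f}  D1~{D1:+.2f} (true {-xc:+.2f})  D2~{D2:.2f} (true 0.50)")
```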

  14. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen-Loève-based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
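    A sketch of the Karhunen-Loève building block that the KLME technique rests on: expand a grid-sampled 1D Gaussian random field with exponential covariance via the eigendecomposition of its covariance matrix (the coupling to moment equations is not reproduced here; all parameters are illustrative).

```python
# Truncated Karhunen-Loeve expansion of a 1D random field on a grid.
import numpy as np

n, L, var, corr_len = 200, 1.0, 1.0, 0.2
xs = np.linspace(0, L, n)
C = var * np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)  # covariance

eigvals, eigvecs = np.linalg.eigh(C)          # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

m = 20                                        # keep m dominant modes
rng = np.random.default_rng(5)
xi = rng.normal(size=m)                       # standard random variables
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)   # one realization

captured = eigvals[:m].sum() / eigvals.sum()
print(f"{m} modes capture {captured:.1%} of the field variance")
```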

  15. Stochastic thermodynamics of reactive systems: An extended local equilibrium approach

    NASA Astrophysics Data System (ADS)

    De Decker, Yannick; Derivaux, Jean-François; Nicolis, Grégoire

    2016-04-01

    The recently developed extended local equilibrium approach to stochastic thermodynamics is applied to reactive systems. The properties of the fluctuating entropy and entropy production are analyzed for general linear and for prototypical nonlinear kinetic processes. It is shown that nonlinear kinetics typically induces deviations of the mean entropy production from its value in the deterministic (mean-field) limit. The probability distributions around the mean are derived and shown to differ qualitatively in thermodynamic equilibrium, under nonequilibrium conditions, and in the vicinity of criticalities associated with the onset of multistability. In each case, large-deviation-type properties are shown to hold. The results are compared with those of alternative approaches developed in the literature.

  16. Stochastic control approaches for sensor management in search and exploitation

    NASA Astrophysics Data System (ADS)

    Hitchings, Darin Chester

    new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of a RH controller for sensor allocation in the presence of moving objects We also consider an adaptive Search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve this problem. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor that partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on quality of supervisory control.

  17. Majorana approach to the stochastic theory of line shapes

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Coleman, Piers

    2016-08-01

    Motivated by recent Mössbauer experiments on strongly correlated mixed-valence systems, we revisit the Kubo-Anderson stochastic theory of spectral line shapes. Using a Majorana representation for the nuclear spin we demonstrate how to recast the classic line-shape theory in a field-theoretic and diagrammatic language. We show that the leading contribution to the self-energy can reproduce most of the observed line-shape features including splitting and line-shape narrowing, while the vertex and the self-consistency corrections can be systematically included in the calculation. This approach permits us to predict the line shape produced by an arbitrary bulk charge fluctuation spectrum providing a model-independent way to extract the local charge fluctuation spectrum of the surrounding medium. We also derive an inverse formula to extract the charge fluctuation from the measured line shape.

  18. ENISI SDE: A New Web-Based Tool for Modeling Stochastic Processes.

    PubMed

    Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep

    2015-01-01

    Modeling and simulation approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool with stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) discussion of SDE as a generic approach for stochastic modeling in computational biology; (2) development of ENISI SDE, a web-based user-friendly SDE modeling tool that closely resembles regular ODE-based modeling; (3) application of the ENISI SDE modeling tool to a use case for studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differentiation ODE model has been published [8] and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model, and shows the effectiveness of SDE as a stochastic modeling approach in biology in general and immunology in particular, as well as the power of ENISI SDE. PMID:26357217

  19. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    NASA Astrophysics Data System (ADS)

    Samin, Adib J.

    2016-05-01

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  20. Text Classification Using ESC-Based Stochastic Decision Lists.

    ERIC Educational Resources Information Center

    Li, Hang; Yamanishi, Kenji

    2002-01-01

    Proposes a new method of text classification using stochastic decision lists, ordered sequences of IF-THEN-ELSE rules. The method can be viewed as a rule-based method for text classification having advantages of readability and refinability of acquired knowledge. Advantages of rule-based methods over non-rule-based ones are empirically verified.…

  1. Revisiting the Cape Cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of the hydraulic conductivity (lnK) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured

  2. A stochastic analysis of steady and transient heat conduction in random media using a homogenization approach

    SciTech Connect

    Zhijie Xu

    2014-07-01

    We present a new stochastic analysis for steady and transient one-dimensional heat conduction problems based on the homogenization approach. Thermal conductivity is assumed to be a random field K consisting of a total number N of random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution (through the homogenized temperature) and the configurational contribution (through the random variable Ln(x)). The configurational contribution is shown to be proportional to the local gradient of the homogenized temperature. Large uncertainty of the T field was found at locations with large gradients of the homogenized temperature, due to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.

  3. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises from real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (of industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots in nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.

  4. Mode-of-Action Uncertainty for Dual-Mode Carcinogens: A Bounding Approach for Naphthalene-Induced Nasal Tumors in Rats Based on PBPK and 2-Stage Stochastic Cancer Risk Models

    SciTech Connect

    Bogen, K T

    2007-05-11

    A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically for the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage

  5. Stochastic Coloured Petrinet Based Healthcare Infrastructure Interdependency Model

    NASA Astrophysics Data System (ADS)

    Nukavarapu, Nivedita; Durbha, Surya

    2016-06-01

    The Healthcare Critical Infrastructure (HCI) protects all sectors of society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or manmade disaster. However, for its continuity of operations and service delivery, HCI is dependent on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply Systems. During a mass-casualty event due to disasters such as floods, a major challenge for the HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue the HCI should be disaster-prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and provide an opportunity for all the stakeholders to work together in a coordinated response to a disaster. This paper presents interdependencies related to HCI based on a Stochastic Coloured Petri Net (SCPN) modelling and simulation approach, given a flood scenario as the disaster that disrupts the infrastructure nodes. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behaviour of the interdependency of the healthcare and related CI network in a geographically based environment.

  6. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.

  7. Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Carlen, Eric Anders

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.

  8. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBPs) by using randomized clinical trials. Effective EBPs can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately, the outcomes of EBPs implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose the Markov Decision Process (MDP) framework for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
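    A toy illustration of the proposed MDP framing (the states, transition probabilities and rewards below are invented for illustration): value iteration over a two-state implementation model yields a policy for when to intervene.

```python
# Value iteration on a tiny MDP: states 0 = "on-track", 1 = "off-track";
# actions 0 = "monitor", 1 = "intervene".  All numbers are illustrative.
import numpy as np

P = np.array([[[0.80, 0.20],    # monitor: transition probabilities P[a][s, s']
               [0.30, 0.70]],
              [[0.95, 0.05],    # intervene: better transitions, higher cost
               [0.70, 0.30]]])
R = np.array([[1.0, 0.6],       # rewards R[s, a]: intervening costs resources
              [0.2, -0.2]])
gamma, V = 0.95, np.zeros(2)

for _ in range(500):            # value iteration until (practically) converged
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
print("optimal action per state (0=monitor, 1=intervene):", policy, "V:", V.round(2))
```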

  9. A stochastic modeling methodology based on weighted Wiener chaos and Malliavin calculus

    PubMed Central

    Wan, Xiaoliang; Rozovskii, Boris; Karniadakis, George Em

    2009-01-01

    In many stochastic partial differential equations (SPDEs) involving random coefficients, modeling the randomness by spatial white noise may lead to ill-posed problems. Here we consider an elliptic problem with spatial Gaussian coefficients and present a methodology that resolves this issue. It is based on stochastic convolution implemented via generalized Malliavin operators in conjunction with weighted Wiener spaces that ensure the ellipticity condition. We present theoretical and numerical results that demonstrate the fast convergence of the method in the proper norm. Our approach is general and can be extended to other SPDEs and other types of multiplicative noise. PMID:19666498

  10. Stochastic approach to municipal solid waste landfill life based on the contaminant transit time modeling using the Monte Carlo (MC) simulation.

    PubMed

    Bieda, Bogusław

    2013-01-01

    The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in the MATLAB language, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and effective diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data used include available published figures as well as data concerning Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable to any MSW landfill compacted clay liner thickness design. PMID:23194922
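    A hedged sketch of the Monte Carlo idea with numpy in place of Crystal Ball: sample the uncertain liner parameters and propagate them through a simplified advective transit-time expression t = TH·POROS/(HC·HG); the study itself uses the full 1D ADE solution, and the distributions below are illustrative, not the paper's.

```python
# Monte Carlo propagation of parameter uncertainty into liner transit time.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
HG = rng.uniform(1.0, 1.5, n)                 # hydraulic gradient [-]
HC = rng.lognormal(np.log(1e-9), 0.5, n)      # hydraulic conductivity [m/s]
POROS = rng.uniform(0.3, 0.5, n)              # effective porosity [-]
TH = rng.normal(1.0, 0.05, n)                 # liner thickness [m]

# advective transit time t = L / v with seepage velocity v = HC * HG / POROS
t_years = TH * POROS / (HC * HG) / (3600 * 24 * 365)
print(f"median transit time ~ {np.median(t_years):.0f} years, "
      f"5th percentile ~ {np.percentile(t_years, 5):.0f} years")
```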

  11. A new approach for the assessment of stochastic variation: analysis of behavioural response in blue mussel ( Mytilus edulis L.)

    NASA Astrophysics Data System (ADS)

    Lajus, D. L.; Sukhotin, A. A.

    1998-06-01

    One of the most effective techniques for evaluating stress is the analysis of developmental stability, measured by stochastic variation based particularly on fluctuating asymmetry, i.e. the variance in random deviations from perfect bilateral symmetry. However, the application of morphological methods is only possible when an organism lives under the test conditions during a significant part of its ontogenesis. In contrast to morphological characters, behaviour can change very fast. Consequently, methods based on behavioural characters may have advantages over more traditional approaches. In this study we describe a technique for assessing stochastic variation using not morphological, but behavioural characters. To measure the stochastic variation of the behavioural response, we assessed the stability of the isolation reaction of the blue mussel Mytilus edulis under regular changes of salinity. As temperature increased from +12°C to +20°C, the stochastic variation of the isolation reaction increased, which is a common response to a change of environmental conditions. In this way, we have developed a method for assessing the stochastic variation of behavioural response in molluscs. This method may find a wide range of applications, because its usage does not require keeping animals under test conditions for a long time.

  12. Revisiting the Cape Cod Bacteria Injection Experiment Using a Stochastic Modeling Approach

    SciTech Connect

    Maxwell, R M; Welty, C; Harvey, R W

    2006-11-22

    Bromide and resting-cell bacteria tracer tests carried out in a sand and gravel aquifer at the USGS Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach and Lagrangian particle tracking numerical methods. Bacteria transport was strongly coupled to colloid filtration through functional dependence of local-scale colloid transport parameters on hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Information on geostatistical characterization of the hydraulic conductivity (K) field from a nearby plot was utilized as input that was unavailable when the original analysis was carried out. A finite difference model for groundwater flow and a particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data using the aforementioned geostatistical parameters. An optimization routine was utilized to adjust the mean and variance of the lnK field over 100 realizations such that a best fit of a simulated, average bromide breakthrough curve is achieved. Once the optimal bromide fit was accomplished (based on adjusting the lnK statistical parameters in unconditional simulations), a stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of the mean bacteria breakthrough data were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech [1] equation for estimating single collector efficiency were compared to those using the Rajagopalan and Tien [2] model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions, with the Rajagopalan and Tien model yielding approximately a 30% lower peak concentration and less tailing than the Tufenkji and Elimelech formulation. Simulations using a distribution

  13. Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach

    ERIC Educational Resources Information Center

    Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro

    2005-01-01

    Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…

  14. Mapping Rule-Based And Stochastic Constraints To Connection Architectures: Implication For Hierarchical Image Processing

    NASA Astrophysics Data System (ADS)

    Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.

    1988-10-01

    Essential to the solution of ill-posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements configured in a Single-Instruction, Multiple-Data stream architecture.

  15. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    NASA Astrophysics Data System (ADS)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.

  16. Wavelet-expansion-based stochastic response of chain-like MDOF structures

    NASA Astrophysics Data System (ADS)

    Kong, Fan; Li, Jie

    2015-12-01

    This paper presents a wavelet-expansion-based approach for the response determination of a chain-like multi-degree-of-freedom (MDOF) structure subject to fully non-stationary stochastic excitations. Specifically, the generalized harmonic wavelet (GHW) is first utilized as the expansion basis to solve the dynamic equation of structures via a Galerkin treatment. In this way, a linear matrix relationship between the deterministic response and excitation can be derived. Further, considering the GHW-based representation of stochastic processes, a time-varying power spectral density (PSD) relationship on a certain wavelet scale or frequency band between the excitation and response is derived. Finally, pertinent numerical simulations, including deterministic dynamic analysis and Monte Carlo simulations of both the response PSD and the story-drift-based reliability, are utilized to validate the proposed approach.

  17. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    SciTech Connect

    Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.

    2014-08-01

    In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered and is shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
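    The practical payoff of the operational-matrix form above is that the resulting system is lower triangular, so it solves by forward substitution in O(n^2). A generic sketch of that final step (the matrix here is random, not one assembled from hat-function operational matrices):

```python
# Forward substitution for a lower-triangular linear system L y = b.
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L in O(n^2) operations."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

rng = np.random.default_rng(2)
L = np.tril(rng.uniform(1, 2, (6, 6)))   # stand-in lower-triangular system
b = rng.uniform(size=6)
y = forward_substitution(L, b)
print("max residual:", np.abs(L @ y - b).max())
```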

  18. A comparison of deterministic and stochastic approaches for regional scale inverse modeling on the Mar del Plata aquifer

    NASA Astrophysics Data System (ADS)

    Pool, M.; Carrera, J.; Alcolea, A.; Bocanegra, E. M.

    2015-12-01

    Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in the scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for regional aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of (i) model fits to head and concentration data (available for nearly a century), (ii) geological plausibility of the estimated T fields, and (iii) their ability to predict transport. We also address the impact of conditioning the estimated fields on T data coming from either pumping tests interpreted with the Theis method or specific capacity values from step-drawdown tests. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels and low-transmissivity regions associated with quartzite outcrops) and yield better fits to calibration data than the much simpler geology-based deterministic model, which cannot properly address model structure uncertainty. However, the latter demonstrates much greater robustness for predicting seawater intrusion and for incorporating concentrations as calibration data. We attribute the poor performance, and underestimated uncertainty, of the stochastic simulations to estimation bias introduced by model errors. Qualitative geological information is extremely rich in identifying large-scale variability patterns, which are identified by stochastic models only in data-rich areas, and should be explicitly included in the calibration process.

  19. Exploring stochasticity and imprecise knowledge based on linear inequality constraints.

    PubMed

    Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf

    2016-09-01

    This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge. PMID:26746217
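    A compact sketch of the Hit-and-Run sampler used above, for uniform sampling inside the convex polytope {x : Ax <= b}: draw a random direction, intersect the resulting chord with the polytope, and jump to a uniform point on the chord; the example polytope is our own.

```python
# Hit-and-Run sampling of the convex polytope {x : A x <= b}.
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    x, out = x0.copy(), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # random unit direction
        # feasible chord x + t*d: each row requires a_i.(x + t d) <= b_i
        Ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d  # uniform point on the chord
        out.append(x.copy())
    return np.array(out)

# Example polytope: the triangle x >= 0, y >= 0, x + y <= 1.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
rng = np.random.default_rng(4)
samples = hit_and_run(A, b, np.array([0.2, 0.2]), 5000, rng)
print("sample mean:", samples.mean(axis=0))  # should approach (1/3, 1/3)
```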

  20. Inversion of Robin coefficient by a spectral stochastic finite element approach

    SciTech Connect

    Jin, Bangti; Zou, Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  1. Condition-dependent mate choice: A stochastic dynamic programming approach.

    PubMed

    Frame, Alicia M; Mills, Alex F

    2014-09-01

    We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. PMID:24996205
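    A stylized sketch of the backward-induction logic (female condition dynamics and condition-dependent costs are omitted; uniform male quality and a constant search cost are our simplifications): the optimal policy is a threshold equal to the value of continuing to search, and it declines as the season runs out.

```python
# Backward induction for a sequential mate-choice (optimal stopping) problem.
import numpy as np

T, c = 20, 0.02                 # mating-season length, per-period search cost
V = np.zeros(T + 1)             # V[t] = expected fitness entering period t
for t in range(T - 1, -1, -1):
    thr = V[t + 1]              # continuation value = accept/reject threshold
    # male quality q ~ U(0,1): E[max(q, thr)] = (1 + thr^2) / 2
    V[t] = -c + (1 + thr**2) / 2

thresholds = V[1:]              # threshold used in periods t = 0 .. T-1
print("acceptance threshold by period:", np.round(thresholds, 3))
# Thresholds decline toward the end of the season: the female becomes less
# choosy as opportunities to keep searching run out.
```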

  2. Robust synthetic biology design: stochastic game theory approach

    PubMed Central

    Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching

    2009-01-01

    Motivation: Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic of synthetic biology. Results: A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi–Sugeno (T–S) fuzzy model is proposed to approximate the non-linear synthetic gene network, and the design problem is solved via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. Availability: http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf Contact: bschen@ee.nthu.edu.tw Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19435742

  3. A Markov model based analysis of stochastic biochemical systems.

    PubMed

    Ghosh, Preetam; Ghosh, Samik; Basu, Kalyan; Das, Sajal K

    2007-01-01

    The molecular networks regulating basic physiological processes in a cell are generally converted into rate equations, assuming the numbers of biochemical molecules to be deterministic variables. At steady state these rate equations give a set of differential equations that are solved using numerical methods. However, the stochastic cellular environment motivates us to propose a mathematical framework for analyzing such biochemical molecular networks. Stochastic simulators that solve a system of differential equations include this stochasticity in the model, but suffer from simulation stiffness and require huge computational overheads. This paper describes a new Markov chain based model to simulate such complex biological systems with reduced computation and memory overheads. The central idea is to transform the continuous-domain chemical master equation (CME) based method into a discrete domain of molecular states with corresponding state transition probabilities and times. Our methodology allows the basic optimization schemes devised for the CME and can also be extended to reduce the computational and memory overheads appreciably at the cost of accuracy. The simulation results for the standard enzyme-kinetics and transcriptional regulatory systems show promising correspondence with the CME based methods and point to the efficacy of our scheme. PMID:17951818
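
    For context, here is a compact sketch of the standard Gillespie direct method on the enzyme-kinetics system E + S <-> ES -> E + P, i.e., the kind of CME-sampling baseline the paper's discrete-state Markov model is compared against. Rate constants and initial counts are made-up illustrative values; this is not the paper's own algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
k = np.array([1e-3, 1e-2, 1e-1])      # k_bind, k_unbind, k_cat (illustrative)
x = np.array([100, 500, 0, 0])        # molecule counts of E, S, ES, P
stoich = np.array([[-1, -1, +1, 0],   # E + S -> ES
                   [+1, +1, -1, 0],   # ES -> E + S
                   [+1,  0, -1, +1]]) # ES -> E + P
t, t_end = 0.0, 50.0
while t < t_end:
    a = np.array([k[0] * x[0] * x[1], k[1] * x[2], k[2] * x[2]])  # propensities
    a0 = a.sum()
    if a0 == 0.0:
        break                          # no reaction can fire any more
    t += rng.exponential(1.0 / a0)     # exponential time to the next reaction
    j = rng.choice(3, p=a / a0)        # which reaction fires
    x = x + stoich[j]
print("final counts (E, S, ES, P):", x)
```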

  4. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  5. A stochastic approach to the hadron spectrum. III

    SciTech Connect

    Aron, J.C.

    1986-12-01

    The connection with the quarks of the stochastic model proposed in the two preceding papers is studied; the slopes of the baryon trajectories are calculated with reference to the quarks. Suggestions are made for the interpretation of the model (quadratic or linear addition of the contributions to the mass, dependence of the decay on the quantum numbers of the hadrons involved, etc.) and concerning its link with the quarkonium model, which describes the mesons with charm or beauty. The controversial question of the "subquantum level" is examined.

  6. A fast and scalable recurrent neural network based on stochastic meta descent.

    PubMed

    Liu, Zhenzhen; Elhanany, Itamar

    2008-09-01

    This brief presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated with either its input or output links. This yields a reduced storage and computational complexity of O(N²). Stochastic meta descent (SMD), an adaptive step size scheme for stochastic gradient-descent problems, is employed as a means of incorporating curvature information in order to substantially accelerate the learning process. We also introduce a clustered version of our algorithm to further improve its scalability attributes. Despite the dramatic reduction in resource requirements, it is shown through simulation results that the approach outperforms regular RTRL by almost an order of magnitude. Moreover, the scheme lends itself to parallel hardware realization by virtue of the localized property that is inherent to the learning framework. PMID:18779096

  7. Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Cartwright, Keigh

    2014-10-01

    To have a high degree of confidence in simulations, one needs code verification, validation, solution verification, and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution, with error bounds, from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macroparticle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on the Vlasov-Poisson Child-Langmuir diode, the relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
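
    The single-parameter core of Richardson-extrapolation error estimation can be sketched as follows; the method above generalizes this to several convergence parameters with regression and bootstrapping, which is not reproduced here. The data points are synthetic (constructed so that f(h) = 1 + h²).

```python
import numpy as np

# Quantity of interest at three resolutions of one convergence parameter h.
h = np.array([0.4, 0.2, 0.1])
f = np.array([1.1600, 1.0400, 1.0100])

r = h[0] / h[1]                                          # constant refinement ratio
p = np.log((f[0] - f[1]) / (f[1] - f[2])) / np.log(r)    # observed convergence order
f0 = f[2] + (f[2] - f[1]) / (r ** p - 1.0)               # extrapolated solution
print(f"observed convergence order p = {p:.2f}")
print(f"extrapolated solution f0 = {f0:.4f}")
print(f"error estimate at finest h: {abs(f[2] - f0):.2e}")
```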

  8. Integral-based event triggering controller design for stochastic LTI systems via convex optimisation

    NASA Astrophysics Data System (ADS)

    Mousavi, S. H.; Marquez, H. J.

    2016-07-01

    The presence of measurement noise in event-based systems can lower system efficiency in terms of both data exchange rate and performance. In this paper, an integral-based event-triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise, effectively reduces the flow of communication between plant and controller, and also improves output performance. Using a Lyapunov approach, stability in the mean square sense is proved. A simulated example illustrates the properties of our approach.

  9. Time-Frequency Approach for Stochastic Signal Detection

    SciTech Connect

    Ghosh, Ripul; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2011-10-20

    The detection of events in a stochastic signal has been a subject of great interest. The Fourier transform, one of the oldest signal processing techniques, provides information about the frequency content of a signal, but it cannot resolve the exact onset of changes in frequency; all temporal information is contained in the phase of the transform. The spectrogram, on the other hand, is better able to resolve the temporal evolution of frequency content, but involves a trade-off between time resolution and frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of non-stationary signals. The Wigner-Ville Distribution (WVD) is the most prominent quadratic time-frequency signal representation and is used for analysing frequency variations in signals. The WVD allows instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper describes, through simulations, how time-frequency models are applied to the detection of events in a stochastic signal.

  10. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.

  11. A stochastic control approach to Slotted-ALOHA random access protocol

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio

    2013-12-01

    ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a degree of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes over time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.

  12. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  13. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant with a log-normally distributed stochastic variable. The incubation time distribution is then also shown to be log-normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.

  14. Stochastic queueing-theory approach to human dynamics

    NASA Astrophysics Data System (ADS)

    Walraevens, Joris; Demoor, Thomas; Maertens, Tom; Bruneel, Herwig

    2012-02-01

    Recently, numerous studies have shown that human dynamics cannot be described accurately by exponential laws. For instance, Barabási [Nature (London) 435, 207 (2005), 10.1038/nature03459] demonstrates that waiting times of tasks to be performed by a human are more suitably modeled by power laws. He presumes that these power laws are caused by a priority selection mechanism among the tasks. Priority models are well-developed in queueing theory (e.g., for telecommunication applications), and this paper demonstrates the (quasi-)immediate applicability of such a stochastic priority model to human dynamics. By calculating generating functions and by studying them in their dominant singularity, we prove that nonexponential tails result naturally. Contrary to popular belief, however, these are not necessarily triggered by the priority selection mechanism.
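
    A quick simulation of the referenced priority-queue task model, under assumed parameters: a short task list, execution of the highest-priority task with probability p, and replacement of the executed task. As p approaches 1, the sampled waiting times develop the heavy tail discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
L, p, steps = 2, 0.99999, 200_000     # list length, selection prob., horizon
priority = rng.random(L)
age = np.zeros(L, dtype=int)          # how long each task has been waiting
waits = []
for _ in range(steps):
    age += 1
    # Execute the highest-priority task with probability p, else a random one.
    i = int(np.argmax(priority)) if rng.random() < p else int(rng.integers(L))
    waits.append(int(age[i]))
    priority[i] = rng.random()        # executed task is replaced by a new one
    age[i] = 0

waits = np.array(waits)
print("mean wait:", waits.mean(), " max wait:", waits.max())
print("P(wait > 100):", (waits > 100).mean())
```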

  15. Two-state approach to stochastic hair bundle dynamics

    NASA Astrophysics Data System (ADS)

    Clausznitzer, Diana; Lindner, Benjamin; Jülicher, Frank; Martin, Pascal

    2008-04-01

    Hair cells perform the mechanoelectrical transduction of sound signals in the auditory and vestibular systems of vertebrates. The part of the hair cell essential for this transduction is the so-called hair bundle. In vitro experiments on hair cells from the sacculus of the American bullfrog have shown that the hair bundle comprises active elements capable of producing periodic deflections like a relaxation oscillator. Recently, a continuous nonlinear stochastic model of the hair bundle motion [Nadrowski et al., Proc. Natl. Acad. Sci. U.S.A. 101, 12195 (2004)] has been shown to reproduce the experimental data in stochastic simulations faithfully. Here, we demonstrate that a binary filtering of the hair bundle's deflection (experimental data and continuous hair bundle model) does not significantly change the spectral statistics of the spontaneous as well as the periodically driven hair bundle motion. We map the continuous hair bundle model to the FitzHugh-Nagumo model of neural excitability and discuss the bifurcations between different regimes of the system in terms of the latter model. Linearizing the nullclines and assuming perfect time-scale separation between the variables, we can map the FitzHugh-Nagumo system to a simple two-state model in which each of the states corresponds to the two possible values of the binary-filtered hair bundle trajectory. For the two-state model, analytical expressions for the power spectrum and the susceptibility can be calculated [Lindner and Schimansky-Geier, Phys. Rev. E 61, 6103 (2000)] and show the same features as seen in the experimental data as well as in simulations of the continuous hair bundle model.

  16. Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach.

    PubMed

    Zunino, L; Soriano, M C; Rosso, O A

    2012-10-01

    In this paper we introduce a multiscale symbolic information-theory approach for discriminating nonlinear deterministic and stochastic dynamics from time series associated with complex systems. More precisely, we show that the multiscale complexity-entropy causality plane is a useful representation space to identify the range of scales at which deterministic or noisy behaviors dominate the system's dynamics. Numerical simulations obtained from the well-known and widely used Mackey-Glass oscillator operating in a high-dimensional chaotic regime were used as test beds. The effect of an increased amount of observational white noise was carefully examined. The results obtained were contrasted with those derived from correlated stochastic processes and continuous stochastic limit cycles. Finally, several experimental and natural time series were analyzed in order to show the applicability of this scale-dependent symbolic approach in practical situations. PMID:23214666
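
    The entropy half of the complexity-entropy plane can be sketched with a Bandt-Pompe permutation-entropy estimator, with the embedding delay playing the role of the scale; the embedding dimension, delays, and logistic-map test signal below are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, d=4, tau=1):
    """Normalized Bandt-Pompe permutation entropy of series x, with
    embedding dimension d and delay tau (tau acts as the scale here)."""
    x = np.asarray(x)
    n = len(x) - (d - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + d * tau:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(d))

rng = np.random.default_rng(2)
noise = rng.standard_normal(10_000)               # stochastic reference
x = np.empty(10_000); x[0] = 0.4                  # deterministic reference:
for i in range(1, len(x)):                        # fully chaotic logistic map
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

for tau in (1, 2, 4, 8):
    print(f"tau={tau}: logistic {permutation_entropy(x, tau=tau):.3f}, "
          f"white noise {permutation_entropy(noise, tau=tau):.3f}")
```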

  17. Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays.

    PubMed

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2014-10-01

    We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models. PMID:25296793
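
    The flavor of rejection-based selection can be conveyed with a deliberately simplified thinning sketch: candidate reactions are drawn from fixed propensity upper bounds and corrected by a cheap accept/reject test, so exact propensities are evaluated only for candidates. The actual algorithm derives the bounds from intervals on the state and refreshes them only when the state leaves its interval, and it additionally handles delays; the bounds, rates, and system below are assumed for illustration.

```python
import numpy as np

def thinning_ssa(x, stoich, prop, a_max, t_end, rng):
    """Thinning-style rejection sampler: candidates are drawn from fixed
    propensity upper bounds a_max; the exact propensity is evaluated only
    for the candidate reaction in the accept/reject test."""
    B = a_max.sum()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / B)                 # candidate events at rate B
        if t >= t_end:
            return x
        j = int(rng.choice(len(a_max), p=a_max / B))  # candidate from the bounds
        if rng.random() < prop(x, j) / a_max[j]:      # accept w.p. a_j / b_j
            x = x + stoich[j]                         # else: thinned non-event

# Birth-death example: 0 -> X at rate 5; X -> 0 at rate 0.1 * X, with the
# death propensity bounded by 10 (i.e., assuming X stays below 100).
stoich = np.array([[+1], [-1]])
prop = lambda x, j: 5.0 if j == 0 else 0.1 * x[0]
x_end = thinning_ssa(np.array([20]), stoich, prop,
                     a_max=np.array([5.0, 10.0]), t_end=100.0,
                     rng=np.random.default_rng(3))
print("X(t_end) =", x_end[0])
```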

  19. Wavelet-Variance-Based Estimation for Composite Stochastic Processes.

    PubMed

    Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia

    2013-09-01

    This article presents a new estimation method for the parameters of a time series model. We consider here composite Gaussian processes that are the sum of independent Gaussian processes which, in turn, explain an important aspect of the time series, as is the case in engineering and natural sciences. The proposed estimation method offers an alternative to classical likelihood-based estimation that is straightforward to implement and often the only feasible estimation method with complex models. The estimator is obtained by optimizing a criterion based on a standardized distance between the sample wavelet variance (WV) estimates and the model-based WV. Indeed, the WV provides a decomposition of the process variance across different scales, so that it contains information about different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study to compare our estimator to the MLE and the LSE with different models. We also set sufficient conditions on composite models for our estimator to be consistent, that are easy to verify. We use the new estimator to estimate the parameters of the stochastic error, modeled as the sum of three first-order Gauss-Markov processes, from a sample of over 800,000 observations issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online. PMID:24174689
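
    A bare-bones Haar wavelet-variance curve, the empirical summary that the estimator matches against its model-implied counterpart, can be computed as follows; the signal (white noise plus a small random walk) and the scales are illustrative assumptions.

```python
import numpy as np

def haar_wavelet_variance(x, scales):
    """Non-overlapping Haar wavelet variance of x at the given scales
    (a simple variant; MODWT-based estimators are more efficient)."""
    out = []
    for s in scales:
        m = (len(x) // (2 * s)) * 2 * s
        means = x[:m].reshape(-1, s).mean(axis=1)   # block averages of length s
        d = (means[1::2] - means[0::2]) / 2.0       # Haar detail coefficients
        out.append(np.mean(d ** 2))
    return np.array(out)

rng = np.random.default_rng(4)
# White noise plus a small random walk: two composite components with
# opposite WV signatures (decaying vs. growing across scales).
x = rng.standard_normal(2 ** 16) + np.cumsum(0.01 * rng.standard_normal(2 ** 16))
scales = (1, 4, 16, 64, 256)
for s, wv in zip(scales, haar_wavelet_variance(x, scales)):
    print(f"scale {s:4d}: WV = {wv:.4f}")
```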

  1. Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging.

    PubMed

    Sansalone, Vittorio; Gagliardi, Davide; Desceliers, Christophe; Bousson, Valérie; Laredo, Jean-Denis; Peyrin, Françoise; Haïat, Guillaume; Naili, Salah

    2016-02-01

    Accurate and reliable assessment of bone quality requires predictive methods which could probe bone microstructure and provide information on bone mechanical properties. Multiscale modelling and simulation represent a fast and powerful way to predict bone mechanical properties based on experimental information on bone microstructure as obtained through X-ray-based methods. However, technical limitations of experimental devices used to inspect bone microstructure may produce blurry data, especially in in vivo conditions. Uncertainties affecting the experimental data (input) may question the reliability of the results predicted by the model (output). Since input data are uncertain, deterministic approaches are limited and new modelling paradigms are required. In this paper, a novel stochastic multiscale model is developed to estimate the elastic properties of bone while taking into account uncertainties on bone composition. Effective elastic properties of cortical bone tissue were computed using a multiscale model based on continuum micromechanics. Volume fractions of bone components (collagen, mineral, and water) were considered as random variables whose probabilistic description was built using the maximum entropy principle. The relevance of this approach was proved by analysing a human bone sample taken from the inferior femoral neck. The sample was imaged using synchrotron radiation micro-computed tomography. 3-D distributions of Haversian porosity and tissue mineral density extracted from these images supplied the experimental information needed to build the stochastic models of the volume fractions. Thus, the stochastic multiscale model provided reliable statistical information (such as mean values and confidence intervals) on bone elastic properties at the tissue scale. Moreover, the existence of a simpler "nominal model", accounting for the main features of the stochastic model, was investigated. It was shown that such a model does exist, and its relevance

  2. Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches

    NASA Astrophysics Data System (ADS)

    Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.

    2009-12-01

    We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method for long-period (LP) events and tremor based on their amplitudes. The deterministic approach of the waveform inversion is useful to constrain the source mechanism and location, but is basically only applicable to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism better explained the VLP signals than an isotropic mechanism. These results indicated that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may be categorized as a stochastic approach based on the nature of scattering waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis

  3. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Planning of urban water resources is thus in need of not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change. PMID:23552255

  4. A Stochastic Approach to Noise Modeling for Barometric Altimeters

    PubMed Central

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-01-01

    The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, whose effects are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions. PMID:24253189
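
    The short-term part of such a noise model can be illustrated by simulating a first-order Gauss-Markov process plus an uncorrelated wideband term; the correlation time, sampling period, and standard deviations below are made-up values, not those identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, tau = 0.1, 30.0            # sampling period [s], GM correlation time [s]
sigma_gm, sigma_wb = 0.5, 0.2  # steady-state GM and wideband std devs
phi = np.exp(-dt / tau)        # AR(1) coefficient of the discretized GM process
n = 6000

gm = np.zeros(n)
for k in range(1, n):
    # Exact discretization of a first-order Gauss-Markov process.
    gm[k] = phi * gm[k - 1] + sigma_gm * np.sqrt(1.0 - phi ** 2) * rng.standard_normal()
noise = gm + sigma_wb * rng.standard_normal(n)  # GM + uncorrelated component

# Sanity check: the lag-1 autocorrelation of the GM part should be near phi.
acf1 = np.corrcoef(gm[:-1], gm[1:])[0, 1]
print(f"phi = {phi:.4f}, sample lag-1 acf = {acf1:.4f}")
```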

  5. A stochastic process approach of the drake equation parameters

    NASA Astrophysics Data System (ADS)

    Glade, Nicolas; Ballet, Pascal; Bastien, Olivier

    2012-04-01

    The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step towards quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. In practice, this equation is a rather simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate of the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that provides both a temporal structure to the Drake equation (i.e. it introduces time into the Drake formula in order to obtain something like N(t)) and a first standard error measure.
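
    The basic idea of attaching an error measure to N can be conveyed by a Monte Carlo version of the Drake equation: draw each factor from a distribution and report the spread of the product. The distributions below are arbitrary placeholders, not the stochastic process constructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
m = 100_000
R_star = rng.uniform(1.0, 3.0, m)      # star formation rate [stars/yr]
f_p    = rng.uniform(0.2, 1.0, m)      # fraction of stars with planets
n_e    = rng.uniform(0.5, 2.0, m)      # habitable planets per such system
f_l    = 10 ** rng.uniform(-3, 0, m)   # fraction on which life arises
f_i    = 10 ** rng.uniform(-3, 0, m)   # ... that develops intelligence
f_c    = 10 ** rng.uniform(-2, 0, m)   # ... that communicates
L      = 10 ** rng.uniform(2, 5, m)    # communicating lifetime [yr]

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"median N = {np.median(N):.3g}")
print("90% interval:", np.percentile(N, [5, 95]).round(3))
```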

  6. Coalescence avalanches in 2D emulsions: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Masila, Danny Raj; Rengaswamy, Raghunathan

    2015-11-01

    One coalescence event in a 2D concentrated emulsion can trigger an avalanche resulting in the rapid destabilization of the drop assembly. The sensitive dependence of this phenomenon on various factors, including surfactant concentration and the viscosities of the fluid phases, makes the avalanching problem appear probabilistic. We propose a stochastic framework, which utilizes a probability function to describe local coalescence events, to study the dynamics of coalescence avalanches. A function that accounts for the local coalescence mechanism is used to fit the experimentally measured probability data (taken from the literature). A continuation parameter is introduced along with this function to account for the effect of system properties on the avalanche dynamics. Our analysis reveals that this behavior is a result of the inherent autocatalytic nature of the process. We discover that the avalanche dynamics shows critical behavior where two outcomes are favored: no avalanche, and large avalanches that lead to destabilization. We study the effect of system size and fluid properties on the avalanche dynamics. A sharp transition from non-autocatalytic (stable emulsions) to autocatalytic (unstable) behavior is observed as parameters are varied.

  7. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed assuming some parameters to be deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e. market request). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the conditions under which chaos makes the system uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  8. Parameter-induced stochastic resonance based on spectral entropy and its application to weak signal detection

    SciTech Connect

    Zhang, Jinjing; Zhang, Tao

    2015-02-15

    The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing employing the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost is reduced from O(N²) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal made by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm.
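
    A toy parameter sweep shows the parameter-induced side of the idea: a weak sinusoid buried in strong noise drives the bistable system dx/dt = a*x - b*x**3 + input, and the bistability parameter a is scanned for the value that maximizes the output spectral line at the signal frequency. The spectral-entropy tuning criterion itself is not reproduced here, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n, f0 = 1e-3, 100_000, 5.0
t = np.arange(n) * dt
# Weak sinusoid plus strong white noise (scaled for Euler-Maruyama).
drive = 0.3 * np.sin(2 * np.pi * f0 * t) + 2.0 * rng.standard_normal(n) / np.sqrt(dt)

def power_at_f0(a, b=1.0):
    x = np.zeros(n)
    for k in range(n - 1):                      # Euler-Maruyama integration
        x[k + 1] = x[k] + dt * (a * x[k] - b * x[k] ** 3 + drive[k])
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(n, dt)
    return spec[np.argmin(np.abs(freqs - f0))]  # spectral line at the signal

for a in (0.5, 1.0, 2.0, 4.0):                  # scan the bistability parameter
    print(f"a = {a}: output power at {f0} Hz = {power_at_f0(a):.3e}")
```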

  9. Stochastically optimized monocular vision-based navigation and guidance

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoko

    A minimum-effort guidance (MEG) law for multiple target tracking is applied to the guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional-navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well known that the separation principle between estimation and control does not hold. In other words, vision-based estimation performance depends highly on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law to achieve a given mission under the condition of using the EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement at one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as n-step-ahead (nSA) optimization for an arbitrary number n. Furthermore, the nSA suboptimal guidance law is extended to p%-ahead suboptimal guidance by changing the value of n at each time step depending on the current time. The nSA (including the OSA) and

  10. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    NASA Astrophysics Data System (ADS)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    Fire data were obtained from the Portuguese Institute for the Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. The first is the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and of the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allowed us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.

  11. A stochastic approach for simulating spatially inhomogeneous coagulation dynamics in the gelation regime

    NASA Astrophysics Data System (ADS)

    Guiaş, Flavius

    2009-01-01

    We present a stochastic approach for the simulation of coagulation-diffusion dynamics in the gelation regime. The method couples the mass flow algorithm for coagulation processes with a stochastic variant of the diffusion-velocity method in a discretized framework. The simulation of the stochastic processes occurs according to an optimized implementation of the principle of grouping the possible events. A full simulation of a particle system driven by coagulation-diffusion dynamics is performed with a high degree of accuracy. This allows a qualitative and quantitative analysis of the behaviour of the system. The performance of the method becomes most evident in the gelation regime, where the computations usually become very time-consuming.

  12. Image-based histologic grade estimation using stochastic geometry analysis

    NASA Astrophysics Data System (ADS)

    Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.

    2011-03-01

    Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
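
    The grading step can be sketched as Earth Mover's Distance comparisons between shape distributions followed by a K-nearest-neighbor vote. The synthetic one-dimensional "shape distributions" below merely stand in for the real stochastic-geometry measurements, and scipy's 1-D Wasserstein distance plays the role of the EMD.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(8)

def fake_shape_distribution(grade):
    # Synthetic stand-in: higher grade -> shifted and broader distribution.
    return rng.normal(loc=grade, scale=0.5 + 0.2 * grade, size=300)

train = [(fake_shape_distribution(g), g) for g in (1, 2, 3) for _ in range(10)]

def knn_grade(query, train, k=5):
    # EMD (1-D Wasserstein distance) between shape distributions, then a
    # majority vote among the k nearest training images.
    dists = sorted((wasserstein_distance(query, d), g) for d, g in train)
    votes = [g for _, g in dists[:k]]
    return max(set(votes), key=votes.count)

query = fake_shape_distribution(2)
print("predicted histological grade:", knn_grade(query, train))
```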

  13. Reliability-based design optimization under stationary stochastic process loads

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Du, Xiaoping

    2016-08-01

    Time-dependent reliability-based design ensures the satisfaction of reliability requirements for a given period of time, but with a high computational cost. This work improves the computational efficiency by extending the sequential optimization and reliability analysis (SORA) method to time-dependent problems with both stationary stochastic process loads and random variables. The challenge of the extension is the identification of the most probable point (MPP) associated with time-dependent reliability targets. Since a direct relationship between the MPP and reliability target does not exist, this work defines the concept of equivalent MPP, which is identified by the extreme value analysis and the inverse saddlepoint approximation. With the equivalent MPP, the time-dependent reliability-based design optimization is decomposed into two decoupled loops: deterministic design optimization and reliability analysis, and both are performed sequentially. Two numerical examples are used to show the efficiency of the proposed method.

  14. Stochastic analysis of bounded unsaturated flow in heterogeneous aquifers: Spectral/perturbation approach

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2009-01-01

    This paper describes a stochastic analysis of steady state flow in a bounded, partially saturated heterogeneous porous medium subject to distributed infiltration. The presence of boundary conditions leads to non-uniformity in the mean unsaturated flow, which in turn causes non-stationarity in the statistics of velocity fields. Motivated by this, our aim is to investigate the impact of boundary conditions on the behavior of field-scale unsaturated flow. Within the framework of spectral theory based on Fourier-Stieltjes representations for the perturbed quantities, the general expressions for the pressure head variance, variance of log unsaturated hydraulic conductivity and variance of the specific discharge are presented in the wave number domain. Closed-form expressions are developed for the simplified case of statistical isotropy of the log hydraulic conductivity field with a constant soil pore-size distribution parameter. These expressions allow us to investigate the impact of the boundary conditions, namely the vertical infiltration from the soil surface and a prescribed pressure head at a certain depth below the soil surface. It is found that the boundary conditions are critical in predicting uncertainty in bounded unsaturated flow. Our analytical expression for the pressure head variance in a one-dimensional, heterogeneous flow domain, developed using a nonstationary spectral representation approach [Li S-G, McLaughlin D. A nonstationary spectral method for solving stochastic groundwater problems: unconditional analysis. Water Resour Res 1991;27(7):1589-605; Li S-G, McLaughlin D. Using the nonstationary spectral method to analyze flow through heterogeneous trending media. Water Resour Res 1995; 31(3):541-51], is precisely equivalent to the published result of Lu et al. [Lu Z, Zhang D. Analytical solutions to steady state unsaturated flow in layered, randomly heterogeneous soils via Kirchhoff transformation. Adv Water Resour 2004;27:775-84].

  15. Linking agent-based models and stochastic models of financial markets.

    PubMed

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene

    2012-05-29

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086

  17. HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Brownston, Lee

    2012-01-01

    Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. The

  18. A stochastic approach to uncertainty quantification in residual moveout analysis

    NASA Astrophysics Data System (ADS)

    Johng-Ay, T.; Landa, E.; Dossou-Gbété, S.; Bordes, L.

    2015-06-01

    Oil and gas exploration and production usually rely on the interpretation of a single seismic image, which is obtained from observed data. However, the statistical nature of seismic data and the various approximations and assumptions are sources of uncertainties which may corrupt the evaluation of parameters. The quantification of these uncertainties is a major issue, intended to support decisions that have important social and commercial implications. Residual moveout analysis, which is an important step in seismic data processing, is usually performed by a deterministic approach. In this paper we discuss a Bayesian approach to the uncertainty analysis.

  19. Ultrafast dynamics of finite Hubbard clusters: A stochastic mean-field approach

    NASA Astrophysics Data System (ADS)

    Lacroix, Denis; Hermanns, S.; Hinz, C. M.; Bonitz, M.

    2014-09-01

    Finite lattice models are a prototype for interacting quantum systems and capture essential properties of condensed matter systems. With the dramatic progress in ultracold atoms in optical lattices, finite fermionic Hubbard systems have become directly accessible in experiments, including their ultrafast dynamics far from equilibrium. Here, we present a theoretical approach that is able to treat these dynamics in any dimension and fully includes inhomogeneity effects. The method consists in stochastic sampling of mean-field trajectories and is—for not too large two-body interaction strength—found to be much more accurate than time-dependent mean-field at the same order of numerical costs. Furthermore, it can well compete with recent nonequilibrium Green function approaches using second-order Born approximation, which are of substantially larger complexity. The performance of the stochastic mean-field approach is demonstrated for Hubbard clusters with up to 512 particles in one, two, and three dimensions.

  20. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    PubMed Central

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed using WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided. PMID:22412334

  1. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, achieving similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.

  2. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357

  3. On the Performance of Stochastic Model-Based Image Segmentation

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1989-11-01

    A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general non-diffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The non-diffraction CT image is modeled by a finite normal mixture. The technique uses an information-theoretic criterion to detect the number of region images, the Expectation-Maximization algorithm to estimate the parameters of the image, and the Bayesian classifier to segment the observed image. How does this technique over- or under-estimate the number of region images? What is the probability of errors in the segmentation by this technique? This paper addresses these two problems and is a continuation of [1,2].
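
    As an illustration of the estimation and classification steps just described, the following is a minimal sketch of EM for a one-dimensional finite normal mixture followed by MAP (Bayesian) classification; the information-theoretic detection of the number of regions is omitted, and all names and defaults are illustrative rather than taken from [1,2]:

    ```python
    import numpy as np

    def em_normal_mixture(x, k, n_iter=200, seed=0):
        """EM parameter estimation for a 1-D finite normal mixture of k regions."""
        rng = np.random.default_rng(seed)
        w = np.full(k, 1.0 / k)                  # mixing weights
        mu = rng.choice(x, size=k)               # initial means drawn from the data
        var = np.full(k, x.var())                # initial variances
        for _ in range(n_iter):
            # E-step: posterior probability that each pixel belongs to each region
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and variances
            nk = resp.sum(axis=0)
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return w, mu, var, resp.argmax(axis=1)   # MAP labels = Bayesian segmentation
    ```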

  4. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    NASA Astrophysics Data System (ADS)

    Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek

    2011-09-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA), which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to constrained mean-force computations in other applications such as molecular dynamics. We then show how, using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems.
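
    The CSSA/CMA idea can be sketched for a hypothetical two-scale birth-death system; all species, propensities and rate constants below are invented for illustration, and the published algorithm is more general:

    ```python
    import numpy as np

    def conditional_mean_fast(s, n_jumps, rng):
        """CSSA-style kernel (sketch): evolve a toy fast birth-death species f with
        the slow variable s clamped; return the time-averaged value of f."""
        f, t_tot, acc = 0, 0.0, 0.0
        for _ in range(n_jumps):
            rates = np.array([10.0 * s, 1.0 * f])   # toy fast birth / death propensities
            total = rates.sum()
            dt = rng.exponential(1.0 / total)       # Gillespie holding time
            t_tot += dt
            acc += f * dt                           # time-weighted accumulation
            f += 1 if rng.random() < rates[0] / total else -1
        return acc / t_tot

    def effective_drift_diffusion(s, rng, k1=0.05, k2=0.1):
        """Effective Langevin coefficients of the slow variable at fixed s, assuming
        toy slow reactions with propensities k1*f (birth) and k2*s (death)."""
        f_mean = conditional_mean_fast(s, 20_000, rng)
        drift = k1 * f_mean - k2 * s                # a(s): net expected rate of change
        diff2 = k1 * f_mean + k2 * s                # b(s)^2: sum of slow propensities
        return drift, diff2

    print(effective_drift_diffusion(50.0, np.random.default_rng(1)))
    ```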

  5. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Junaid, Ali Khan; Muhammad, Asif Zahoor Raja; Ijaz Mansoor, Qureshi

    2011-02-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm that combines global search by a genetic algorithm with local search by a pattern search technique. The applicability of this approach ranges from single NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and by classical numerical methods. Unlike other numerical techniques of comparable accuracy, the solution is provided on a continuous, finite time interval. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected substantial gains in execution speed.
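
    The unsupervised error that such trial-solution networks minimize can be written, for a first-order problem y' = f(t, y) with initial condition y(t_0) = y_0 and a trial solution y_hat(t, p) parameterized by network weights p, in the generic form (notation assumed, not copied from the paper):

    ```latex
    E(\mathbf{p}) = \frac{1}{N} \sum_{i=1}^{N}
      \left( \frac{d\hat{y}}{dt}\bigg|_{t_i} - f\big(t_i, \hat{y}(t_i, \mathbf{p})\big) \right)^{2}
      + \lambda \big( \hat{y}(t_0, \mathbf{p}) - y_0 \big)^{2} ,
    ```

    so that no labelled solution values are required: the genetic algorithm and the pattern search only need to drive the residual of the differential equation toward zero at the collocation points t_i.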

  6. A Stochastic Approach For Extending The Dimensionality Of Observed Datasets

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas

    2002-01-01

    This paper addresses the problem that in many cases, observations cannot provide complete fields of the measured quantities, because they yield data only along a single cross-section through the examined fields. The paper describes a new Fourier-adjustment technique that allows existing fractal models to build realistic surroundings to the measured cross-sections. This new approach allows more representative calculations of cloud radiative processes and may be used in other areas as well.

  7. STOCHASTIC APPROACH FOR ASSESSING THE EFFECT OF CHANGES IN SYNOPTIC CIRCULATION PATTERNS ON GAUGE PRECIPITATION

    EPA Science Inventory

    A stochastic model is described that allows transfer of information from general circulation models to precipitation gauge locations using a weather state classification scheme. The weather states, which are based on present and previous day's sea level pressure, are related stoch...

  8. Advanced Computational Approaches for Characterizing Stochastic Cellular Responses to Low Dose, Low Dose Rate Exposures

    SciTech Connect

    Scott, Bobby, R., Ph.D.

    2003-06-27

    OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on

  9. A wavelet-based computational method for solving stochastic Itô–Volterra integral equations

    SciTech Connect

    Mohammadi, Fakhrodin

    2015-10-01

    This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix are applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method some numerical examples are included.
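
    For orientation, a linear stochastic Itô–Volterra integral equation of the kind treated by such methods can be written in the standard form

    ```latex
    X(t) = f(t) + \int_0^t k_1(s, t)\, X(s)\, ds + \int_0^t k_2(s, t)\, X(s)\, dB(s), \qquad t \in [0, T),
    ```

    where f, k_1 and k_2 are known functions, B(t) is Brownian motion and the last integral is an Itô integral; expanding X in a truncated Chebyshev wavelet basis and applying the ordinary and stochastic operational matrices reduces the equation to a solvable algebraic system.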

  10. Deterministic and stochastic approaches in the clinical application of mesenchymal stromal cells (MSCs)

    PubMed Central

    Pacini, Simone

    2014-01-01

    Mesenchymal stromal cells (MSCs) have enormous intrinsic clinical value due to their multi-lineage differentiation capacity, support of hemopoiesis, immunoregulation and growth factors/cytokines secretion. MSCs have thus been the object of extensive research for decades. After completion of many pre-clinical and clinical trials, MSC-based therapy is now facing a challenging phase. Several clinical trials have reported moderate, non-durable benefits, which caused initial enthusiasm to wane, and indicated an urgent need to optimize the efficacy of therapeutic, platform-enhancing MSC-based treatment. Recent investigations suggest the presence of multiple in vivo MSC ancestors in a wide range of tissues, which contribute to the heterogeneity of the starting material for the expansion of MSCs. This variability in the MSC culture-initiating cell population, together with the different types of enrichment/isolation and cultivation protocols applied, are hampering progress in the definition of MSC-based therapies. International regulatory statements require a precise risk/benefit analysis, ensuring the safety and efficacy of treatments. GMP validation allows for quality certification, but the prediction of a clinical outcome after MSC-based therapy is correlated not only to the possible morbidity derived from the cell production process, but also to the biology of the MSCs themselves, which is highly sensitive to unpredictable fluctuations of isolation and culture conditions. Risk exposure and efficacy of MSC-based therapies should be evaluated by pre-clinical studies, but the batch-to-batch variability of the final medicinal product could significantly limit the predictability of these studies. The future success of MSC-based therapies could lie not only in rational optimization of therapeutic strategies, but also in a stochastic approach during the assessment of benefit and risk factors. PMID:25364757

  11. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions. PMID:19268387
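
    In generic notation (assumed here, not quoted from the package documentation), the mixed-effects SDE state-space formulation described above reads

    ```latex
    dx_t = f(x_t, u_t, t, \phi_i)\, dt + \sigma_{\omega}\, d\omega_t, \qquad
    y_{ij} = g(x_{ij}, u_{ij}, t_{ij}, \phi_i) + e_{ij}, \quad e_{ij} \sim \mathcal{N}(0, \Sigma),
    ```

    where omega_t is a Wiener process providing the system noise, e_ij is the observation noise for measurement j on subject i, and the individual parameters phi_i come from a population distribution; setting sigma_omega = 0 recovers the traditional ODE-based model with observation noise only.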

  12. A stochastic damage model for the rupture prediction of a multi-phase solid. I - Parametric studies. II - Statistical approach

    NASA Technical Reports Server (NTRS)

    Lua, Yuan J.; Liu, Wing K.; Belytschko, Ted

    1992-01-01

    A stochastic damage model for predicting the rupture of a brittle multiphase material is developed, based on the microcrack-macrocrack interaction. The model, which incorporates uncertainties in locations, orientations, and numbers of microcracks, characterizes damage by microcracking and fracture by macrocracking. A parametric study is carried out to investigate the change of the stress intensity at the macrocrack tip by the configuration of microcracks. The inherent statistical distribution of the fracture toughness arising from the intrinsic random nature of microcracks is explored using a statistical approach. For this purpose, a computer simulation model is introduced, which incorporates a statistical characterization of geometrical parameters of a random microcrack array.

  13. Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.

    PubMed

    Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A

    2007-12-01

    By recourse to appropriate information theory quantifiers (normalized Shannon entropy and the Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifier gaps are found in the transition between the continuous processes and their associated noises. PMID:18233821
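
    The Bandt-Pompe construction behind these quantifiers starts from the histogram of ordinal patterns of the time series. A minimal sketch of the normalized permutation (Shannon) entropy follows; the intensive statistical complexity additionally requires a disequilibrium (Jensen-Shannon) term, omitted here:

    ```python
    import math
    from itertools import permutations
    import numpy as np

    def permutation_entropy(x, d=4, tau=1):
        """Normalized Shannon entropy of Bandt-Pompe ordinal patterns of order d."""
        counts = {p: 0 for p in permutations(range(d))}
        for i in range(len(x) - (d - 1) * tau):
            window = x[i : i + d * tau : tau]          # embedding vector
            counts[tuple(np.argsort(window))] += 1     # its ordinal pattern
        p = np.array([c for c in counts.values() if c > 0], dtype=float)
        p /= p.sum()
        return -(p * np.log(p)).sum() / math.log(math.factorial(d))

    rng = np.random.default_rng(0)
    print(permutation_entropy(rng.normal(size=5000)))             # close to 1 for white noise
    print(permutation_entropy(np.cumsum(rng.normal(size=5000))))  # lower for correlated paths
    ```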

  14. Comparing stochastic differential equations and agent-based modelling and simulation for early-stage cancer.

    PubMed

    Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe

    2014-01-01

    There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity and the representation of aggregates rather than individual behaviours and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than to provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm. PMID:24752131
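
    For context, the Gillespie (direct) stochastic simulation algorithm used as a reference point throughout can be sketched as follows; the tumour/effector reaction set and rate constants below are illustrative placeholders, not those of the investigated case studies:

    ```python
    import numpy as np

    def gillespie(x0, propensities, stoich, t_end, rng):
        """Gillespie direct method: statistically exact trajectories of a reaction system."""
        t, x = 0.0, np.asarray(x0, dtype=float)
        ts, xs = [t], [x.copy()]
        while t < t_end:
            a = propensities(x)                 # propensity of each reaction channel
            a0 = a.sum()
            if a0 == 0:                         # absorbing state (e.g. tumour eradicated)
                break
            t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
            j = rng.choice(len(a), p=a / a0)    # which reaction fires
            x += stoich[j]
            ts.append(t)
            xs.append(x.copy())
        return np.array(ts), np.array(xs)

    # toy tumour/effector-cell system: division, kill, recruitment, effector death
    stoich = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    rates = lambda x: np.array([0.5 * x[0], 0.01 * x[0] * x[1], 0.05 * x[0], 0.1 * x[1]])
    ts, xs = gillespie([10, 5], rates, stoich, 5.0, np.random.default_rng(42))
    ```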

  15. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient profiling of software is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from the random variability of the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. In order to overcome these difficulties we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, using 3D surfaces plotted with the function surf. The idea can be used for estimating program efficiency.

  16. Robustness and security assessment of image watermarking techniques by a stochastic approach

    NASA Astrophysics Data System (ADS)

    Conotter, V.; Boato, G.; Fontanari, C.; De Natale, F. G. B.

    2009-02-01

    In this paper we propose to evaluate both the robustness and the security of digital image watermarking techniques by considering the perceptual quality of un-marked images in terms of weighted PSNR. The proposed tool is based on genetic algorithms and is suitable for researchers to evaluate the robustness performance of developed watermarking methods. Given a combination of selected attacks, the proposed framework looks for a fine parameterization of them that ensures a perceptual quality of the un-marked image lower than a given threshold. Correspondingly, a novel metric for robustness assessment is introduced. On the other hand, this tool also proves useful in those scenarios where an attacker tries to remove the watermark to overcome copyright issues. Security assessment is provided by a stochastic search for the minimum degradation that needs to be introduced in order to obtain an un-marked version of the image as close as possible to the given one. Experimental results show the effectiveness of the proposed approach.

  17. Runoff modelling using radar data and flow measurements in a stochastic state space approach.

    PubMed

    Krämer, S; Grum, M; Verworn, H R; Redder, A

    2005-01-01

    In urban drainage the estimation of runoff with the help of models is a complex task. This is in part due to the fact that rainfall, the most important input to urban drainage modelling, is highly uncertain. Added to the uncertainty of rainfall is the complexity of performing accurate flow measurements. In terms of deterministic modelling techniques these are needed for calibration and evaluation of the applied model. Therefore, the uncertainties of rainfall and flow measurements have a severe impact on the model parameters and results. To overcome these problems a new methodology has been developed which is based on simple rain plane and runoff models that are incorporated into a stochastic state space model approach. The state estimation is done by using the extended Kalman filter in combination with a maximum likelihood criterion and an off-line optimization routine. This paper presents the results of this new methodology with respect to the combined consideration of uncertainties in distributed rainfall derived from radar data and uncertainties in measured flows in an urban catchment within the Emscher river basin, Germany. PMID:16248174

  18. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    SciTech Connect

    Kolevatov, R. S.; Boreskov, K. G.

    2013-04-15

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections, taking all Pomeron loops into account, are obtained.

  19. Relative frequencies of constrained events in stochastic processes: An analytical approach

    NASA Astrophysics Data System (ADS)

    Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.

  20. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications. PMID:26565363

  1. Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in

  2. Multi-choice stochastic bi-level programming problem in cooperative nature via fuzzy programming approach

    NASA Astrophysics Data System (ADS)

    Maiti, Sumit Kumar; Roy, Sankar Kumar

    2016-05-01

    In this paper, a Multi-Choice Stochastic Bi-Level Programming Problem (MCSBLPP) is considered where all the parameters of the constraints follow normal distributions. The cost coefficients of the objective functions are of multi-choice type. First, all the probabilistic constraints are transformed into deterministic constraints using a stochastic programming approach. Further, a general transformation technique based on binary variables is used to transform the multi-choice cost coefficients of the objective functions of the Decision Makers (DMs). The transformed problem is then treated as a deterministic multi-choice bi-level programming problem. Finally, a numerical example is presented to illustrate the usefulness of the proposed approach.

  3. Non-perturbative approach for curvature perturbations in stochastic δ N formalism

    SciTech Connect

    Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro E-mail: kawasaki@icrr.u-tokyo.ac.jp

    2014-10-01

    In our previous paper [1], we proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in the inflationary universe using the stochastic approach. Since this algorithm does not need a perturbative expansion with respect to the inflaton fields on super-horizon scales, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or their non-Gaussianities are sizable, the perturbative expansion may break down, but our algorithm still enables us to calculate the curvature perturbations. In this paper we apply it to two well-known inflation models, chaotic and hybrid inflation. Especially for hybrid inflation, where the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.

  4. Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain, Monte Carlo Approach

    SciTech Connect

    Ramirez, A; Nitao, J; Hanley, W; Aines, R; Glaser, R; Sengupta, S; Dyer, K; Hickling, T; Daily, W

    2004-09-21

    We describe a stochastic inversion method for mapping subsurface regions where the electrical resistivity is changing. The technique combines prior information, electrical resistance data and forward models to produce subsurface resistivity models that are most consistent with all available data. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. Attractive features include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate and, (2) allow alternative model estimates to be identified, compared and ranked. Methods that monitor convergence and summarize important trends of the posterior distribution are introduced. Results from a physical model test and a field experiment were used to assess performance. The stochastic inversions presented provide useful estimates of the most probable location, shape, and volume of the changing region, and the most likely resistivity change. The proposed method is computationally expensive, requiring the use of extensive computational resources to make its application practical.
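
    The Metropolis core of such a stochastic inversion is compact; the sketch below uses a placeholder log-posterior (Gaussian data misfit plus a smoothness prior), whereas the actual implementation couples a full electrical-resistance forward model:

    ```python
    import numpy as np

    def metropolis(log_post, m0, step, n_samples, rng):
        """Random-walk Metropolis sampler over resistivity models m."""
        m, lp = m0.copy(), log_post(m0)
        chain = []
        for _ in range(n_samples):
            m_new = m + step * rng.normal(size=m.shape)   # symmetric proposal
            lp_new = log_post(m_new)
            if np.log(rng.random()) < lp_new - lp:        # accept / reject
                m, lp = m_new, lp_new
            chain.append(m.copy())
        return np.array(chain)                            # posterior samples

    def log_post(m, forward=lambda m: m, d_obs=np.zeros(4), sigma=0.1):
        """Placeholder: misfit of forward(m) against d_obs plus a smoothness prior."""
        misfit = -0.5 * np.sum((forward(m) - d_obs) ** 2) / sigma ** 2
        prior = -0.5 * np.sum(np.diff(m) ** 2)            # penalize rough models
        return misfit + prior

    chain = metropolis(log_post, np.ones(4), 0.05, 2000, np.random.default_rng(0))
    ```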

  5. Stochastic path integral approach to continuous quadrature measurement of a single fluorescing qubit

    NASA Astrophysics Data System (ADS)

    Jordan, Andrew N.; Chantasri, Areeya; Huard, Benjamin

    I will present a theory of continuous quantum measurement for a superconducting qubit undergoing fluorescent energy relaxation. The fluorescence of the qubit is detected via a phase-preserving heterodyne measurement, giving the cavity mode quadrature signals as two continuous qubit readout results. By using the stochastic path integral approach to the measurement physics, we obtain the most likely fluorescence paths between chosen boundary conditions on the state, and compute approximate correlation functions between all stochastic variables via diagrammatic perturbation theory. Of particular interest are most-likely paths describing increasing energy during the fluorescence. Comparison to Monte Carlo numerical simulation and experiment will be discussed. This work was supported by US Army Research Office Grants No. W911NF-09-0-01417 and No. W911NF-15-1-0496, by NSF Grant DMR-1506081, by John Templeton Foundation Grant ID 58558, and by the DPSTT Project Thailand.

  6. Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.

    2009-01-01

    Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc., are often the basis for many systems biology models. Low-dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF-β-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low-dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.

  7. Stochastic EM-based TFBS motif discovery with MITSU

    PubMed Central

    Kilpatrick, Alastair M.; Ward, Bruce; Aitken, Stuart

    2014-01-01

    Motivation: The Expectation–Maximization (EM) algorithm has been successfully applied to the problem of transcription factor binding site (TFBS) motif discovery and underlies the most widely used motif discovery algorithms. In the wider field of probabilistic modelling, the stochastic EM (sEM) algorithm has been used to overcome some of the limitations of the EM algorithm; however, the application of sEM to motif discovery has not been fully explored. Results: We present MITSU (Motif discovery by ITerative Sampling and Updating), a novel algorithm for motif discovery, which combines sEM with an improved approximation to the likelihood function, which is unconstrained with regard to the distribution of motif occurrences within the input dataset. The algorithm is evaluated quantitatively on realistic synthetic data and several collections of characterized prokaryotic TFBS motifs and shown to outperform EM and an alternative sEM-based algorithm, particularly in terms of site-level positive predictive value. Availability and implementation: Java executable available for download at http://www.sourceforge.net/p/mitsu-motif/, supported on Linux/OS X. Contact: a.m.kilpatrick@sms.ed.ac.uk PMID:24931999

  8. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    NASA Astrophysics Data System (ADS)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while ignoring water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  9. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
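
    The trajectory-level energy balance and the entropy statement referred to in this review take the standard form

    ```latex
    \Delta U[x(\cdot)] = W[x(\cdot)] + Q[x(\cdot)], \qquad
    \big\langle e^{-\Delta s_{\mathrm{tot}}} \big\rangle = 1,
    ```

    where the work W, the heat Q and the total entropy production Delta s_tot are defined along a single fluctuating trajectory x(.), and the second relation is the integral fluctuation theorem, which implies that the average total entropy production is non-negative, a second-law-like statement valid beyond linear response.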

  10. Nonlinear Kalman filter based on duality relations between continuous and discrete-state stochastic processes

    NASA Astrophysics Data System (ADS)

    Ohkubo, Jun

    2015-10-01

    An alternative application of duality relations of stochastic processes is demonstrated. Although conventional usages of the duality relations need analytical solutions for the dual processes, here I employ numerical solutions of the dual processes and investigate their usefulness. As a demonstration, estimation problems for hidden variables in stochastic differential equations are discussed. Employing algebraic probability theory, a somewhat complicated birth-death process is derived from the stochastic differential equations, and an estimation method based on the ensemble Kalman filter is proposed. As a result, the possibility of faster computational algorithms based on the duality concepts is shown.

  11. Nonlinear Kalman filter based on duality relations between continuous and discrete-state stochastic processes.

    PubMed

    Ohkubo, Jun

    2015-10-01

    An alternative application of duality relations of stochastic processes is demonstrated. Although conventional usages of the duality relations need analytical solutions for the dual processes, here I employ numerical solutions of the dual processes and investigate their usefulness. As a demonstration, estimation problems for hidden variables in stochastic differential equations are discussed. Employing algebraic probability theory, a somewhat complicated birth-death process is derived from the stochastic differential equations, and an estimation method based on the ensemble Kalman filter is proposed. As a result, the possibility of faster computational algorithms based on the duality concepts is shown. PMID:26565359

  12. A new approach to the assessment of stochastic errors of radio source position catalogues

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-10-01

    Assessing the external stochastic errors of radio source position catalogues derived from VLBI observations is important for tasks such as estimating the quality of the catalogues and their weighting during combination. One of the widely used methods to estimate these errors is the three-cornered-hat technique, which can be extended to the N-cornered-hat technique. A critical point of this method is how to properly account for the correlations between the compared catalogues. We present a new approach to solving this problem that is suitable for simultaneous investigations of several catalogues. To compute the correlation between two catalogues A and B, the differences between these catalogues and a third arbitrary catalogue C are computed. Then the correlation between these differences is considered as an estimate of the correlation between catalogues A and B. The average value of these estimates over all catalogues C is taken as a final estimate of the target correlation. In this way, an exhaustive search of all possible combinations allows one to compute the paired correlations between all catalogues. As an additional refinement of the method, we introduce the concept of weighted correlation coefficient. This technique was applied to nine recently published radio source position catalogues. We found large systematic differences between catalogues, that significantly impact determination of their stochastic errors. Finally, we estimated the stochastic errors of the nine catalogues.
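
    The paired-correlation estimate described above is straightforward to express in code. A minimal sketch, assuming each catalogue has been reduced to a vector of per-source position offsets and at least three catalogues are available (the weighted variant is omitted):

    ```python
    import numpy as np

    def paired_correlations(cats):
        """Estimate the correlation between catalogues A and B as the average, over
        every third catalogue C, of the correlation between A - C and B - C.
        cats: array of shape (n_catalogues, n_sources)."""
        n = len(cats)
        corr = np.eye(n)
        for a in range(n):
            for b in range(a + 1, n):
                est = [np.corrcoef(cats[a] - cats[c], cats[b] - cats[c])[0, 1]
                       for c in range(n) if c not in (a, b)]
                corr[a, b] = corr[b, a] = np.mean(est)   # average over all choices of C
        return corr
    ```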

  13. Assessment of BTEX-induced health risk under multiple uncertainties at a petroleum-contaminated site: An integrated fuzzy stochastic approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Huang, Guo H.

    2011-12-01

    Groundwater pollution has attracted increasing attention in the past decades. An assessment of groundwater contamination risk is desirable to provide a sound basis for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.

  14. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    SciTech Connect

    Karagiannis, Georgios Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.

  15. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas; Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  16. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
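
    The reformulation referred to here is the random-time-change representation of a reaction network, which in standard notation reads

    ```latex
    X(t) = X(0) + \sum_{k} \nu_k \, Y_k\!\left( \int_0^t a_k\big(X(s)\big)\, ds \right),
    ```

    where the Y_k are independent unit-rate (standardized) Poisson processes, a_k is the propensity of reaction channel k and nu_k its stoichiometric vector; isolating the realization of each Y_k separates the stochasticity contributed by each channel, which is what makes the Sobol-Hoeffding variance decomposition applicable.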

  17. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  18. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators obtained from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  19. A multivariate and stochastic approach to identify key variables to rank dairy farms on profitability.

    PubMed

    Atzori, A S; Tedeschi, L O; Cannas, A

    2013-05-01

    The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21

  20. An integrated stochastic approach to the assessment of agricultural water demand and adaptation to water scarcity

    NASA Astrophysics Data System (ADS)

    Foster, T.; Butler, A. P.; McIntyre, N.

    2012-12-01

    Increasing water demands from growing populations, coupled with changing water availability, for example due to climate change, are likely to increase water scarcity. Agriculture will be exposed to risk because reliable water supplies are an important input to crop production. Assessing the efficiency of agricultural adaptation options requires a sound understanding of the relationship between crop growth and water application. However, most water resource planning models quantify agricultural water demand using highly simplified, temporally lumped, estimated crop-water production functions (CWPFs). Such CWPFs fail to capture the biophysical complexities in crop-water relations and mischaracterise farmers' ability to respond to water scarcity. Application of these models in policy analyses will be ineffective and may lead to unsustainable water policies. Crop simulation models provide an alternative means of defining the complex nature of the CWPF. Here we develop a daily water-limited crop model for this purpose. The model is based on the approach used in the FAO's AquaCrop model, balancing biophysical and computational complexities. We further develop the model by incorporating improved simulation routines to calculate the distribution of water through the soil profile. Consequently we obtain a more realistic representation of the soil water balance, with concurrent improvements in the prediction of water-limited yield. We introduce a methodology to utilise this model for the generation of stochastic crop-water production functions (SCWPFs). This is achieved by running the model iteratively with both time series of climatic data and variable quantities of irrigation water, employing a realistic rule-based approach to farm irrigation scheduling. This methodology improves the representation of potential crop yields, capturing both the variable effects of water deficits on crop yield and the stochastic nature of the CWPF due to climatic variability. Application to

  1. The Heliospheric Transport of Protons and Anti-Protons: A Stochastic Modelling Approach to PAMELA Observations

    NASA Astrophysics Data System (ADS)

    Strauss, R. D.; Potgieter, M. S.; Boezio, M.; de Simone, N.; di Felice, V.; Kopp, A.; Büsching, I.

    2012-08-01

    Using a newly developed 5D cosmic ray modulation model, we study the modulation of galactic protons and anti-protons inside the heliosphere. This is done for different heliospheric magnetic field polarity cycles, which, in combination with drifts, lead to charge-sign-dependent cosmic ray transport. Computed energy spectra and intensity ratios for the different cosmic ray populations are shown and discussed. Modelling results are extensively compared to recent observations made by the PAMELA space-borne particle detector. Using a stochastic transport approach, we also show pseudo-particle traces, illustrating the principle behind charge-sign-dependent modulation.
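
    In this approach the underlying transport equation is solved by integrating the equivalent system of stochastic differential equations, generically of the form

    ```latex
    d\mathbf{X}(s) = \mathbf{A}\big(\mathbf{X}\big)\, ds + B\big(\mathbf{X}\big) \cdot d\mathbf{W}(s),
    ```

    where the drift vector A and the matrix B (related to the diffusion tensor) follow from the transport equation and W is a Wiener process; the pseudo-particle traces mentioned above are individual realizations of X, typically integrated backward in time from the observation point.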

  2. Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach

    PubMed Central

    Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam

    2014-01-01

    The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non

  3. Acceleration of stochastic seismic inversion in OpenCL-based heterogeneous platforms

    NASA Astrophysics Data System (ADS)

    Ferreirinha, Tomás; Nunes, Rúben; Azevedo, Leonardo; Soares, Amílcar; Pratas, Frederico; Tomás, Pedro; Roma, Nuno

    2015-05-01

    Seismic inversion is an established approach to model the geophysical characteristics of oil and gas reservoirs, and is one of the bases of the decision-making process in the oil & gas exploration industry. However, the required accuracy levels can only be attained by processing significant amounts of data, often leading to long execution times. To overcome this issue and to allow the development of larger and higher-resolution elastic models of the subsurface, a novel parallelization approach is herein proposed, targeting the exploitation of GPU-based heterogeneous systems through a unified OpenCL programming framework, to accelerate a state-of-the-art Stochastic Seismic Amplitude versus Offset Inversion algorithm. To increase the parallelization opportunities while ensuring model fidelity, the proposed approach is based on a careful and selective relaxation of some spatial dependencies. Furthermore, to take into consideration the heterogeneity of modern computing systems, usually composed of several different accelerating devices, multi-device parallelization strategies are also proposed. When executed on a dual-GPU system, the proposed approach reduces the execution time by up to 30 times, without compromising the quality of the obtained models.

  4. Multidimensional characterization of stochastic dynamical systems based on multiple perturbations and measurements.

    PubMed

    Kryvohuz, Maksym; Mukamel, Shaul

    2015-06-01

    Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells. PMID:26049450

  5. Multidimensional characterization of stochastic dynamical systems based on multiple perturbations and measurements

    SciTech Connect

    Kryvohuz, Maksym; Mukamel, Shaul

    2015-06-07

    Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells.

  6. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB), based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained through a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (where simultaneous exploration of both parametric and non-parametric components is done) yields a much better prediction of the storage capacity than the MLE-based hybrid models (where the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
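
The parametric component of such a hybrid model, a periodic autoregressive PAR(1) process, is compact enough to simulate directly; the resampling (MABB) component and the NSGA-II search over the joint parameter space are not shown. A sketch with hypothetical seasonal statistics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical statistics for a 4-season year; the PAR(1) recursion is
# x[s] = mu[s] + phi[s] * (x[s-1] - mu[s-1]) + sigma[s] * eps
mu = np.array([120.0, 300.0, 80.0, 50.0])     # seasonal mean flows
phi = np.array([0.4, 0.6, 0.5, 0.3])          # season-to-season AR coefficients
sigma = np.array([30.0, 60.0, 20.0, 10.0])    # seasonal residual std devs

def simulate_par1(n_years):
    flows = np.empty((n_years, 4))
    prev, prev_mu = mu[-1], mu[-1]
    for y in range(n_years):
        for s in range(4):
            x = mu[s] + phi[s] * (prev - prev_mu) + sigma[s] * rng.normal()
            flows[y, s] = max(x, 0.0)         # no negative streamflow
            prev, prev_mu = flows[y, s], mu[s]
    return flows

synthetic = simulate_par1(1000)  # replicates for storage-capacity evaluation
```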

  7. Consentaneous Agent-Based and Stochastic Model of the Financial Markets

    PubMed Central

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent-based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short-term fluctuations. The resulting model of the return in the financial markets, with the same set of parameters, reproduces the empirical probability and spectral densities of absolute return observed on the New York, Warsaw, and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364

  8. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    SciTech Connect

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method, and make comparisons with others, on elliptic stochastic partial differential equations with 1D, 14D, and 40D random spaces.

  9. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES Beta

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  10. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  11. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
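
The diffusive-versus-ballistic distinction can be read off the growth exponent of the spread of a quantifier with 'time' (here, immigrant density): a spread growing as t^0.5 indicates diffusion, while t^1.0 indicates ballistic dynamics. A synthetic-data sketch of that diagnostic, not the authors' dataset or estimator:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.arange(1, 201)
diffusive = np.cumsum(rng.normal(size=(500, 200)), axis=1)          # random walks
ballistic = np.outer(rng.normal(loc=1.0, scale=0.2, size=500), t)   # straight-line drifts

for name, paths in [("diffusive", diffusive), ("ballistic", ballistic)]:
    spread = paths.std(axis=0)                        # ensemble spread at each 'time'
    H = np.polyfit(np.log(t), np.log(spread), 1)[0]   # scaling exponent from log-log fit
    print(f"{name}: H ~ {H:.2f}")                     # ~0.5 and ~1.0 respectively
```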

  12. A stochastic model updating method for parameter variability quantification based on response surface models and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo

    2012-11-01

    Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means, the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational efficiency. This study proposes a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation, and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions over all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. Compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and cost efficiency in response computation and inverse optimization.
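
The decomposition of the stochastic updating into deterministic inverse problems can be sketched in a few lines: fit a response surface to a handful of runs of the expensive model, draw response samples, and invert each sample separately. Everything below, including the one-parameter stand-in for the FE model, is a hypothetical illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def fe_model(k):
    """Stand-in for an expensive FE analysis: one response (e.g., a natural frequency)."""
    return 2.0 * np.sqrt(k)

# Build a quadratic response surface (surrogate) from a few designed FE runs
k_design = np.linspace(0.5, 2.0, 7)
surrogate = np.poly1d(np.polyfit(k_design, fe_model(k_design), 2))

# Monte Carlo: each sampled response defines one deterministic inverse problem
measured = rng.normal(loc=fe_model(1.2), scale=0.05, size=500)
estimates = []
for y in measured:
    res = minimize(lambda k: (surrogate(k[0]) - y) ** 2,
                   x0=[1.0], bounds=[(0.5, 2.0)])
    estimates.append(res.x[0])

estimates = np.array(estimates)   # parameter statistics across all samples
print(f"parameter mean ~ {estimates.mean():.3f}, std ~ {estimates.std():.3f}")
```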

  13. Brownian-motion based simulation of stochastic reaction-diffusion systems for affinity based sensors

    NASA Astrophysics Data System (ADS)

    Tulzer, Gerhard; Heitzinger, Clemens

    2016-04-01

    In this work, we develop a 2D algorithm for stochastic reaction-diffusion systems describing the binding and unbinding of target molecules at the surfaces of affinity-based sensors. In particular, we simulate the detection of DNA oligomers using silicon-nanowire field-effect biosensors. Since these devices are uniform along the nanowire, two dimensions are sufficient to capture the kinetic effects. The model combines a stochastic ordinary differential equation for the binding and unbinding of target molecules with a diffusion equation for their transport in the liquid. A Brownian-motion based algorithm simulates the diffusion process, which is linked to a stochastic-simulation algorithm for association at and dissociation from the surface. The simulation data show that the shape of the cross section of the sensor yields areas with significantly different target-molecule coverage. Different initial conditions are investigated as well in order to aid rational sensor design. A comparison of the association/hybridization behavior for different receptor densities allows optimization of the functionalization setup depending on the target-molecule density.
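
The coupling of a Brownian-motion diffusion step with stochastic association and dissociation at the surface can be sketched as follows; the geometry, rate constants, and binding probabilities are arbitrary toy values, not the sensor model of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# 2D cross-section: molecules diffuse in the liquid (y > 0) and may bind when
# they hit the functionalized surface at y = 0.
D, dt, p_bind, k_off = 1e-12, 1e-3, 0.1, 0.5   # diffusivity, step, toy rate parameters
step = np.sqrt(2.0 * D * dt)                   # Brownian step size per coordinate

pos = np.column_stack([rng.uniform(0, 1e-6, 1000),
                       rng.uniform(0, 1e-6, 1000)])  # initial positions (m)
bound = np.zeros(1000, dtype=bool)

for _ in range(10_000):
    free = ~bound
    pos[free] += step * rng.normal(size=(free.sum(), 2))  # Brownian step for free molecules
    hit = free & (pos[:, 1] <= 0.0)
    pos[hit, 1] = -pos[hit, 1]                            # reflect at the surface
    bound[hit] = rng.random(hit.sum()) < p_bind           # stochastic association
    unbind = bound & (rng.random(1000) < k_off * dt)      # stochastic dissociation
    bound[unbind] = False

print(f"surface coverage: {bound.mean():.2%}")
```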

  14. Brownian-motion based simulation of stochastic reaction-diffusion systems for affinity based sensors.

    PubMed

    Tulzer, Gerhard; Heitzinger, Clemens

    2016-04-22

    In this work, we develop a 2D algorithm for stochastic reaction-diffusion systems describing the binding and unbinding of target molecules at the surfaces of affinity-based sensors. In particular, we simulate the detection of DNA oligomers using silicon-nanowire field-effect biosensors. Since these devices are uniform along the nanowire, two dimensions are sufficient to capture the kinetic effects. The model combines a stochastic ordinary differential equation for the binding and unbinding of target molecules with a diffusion equation for their transport in the liquid. A Brownian-motion based algorithm simulates the diffusion process, which is linked to a stochastic-simulation algorithm for association at and dissociation from the surface. The simulation data show that the shape of the cross section of the sensor yields areas with significantly different target-molecule coverage. Different initial conditions are investigated as well in order to aid rational sensor design. A comparison of the association/hybridization behavior for different receptor densities allows optimization of the functionalization setup depending on the target-molecule density. PMID:26939610

  15. Dissection of a Complex Disease Susceptibility Region Using a Bayesian Stochastic Search Approach to Fine Mapping

    PubMed Central

    Wallace, Chris; Cutler, Antony J; Pontikos, Nikolas; Pekalski, Marcin L; Burren, Oliver S; Cooper, Jason D; García, Arcadio Rubio; Ferreira, Ricardo C; Guo, Hui; Walker, Neil M; Smyth, Deborah J; Rich, Stephen S; Onengut-Gumuscu, Suna; Sawcer, Stephen J; Ban, Maria

    2015-01-01

    Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell-specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism, rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≈ 0.3) and our two-SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two-variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data. PMID:26106896

  16. Dissection of a Complex Disease Susceptibility Region Using a Bayesian Stochastic Search Approach to Fine Mapping.

    PubMed

    Wallace, Chris; Cutler, Antony J; Pontikos, Nikolas; Pekalski, Marcin L; Burren, Oliver S; Cooper, Jason D; García, Arcadio Rubio; Ferreira, Ricardo C; Guo, Hui; Walker, Neil M; Smyth, Deborah J; Rich, Stephen S; Onengut-Gumuscu, Suna; Sawcer, Stephen J; Ban, Maria; Richardson, Sylvia; Todd, John A; Wicker, Linda S

    2015-06-01

    Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell-specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism, rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≈ 0.3) and our two-SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two-variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data. PMID:26106896

  17. Phase transitions of macromolecular microsphere composite hydrogels based on the stochastic Cahn–Hilliard equation

    SciTech Connect

    Li, Xiao Ji, Guanghua Zhang, Hui

    2015-02-15

    We use the stochastic Cahn–Hilliard equation to simulate the phase transitions of macromolecular microsphere composite (MMC) hydrogels under a random disturbance. Based on the Flory–Huggins lattice model and the Boltzmann entropy theorem, we develop a reticular free energy suited to the network structure of MMC hydrogels. Taking the random factor into account, with the time-dependent Ginzburg-Landau (TDGL) mesoscopic simulation method, we set up a stochastic Cahn–Hilliard equation, designated herein as the MMC-TDGL equation. The stochastic term in the equation is constructed appropriately to satisfy the fluctuation-dissipation theorem and is discretized on a spatial grid for the simulation. A semi-implicit difference scheme is adopted to numerically solve the MMC-TDGL equation. Some numerical experiments are performed with different parameters. The results are consistent with the physical phenomenon, verifying that the stochastic term is simulated well.

  18. A binomial stochastic kinetic approach to the Michaelis-Menten mechanism

    NASA Astrophysics Data System (ADS)

    Lente, Gábor

    2013-05-01

    This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables for calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
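
The guiding-variable principle can be illustrated on a single-enzyme system: integrate the deterministic rate equations, then read off a conversion probability and use it in a binomial distribution over the substrate molecules. A sketch with hypothetical rate constants; the Letter's exact construction may differ in detail:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import binom

k1, km1, k2 = 1.0, 0.5, 0.3   # rate constants (hypothetical)
S0, E0 = 20, 1                # initial substrate and enzyme counts

def mm_rhs(t, y):
    s, es = y                 # substrate and enzyme-substrate complex (counts)
    e = E0 - es
    return [-k1 * s * e + km1 * es, k1 * s * e - (km1 + k2) * es]

sol = solve_ivp(mm_rhs, (0.0, 50.0), [float(S0), 0.0], dense_output=True)

def product_distribution(t):
    """Binomial approximation: each of the S0 substrate molecules has been
    converted by time t with probability p(t) from the deterministic solution."""
    s, es = sol.sol(t)
    p = 1.0 - (s + es) / S0   # deterministic fraction converted to product
    return binom.pmf(np.arange(S0 + 1), S0, np.clip(p, 0.0, 1.0))

print(product_distribution(10.0))  # probability of k = 0..S0 product molecules
```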

  19. A wavelet approach for development and application of a stochastic parameter simulation system

    NASA Astrophysics Data System (ADS)

    Miron, Adrian

    2001-07-01

    In this research a Stochastic Parameter Simulation System (SPSS) computer program employing wavelet techniques was developed. The SPSS was designed to fulfill two key functional requirements: (1) To be able to analyze any steady state plant signal, decompose it into its deterministic and stochastic components, and then reconstruct a new, simulated signal that possesses exactly the same statistical noise characteristics as the actual signal; and (2) To be able to filter out the principal serially-correlated, deterministic components from the analyzed signal so that the remaining stochastic signal can be analyzed with signal validation tools that are designed for signals drawn from independent random distributions. The results obtained using SPSS were compared to those obtained using the Argonne National Laboratory Reactor Parameter Simulation System (RPSS) which uses a Fourier transform methodology to achieve the same objectives. RPSS and SPSS results were compared for three sets of stationary signals, representing sensor readings independently recorded at three nuclear power plants. For all of the recorded signals, the wavelet technique provided a better approximation of the original signal than the Fourier procedure. For each signal, many wavelet-based decompositions were found by the SPSS methodology, all of which produced white and normally distributed signal residuals. In most cases, the Fourier-based analysis failed to completely eliminate the original signal serial-correlation from the residuals. The reconstructed signals produced by SPSS are also statistically closer to the original signal than the RPSS reconstructed signal. Another phase of the research demonstrated that SPSS could be used to enhance the reliability of the Multivariate Sensor Estimation Technique (MSET). MSET uses the Sequential Probability Ratio Test (SPRT) for its fault detection algorithm. By eliminating the MSET residual serial-correlation in the MSET training phase, the SPRT user
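
The two SPSS requirements, separating a stationary signal into deterministic and stochastic components and resynthesising a signal with matching noise statistics, map naturally onto a discrete wavelet decomposition. A minimal sketch using the PyWavelets package on synthetic data; the wavelet choice, decomposition level, and the simple "approximation = deterministic component" split are assumptions, not the SPSS algorithm:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(7)

# Synthetic "plant signal": slow deterministic drift plus white measurement noise
t = np.linspace(0, 10, 1024)
signal = 0.5 * np.sin(0.4 * t) + rng.normal(scale=0.1, size=t.size)

# Multi-level wavelet decomposition; the coarse approximation carries the
# serially-correlated deterministic component, the details carry the noise.
coeffs = pywt.wavedec(signal, "db4", level=5)
deterministic = pywt.waverec(
    [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[: signal.size]
residual = signal - deterministic   # should be white and normally distributed

# Reconstructed signal: same deterministic part, resampled stochastic part
simulated = deterministic + rng.normal(scale=residual.std(), size=signal.size)
```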

  20. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352
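
The normal/half-normal frontier model can be estimated by maximum likelihood using the classic Aigner-Lovell-Schmidt form of the likelihood. A sketch on simulated data with a single input; the study's panel structure, truncated-normal variant, and time-varying efficiency terms are not reproduced:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)

# Simulated Cobb-Douglas data: ln y = b0 + b1*ln x + v - u, with u >= 0 half-normal
n = 400
lx = rng.normal(size=n)
v = rng.normal(scale=0.2, size=n)           # symmetric noise
u = np.abs(rng.normal(scale=0.3, size=n))   # one-sided inefficiency
ly = 1.0 + 0.6 * lx + v - u

def neg_loglik(theta):
    b0, b1, log_sigma, log_lam = theta
    sigma, lam = np.exp(log_sigma), np.exp(log_lam)
    eps = ly - b0 - b1 * lx
    # Aigner-Lovell-Schmidt likelihood for the normal/half-normal frontier
    return -np.sum(norm.logcdf(-eps * lam / sigma)
                   - np.log(sigma) - 0.5 * (eps / sigma) ** 2)

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(fit.x)  # frontier coefficients and (log) variance parameters
```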

  1. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352

  2. Stochastic Modeling Approach for the Evaluation of Backbreak due to Blasting Operations in Open Pit Mines

    NASA Astrophysics Data System (ADS)

    Sari, Mehmet; Ghasemi, Ebrahim; Ataei, Mohammad

    2014-03-01

    Backbreak is an undesirable side effect of bench blasting operations in open pit mines. A large number of parameters affect backbreak, including controllable parameters (such as blast design parameters and explosive characteristics) and uncontrollable parameters (such as rock and discontinuities properties). The complexity of the backbreak phenomenon and the uncertainty in terms of the impact of various parameters make its prediction very difficult. The aim of this paper is to determine the suitability of the stochastic modeling approach for the prediction of backbreak and to assess the influence of controllable parameters on the phenomenon. To achieve this, a database containing actual measured backbreak occurrences and the major controllable parameters affecting backbreak (i.e., burden, spacing, stemming length, powder factor, and geometric stiffness ratio) was created from 175 blasting events in the Sungun copper mine, Iran. From this database, first, a new site-specific empirical equation for predicting backbreak was developed using multiple regression analysis. Then, the backbreak phenomenon was simulated by the Monte Carlo (MC) method. The results reveal that stochastic modeling is a good means of modeling and evaluating the effects of the variability of blasting parameters on backbreak. Thus, the developed model is suitable for practical use in the Sungun copper mine. Finally, a sensitivity analysis showed that stemming length is the most important parameter in controlling backbreak.
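
Once a site-specific regression is in hand, the Monte Carlo step amounts to propagating distributions of the blast-design parameters through it. The coefficients and parameter distributions below are invented for illustration and are not the Sungun equation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical site-specific regression (illustrative coefficients only):
# backbreak = a0 + a1*burden + a2*spacing + a3*stemming - a4*powder_factor
a = np.array([1.0, 0.8, 0.3, 0.5, 2.0])

n = 100_000
burden = rng.normal(3.0, 0.4, n)     # m
spacing = rng.normal(4.0, 0.5, n)    # m
stemming = rng.normal(3.5, 0.6, n)   # m
powder = rng.normal(0.9, 0.1, n)     # kg/m^3

backbreak = a[0] + a[1]*burden + a[2]*spacing + a[3]*stemming - a[4]*powder
print(f"mean {backbreak.mean():.2f} m, "
      f"95th percentile {np.percentile(backbreak, 95):.2f} m")
```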

  3. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    NASA Astrophysics Data System (ADS)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among the unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
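
A first-order Markov occurrence model with a mixed exponential amounts model is compact enough to sketch directly. The transition probabilities and mixture parameters below are hypothetical and would in practice be fitted month by month to station data:

```python
import numpy as np

rng = np.random.default_rng(10)

# First-order Markov occurrence model with a mixed-exponential amounts model
p_wet_after_dry, p_wet_after_wet = 0.25, 0.60   # transition probabilities (hypothetical)
alpha, beta1, beta2 = 0.7, 4.0, 18.0            # mixture weight and mean depths (mm)

def simulate_rainfall(n_days):
    rain, wet = np.zeros(n_days), False
    for d in range(n_days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p                  # occurrence: first-order Markov chain
        if wet:
            mean = beta1 if rng.random() < alpha else beta2  # mixed exponential
            rain[d] = rng.exponential(mean)
    return rain

daily = simulate_rainfall(365 * 30)  # 30-year synthetic series for DRWH simulation
```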

  4. A stochastic context free grammar based framework for analysis of protein sequences

    PubMed Central

    Dyrka, Witold; Nebel, Jean-Christophe

    2009-01-01

    Background: In the last decade, there have been many applications of formal language theory in bioinformatics, such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have largely limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. These grammars, like other state-of-the-art methods, cannot capture higher-order dependencies such as nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences where grammars are induced using a genetic algorithm. Results: This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles, or a set of patterns. Results show the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion: A new Stochastic Context Free Grammar based framework has been

  5. Impact of Geological Characterization Uncertainties on Subsurface Flow & Transport Using a Stochastic Discrete Fracture Network Approach

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.

    2009-12-01

    Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools tends to decrease as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very sparse, limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherent in the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying uncertainty in the reduced probability distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on the flow and transport using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and

  6. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14

  7. Fission dynamics of intermediate-fissility systems: A study within a stochastic three-dimensional approach

    NASA Astrophysics Data System (ADS)

    Vardaci, E.; Nadtochy, P. N.; Di Nitto, A.; Brondi, A.; La Rana, G.; Moro, R.; Rath, P. K.; Ashaduzzaman, M.; Kozulin, E. M.; Knyazheva, G. N.; Itkis, I. M.; Cinausero, M.; Prete, G.; Fabris, D.; Montagnoli, G.; Gelli, N.

    2015-09-01

    The system of intermediate fissility 132Ce has been studied experimentally and theoretically to investigate the dissipation properties of nuclear matter. Cross sections of fusion-fission and evaporation-residue channels together with light charged particle multiplicities in both channels, their spectra, light charged particle-evaporation residue angular correlations, and mass-energy distribution of fission fragments have been measured. Theoretical analysis has been performed using a multidimensional stochastic approach coupled with a Hauser-Feshbach treatment of particle evaporation. The main conclusions are that the full one-body shape-dependent dissipation mechanism allows the reproduction of the full set of experimental data and that after a time τd = 5 × 10^-21 s from the equilibrium configuration of the compound nucleus, fission decay can occur in a time that can span several orders of magnitude.

  8. The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.

    SciTech Connect

    Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.

    2013-09-01

    The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs incurred by China and the 'implicit beyond the border costs' of China's trading partners. Results indicate that the significance levels of the independent variables change dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.

  9. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations.

    PubMed

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew

    2016-08-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of

  10. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J. Andrew

    2016-08-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of

  11. A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators

    PubMed Central

    2016-01-01

    The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two and three gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affects the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539

  12. A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators.

    PubMed

    Woods, Mae L; Leon, Miriam; Perez-Carrasco, Ruben; Barnes, Chris P

    2016-06-17

    The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two and three gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affects the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539

  13. The effect of a rainfall and discharge variability on erosion rates in a highly active tectonic setting: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Braun, Jean; Deal, Eric; Andermann, Christoff

    2015-04-01

    The influence of climate on surface processes, and consequently on landscape evolution, is undeniably important; despite this, many fluvial landscape evolution models do not integrate an accurate or physically based parameterisation of precipitation, the climatic forcing most important for fluvial processes. This is likely due to two major challenges. First, there is the difficulty of moving from the hourly, daily, and monthly timescales most relevant to precipitation to the millennial timescales used in landscape evolution modelling. To confront this challenge, we adopt the approach of Tucker and Bras (2000) and Lague (2005) and upscale precipitation with a statistical parameterisation accounting for mean precipitation as well as short-term (daily) variability. This technique is key to capturing and quantifying the importance of rare, extreme events. The second challenge stems from the fact that erosion rates are proportional not to precipitation but rather to discharge, which results from a complex convolution of the regional precipitation patterns with the landscape. To address this second obstacle, we present work that investigates the relationship between a stochastic description of precipitation and one of discharge, linking general patterns of precipitation and discharge rather than attempting to establish a deterministic relationship. To achieve this, we model the effect of precipitation variability on runoff variability and compare associated precipitation and discharge measurements from a range of climatic regimes and spatial scales in the Himalayas. Using the results of this work, we integrate the statistical parameterisation of precipitation into a landscape evolution model, allowing us to explore the effect of realistic precipitation patterns, specifically precipitation variability, on the evolution of relief and topography. References: Bras, R. L., & Tucker, G. E. (2000). A stochastic approach to modeling the role of rainfall variability in

  14. A new stochastic hydraulic conductivity approach for modeling one-dimensional vertical flow in variably saturated porous media.

    NASA Astrophysics Data System (ADS)

    Vrettas, M. D.; Fung, I. Y.

    2014-12-01

    The degree of carbon-climate feedback by terrestrial ecosystems is intimately tied to the availability of moisture for photosynthesis, transpiration, and decomposition. The vertical distribution of subsurface moisture and its accessibility for evapotranspiration is a key determinant of the fate of ecosystems and their feedback on the climate system. A time series of five years of high-frequency (every 30 min) observations of the water table at a research site in Northern California shows that water tables 18 meters below the surface can respond in less than 8 hours to the first winter rains, suggesting very fast flow through micro-pores and fractured bedrock. Not quite as quickly as the water table rises after a heavy rain, the elevated water level recedes, contributing to down-slope flow and stream flow. The governing equation of our model is the well-known Richards equation, a non-linear PDE derived by applying the continuity requirement to Darcy's law. The most crucial parameter of this PDE is the hydraulic conductivity K(θ), which describes the speed at which water can move through the subsurface. We specify a saturation profile as a function of depth (i.e., Ksat(z)) and allow K(θ) to vary not only with the soil moisture saturation but also to include a stochastic component which mimics the effects of fracture flow and other naturally occurring heterogeneity that is evident in the subsurface. A large number of Monte Carlo simulations are performed in order to identify optimal settings for the new model, as well as to analyze the results of this new approach on the available data. Initial findings from this exploratory work are encouraging, and the next steps include testing this new stochastic approach on data from other sites and also applying ensemble-based data assimilation algorithms in order to estimate model parameters from the available measurements.
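
The modification described, a depth-dependent saturated conductivity profile combined with a random multiplier on K(θ), can be sketched as below. The functional forms, the power-law retention exponent, and the lognormal noise model are illustrative assumptions, not the authors' parameterisation:

```python
import numpy as np

rng = np.random.default_rng(11)

def k_sat(z):
    """Saturated conductivity profile decaying with depth z (m); form is illustrative."""
    return 1e-5 * np.exp(-z / 10.0)   # m/s at the surface, decaying downward

def k_stochastic(theta, z, sigma=0.5):
    """Effective conductivity: saturation-dependent base value times a lognormal
    random factor standing in for unresolved heterogeneity and fracture flow."""
    base = k_sat(z) * theta ** 3.5    # simple power-law K(theta) (assumed)
    return base * rng.lognormal(mean=0.0, sigma=sigma)

# Monte Carlo ensemble of conductivities at one depth and saturation
samples = np.array([k_stochastic(0.8, 5.0) for _ in range(10_000)])
print(f"median K ~ {np.median(samples):.2e} m/s")
```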

  15. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes

    PubMed Central

    Hahl, Sayuri K.; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
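
    A minimal sketch of the comparison described, for a toy positive-feedback birth-death scheme (not the authors' exact reaction scheme): the deterministic fixed points are found by root-finding, and a Gillespie simulation of the same propensities yields the stationary mode; for small copy numbers the mode need not sit where the ODE predicts.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(1)

        # Toy positive-feedback scheme (illustrative parameters):
        # production a + b*n^2/(K^2 + n^2), first-order degradation g*n.
        a, b, K, g = 1.0, 30.0, 15.0, 1.0

        def prod(n):
            return a + b * n**2 / (K**2 + n**2)

        # Deterministic fixed points: roots of prod(n) - g*n (bistable here).
        f = lambda n: prod(n) - g * n
        brackets = [(0.5, 5.0), (5.0, 15.0), (15.0, 30.0)]
        fixed_points = [brentq(f, lo, hi) for lo, hi in brackets if f(lo) * f(hi) < 0]
        print("ODE fixed points:", [round(r, 1) for r in fixed_points])

        # Gillespie SSA of the same propensities; dwell times estimate the
        # stationary distribution, whose mode is compared to the fixed points.
        n, t, t_end = 0, 0.0, 20000.0
        dwell = np.zeros(200)
        while t < t_end:
            rates = (prod(n), g * n)
            total = rates[0] + rates[1]
            dt = rng.exponential(1.0 / total)
            dwell[n] += dt
            t += dt
            n += 1 if rng.random() < rates[0] / total else -1
        print("stationary mode:", int(dwell.argmax()))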

  16. Stochastic population forecasting based on combinations of expert evaluations within the Bayesian paradigm.

    PubMed

    Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio

    2014-10-01

    This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed. PMID:25124024

  17. Stochastic Modeling of Usage Patterns in a Web-Based Information System.

    ERIC Educational Resources Information Center

    Chen, Hui-Min; Cooper, Michael D.

    2002-01-01

    Uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns, both in rates and in probabilities, in a Web-based information system. Describes search sessions from transaction logs of the University of California's MELVYL library catalog system and discusses sequential dependency. (Author/LRW)

  18. Binomial distribution based τ-leap accelerated stochastic simulation

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2005-01-01

    Recently, Gillespie introduced the τ-leap approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial distribution τ-leap algorithm (abbreviated as BD-τ method). This method combines the bounded nature of the binomial distribution variable with the limiting reactant and constrained firing concepts to avoid negative populations encountered in the original τ-leap method of Gillespie for large time increments, and thus conserve mass. Simulations using prototype reaction networks show that the BD-τ method is more accurate than the original method for comparable coarse-graining in time.
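
    A minimal sketch of the key step for a single irreversible reaction A + B → C (illustrative numbers, loosely following the BD-τ idea): the Poisson draw of the original τ-leap is unbounded and can drive populations negative at coarse time increments, whereas a binomial draw capped by the limiting reactant cannot.

        import numpy as np

        rng = np.random.default_rng(7)

        # A + B -> C with rate constant c; propensity a(x) = c * nA * nB.
        def poisson_leap(nA, nB, c, tau):
            k = rng.poisson(c * nA * nB * tau)       # unbounded: may overshoot
            return nA - k, nB - k

        def binomial_leap(nA, nB, c, tau):
            n_max = min(nA, nB)                       # limiting reactant bound
            p = min(c * nA * nB * tau / n_max, 1.0)   # per-event firing probability
            k = rng.binomial(n_max, p)                # k <= n_max by construction
            return nA - k, nB - k

        nA, nB, c, tau = 10, 8, 0.05, 2.0   # coarse leap: mean firings = 8
        print("Poisson leap: ", poisson_leap(nA, nB, c, tau))   # can go negative
        print("Binomial leap:", binomial_leap(nA, nB, c, tau))  # stays >= 0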

  19. Stochastic approach of gravitational waves in the presence of a decaying cosmological parameter from a 5D vacuum

    NASA Astrophysics Data System (ADS)

    Gomez Martínez, S. P.; da Silva, L. F. P.; Madriz Aguilar, J. E.; Bellini, M.

    2007-08-01

    We develop a stochastic approach to study gravitational waves produced during the inflationary epoch in the presence of a decaying cosmological parameter, on a 5D geometrical background which is Riemann flat. We find that the squared tensor metric fluctuations depend strongly on the cosmological parameter Λ(t), and we illustrate the formalism with an example of a decaying Λ(t).

  20. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and their sources of uncertainty. Single CPT soundings were modeled as probability density curves by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value at the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within the Bayesian reverse interpolation framework. The results of the Gaussian sequential stochastic simulation and Bayesian methods were compared. The differences between single CPT soundings under a normal distribution assumption and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi

  1. A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach for contaminated sites management.

    PubMed

    Hu, Yan; Wen, Jing-Ya; Li, Xiao-Li; Wang, Da-Zhou; Li, Yu

    2013-10-15

    A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. The probability of violating the environmental guideline (Pv) was then determined by comparing the modeled concentrations with the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, with simulations spanning 1985 to 2020. Four scenarios were analyzed, combining "residential land" and "industrial land" environmental guidelines under "strict" and "loose" strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission into soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in dynamic multimedia integrated environmental risk assessment and contaminated site management. PMID:23995555

  2. A new stochastic approach for the simulation of agglomeration between colloidal particles.

    PubMed

    Henry, Christophe; Minier, Jean-Pierre; Pozorski, Jacek; Lefèvre, Grégory

    2013-11-12

    This paper presents a stochastic approach for the simulation of particle agglomeration, which is addressed as a two-step process: first, particles are transported by the flow toward each other (collision step) and, second, short-ranged particle-particle interactions either lead to the formation of an agglomerate or prevent it (adhesion step). Particle collisions are treated in the framework of Lagrangian approaches, where the motions of a large number of particles are explicitly tracked. The key idea for detecting collisions is to account for the whole continuous relative trajectory of particle pairs within each time step, and not only the initial and final relative distances between two possible colliding partners at the beginning and end of the time step. The present paper is thus the continuation of a previous work (Mohaupt, M., Minier, J.-P., Tanière, A. A new approach for the detection of particle interactions for large-inertia and colloidal particles in a turbulent flow, Int. J. Multiphase Flow, 2011, 37, 746-755) and is devoted to an extension of the approach to the treatment of particle agglomeration. For that purpose, the attachment step is modeled using DLVO theory (Derjaguin and Landau, Verwey and Overbeek), which describes particle-particle interactions as the sum of van der Waals and electrostatic forces. The attachment step is coupled with the collision step through a common energy balance approach, where particles are assumed to agglomerate only if their relative kinetic energy is high enough to overcome the maximum repulsive interaction energy between them. Numerical results obtained with this model are shown to compare well with available experimental data on agglomeration. These promising results support the applicability of the present modeling approach over a whole range of particle sizes (even nanoscopic) and solution conditions (both attractive and repulsive cases). PMID:24111685
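
    A minimal sketch of the energy-balance attachment criterion described here, with an illustrative sphere-sphere DLVO profile (Hamaker attraction plus screened electrostatic repulsion; all parameter values below are hypothetical, not taken from the paper): the pair agglomerates only if its relative kinetic energy exceeds the maximum of the interaction energy.

        import numpy as np

        kT = 4.11e-21          # J, thermal energy at ~298 K
        A_H = 1e-20            # J, Hamaker constant (illustrative)
        radius = 100e-9        # m, particle radius
        kappa = 1e8            # 1/m, inverse Debye length
        Z = 1e-12              # J/m, electrostatic prefactor (illustrative)

        h = np.linspace(0.3e-9, 30e-9, 2000)           # surface separation
        U_vdw = -A_H * radius / (12 * h)               # van der Waals attraction
        U_el = Z * radius * np.exp(-kappa * h)         # screened repulsion
        U = U_vdw + U_el

        barrier = U.max()                              # maximum repulsive energy
        print(f"DLVO barrier: {barrier / kT:.1f} kT")

        # Energy-balance criterion: agglomerate only if the pair's relative
        # kinetic energy exceeds the barrier.
        E_kin = 15.0 * kT
        print("agglomerate" if E_kin > barrier else "bounce")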

  3. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models

    NASA Astrophysics Data System (ADS)

    Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
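
    The paper validates its closed-form prices against Monte Carlo; the sketch below reproduces that kind of check in the simplest setting, a plain vanilla call under constant volatility (Black-Scholes dynamics rather than the paper's stochastic-volatility model), with illustrative parameters.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(23)

        S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative

        # Closed-form Black-Scholes call price.
        d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        # Monte Carlo under the risk-neutral measure.
        n = 1_000_000
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T
                         + sigma * np.sqrt(T) * rng.normal(size=n))
        payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
        mc, se = payoff.mean(), payoff.std() / np.sqrt(n)

        print(f"closed form: {bs:.4f}")
        print(f"Monte Carlo: {mc:.4f} +/- {1.96 * se:.4f}")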

  4. Stochastic modeling of rainfall

    SciTech Connect

    Guttorp, P.

    1996-12-31

    We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.

  5. Stochastic dynamics of charge fluctuations in dusty plasma: A non-Markovian approach

    SciTech Connect

    Asgari, H.; Muniandy, S. V.; Wong, C. S.

    2011-08-15

    Dust particles in typical laboratory plasma become charged largely by collecting electrons and/or ions. Most theoretical studies in dusty plasma assume that the grain charge remains constant even though it fluctuates due to the discrete nature of the charge. The rates of ion and electron absorption depend on the grain charge and hence govern its temporal evolution. Stochastic charging models based on the standard Langevin equation assume that the underlying process is Markovian. In this work, the memory effect in dust charging dynamics is incorporated using a fractional calculus formalism. The resulting fractional Langevin equation is solved to obtain the amplitude and correlation function of the dust charge fluctuation. It is shown that the effects of ion-neutral collisions can be interpreted in a phenomenological sense through the nonlocal fractional-order derivative.

  6. Stochastic approach to correlations beyond the mean field with the Skyrme interaction

    NASA Astrophysics Data System (ADS)

    Fukuoka, Y.; Nakatsukasa, T.; Funaki, Y.; Yabana, K.

    2012-10-01

    Large-scale calculation based on the multi-configuration Skyrme density functional theory is performed for the light N = Z even-even nucleus, 12C. Stochastic procedures and the imaginary-time evolution are utilized to prepare many Slater determinants. Each state is projected on eigenstates of parity and angular momentum. Then, performing the configuration mixing calculation with the Skyrme Hamiltonian, we obtain low-lying energy-eigenstates and their explicit wave functions. The generated wave functions are completely free from any assumption and symmetry restriction. Excitation spectra and transition probabilities are well reproduced, not only for the ground-state band, but for negative-parity excited states and the Hoyle state.

  7. Coupled planning of water resources and agricultural landuse based on an inexact-stochastic programming model

    NASA Astrophysics Data System (ADS)

    Dong, Cong; Huang, Guohe; Tan, Qian; Cai, Yanpeng

    2014-03-01

    Water resources are fundamental to the support of regional development. Effective planning can facilitate sustainable management of water resources to balance socioeconomic development and water conservation. In this research, coupled planning of water resources and agricultural land use was undertaken through the development of an inexact-stochastic programming approach. This inexact modeling approach integrates interval linear programming and chance-constrained programming methods, and was employed to tackle uncertainty in the form of interval numbers and probability distributions existing in water resource systems. It was then applied to a typical regional water resource system to demonstrate its applicability and validity by generating efficient system solutions. Based on the modeling formulation and result analysis, the developed model can help identify optimal water resource utilization patterns and the corresponding agricultural land-use schemes in three sub-regions. Furthermore, a number of decision alternatives were generated under multiple water-supply conditions, which could help decision makers identify desired management policies.
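
    A minimal sketch of the chance-constrained ingredient of such a model (illustrative numbers, not the study's data): a probabilistic supply constraint P(x1 + x2 ≤ W) ≥ 1 − α with normally distributed supply W reduces to a deterministic cap x1 + x2 ≤ μ + σΦ⁻¹(α), which an ordinary LP solver can handle.

        import numpy as np
        from scipy.optimize import linprog
        from scipy.stats import norm

        # Allocate water x1 (agriculture) and x2 (industry) to maximize net
        # benefit, keeping total allocation within the random seasonal supply
        # W ~ N(mu, sigma^2) with probability at least 1 - alpha.
        mu, sigma = 100.0, 15.0          # supply mean and std (illustrative)
        benefit = np.array([2.0, 3.5])   # net benefit per unit allocated

        for alpha in (0.01, 0.05, 0.25):
            # Deterministic equivalent: x1 + x2 <= mu + sigma * Phi^{-1}(alpha)
            supply_cap = mu + sigma * norm.ppf(alpha)
            res = linprog(-benefit,                      # linprog minimizes
                          A_ub=[[1.0, 1.0]], b_ub=[supply_cap],
                          bounds=[(0, 60), (0, 70)])     # demand ceilings
            print(f"alpha={alpha:.2f}: cap={supply_cap:.1f}, "
                  f"x={res.x.round(1)}, benefit={-res.fun:.1f}")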

  8. Stochastic investigation of two-dimensional cross sections of rocks based on the climacogram

    NASA Astrophysics Data System (ADS)

    Kalamioti, Anna; Dimitriadis, Panayiotis; Tzouka, Katerina; Lerias, Eleutherios; Koutsoyiannis, Demetris

    2016-04-01

    The statistical properties of soil and rock formations are essential for the characterization of porous medium geological structure as well as for the prediction of its transport properties in groundwater modelling. We investigate two-dimensional cross sections of rocks in terms of the stochastic structure of their morphology, quantified by the climacogram (i.e., the variance of the averaged process vs. scale). The analysis is based on both microscale and macroscale data, specifically Scanning Electron Microscope (SEM) pictures and field photos, respectively. We identify and quantify the stochastic properties, with emphasis on the type of decay at large scales (exponential or power-type, the latter also known as Hurst-Kolmogorov behaviour). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
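
    A minimal sketch of a two-dimensional climacogram computation, run here on a synthetic white-noise field rather than SEM data: the field is partitioned into k × k blocks, each block is averaged, and the variance of the block averages is tracked across scales; white noise gives a log-log slope of about −2, while Hurst-Kolmogorov behaviour would show a milder decay.

        import numpy as np

        def climacogram(field, scales):
            """Variance of the locally averaged field versus averaging scale.

            field : 2D array (e.g., a binarised image of a rock section)
            scales: block sizes k (pixels); each k x k block is averaged and
                    the variance of the block averages is returned per scale.
            """
            out = []
            for k in scales:
                ny, nx = (s // k for s in field.shape)
                blocks = field[:ny * k, :nx * k].reshape(ny, k, nx, k).mean(axis=(1, 3))
                out.append(blocks.var())
            return np.array(out)

        rng = np.random.default_rng(3)
        field = rng.normal(size=(512, 512))
        scales = [1, 2, 4, 8, 16, 32]
        gamma = climacogram(field, scales)
        slopes = np.diff(np.log(gamma)) / np.diff(np.log(scales))
        print("log-log slopes:", slopes.round(2))   # approx -2 for white noise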

  9. Robust H infinity-stabilization design in gene networks under stochastic molecular noises: fuzzy-interpolation approach.

    PubMed

    Chen, Bor-Sen; Chang, Yu-Te; Wang, Yu-Chao

    2008-02-01

    Molecular noises in gene networks come from intrinsic fluctuations, transmitted noise from upstream genes, and the global noise affecting all genes. Knowledge of molecular noise filtering in gene networks is crucial to understanding signal processing in gene networks and to designing noise-tolerant gene circuits for synthetic biology. A nonlinear stochastic dynamic model is proposed to describe a gene network under intrinsic molecular fluctuations and extrinsic molecular noises. The stochastic molecular-noise-processing scheme by which gene regulatory networks attenuate these molecular noises is investigated from the nonlinear robust stabilization and filtering perspective. In order to improve robust stability and noise filtering, a robust gene circuit design for gene networks is proposed based on the nonlinear robust H infinity stochastic stabilization and filtering scheme, which requires solving a nonlinear Hamilton-Jacobi inequality. To avoid solving these complicated nonlinear stabilization and filtering problems, a fuzzy approximation method is employed to interpolate several linear stochastic gene networks at different operating points via fuzzy bases, approximating the nonlinear stochastic gene network. In this setting, the linear matrix inequality technique can be employed to simplify the gene circuit design problem, improving the robust stability and molecular-noise-filtering ability of gene networks against intrinsic molecular fluctuations and extrinsic molecular noises. PMID:18270080

  10. Beam Based Measurements for Stochastic Cooling Systems at Fermilab

    SciTech Connect

    Lebedev, V.A.; Pasquinelli, R.J.; Werkema, S.J.; /Fermilab

    2007-09-13

    Improvement of antiproton stacking rates has been pursued for the last twenty years at Fermilab. The last twelve months have been dedicated to improving the computer model of the Stacktail system. The production of antiprotons encompasses the use of the entire accelerator chain with the exception of the Tevatron. In the Antiproton Source, two storage rings, the Debuncher and the Accumulator, are responsible for the accumulation of antiprotons in quantities that can exceed 2 × 10^12; more routinely, stacks of 5 × 10^11 antiprotons are accumulated before being transferred to the Recycler ring. Since the beginning of this recent enterprise, peak accumulation rates have increased from 2 × 10^11 to greater than 2.3 × 10^11 antiprotons per hour. A goal of 3 × 10^11 per hour has been established. Improvements to the stochastic cooling systems are but a part of this current effort. This paper will discuss Stacktail system measurements and experienced system limitations.

  11. A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach

    NASA Astrophysics Data System (ADS)

    Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent

    2016-04-01

    The stochastic mean-field (SMF) approach allows one to treat correlations beyond mean-field using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders while retaining only specific terms of the general BBGKY hierarchy. The use of the simplified BBGKY hierarchy is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak coupling regime, to gain physical insight into effects beyond mean-field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak and strong coupling regimes. In the strong coupling regime, it can only be used for short-time evolution; in that case, it gives information on the evolution time-scale close to a saddle point associated with a quantum phase transition. For long-time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and only the full SMF with initial sampling leads to reasonable results.

  12. The Interaction Between Plant Life History Traits and the Riverine Landscape: a Stochastic Simulation Approach

    NASA Astrophysics Data System (ADS)

    Hatfield, C.; Shao, N.

    2005-05-01

    At the level of the watershed, the riverine habitat represents a spatially distributed yet interconnected landscape element. The spatial organization of the riverine landscape determines the distribution and extent of habitats, and the interconnectivity influences how species access riverine habitat elements. A central question is which characteristics of a species affect its performance in the context of a river network, and how. We used a spatially explicit, stochastic simulation modeling approach to explore how the interconnectivity and complexity of the stream network potentially interact with life history traits in determining riparian plant species persistence and abundance. We varied life history traits and stream network complexity in a factorial design. For each factorial combination, a new species was introduced to an established riparian community. We evaluated the new species and the community responses using various metrics, including rate of spread and abundance. Interaction strengths varied between different life history traits depending on network complexity, but the persistence and success of a new species were determined by the combination of its life history traits rather than by any single trait or small subset of traits. This work underscores the need to better understand life histories through multiple pathways of investigation, including models, field studies and experimental approaches.

  13. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    PubMed

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within-subject parameter variability (WSV) in pharmacometric models. WSV has previously been modeled successfully using inter-occasion variability (IOV) and also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to further explore the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The simulation results confirmed the gain from introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches offer strategies to characterize WSV and are not restricted to count data. PMID:27165151

  14. A stochastic vision-based model inspired by zebrafish collective behaviour in heterogeneous environments

    PubMed Central

    Collignon, Bertrand; Séguret, Axel; Halloy, José

    2016-01-01

    Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectories according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in the biological, physical and robotic sciences. PMID:26909173
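
    A minimal sketch of the modeling distinction drawn here (illustrative cue directions, weights and concentration, not the authors' parameterisation): the classical approach averages influence vectors into one compromise heading, while the PDF-based approach samples the new heading from a mixture built on the perceived cues, so the fish sometimes commits to a non-compromise direction.

        import numpy as np

        rng = np.random.default_rng(29)

        cues = np.array([0.2, 1.9, -2.5])      # perceived cue directions (rad)
        weights = np.array([0.5, 0.3, 0.2])    # relative attractiveness
        kappa = 4.0                            # concentration of each lobe

        # Vector-summation model: a single deterministic compromise direction.
        vec = (weights[:, None] * np.c_[np.cos(cues), np.sin(cues)]).sum(axis=0)
        print(f"summation heading: {np.arctan2(vec[1], vec[0]):.2f} rad")

        # PDF-based model: pick a lobe by weight, then draw a von Mises angle.
        lobe = rng.choice(len(cues), p=weights)
        heading = rng.vonmises(cues[lobe], kappa)
        print(f"sampled heading:   {heading:.2f} rad")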

  15. A stochastic vision-based model inspired by zebrafish collective behaviour in heterogeneous environments.

    PubMed

    Collignon, Bertrand; Séguret, Axel; Halloy, José

    2016-01-01

    Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectories according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in the biological, physical and robotic sciences. PMID:26909173

  16. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    SciTech Connect

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  17. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional concatenation schema, which links features of different views into a long vector, is inappropriate because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification. PMID:25415948

  18. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
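
    For reference, a minimal sketch of the standard stochastic (perturbed-observation) EnKF analysis step that the MC-based approach relies on; the ME alternative described above replaces the sample covariance below with moments obtained directly from the stochastic flow equations. Shapes and values are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)

        def enkf_update(ensemble, obs, H, obs_err_std):
            """One stochastic EnKF analysis step.

            ensemble   : (n_state, n_ens) forecast ensemble (e.g., stacked
                         log-conductivities and heads)
            obs        : (n_obs,) observations (e.g., heads at wells)
            H          : (n_obs, n_state) linear observation operator
            obs_err_std: observation error standard deviation
            """
            n_obs, n_ens = len(obs), ensemble.shape[1]
            # Sample covariance: the source of inbreeding for small n_ens.
            A = ensemble - ensemble.mean(axis=1, keepdims=True)
            P = A @ A.T / (n_ens - 1)
            R = obs_err_std**2 * np.eye(n_obs)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
            # Perturbed observations, one realisation per member.
            D = obs[:, None] + obs_err_std * rng.normal(size=(n_obs, n_ens))
            return ensemble + K @ (D - H @ ensemble)

        # Tiny synthetic check: 3-element state, one observation of element 0.
        ens = rng.normal(loc=1.0, scale=0.5, size=(3, 50))
        H = np.array([[1.0, 0.0, 0.0]])
        updated = enkf_update(ens, np.array([1.8]), H, obs_err_std=0.1)
        print("prior mean:    ", ens.mean(axis=1).round(2))
        print("posterior mean:", updated.mean(axis=1).round(2))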

  19. SLFP: A stochastic linear fractional programming approach for sustainable waste management

    SciTech Connect

    Zhu, H.; Huang, G.H.

    2011-12-15

    Highlights: > A new fractional programming (SLFP) method is developed for waste management. > SLFP can solve ratio optimization problems associated with random inputs. > A case study of waste flow allocation demonstrates its applicability. > SLFP helps compare objectives of two aspects and reflect system efficiency. > This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.

  20. Evolutionary dynamics of imatinib-treated leukemic cells by stochastic approach

    NASA Astrophysics Data System (ADS)

    Pizzolato, Nicola; Valenti, Davide; Adorno, Dominique; Spagnolo, Bernardo

    2009-09-01

    The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a statistical approach. Cancer progression is explored by applying a Monte Carlo method to simulate the stochastic behavior of cell reproduction and death in a population of blood cells which can experience genetic mutations. In CML, front-line therapy is the tyrosine kinase inhibitor imatinib, which strongly affects the reproduction of leukemic cells only. In this work, we analyze the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. Several scenarios of the evolutionary dynamics of imatinib-treated leukemic cells are described as a consequence of the efficacy of the different modelled therapies. We show how the patient response to therapy changes when a high mutation rate from healthy to cancerous cells is present; our results are in agreement with clinical observations. Unfortunately, development of resistance to imatinib is observed in a fraction of patients, whose blood cells are characterized by an increasing number of genetic alterations. We find that the occurrence of resistance to therapy can be related to a progressive increase of deleterious mutations.

  1. Growth of aerosols in Titan's atmosphere and related time scales - A stochastic approach

    NASA Astrophysics Data System (ADS)

    Rannou, P.; Cabane, M.; Chassefiere, E.

    1993-05-01

    The evolution of Titan's aerosols is studied from their production altitude down to the ground using a stochastic approach. A background aerosol distribution, obtained from previous Eulerian modelling, is assumed, and the evolution of a 'tagged' particle, released near the formation altitude, is followed by simulating in a random way its growth through coagulation with particles of the background distribution. The two distinct growth stages proposed by Cabane et al. (1992) to explain the formation of monomers and subsequent aggregates are confirmed. The first stage may be divided into two parts. First, within roughly one terrestrial day, particles grow mainly through collisions with larger particles; they reach the size of a monomer through typically one to five such collisions. Second, within a few terrestrial days to roughly one terrestrial month, particles evolve mainly by collisions with continuously created small particles and acquire their compact spherical structure. In the second stage, whose duration is roughly 30 terrestrial years, or one of Titan's seasonal cycles, particles grow by cluster-cluster aggregation during their fall through the atmosphere and reach, at low stratospheric levels, a typical radius of 0.4-0.5 micron.

  2. Stochastic master equation approach for analysis of remote entanglement with Josephson parametric converter amplifier

    NASA Astrophysics Data System (ADS)

    Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.

    2015-03-01

    In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came, and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of the two traveling waves provided by the JPC can act as a which-path information eraser. Using a stochastic master equation approach, we demonstrate the probabilistic production of heralded entangled states and show that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of the driving fields. Additionally, we derive measurement rates and measurement optimization strategies, and discuss the effects of finite amplification gain, cavity losses, and qubit relaxation and dephasing. Work supported by IARPA, ARO and NSF.

  3. Localized dynamic kinetic-energy-based models for stochastic coherent adaptive large eddy simulation

    NASA Astrophysics Data System (ADS)

    De Stefano, Giuliano; Vasilyev, Oleg V.; Goldstein, Daniel E.

    2008-04-01

    Stochastic coherent adaptive large eddy simulation (SCALES) is an extension of the large eddy simulation approach in which a wavelet filter-based dynamic grid adaptation strategy is employed to solve for the most "energetic" coherent structures in a turbulent field while modeling the effect of the less energetic background flow. In order to take full advantage of the ability of the method in simulating complex flows, the use of localized subgrid-scale models is required. In this paper, new local dynamic one-equation subgrid-scale models based on both eddy-viscosity and non-eddy-viscosity assumptions are proposed for SCALES. The models involve the definition of an additional field variable that represents the kinetic energy associated with the unresolved motions. This way, the energy transfer between resolved and residual flow structures is explicitly taken into account by the modeling procedure without an equilibrium assumption, as in the classical Smagorinsky approach. The wavelet-filtered incompressible Navier-Stokes equations for the velocity field, along with the additional evolution equation for the subgrid-scale kinetic energy variable, are numerically solved by means of the dynamically adaptive wavelet collocation solver. The proposed models are tested for freely decaying homogeneous turbulence at Reλ=72. It is shown that the SCALES results, obtained with less than 0.5% of the total nonadaptive computational nodes, closely match reference data from direct numerical simulation. In contrast to classical large eddy simulation, where the energetic small scales are poorly simulated, the agreement holds not only in terms of global statistical quantities but also in terms of spectral distribution of energy and, more importantly, enstrophy all the way down to the dissipative scales.

  4. CONVOLUTION APPROACH TO EVALUATING INTAKE DISTRIBUTIONS FOR INHALED PLUTONIUM DIOXIDE FOR THE STOCHASTIC INTAKE PARADIGM

    EPA Science Inventory

    For airborne toxic particles, the stochastic intake (SI) paradigm involves relatively low numbers of particles that are presented for inhalation. Each person at risk may inhale a different number of particles, including zero particles. For such exposure scenarios, probabilistic d...

  5. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties in system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented as a list of PIs, which are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties: for each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points, and the Monte Carlo simulation method is used to generate scenarios from the ECDF. The wind power scenarios are then incorporated into a stochastic security-constrained unit commitment (SCUC) model, and a heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented, and their results are presented and discussed together. Generation costs and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than the deterministic ones and thus decreases the risk in system operations of smart grids. PMID:25532191
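
    A minimal sketch of the scenario generation step described here (illustrative quantile values, not the paper's data): a piecewise-linear ECDF is fitted to the hourly quantile points and sampled by inverse transform.

        import numpy as np

        rng = np.random.default_rng(11)

        # Suppose a PI method has produced, for one hour, these wind power
        # quantiles (illustrative, in MW).
        probs     = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])
        quantiles = np.array([21.0, 26.0, 34.0, 45.0, 57.0, 68.0, 74.0])

        def sample_scenarios(n):
            """Inverse-transform sampling from the piecewise-linear ECDF."""
            u = rng.uniform(probs[0], probs[-1], size=n)   # stay inside known range
            return np.interp(u, probs, quantiles)

        scenarios = sample_scenarios(1000)
        print(f"scenario mean = {scenarios.mean():.1f} MW")
        print(f"empirical P10-P90 = {np.percentile(scenarios, [10, 90]).round(1)}")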

  6. Random Walk-Based Solution to Triple Level Stochastic Point Location Problem.

    PubMed

    Jiang, Wen; Huang, De-Shuang; Li, Shenghong

    2016-06-01

    This paper considers the stochastic point location (SPL) problem as a learning mechanism trying to locate a point on a real line via interacting with a random environment. Compared to the stochastic environment in the literature, which confines the learning mechanism to moving in two directions, i.e., left or right, this paper introduces a general triple level stochastic environment which not only tells the learning mechanism to go left or right, but can also inform it to stay unmoved. As we prove in this paper, the environment reported in the previous literature is just a special case of the triple level environment. A new learning algorithm, named the random walk-based triple level learning algorithm, is proposed to locate an unknown point under this new type of environment. To examine the performance of this algorithm, we divided triple level SPL problems into four distinct scenarios according to the properties of the unknown point and the stochastic environment. We proved that the proposed learning algorithm still works properly, whether the unknown point is static or evolving with time, even under a triple level nonstationary environment and even when the convergence condition is not satisfied for some time, cases rarely considered in existing SPL problems. Extensive experiments validate our theoretical analyses and demonstrate that the proposed learning algorithms are quite effective and efficient. PMID:26168455
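
    A minimal caricature of the triple level setting (not the authors' algorithm, whose update scheme is more elaborate): the environment answers left, right, or stay, is truthful with probability p > 1/2, and a shrinking-step random walk still homes in on the unknown point. All values are illustrative.

        import numpy as np

        rng = np.random.default_rng(13)

        target, p, step = 0.73, 0.8, 0.05
        x = 0.5                                   # initial estimate on [0, 1]

        def environment(x):
            # True answer: stay (0) near the target, else the true direction.
            if abs(x - target) < step / 2:
                true = 0
            else:
                true = 1 if target > x else -1
            if rng.random() < p:
                return true                        # truthful response
            return rng.choice([d for d in (-1, 0, 1) if d != true])  # lie

        for _ in range(2000):
            x = np.clip(x + step * environment(x), 0.0, 1.0)
            step *= 0.999                          # slowly shrink the step size
        print(f"estimate = {x:.3f}, target = {target}")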

  7. Modifying stochastic slip distributions based on dynamic simulations for use in probabilistic tsunami hazard evaluation.

    NASA Astrophysics Data System (ADS)

    Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene

    2016-04-01

    Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard assessment. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately, dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates the slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile of the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution is accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are approximated as moment magnitudes on the fault plane based on empirical scaling laws, with the ensemble producing a magnitude range of 7.8 - 9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude, we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth- and magnitude-dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant

  8. Control of confidence domains in the problem of stochastic attractors synthesis

    SciTech Connect

    Bashkirtseva, Irina

    2015-03-10

    A nonlinear stochastic control system is considered. We discuss the problem of synthesising stochastic attractors and suggest a constructive approach based on the design of the stochastic sensitivity and the corresponding confidence domains. Details of this approach are demonstrated for the problem of controlling confidence ellipses near an equilibrium. An example of the control of the stochastic Van der Pol equation is presented.

  9. Quantification of Hepatitis C Virus Cell-to-Cell Spread Using a Stochastic Modeling Approach

    PubMed Central

    Martin, Danyelle N.; Perelson, Alan S.; Dahari, Harel

    2015-01-01

    ABSTRACT It has been proposed that viral cell-to-cell transmission plays a role in establishing and maintaining chronic infections. Thus, understanding the mechanisms and kinetics of cell-to-cell spread is fundamental to elucidating the dynamics of infection and may provide insight into factors that determine chronicity. Because hepatitis C virus (HCV) spreads from cell to cell and has a chronicity rate of up to 80% in exposed individuals, we examined the dynamics of HCV cell-to-cell spread in vitro and quantified the effect of inhibiting individual host factors. Using a multidisciplinary approach, we performed HCV spread assays and assessed the appropriateness of different stochastic models for describing HCV focus expansion. To evaluate the effect of blocking specific host cell factors on HCV cell-to-cell transmission, assays were performed in the presence of blocking antibodies and/or small-molecule inhibitors targeting different cellular HCV entry factors. In all experiments, HCV-positive cells were identified by immunohistochemical staining and the number of HCV-positive cells per focus was assessed to determine focus size. We found that HCV focus expansion can best be explained by mathematical models assuming focus size-dependent growth. Consistent with previous reports suggesting that some factors impact HCV cell-to-cell spread to different extents, modeling results estimate a hierarchy of efficacies for blocking HCV cell-to-cell spread when targeting different host factors (e.g., CLDN1 > NPC1L1 > TfR1). This approach can be adapted to describe focus expansion dynamics under a variety of experimental conditions as a means to quantify cell-to-cell transmission and assess the impact of cellular factors, viral factors, and antivirals. IMPORTANCE The ability of viruses to efficiently spread by direct cell-to-cell transmission is thought to play an important role in the establishment and maintenance of viral persistence. As such, elucidating the dynamics of cell

  10. Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique

    NASA Astrophysics Data System (ADS)

    Mahootchi, M.; Fattahi, M.; Khakbazan, E.

    2011-11-01

    This paper extends two models for the stochastic multi-commodity facility location problem, formulated as two-stage stochastic programs. As a main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I, while a risk measure using Conditional Value-at-Risk (CVaR) is embedded in optimization model II. The structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers; the second-stage decisions are the production amounts and the volumes transported between plants and customers.
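
    A minimal sketch of generating correlated demand scenarios with LHS (one common way of combining the two ingredients named in the abstract, not necessarily the authors' method; the Cholesky correlation step slightly perturbs the per-dimension stratification). The means, standard deviations and correlation below are hypothetical.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(17)

        def lhs_correlated_demands(n_scen, mean, std, corr):
            """Latin Hypercube scenarios for correlated normal demands."""
            d = len(mean)
            # LHS: one point per stratum, randomly placed, randomly permuted.
            u = (rng.permuted(np.tile(np.arange(n_scen), (d, 1)), axis=1)
                 + rng.uniform(size=(d, n_scen))) / n_scen
            z = norm.ppf(u)                       # standard normal scores
            L = np.linalg.cholesky(corr)          # impose target correlation
            return (mean[:, None] + std[:, None] * (L @ z)).T

        mean = np.array([100.0, 80.0])             # demands of two customer zones
        std = np.array([20.0, 10.0])
        corr = np.array([[1.0, 0.6], [0.6, 1.0]])
        scen = lhs_correlated_demands(500, mean, std, corr)
        print("sample corr:", np.corrcoef(scen.T)[0, 1].round(2))   # approx 0.6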

  11. Stochastic linearisation approach to performance analysis of feedback systems with asymmetric nonlinear actuators and sensors

    NASA Astrophysics Data System (ADS)

    Kabamba, P. T.; Meerkov, S. M.; Ossareh, H. R.

    2015-01-01

    This paper considers feedback systems with asymmetric (i.e., non-odd) nonlinear actuators and sensors. While the stability of such systems can be investigated using the theory of absolute stability and its extensions, the current paper provides a method for their performance analysis, i.e., reference tracking and disturbance rejection. As in the case of symmetric nonlinearities considered in earlier work, the development is based on the method of stochastic linearisation (which is akin to describing functions, but intended to study general properties of dynamics rather than periodic regimes). Unlike the symmetric case, however, the nonlinearities considered here must be approximated not only by a quasilinear gain but also by a quasilinear bias. This paper derives transcendental equations for the quasilinear gain and bias, provides necessary and sufficient conditions for the existence of their solutions, and, using simulations, investigates the accuracy of these solutions as a tool for predicting the quality of reference tracking and disturbance rejection. The method developed is then applied to the performance analysis of specific systems, and the effect of asymmetry on their behaviour is investigated. In addition, this method is used to explain the recently discovered phenomenon of noise-induced loss of tracking in feedback systems with PI controllers, anti-windup, and sensor noise.
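
    A minimal sketch of the quasilinear gain and bias for an asymmetric saturation with Gaussian input (the saturation limits are illustrative, not taken from the paper): b = E[f(u)] and N = E[(u − m)f(u)]/σ². In closed loop, m and σ themselves depend on b and N through the plant, which is what produces the transcendental equations the paper derives.

        import numpy as np
        from scipy.integrate import quad

        # Stochastic linearisation: replace f(u) by b + N*(u - m) for
        # Gaussian input u ~ N(m, s^2).
        def f(u, lo=-0.5, hi=2.0):          # asymmetric saturation (illustrative)
            return np.clip(u, lo, hi)

        def gauss_expect(g, m, s):
            pdf = lambda u: np.exp(-0.5 * ((u - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
            return quad(lambda u: g(u) * pdf(u), m - 8 * s, m + 8 * s)[0]

        def quasilinear(m, s):
            b = gauss_expect(f, m, s)                                  # bias
            N = gauss_expect(lambda u: (u - m) * f(u), m, s) / s ** 2  # gain
            return b, N

        # Asymmetry makes the bias b differ from f(m), unlike the symmetric case.
        for m in (0.0, 1.0):
            b, N = quasilinear(m, s=1.0)
            print(f"m={m}: bias={b:.3f} (f(m)={f(m):.3f}), gain={N:.3f}")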

  12. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    PubMed Central

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. PMID:26478601

  13. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  14. Ultra-Fast Data-Mining Hardware Architecture Based on Stochastic Computing

    PubMed Central

    Oliver, Antoni; Alomar, Miquel L.

    2015-01-01

    Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists of a parallel hardware architecture that implements a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274

  15. Ultra-fast data-mining hardware architecture based on stochastic computing.

    PubMed

    Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L

    2015-01-01

    Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists of a parallel hardware architecture that implements a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274
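
    Since the two records above describe the same architecture, one illustration serves both. The snippet below is a minimal software emulation of the underlying stochastic-computing arithmetic, not the paper's hardware design: a value in [0, 1] is encoded as the probability that a pulse-stream bit is 1, an AND gate then multiplies independent streams, and an XOR of streams that share the same random numbers estimates the absolute difference |p - q| of the kind used in similarity searches. Stream length and values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000                       # stream length; precision scales like 1/sqrt(N)

    def encode(p, n=N):
        """Unipolar stochastic encoding: each bit is 1 with probability p."""
        return rng.random(n) < p

    a, b = encode(0.8), encode(0.3)
    print((a & b).mean())             # AND on independent streams multiplies: ~0.24

    r = rng.random(N)                 # shared randomness -> maximally correlated streams
    x, y = (r < 0.8), (r < 0.3)
    print((x ^ y).mean())             # XOR then estimates |0.8 - 0.3| = 0.5
    ```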

  16. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    SciTech Connect

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
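
    As a toy illustration of the stochastic pit model class described above (and not the LLNL model itself), the sketch below treats pit initiation as a Poisson process in time and gives each pit power-law depth growth with a random growth constant; Monte Carlo repetition then yields a distribution of wall-penetration times. Every parameter value is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    lam = 0.05          # pit initiation rate per container-year (hypothetical)
    n_exp = 0.5         # growth exponent: depth(t) = k * (t - t0)**n_exp  [mm, years]
    wall = 10.0         # container wall thickness, mm
    T = 10_000.0        # simulation horizon, years

    def first_penetration():
        """Time at which the first pit grows through the wall, for one container."""
        t, t_pen = rng.exponential(1.0 / lam), []
        while t < T:
            k = rng.lognormal(np.log(0.8), 0.5)      # pit-to-pit growth variability
            t_pen.append(t + (wall / k) ** (1.0 / n_exp))
            t = t + rng.exponential(1.0 / lam)
        return min(t_pen, default=np.inf)

    fails = np.array([first_penetration() for _ in range(2000)])
    print(np.median(fails), (fails < 1000.0).mean())  # median life; P(failure < 1000 yr)
    ```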

  17. FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties ...

  18. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
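
    The two EPA records above describe the same fractal interpolation idea. A standard way to realise it, sketched below under the assumption of fractional-Brownian-motion scaling (the abstracts do not specify the exact construction), is midpoint displacement: repeatedly insert midpoints between data points and perturb them with Gaussian noise whose standard deviation scales as the point spacing raised to the Hurst exponent H, so the interpolant stays irregular instead of smooth. All sample values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fractal_interpolate(x, y, H=0.7, sigma=0.02, levels=6):
        """Midpoint-displacement interpolation that preserves irregularity.
        Perturbation std scales as dx**H (fractional-Brownian-motion scaling)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        for _ in range(levels):
            dx = np.diff(x)
            xm = 0.5 * (x[:-1] + x[1:])
            ym = 0.5 * (y[:-1] + y[1:]) + sigma * dx**H * rng.standard_normal(xm.size)
            x = np.insert(x, np.arange(1, x.size), xm)   # interleave midpoints
            y = np.insert(y, np.arange(1, y.size), ym)
        return x, y

    # sparse porosity-like samples at irregular depths (hypothetical values)
    xs, ys = fractal_interpolate([0.0, 10.0, 25.0, 40.0], [0.21, 0.35, 0.18, 0.29])
    ```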

  19. Desynchronization of stochastically synchronized chemical oscillators

    SciTech Connect

    Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh; Showalter, Kenneth; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan

    2015-12-15

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  20. Desynchronization of stochastically synchronized chemical oscillators.

    PubMed

    Snari, Razan; Tinsley, Mark R; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth

    2015-12-01

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed. PMID:26723155

  1. Desynchronization of stochastically synchronized chemical oscillators

    NASA Astrophysics Data System (ADS)

    Snari, Razan; Tinsley, Mark R.; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth

    2015-12-01

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  2. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    NASA Astrophysics Data System (ADS)

    Baars, Woutijn J.; Tinney, Charles E.

    2014-05-01

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.

  3. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    SciTech Connect

    Baars, Woutijn J.; Tinney, Charles E.

    2014-05-15

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
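
    Both records above describe the same method. Its first-order (linear) part reduces to spectral Linear Stochastic Estimation, which is easy to sketch: the linear transfer kernel between an input (in the paper, a POD coefficient) and an output (a far-field pressure) is the ratio of cross- to auto-spectral densities. The snippet below estimates that kernel for a synthetic system with a weak quadratic nonlinearity; the paper's higher-order kernels would additionally require bispectral estimates, which are omitted here, and all signals are stand-ins.

    ```python
    import numpy as np
    from scipy.signal import csd, welch, lfilter

    rng = np.random.default_rng(0)
    fs, N = 1000.0, 2**16
    x = rng.standard_normal(N)                 # "input", e.g. a POD coefficient series

    # synthetic single-input/output system: linear filter plus a weak quadratic term
    y = lfilter([1.0, 0.5], [1.0, -0.3], x) + 0.1 * x**2

    f, Sxy = csd(x, y, fs=fs, nperseg=1024)    # cross-spectral density estimate
    f, Sxx = welch(x, fs=fs, nperseg=1024)     # input auto-spectral density estimate
    H1 = Sxy / Sxx                             # first-order (linear) transfer kernel
    print(H1[:4])
    ```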

  4. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    NASA Astrophysics Data System (ADS)

    Maerker, Michael; Bolus, Michael

    2014-05-01

    We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between the site locations and environmental indices. In order to assess the relations between the Neanderthal sites and environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT) and support vector machines. The stochastic models employ a learning algorithm to identify a model that best fits the relationship between the attribute set (the predictor variables, i.e. the environmental variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the model for the geo-archaeological applications and by helping to identify those aspects of the methodology that need improvements. The models' predictive performances were assessed by constructing the Receiver Operating Characteristics (ROC) curves for each Neanderthal class, both for training and test data. In a ROC curve the Sensitivity is plotted over the False Positive Rate (1-Specificity) for all possible cut-off points, and the quality of a ROC curve is quantified by the area under the curve. The dependent or target variable in this study is the location of Neanderthal sites described by latitude and longitude. The information on the site locations was collected from the literature and the authors' own research. All sites were checked for locational accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in model performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show a specific spatial distribution. However, all models show a wide correspondence in the selection of the most important predictor variables generally showing less

  5. Pinning distributed synchronization of stochastic dynamical networks: a mixed optimization approach.

    PubMed

    Tang, Yang; Gao, Huijun; Lu, Jianquan; Kurths, Jürgen

    2014-10-01

    This paper is concerned with the problem of pinning synchronization of nonlinear dynamical networks with multiple stochastic disturbances. Two kinds of pinning schemes are considered: 1) pinned nodes are fixed along the time evolution and 2) pinned nodes are switched from time to time according to a set of Bernoulli stochastic variables. Using Lyapunov function methods and stochastic analysis techniques, several easily verifiable criteria are derived for the problem of pinning distributed synchronization. For the case of fixed pinned nodes, a novel mixed optimization method is developed to select the pinned nodes and find feasible solutions, which is composed of a traditional convex optimization method and a constraint optimization evolutionary algorithm. For the case of switching pinning scheme, upper bounds of the convergence rate and the mean control gain are obtained theoretically. Simulation examples are provided to show the advantages of our proposed optimization method over previous ones and verify the effectiveness of the obtained results. PMID:25291734

  6. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open source program code is provided on

  7. Comparison of Two Statistical Approaches to a Solution of the Stochastic Radiative Transfer Equation

    NASA Astrophysics Data System (ADS)

    Kirnos, I. V.; Tarasenkov, M. V.; Belov, V. V.

    2016-04-01

    The method of direct simulation of photon trajectories in a stochastic medium is compared with the method of closed equations suggested by G. A. Titov. A comparison is performed for a model of the stochastic medium in the form of a cloudy field of constant thickness comprising rectangular clouds whose boundaries are determined by a stationary Poisson flow of points. It is demonstrated that the difference between the calculated results can reach 20-30%; however, in some cases (for some sets of initial data) the difference is limited to 5% irrespective of the cloud cover index.

  8. Efficient uncertainty quantification in stochastic finite element analysis based on functional principal components

    NASA Astrophysics Data System (ADS)

    Bianchini, Ilaria; Argiento, Raffaele; Auricchio, Ferdinando; Lanzarone, Ettore

    2015-09-01

    The great influence of uncertainties on the behavior of physical systems has always drawn attention to the importance of a stochastic approach to engineering problems. Accordingly, in this paper, we address the problem of solving a Finite Element analysis in the presence of uncertain parameters. We consider an approach in which several solutions of the problem are obtained in correspondence of parameter samples, and propose a novel non-intrusive method, which exploits functional principal component analysis, to keep the computational effort acceptable. The proposed approach allows constructing an optimal basis of the solution space and projecting the full Finite Element problem into a smaller space spanned by this basis. Solving the problem in this reduced space is computationally convenient, and very good approximations are obtained, as shown by upper bounding the error between the full Finite Element solution and the reduced one. Finally, we assess the applicability of the proposed approach through different test cases, obtaining satisfactory results.
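
    The basis construction at the core of this approach is easy to sketch. The snippet below is a toy stand-in: random smooth fields replace actual Finite Element solutions, and the projection of the FE operator and the error bound from the paper are omitted. It builds the optimal basis from snapshots by singular value decomposition, which is the discrete form of functional principal component analysis, and projects a new solution onto the reduced space.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_snap = 2000, 100
    xg = np.linspace(0.0, 1.0, n_dof)[:, None]

    # toy snapshot matrix: columns stand in for FE solutions at parameter samples
    S = np.sin(np.pi * xg * rng.uniform(1, 4, n_snap)) * rng.uniform(0.5, 2, n_snap)

    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9 % of variance
    B = U[:, :r]                                  # reduced (principal component) basis

    u_new = np.sin(2.2 * np.pi * xg[:, 0])        # a "new" full-order solution
    u_proj = B @ (B.T @ u_new)                    # projection onto the reduced space
    print(r, np.linalg.norm(u_new - u_proj) / np.linalg.norm(u_new))
    ```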

  9. A Comparison of Three Stochastic Approaches for Parameter Estimation and Prediction of Steady-State Groundwater Flow: Nonlocal Moment Equations and Monte Carlo Method Coupled with Ensemble Kalman Filter and Geostatistical Stochastic Inversion.

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.

    2014-12-01

    We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way prior statistical moment estimates (PSME) (required to build the Kalman gain matrix) are obtained. In the first approach, the Monte Carlo method is employed to compute PSME of the variables and parameters; we denote this approach by EnKFMC. In the second approach, PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach by EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach by IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner, estimating Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods to estimate Y and h in steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate, located at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% predictive coverage, although both EnKF variants were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute error. Coupling the EnKF methods with kriging to estimate Y reduces to one fourth the CPU time required for data
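
    The step common to the two EnKF variants is the analysis update built from the prior statistical moment estimates; the variants differ only in whether the covariance comes from Monte Carlo or from the nonlocal moment equations. A minimal sketch of that update is below, assuming for brevity a linearised observation operator H mapping log-conductivity to heads; all array shapes and values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_update(Y, H, d_obs, sigma):
        """One EnKF analysis step. Y: (n_param, n_ens) ensemble of log-conductivity;
        H: (n_obs, n_param) linearised observation operator; d_obs: observed heads."""
        n_obs, n_ens = H.shape[0], Y.shape[1]
        D = d_obs[:, None] + sigma * rng.standard_normal((n_obs, n_ens))  # perturbed obs
        Yp = Y - Y.mean(axis=1, keepdims=True)
        P = (Yp @ Yp.T) / (n_ens - 1)        # prior covariance: the PSME (here from MC)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + sigma**2 * np.eye(n_obs))
        return Y + K @ (D - H @ Y)

    Y = rng.standard_normal((500, 100))      # 500 parameters, 100 ensemble members
    H = rng.standard_normal((8, 500)) / 500  # toy observation operator (8 head data)
    d = rng.standard_normal(8)
    Y_post = enkf_update(Y, H, d, sigma=0.1)
    ```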

  10. Quantifying the effects of three-dimensional subsurface heterogeneity on Hortonian runoff processes using a coupled numerical, stochastic approach

    NASA Astrophysics Data System (ADS)

    Maxwell, Reed M.; Kollet, Stefan J.

    2008-05-01

    The impact of three-dimensional subsurface heterogeneity in the saturated hydraulic conductivity on hillslope runoff generated by excess infiltration (so-called Hortonian runoff) is examined. A fully coupled, parallel subsurface-overland flow model is used to simulate runoff from an idealized hillslope. Ensembles of correlated, Gaussian random fields of saturated hydraulic conductivity are used to create uncertainty in spatial structure. A large number of cases are simulated in a parametric manner with the variance of the hydraulic conductivity varied over orders of magnitude. These cases include rainfall rates above, equal to, and below the geometric mean of the hydraulic conductivity distribution. These cases are also compared to theoretical representations of runoff production based on simple assumptions regarding (1) the rainfall rate and the value of hydraulic conductivity in the surface cell, using a spatially-indiscriminant approach; and (2) a percolation-theory type approach to incorporate so-called runon. Simulations to test the ergodicity of hydraulic conductivity on hillslope runoff are also performed. Results show that three-dimensional stochastic representations of the subsurface hydraulic conductivity can create shallow perching, which has an important effect on runoff behavior that is different from previous two-dimensional analyses. The simple theories are shown to be very poor predictors of the fraction of saturated area that might produce runoff due to excess infiltration. It is also shown that ergodicity is reached only for a large number of integral scales (~30) and not achieved for cases where the rainfall rate is less than the geometric mean of the saturated hydraulic conductivity.
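
    The "spatially-indiscriminant" benchmark (1) tested in this paper is simple enough to sketch: generate a correlated lognormal conductivity field with a prescribed geometric mean and ln-variance, and count cells where rainfall exceeds conductivity. The snippet below does only that; the paper's actual coupled 3-D subsurface-overland flow simulations and the runon treatment are well beyond it. The smoothing length standing in for the integral scale, and all parameter values, are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    # correlated standard-normal field: smoothed white noise, rescaled to unit variance
    z = gaussian_filter(rng.standard_normal((200, 200)), sigma=8)
    z /= z.std()

    Kg, var_lnK = 10.0, 2.0                    # geometric mean (mm/h), ln-K variance
    K = Kg * np.exp(np.sqrt(var_lnK) * z)      # lognormal saturated conductivity field

    for rain in (5.0, 10.0, 20.0):             # below, at, and above the geometric mean
        print(rain, (K < rain).mean())         # fraction of cells producing runoff
    ```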

  11. Stochastic cooling in RHIC

    SciTech Connect

    Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of the bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  12. The design and testing of a first-order logic-based stochastic modeling language.

    SciTech Connect

    Pless, Daniel J.; Rammohan, Roshan; Chakrabarti, Chayan; Luger, George F.

    2005-06-01

    We have created a logic-based, Turing-complete language for stochastic modeling. Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic. Traditional Bayesian networks have limited expressive power, basically constrained to finite domains as in the propositional calculus. Our language contains variables that can capture general classes of situations, events and relationships. A first-order language is also able to reason about potentially infinite classes and situations using constructs such as hidden Markov models (HMMs). Our language uses an Expectation-Maximization (EM) type learning of parameters. This has a natural fit with the Loopy Belief Propagation used for inference since both can be viewed as iterative message passing algorithms. We present the syntax and theoretical foundations for our Loopy Logic language. We then demonstrate three examples of stochastic modeling and diagnosis that explore the representational power of the language. A mechanical fault detection example displays how Loopy Logic can model time-series processes using an HMM variant. A digital circuit example exhibits the probabilistic modeling capabilities, and finally, a parameter fitting example demonstrates the power for learning unknown stochastic values.

  13. Kernel-based regression of drift and diffusion coefficients of stochastic processes

    NASA Astrophysics Data System (ADS)

    Lamouroux, David; Lehnertz, Klaus

    2009-09-01

    To improve the estimation of drift and diffusion coefficients of stochastic processes in the case of a limited amount of usable data, due e.g. to non-stationarity of natural systems, we suggest using kernel-based instead of histogram-based regression. We propose a method for bandwidth selection and compare it to a widely used cross-validation method. Kernel-based regression reveals an enhanced ability to estimate drift and diffusion, especially for a small amount of data. This allows one to improve the resolvability of changes in complex dynamical systems, as evidenced by an exemplary analysis of electroencephalographic data recorded from a human epileptic brain.
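
    The estimator itself is compact. Below is a sketch of kernel-based (Nadaraya-Watson) regression of the first two conditional moments of the increments, which give drift and diffusion; it is checked against an Ornstein-Uhlenbeck process where the true answers are D1(x) = -x and D2(x) = 1. The bandwidth is fixed here, whereas the paper's contribution includes how to select it.

    ```python
    import numpy as np

    def drift_diffusion(x, dt, centers, h):
        """Nadaraya-Watson estimates D1(c) = E[dx|x=c]/dt, D2(c) = E[dx^2|x=c]/(2 dt)."""
        dx, xc = np.diff(x), x[:-1]
        w = np.exp(-0.5 * ((xc[None, :] - centers[:, None]) / h) ** 2)
        wsum = w.sum(axis=1)
        return (w @ dx) / wsum / dt, (w @ dx**2) / wsum / (2.0 * dt)

    # Ornstein-Uhlenbeck test case: dx = -x dt + sqrt(2) dW, so D1 = -x and D2 = 1
    rng = np.random.default_rng(0)
    dt, n = 1e-2, 200_000
    x = np.zeros(n)
    for i in range(n - 1):
        x[i + 1] = x[i] - x[i] * dt + np.sqrt(2.0 * dt) * rng.standard_normal()

    D1, D2 = drift_diffusion(x, dt, np.linspace(-2, 2, 9), h=0.2)
    print(D1, D2)
    ```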

  14. Stochastic assessment of climate impacts on hydrology and geomorphology of semiarid headwater basins using a physically based model

    NASA Astrophysics Data System (ADS)

    Francipane, A.; Fatichi, S.; Ivanov, V. Y.; Noto, L. V.

    2015-03-01

    Hydrologic and geomorphic responses of watersheds to changes in climate are difficult to assess due to projection uncertainties and nonlinearity of the processes that are involved. Yet such assessments are increasingly needed and call for mechanistic approaches within a probabilistic framework. This study employs an integrated hydrology-geomorphology model, the Triangulated Irregular Network-based Real-time Integrated Basin Simulator (tRIBS)-Erosion, to analyze runoff and erosion sensitivity of seven semiarid headwater basins to projected climate conditions. The Advanced Weather Generator is used to produce two climate ensembles representative of the historic and future climate conditions for the Walnut Gulch Experimental Watershed located in the southwest U.S. The former ensemble incorporates the stochastic variability of the observed climate, while the latter includes the stochastic variability and the uncertainty of multimodel climate change projections. The ensembles are used as forcing for tRIBS-Erosion that simulates runoff and sediment basin responses leading to probabilistic inferences of future changes. The results show that annual precipitation for the area is generally expected to decrease in the future, with lower hourly intensities and similar daily rates. The smaller hourly rainfall generally results in lower mean annual runoff. However, a non-negligible probability of runoff increase in the future is identified, resulting from stochastic combinations of years with low and high runoff. On average, the magnitudes of mean and extreme events of sediment yield are expected to decrease with a very high probability. Importantly, the projected variability of annual sediment transport for the future conditions is comparable to that for the historic conditions, despite the fact that the former account for a much wider range of possible climate "alternatives." This result demonstrates that the historic natural climate variability of sediment yield is already so

  15. Scattering of polarized laser light by an atomic gas in free space: A quantum stochastic differential equation approach

    SciTech Connect

    Bouten, Luc; Stockton, John; Sarma, Gopal; Mabuchi, Hideo

    2007-05-15

    We propose a model, based on a quantum stochastic differential equation (QSDE), to describe the scattering of polarized laser light by an atomic gas. The gauge terms in the QSDE account for the direct scattering of the laser light into different field channels. Once the model has been set, we can rigorously derive quantum filtering equations for balanced polarimetry and homodyne detection experiments, study the statistics of output processes, and investigate a strong driving, weak coupling limit.

  16. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    PubMed Central

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
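
    The contrast the review draws between exact Markov-chain simulation and the Langevin diffusion approximation can be shown on a single two-state channel population (a deliberately reduced stand-in for the Hodgkin-Huxley models tested in the paper). Note that the naive clipping to [0, 1] used below is exactly the kind of boundary handling the compared implementations treat in different, more careful ways; all rates are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, N, T, dt = 2.0, 1.0, 500, 50.0, 1e-3

    # exact Markov chain (Gillespie): n = number of open channels
    t, n, n_path = 0.0, N // 2, []
    while t < T:
        r_open, r_close = alpha * (N - n), beta * n
        t += rng.exponential(1.0 / (r_open + r_close))
        n += 1 if rng.random() < r_open / (r_open + r_close) else -1
        n_path.append(n)

    # diffusion approximation (Langevin): x = open fraction, naively bounded to [0, 1]
    x = np.empty(int(T / dt))
    x[0] = 0.5
    for i in range(x.size - 1):
        drift = alpha * (1.0 - x[i]) - beta * x[i]
        noise = np.sqrt((alpha * (1.0 - x[i]) + beta * x[i]) / N)
        step = drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = np.clip(x[i] + step, 0.0, 1.0)

    print(np.mean(n_path) / N, x.mean())   # both fluctuate around alpha/(alpha+beta) = 2/3
    ```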

  17. Stochastic Approach to Determine CO2 Hydrate Induction Time in Clay Mineral Suspensions

    NASA Astrophysics Data System (ADS)

    Lee, K.; Lee, S.; Lee, W.

    2008-12-01

    A large number of induction time data for carbon dioxide hydrate formation were obtained from a batch reactor consisting of four independent reaction cells. Using resistance temperature detectors (RTDs) and a digital microscope, we successfully monitored the whole process of hydrate formation (i.e., nucleation and crystal growth) and detected the induction time. The experiments were carried out in kaolinite and montmorillonite suspensions at temperatures between 274 and 277 K and pressures ranging from 3.0 to 4.0 MPa. Each set of data was first analyzed to determine whether it should be treated stochastically. Geochemical factors potentially influencing the hydrate induction time under different experimental conditions were investigated by stochastic analyses. We observed that clay mineral type, pressure, and temperature significantly affect the stochastic behavior of the induction times for CO2 hydrate formation in this study. The hydrate formation kinetics, along with the stochastic analyses, can provide a basic understanding for CO2 hydrate storage in deep-sea sediments and geologic formations, securing its stability under those environments.
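
    For a stochastic nucleation process with a constant rate J, induction times are exponentially distributed and the survival probability is P(t) = exp(-J t); comparing that curve with the empirical survival function is the usual test of whether a data set behaves stochastically. A minimal sketch with entirely hypothetical induction times:

    ```python
    import numpy as np

    # hypothetical induction times (minutes) pooled from repeated runs of one condition
    t_ind = np.array([12., 35., 8., 51., 22., 17., 44., 29., 9., 61., 25., 38.])

    J = 1.0 / t_ind.mean()                           # MLE rate for the exponential model
    t = np.sort(t_ind)
    S_emp = 1.0 - np.arange(1, t.size + 1) / t.size  # empirical survival probability
    S_fit = np.exp(-J * t)                           # constant-rate nucleation model
    for ti, se, sf in zip(t, S_emp, S_fit):
        print(f"{ti:5.0f}  {se:5.2f}  {sf:5.2f}")
    ```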

  18. Skull base approaches in neurosurgery

    PubMed Central

    2010-01-01

    Skull base surgery is one of the most demanding types of surgery. There are various structures that can easily be injured when operating in the skull base. It is very important for the neurosurgeon to choose the right approach in order to reach the lesion without harming the other intact structures. Thanks to the pioneering work of Cushing, Hirsch, Yasargil, Krause, Dandy and other dedicated neurosurgeons, it is possible to address tumors and other lesions in the anterior, midline and posterior cranial base. With the transsphenoidal, the frontolateral, the pterional and the lateral suboccipital approaches, nearly every region of the skull base can be exposed. Many different skull base approaches have been described for various neurosurgical diseases during the last 20 years. The selection of an approach may differ from country to country; e.g., in the United States, orbitozygomaticotomy for special lesions of the anterior skull base or petrosectomy for clivus meningiomas are found more frequently than in Europe. The reason for writing this review was the question: are there keyhole approaches with which one can deal with a vast variety of lesions in the neurosurgical field? In my opinion, the different surgical approaches mentioned above cover almost 95% of all skull base tumors and lesions. These approaches are: 1) the pterional approach; 2) the frontolateral approach; 3) the transsphenoidal approach; and 4) the lateral suboccipital approach. They can be extended and combined with each other, and in the following text they are described in line with this philosophy. PMID:20602753

  19. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    SciTech Connect

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-04

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation, and the approach is demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same types of female vector mosquito, Aedes aegypti and Aedes albopictus. Finally, the findings of the relative risk estimation for both the Dengue and Chikungunya diseases are presented, compared and displayed in graphs and maps. The distributions in the risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence, and these maps can be used as a tool for prevention and control strategies for both diseases.

  20. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-01

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation, and the approach is demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same types of female vector mosquito, Aedes aegypti and Aedes albopictus. Finally, the findings of the relative risk estimation for both the Dengue and Chikungunya diseases are presented, compared and displayed in graphs and maps. The distributions in the risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence, and these maps can be used as a tool for prevention and control strategies for both diseases.
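
    Since the two records above are the same study, one sketch serves both. The discrete-time binomial-chain version of the SIR-SI transmission model below generates the kind of human case counts that feed a relative-risk estimation; parameter values are hypothetical, and the paper's Bayesian posterior relative-risk step is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sir_si_step(Sh, Ih, Rh, Sv, Iv, bh, bv, gamma, mu_v):
        """One daily binomial-chain step. Humans: S->I->R; vectors: S->I with
        mortality, dead infective vectors replaced by susceptibles (constant Nv)."""
        Nh, Nv = Sh + Ih + Rh, Sv + Iv
        new_ih = rng.binomial(Sh, 1.0 - np.exp(-bh * Iv / Nv))   # human infections
        new_rh = rng.binomial(Ih, 1.0 - np.exp(-gamma))          # human recoveries
        new_iv = rng.binomial(Sv, 1.0 - np.exp(-bv * Ih / Nh))   # vector infections
        dead_v = rng.binomial(Iv, 1.0 - np.exp(-mu_v))           # vector turnover
        return (Sh - new_ih, Ih + new_ih - new_rh, Rh + new_rh,
                Sv - new_iv + dead_v, Iv + new_iv - dead_v), new_ih

    state, cases = (9900, 100, 0, 50000, 500), 0
    for _ in range(60):                       # simulate 60 days in one district
        state, inc = sir_si_step(*state, bh=0.3, bv=0.2, gamma=0.14, mu_v=0.1)
        cases += inc
    print(cases)                              # observed count entering the risk estimate
    ```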

  1. A stochastic simulation framework for the prediction of strategic noise mapping and occupational noise exposure using the random walk approach.

    PubMed

    Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, named RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels, and assesses the impact of noise on the workers and the surrounding environment. For validation, three case studies were conducted to check the accuracy of the predictions and to determine the efficiency and effectiveness of the approach. The results showed high prediction accuracy, with most absolute differences below 2 dBA; the predicted noise doses also fell mostly within the range of the measurements. The random walk approach was therefore effective in dealing with environmental noise, and can predict strategic noise mapping to facilitate noise monitoring and noise control in workplaces. PMID:25875019

  2. A Stochastic Simulation Framework for the Prediction of Strategic Noise Mapping and Occupational Noise Exposure Using the Random Walk Approach

    PubMed Central

    Haron, Zaiton; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, named RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels, and assesses the impact of noise on the workers and the surrounding environment. For validation, three case studies were conducted to check the accuracy of the predictions and to determine the efficiency and effectiveness of the approach. The results showed high prediction accuracy, with most absolute differences below 2 dBA; the predicted noise doses also fell mostly within the range of the measurements. The random walk approach was therefore effective in dealing with environmental noise, and can predict strategic noise mapping to facilitate noise monitoring and noise control in workplaces. PMID:25875019

  3. On the problem of stochastic experimental modal analysis based on multiple-excitation multiple-response data, part I: Dispersion analysis of continuous-time structural systems

    NASA Astrophysics Data System (ADS)

    Lee, J. E.; Fassois, S. D.

    1993-02-01

    Despite its importance and the undisputable significance of stochastic effects, the problem of multiple-excitation multiple-response experimental modal analysis has thus far been almost exclusively considered within a deterministic framework. In this paper a novel, comprehensive and effective stochastic approach, that, unlike alternative schemes, can operate on vibration displacement, velocity or acceleration data, is introduced. The proposed approach is capable of effectively dealing with noise-corrupted vibration data, while also being characterized by unique features that enable it to overcome major drawbacks of current modal analysis methods and achieve high performance characteristics by employing: (a) proper and mutually compatible force excitation signal type and stochastic model forms, (b) an estimation scheme that circumvents problems such as algorithmic instability, wrong convergence, and high computational complexity, while requiring no initial guess parameter values, (c) effective model structure estimation and model validation procedures, and, (d) appropriate model transformation, reduction and analysis procedures based on a novel dispersion analysis methodology. This dispersion analysis methodology is a physically meaningful way of assessing the relative importance of the estimated vibrational modes based on their contributions ("dispersions") to the vibration signal energy. The effects of modal cross-correlations are fully accounted for, physical interpretations are provided in both the correlation and spectral domains, and the phenomenon of negative dispersion modes is investigated and physically interpreted. The effectiveness of the proposed approach is finally verified via numerical and laboratory experiments, as well as comparisons with the classical frequency domain method and the deterministic eigensystem realization algorithm (ERA). The paper is divided into two parts: the proposed dispersion analysis methodology is introduced in the first one

  4. Suboptimal stochastic controller for an n-body spacecraft

    NASA Technical Reports Server (NTRS)

    Larson, V.

    1973-01-01

    The problem of determining a stochastic optimal controller for an n-body spacecraft is studied. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained for a complicated spacecraft model uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector, can be easily implemented, and enables system performance to be significantly improved.
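
    The structure this abstract describes, a filter producing smoothed state estimates from base-body angle measurements feeding a feedback gain, is the classical LQG separation. A toy discrete-time sketch is below (a two-state single-axis model, not the paper's n-body dynamics; all matrices are illustrative).

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # toy single-axis model: x = [angle, rate]; only the base-body angle is sensed
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[0.1]])            # LQR state / control weights
    W, V = 1e-4 * np.eye(2), np.array([[1e-2]])    # process / measurement noise cov.

    P = solve_discrete_are(A, B, Q, R)             # control Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K @ x_hat
    S = solve_discrete_are(A.T, C.T, W, V)         # filter Riccati equation (dual)
    Lk = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)  # steady-state Kalman gain

    def lqg_step(x_hat, u_prev, y):
        """Kalman prediction/correction followed by the certainty-equivalent control."""
        x_pred = A @ x_hat + B @ u_prev
        x_hat = x_pred + Lk @ (y - C @ x_pred)
        return x_hat, -K @ x_hat
    ```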

  5. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations were performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, whereas the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, was developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
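
    The two sampling routines named in the abstract are standard and easy to sketch: inverse-transform sampling for the negative-exponential cave lengths, and acceptance-rejection for the carbonatite lengths whose density fits no standard form. The empirical density fhat and all numbers below are placeholders, not values from the Chengmenshan data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cave_lengths(n, mean_len=3.2):
        """Inverse transform for an exponential length model: F^-1(u) = -mean*ln(1-u)."""
        return -mean_len * np.log(1.0 - rng.random(n))

    def carbonatite_lengths(n, fhat, L, M):
        """Acceptance-rejection under a uniform envelope M >= max fhat on [0, L]."""
        out = []
        while len(out) < n:
            xc, uc = rng.uniform(0.0, L), rng.uniform(0.0, M)
            if uc <= fhat(xc):
                out.append(xc)
        return np.array(out)

    # placeholder empirical (unnormalised) density for carbonatite length
    fhat = lambda x: (0.12 * np.exp(-((x - 4.0) / 2.5) ** 2)
                      + 0.05 * np.exp(-((x - 12.0) / 3.0) ** 2))

    caves = cave_lengths(1000)
    carb = carbonatite_lengths(1000, fhat, L=20.0, M=0.13)
    ```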

  6. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418

  7. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.

  8. An empirically adjusted approach to reproductive number estimation for stochastic compartmental models: A case study of two Ebola outbreaks.

    PubMed

    Brown, Grant D; Oleson, Jacob J; Porter, Aaron T

    2016-06-01

    The various thresholding quantities grouped under the "Basic Reproductive Number" umbrella are often confused, but represent distinct approaches to estimating epidemic spread potential, and address different modeling needs. Here, we contrast several common reproduction measures applied to stochastic compartmental models, and introduce a new quantity dubbed the "empirically adjusted reproductive number" with several advantages. These include: more complete use of the underlying compartmental dynamics than common alternatives, use as a potential diagnostic tool to detect the presence and causes of intensity process underfitting, and the ability to provide timely feedback on disease spread. Conceptual connections between traditional reproduction measures and our approach are explored, and the behavior of our method is examined under simulation. Two illustrative examples are developed: First, the single location applications of our method are established using data from the 1995 Ebola outbreak in the Democratic Republic of the Congo and a traditional stochastic SEIR model. Second, a spatial formulation of this technique is explored in the context of the ongoing Ebola outbreak in West Africa with particular emphasis on potential use in model selection, diagnosis, and the resulting applications to estimation and prediction. Both analyses are placed in the context of a newly developed spatial analogue of the traditional SEIR modeling approach. PMID:26574727

  9. Placing stochastic simulation in a system-based context that promotes transparency and refutability

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Kavetski, D.; Clark, M. P.; Nolan, B. T.; Arabi, M.; Foglia, L.; Mehl, S.; Ye, M.

    2012-04-01

    Stochastic simulation often is used to evaluate the consequences of small-scale variation of selected system properties. In models of flow and transport in the subsurface, stochastic simulations often focus on heterogeneity of the hydraulic conductivity field. This spatial variation exists within the context of larger-scale hydraulic conductivity variations and of other properties and boundary conditions that often, though perhaps erroneously, are represented at a larger scale. Understanding the small-scale stochastic variation in the context of the larger-scale properties becomes difficult when calibration, sensitivity analysis, and(or) uncertainty evaluation of the larger-scale properties require 1,000s to 100,000s of model runs. For example, multiobjective optimization, FAST, Markov-Chain Monte Carlo, and cross-validation all require many model runs. While all of these methods can be very useful, the high computational cost limits their applicability. An alternative is to consider computationally frugal local methods that often use 10s to 100s of highly parallelizable model runs, but these methods are often criticized because of their underlying assumptions related to weighting and linearity. The ability to obtain insight with so few model runs, and the resulting opportunity to better understand the context within which detail is explored using stochastic methods, is tempting, but only if the computationally frugal methods provide enough valuable insights. In this talk the problematic underlying assumptions are considered in the context of ideas about accounting for data error (including epistemic error) using error-based weighting and ideas about addressing model nonlinearity using robust models. Transparency is increased because measures of what is important to various objectives are available even for process models with lengthy execution times. Indeed, the ability to consider such models allows exploration of processes that would otherwise be

  10. Microbial Transport, Retention, and Inactivation in Streams: A Combined Experimental and Stochastic Modeling Approach.

    PubMed

    Drummond, Jennifer D; Davies-Colley, Robert J; Stott, Rebecca; Sukias, James P; Nagels, John W; Sharp, Alice; Packman, Aaron I

    2015-07-01

    Long-term survival of pathogenic microorganisms in streams enables long-distance disease transmission. In order to manage water-borne diseases more effectively we need to better predict how microbes behave in freshwater systems, particularly how they are transported downstream in rivers. Microbes continuously immobilize and resuspend during downstream transport owing to a variety of processes including gravitational settling, attachment to in-stream structures such as submerged macrophytes, and hyporheic exchange and filtration within underlying sediments. We developed a stochastic model to describe these microbial transport and retention processes in rivers that also accounts for microbial inactivation. We used the model to assess the transport, retention, and inactivation of Escherichia coli in a small stream and the underlying streambed sediments as measured from multitracer injection experiments. The results demonstrate that the combination of laboratory experiments on sediment cores, stream reach-scale tracer experiments, and multiscale stochastic modeling improves assessment of microbial transport in streams. This study (1) demonstrates new observations of microbial dynamics in streams with improved data quality than prior studies, (2) advances a stochastic modeling framework to include microbial inactivation processes that we observed to be important in these streams, and (3) synthesizes new and existing data to evaluate seasonal dynamics. PMID:26039244

  11. FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna

    2016-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and present an example problem (taken from the open literature) of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading, using elastic stiffness reduction of the failed elements.

  12. FEAMAC-CARES Software Coupling Development Effort for CMC Stochastic-Strength-Based Damage Simulation

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen

    2015-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and present an example problem (taken from the open literature) of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading, using elastic stiffness reduction of the failed elements.

  13. Definition of scarcity-based water pricing policies through hydro-economic stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2014-05-01

    One of the greatest current issues in integrated water resources management is to find and apply efficient and flexible management policies. Efficient management is needed to deal with increased water scarcity and river basin closure. Flexible policies are required to handle the stochastic nature of the water cycle. Scarcity-based pricing policies are one of the most promising alternatives, which deal not only with the supply costs, but also consider the opportunity costs associated with the allocation of water. The opportunity cost of water, which varies dynamically with space and time according to the imbalances between supply and demand, can be assessed using hydro-economic models. This contribution presents a procedure to design a pricing policy based on hydro-economic modelling and on the assessment of the Marginal Resource Opportunity Cost (MROC). Firstly, MROC time series associated with the optimal operation of the system are derived from a stochastic hydro-economic model. Secondly, these MROC time series must be post-processed in order to combine the different space-and-time MROC values into a single generalized indicator of the marginal opportunity cost of water. Finally, step scarcity-based pricing policies are determined after establishing a relationship between the MROC and the corresponding state of the system at the beginning of the time period (month). The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series and four agricultural demand sites currently managed using historical (XIVth century) rights. A hydro-economic model of the system has been built using stochastic dynamic programming. A reoptimization procedure is then implemented using SDP-derived benefit-to-go functions and historical flows to produce the time series of MROC values. MROC values are then aggregated and a statistical analysis is carried out to define (i) pricing policies and (ii) the relationship between MROC and

  14. Development of a censored modelling approach for stochastic estimation of rainfall extremes at fine temporal scales

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Bernardara, Pietro

    2016-04-01

    With the COP21 drawing to a close in December 2015, storms Desmond, Eva and Frank, which swept across the UK and Ireland causing widespread flooding and devastation, have acted as a timely reminder of the need for reliable estimation of rainfall extremes in a changing climate. The frequency and intensity of rainfall extremes are predicted to increase in the UK under anthropogenic climate change, and it is notable that the UK's 24-hour rainfall record of 316mm, set in Seathwaite, Cumbria in 2009, was broken on the 5 December 2015 with 341mm by storm Desmond at Honister Pass, also in Cumbria. Immediate analysis of the latter by the Centre for Ecology and Hydrology (UK) on the 8 December 2015 estimated that this is approximately equivalent to a 1300 year return period event (Centre for Ecology & Hydrology, 2015). Rainfall extremes are typically estimated using extreme value analysis and intensity duration frequency curves. This study investigates the potential for using stochastic rainfall simulation with mechanistic rectangular pulse models for estimation of extreme rainfall. These models have been used since the late 1980s to generate synthetic rainfall time-series at point locations for scenario analysis in hydrological studies and climate impact assessment at the catchment scale. Routinely they are calibrated to the full historical hyetograph and used for continuous simulation. However, their extremal performance is variable, with a tendency to underestimate short duration (hourly and sub-hourly) rainfall extremes, which are often associated with heavy convective rainfall in temperate climates such as the UK's. Focussing on hourly and sub-hourly rainfall, a censored modelling approach is proposed in which rainfall below a low threshold is set to zero prior to model calibration. It is hypothesised that synthetic rainfall time-series are poor at estimating extremes because the majority of the training data are not representative of the climatic conditions which give rise to

  15. Stochastic Analysis of Waterhammer and Applications in Reliability-Based Structural Design for Hydro Turbine Penstocks

    SciTech Connect

    Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew

    2011-01-01

    The randomness of transient events, and the variability in factors which influence the magnitudes of the resultant pressure fluctuations, ensure that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; the statistical characteristics and probability distributions of annual maximum waterhammer pressures are then simulated using the Monte Carlo method and verified against an analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, duration and probability distribution) of hydraulic loads for both steady and transient states are discussed. Using an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. In particular, the normative load (usually the unfavorable value at the 95th percentile) for steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for the waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.
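
    A minimal Monte Carlo sketch of the annual-maximum simulation is given below, assuming a Joukowsky-type surge driven by a random initial flow velocity; the density, wave speed, event counts and load statistics are invented placeholders, not the observed plant data used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n_years, events_per_year = 10_000, 12

      rho, a = 1000.0, 1000.0       # water density (kg/m^3) and wave speed (m/s)
      v0 = rng.normal(2.0, 0.3, (n_years, events_per_year)).clip(min=0.0)
      static = rng.normal(0.8, 0.05, (n_years, events_per_year))  # static head (MPa)

      surge = rho * a * v0 / 1e6    # Joukowsky pressure rise, dp = rho*a*dv (MPa)
      annual_max = (static + surge).max(axis=1)

      # normative load: unfavorable value at the 95th percentile of annual maxima
      print(np.quantile(annual_max, 0.95))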

  16. A coupled stochastic inverse/sharp interface seawater intrusion approach for coastal aquifers under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Merigó, José M.; Xu, Yejun

    2016-09-01

    This paper presents an alternative approach to deal with seawater intrusion problems that overcomes some of the limitations of previous works, by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named the GC method. On the one hand, SWI2 allows vertically integrated variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems to be modelled, with a reduction in the number of required model cells and the elimination of the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method deals with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing reliability in meeting environmental standards. The methodology has been successfully applied to the transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal system, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also partially compensates for the neglected diffusion and dispersion processes, since the conditioning process reduces uncertainty and brings results closer to the available data.

  17. Neural network-based finite horizon stochastic optimal control design for nonlinear networked control systems.

    PubMed

    Xu, Hao; Jagannathan, Sarangapani

    2015-03-01

    The stochastic optimal control of nonlinear networked control systems (NNCSs) using neuro-dynamic programming (NDP) over a finite time horizon is a challenging problem due to terminal constraints, system uncertainties, and unknown network imperfections, such as network-induced delays and packet losses. Since traditional iteration- or time-based infinite horizon NDP schemes are unsuitable for NNCSs with terminal constraints, a novel time-based NDP scheme is developed to solve the finite horizon optimal control of NNCSs by mitigating the above-mentioned challenges. First, an online neural network (NN) identifier is introduced to approximate the control coefficient matrix, which is subsequently utilized in conjunction with the critic and actor NNs to determine a time-based stochastic optimal control input over a finite horizon in a forward-in-time and online manner. Finally, Lyapunov theory is used to show that all closed-loop signals and NN weights are uniformly ultimately bounded, with the ultimate bounds being a function of the initial conditions and final time. Moreover, the approximated control input converges close to the optimal value within finite time. Simulation results are included to show the effectiveness of the proposed scheme. PMID:25720004

  18. Importance of realistic phase-space representations of initial quantum fluctuations using the stochastic mean-field approach for fermions

    NASA Astrophysics Data System (ADS)

    Yilmaz, Bulent; Lacroix, Denis; Curebal, Resul

    2014-11-01

    In the stochastic mean-field (SMF) approach, an ensemble of initial values for a selected set of one-body observables is formed by stochastic sampling from a phase-space distribution that reproduces the initial quantum fluctuations. Independent mean-field evolutions are performed with each set of initial values, followed by averaging over the resulting ensemble. This approach has recently been shown to be rather versatile and accurate in describing correlated dynamics beyond the independent-particle picture. In the original formulation of SMF, it was proposed to use a Gaussian assumption for the phase-space distribution. This assumption turns out to be rather effective when the dynamics of an initially uncorrelated state is considered, which was the case in all applications of this approach up to now. Using the Lipkin-Meshkov-Glick (LMG) model, we show that such an assumption might not be adequate if the quantum system of interest is initially correlated and presents configuration mixing between several Slater determinants. In this case, a more realistic description of the initial phase space is necessary. We show that the SMF approach can be advantageously combined with standard methods for describing phase space in quantum mechanics. As an illustration, the Husimi distribution function is used here to obtain a realistic representation of the phase space of a quantum many-body system. This method greatly improves the description of initially correlated fermionic many-body states. In the LMG model, while the Gaussian approximation fails to describe these systems over the whole range of interaction strengths, the new approach gives perfect agreement with the exact evolution in the weak coupling regime and significantly improves the description of correlated systems in the strong coupling regime.

  19. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms is used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than those of the classical or simulated pits. The main factor controlling the relative gain of the stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with the mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
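
    The conditional-unbiasedness argument rests on the profit function being nonlinear in grade, so the profit of the expected grade differs from the expected profit over simulations. The toy sketch below makes that gap explicit; the lognormal grades and cost figures are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sims, n_blocks = 100, 1000
      grades = rng.lognormal(0.0, 0.5, (n_sims, n_blocks))  # conditional simulations

      price, recovery, treat_cost, mine_cost = 50.0, 0.9, 20.0, 5.0

      def block_profit(g):
          # a block is sent to the mill only if that pays; otherwise it is waste
          ore_value = price * recovery * g - treat_cost - mine_cost
          return np.maximum(ore_value, -mine_cost)

      expected_profit = block_profit(grades).mean(axis=0)  # profit averaged over sims
      profit_of_mean = block_profit(grades.mean(axis=0))   # profit of expected grade

      # block_profit is convex (a max of affine functions), so by Jensen's
      # inequality E[profit(g)] >= profit(E[g]): the stochastic pit sees more value
      print((expected_profit - profit_of_mean).mean())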

  20. A stochastic frontier approach to study the relationship between gastrointestinal nematode infections and technical efficiency of dairy farms.

    PubMed

    van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes

    2014-01-01

    The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators. But such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi on the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data gained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure the antibodies to O. ostertagi and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional form. To assess the efficiency scores, milk production was considered as the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection in the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9L/cow per year for Farm Accountancy Data Network farms and 63, 49, and
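
    For readers unfamiliar with the estimation machinery, the sketch below fits a minimal cross-sectional normal/half-normal stochastic frontier by maximum likelihood on synthetic data. It uses the textbook Aigner-Lovell-Schmidt composed-error likelihood, not the panel Translog inefficiency-effects specification of the study, and all data are simulated.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      n = 300
      x = rng.normal(size=(n, 2))                 # log inputs (synthetic)
      u = np.abs(rng.normal(0.0, 0.3, n))         # one-sided inefficiency
      v = rng.normal(0.0, 0.2, n)                 # symmetric noise
      y = 1.0 + x @ np.array([0.6, 0.3]) + v - u  # log output below the frontier

      def negll(theta):
          b0, b1, b2, log_su, log_sv = theta
          s_u, s_v = np.exp(log_su), np.exp(log_sv)
          sigma, lam = np.hypot(s_u, s_v), s_u / s_v
          eps = y - (b0 + x @ np.array([b1, b2]))
          # normal/half-normal composed-error log-likelihood
          ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
                + norm.logcdf(-eps * lam / sigma))
          return -ll.sum()

      res = minimize(negll, np.zeros(5), method="Nelder-Mead",
                     options={"maxiter": 5000})
      print(res.x[:3])    # estimated frontier coefficients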

  1. Discrete stability in stochastic programming

    SciTech Connect

    Lepp, R.

    1994-12-31

    In this lecture we study stability properties of stochastic programs with recourse in which the probability measure is approximated by a sequence of weakly convergent discrete measures. Such a discrete approximation approach makes it possible to analyze explicitly the behavior of the second-stage correction function. The approach is based on modern functional-analytic methods for the approximation of extremum problems in function spaces, especially on the notion of the discrete convergence of vectors to an essentially bounded measurable function.

  2. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and to compare them with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; and (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs especially well at frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows no obvious pattern with distance in the simulation bias, but some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematically lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).

  3. A two-stage approach for a multi-objective component assignment problem for a stochastic-flow network

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-03-01

    Many real-life systems, such as computer systems, manufacturing systems and logistics systems, are modelled as stochastic-flow networks (SFNs) to evaluate network reliability. Here, network reliability, defined as the probability that the network successfully transmits d units of data/commodity from an origin to a destination, is a performance indicator of the systems. Network reliability maximization is a common objective, but is costly for many system supervisors. This article solves the multi-objective problem of reliability maximization and cost minimization by finding the optimal component assignment for an SFN, in which a set of multi-state components is ready to be assigned to the network. A two-stage approach integrating the Non-dominated Sorting Genetic Algorithm II and simple additive weighting is proposed to solve this problem, where network reliability is evaluated in terms of minimal paths and the recursive sum of disjoint products. Several practical examples related to computer networks are utilized to demonstrate the proposed approach.

  4. A Stochastic Hill Climbing Approach for Simultaneous 2D Alignment and Clustering of Cryogenic Electron Microscopy Images.

    PubMed

    Reboul, Cyril F; Bonnet, Frederic; Elmlund, Dominika; Elmlund, Hans

    2016-06-01

    A critical step in the analysis of novel cryogenic electron microscopy (cryo-EM) single-particle datasets is the identification of homogeneous subsets of images. Methods for solving this problem are important for data quality assessment, ab initio 3D reconstruction, and analysis of population diversity due to the heterogeneous nature of macromolecules. Here we formulate a stochastic algorithm for identification of homogeneous subsets of images. The purpose of the method is to generate improved 2D class averages that can be used to produce a reliable 3D starting model in a rapid and unbiased fashion. We show that our method overcomes inherent limitations of widely used clustering approaches and proceed to test the approach on six publicly available experimental cryo-EM datasets. We conclude that, in each instance, ab initio 3D reconstructions of quality suitable for initialization of high-resolution refinement are produced from the cluster centers. PMID:27184214

  5. Combined Deterministic and Stochastic Approach to Determine Spatial Distribution of Drought Frequency and Duration in the Great Hungarian Plain

    NASA Astrophysics Data System (ADS)

    Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.

    2009-04-01

    Drought is one of the major weather-driven natural hazards, with more harmful impacts on the environment, agriculture and hydrology than most other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", an area which has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently and to provide guidance on how drought planning can be accomplished. For this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. With these aims, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, the most appropriate model structure for our purposes was found to be a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm to estimate multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, vertical and horizontal distribution of soil moisture, etc.), taking the groundwater table as the lower, and the hydrometeorological fields as the upper, boundary conditions; - and the MCMC-based stochastic component generates time series of daily weather

  6. Partial derivative approach for option pricing in a simple stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Montero, M.

    2004-11-01

    We study a market model in which the volatility of the stock may jump at a random time from a fixed value to another fixed value. This model has already been introduced in the literature. We present a new approach to the problem, based on partial differential equations, which gives a different perspective to the issue. Within our framework we can easily consider several forms for the market price of volatility risk, and interpret their financial meaning. We thus recover solutions previously mentioned in the literature as well as obtaining new ones.
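
    The model is simple enough that a brute-force Monte Carlo benchmark fits in a few lines; the sketch below prices a European call when the volatility jumps from one fixed value to another at an exponentially distributed random time. All parameter values are invented, and this is a numerical cross-check rather than the partial-differential-equation method of the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      S0, K, r, T = 100.0, 100.0, 0.02, 1.0
      sig1, sig2, jump_rate = 0.15, 0.35, 1.0   # vol before/after the jump, intensity
      n_paths, n_steps = 20_000, 252
      dt = T / n_steps

      tau = rng.exponential(1.0 / jump_rate, n_paths)   # random time of the vol jump
      t = np.arange(1, n_steps + 1) * dt
      sig = np.where(t[None, :] <= tau[:, None], sig1, sig2)  # per-step volatility

      z = rng.standard_normal((n_paths, n_steps))
      logS = np.log(S0) + np.cumsum((r - 0.5 * sig**2) * dt
                                    + sig * np.sqrt(dt) * z, axis=1)
      payoff = np.maximum(np.exp(logS[:, -1]) - K, 0.0)
      print(np.exp(-r * T) * payoff.mean())     # risk-neutral call price estimate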

  7. An Efficient Numerical Solution To The Stochastic Coagulation Equation Based On Set of Moments

    NASA Astrophysics Data System (ADS)

    Rodin, A. V.

    The stochastic coagulation equation (SCE) describes numerous processes of geophysical and astrophysical interest, e.g. clouds and aerosol media. Although substantial progress has been achieved in understanding the microphysics of particular systems governed by the SCE, their interactive simulation in large-scale fluid dynamics models is hard to implement due to the high computational cost demanded by calculation of the evolving size distribution of interacting particles. On the other hand, in many cases the SCE results in unimodal distributions of simple form, whose macroscopic properties are determined by only a few lower moments. Seeking a computationally efficient numerical scheme for ab initio simulation of stochastic coagulation in large-scale models, a purely spectral conservative solution to the SCE based on a limited set of lower moments has been developed and tested against a conventional gridpoint scheme. The method is based on offline averaging of a multidimensional quadratic function generated by the SCE over a subset of test distributions constrained by moment relationships. Effects of advection and sedimentation, as well as implications for the case of superlinear kernels leading to runaway coagulation, are discussed.

  8. Solution of stochastic media transport problems using a numerical quadrature-based method

    SciTech Connect

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.; Olson, A. J.

    2013-07-01

    We present a new conceptual framework for analyzing transport problems in random media. We decompose such problems into stratified subproblems according to the number of material pseudo-interfaces within realizations. For a given subproblem we assign pseudo-interface locations in each realization according to product quadrature rules, which allows us to deterministically generate a fixed number of realizations. Quadrature integration of the solutions of these realizations thus approximately solves each subproblem; the weighted superposition of solutions of the subproblems approximately solves the general stochastic media transport problem. We revisit some benchmark problems to determine the accuracy and efficiency of this approach in comparison to randomly generated realizations. We find that this method is very accurate and fast when the number of pseudo-interfaces in a problem is generally low, but that these advantages quickly degrade as the number of pseudo-interfaces increases. (authors)

  9. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.

  10. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach ≤ probability obtained with the gradient stochastic approach ≤ probability predicted by Davis and Stoll < probability predicted by Martin et al. The differences are explained by the positive bias of the Martin equation and the lower average resolution observed for the isocratic simulations compared to the gradient simulations with the same peak capacity. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., P(HPLC) = 31.2% versus P(SE-LC) = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported, with <1.5% error for saturation factors <0.20. Additional applications of the stochastic approach include isothermal and programmed-temperature gas chromatography. PMID:27286646
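
    The stochastic approach amounts to placing component peaks at random and counting the fraction of trials in which every pair is resolved. A minimal sketch for the constant-peak-width ("gradient") case is shown below; the uniform retention model and the spacing criterion of one resolution unit are simplifying assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(4)

      def p_success(m, peak_capacity, trials=20_000):
          # m peaks placed uniformly at random on a unit time window that holds
          # `peak_capacity` peaks at unit resolution, so adjacent peaks need a
          # spacing of at least 1/peak_capacity to count as separated
          pos = np.sort(rng.random((trials, m)), axis=1)
          gaps = np.diff(pos, axis=1)
          return (gaps >= 1.0 / peak_capacity).all(axis=1).mean()

      for nc in (50, 100, 200):
          print(nc, p_success(12, nc))   # probability rises with peak capacity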

  11. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  12. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE PAGESBeta

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  13. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    SciTech Connect

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  14. Correlated noise-based switches and stochastic resonance in a bistable genetic regulation system

    NASA Astrophysics Data System (ADS)

    Wang, Can-Jun; Yang, Ke-Li

    2016-07-01

    The correlated noise-based switches and stochastic resonance are investigated in a bistable single-gene switching system driven by an additive noise (environmental fluctuations) and a multiplicative noise (fluctuations of the degradation rate). The correlation between the two noise sources originates from the lysis-lysogeny pathway of the λ phage. The steady-state probability distribution is obtained by solving the time-independent Fokker-Planck equation, and the effects of the noises are analyzed. The effects of the noises on the switching time between the two stable states (mean first passage time) are investigated by numerical simulation. The stochastic resonance phenomenon is analyzed via the power amplification factor. The results show that the multiplicative noise can induce the switching from "on" → "off" of the protein production, while the additive noise and the correlation between the noise sources can induce the inverse switching "off" → "on". A nonmonotonic behaviour of the average switching time versus the multiplicative noise intensity, for different cross-correlation and additive noise intensities, is observed in the genetic system. There exist optimal values of the additive noise, multiplicative noise and cross-correlation intensities for which the weak signal can be optimally amplified.
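
    The numerical-simulation step can be reproduced with a generic Euler-Maruyama integration of a bistable positive-feedback model with correlated additive and multiplicative noise. The Hill-type drift and all rate constants below are hypothetical stand-ins for the λ-phage system of the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      dt, n_steps = 1e-3, 100_000
      k, K, gamma = 4.0, 1.0, 1.0           # production, threshold, degradation
      D_add, D_mul, lam = 0.05, 0.02, 0.5   # noise intensities and correlation

      def drift(x):
          # bistable dynamics: Hill-type autoactivation minus linear degradation
          return k * x**2 / (K + x**2) - gamma * x + 0.1

      x = np.empty(n_steps)
      x[0] = 0.1                            # start in the "off" (low) state
      for i in range(1, n_steps):
          w1 = rng.standard_normal()        # two correlated Wiener increments
          w2 = lam * w1 + np.sqrt(1.0 - lam**2) * rng.standard_normal()
          xp = x[i - 1]
          x[i] = max(xp + drift(xp) * dt
                     + np.sqrt(2 * D_mul * dt) * xp * w1   # multiplicative noise
                     + np.sqrt(2 * D_add * dt) * w2, 0.0)  # additive noise

      print((x > 1.0).mean())               # fraction of time in the "on" state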

  15. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  16. The transition between strong and weak chaos in delay systems: Stochastic modeling approach.

    PubMed

    Jüngling, Thomas; D'Huys, Otti; Kinzel, Wolfgang

    2015-06-01

    We investigate the scaling behavior of the maximal Lyapunov exponent in chaotic systems with time delay. In the large-delay limit, it is known that one can distinguish between strong and weak chaos depending on the delay scaling, analogously to strong and weak instabilities for steady states and periodic orbits. Here we show that the Lyapunov exponent of chaotic systems shows significant differences in its scaling behavior compared to constant or periodic dynamics due to fluctuations in the linearized equations of motion. We reproduce the chaotic scaling properties with a linear delay system with multiplicative noise. We further derive analytic limit cases for the stochastic model illustrating the mechanisms of the emerging scaling laws. PMID:26172783

  17. Optimal performance of the tryptophan operon of E. coli: a stochastic, dynamical, mathematical-modeling approach.

    PubMed

    Salazar-Cavazos, Emanuel; Santillán, Moisés

    2014-02-01

    In this work, we develop a detailed, stochastic, dynamical model for the tryptophan operon of E. coli, and estimate all of the model parameters from reported experimental data. We further employ the model to study the system performance, considering the amount of biochemical noise in the trp level, the system rise time after a nutritional shift, and the amount of repressor molecules necessary to maintain an adequate level of repression, as indicators of the system performance regime. We demonstrate that the level of cooperativity between repressor molecules bound to the first two operators in the trp promoter affects all of the performance characteristics listed above. Moreover, the cooperativity level found in the wild-type bacterial strain optimizes a cost-benefit function involving low biochemical noise in the tryptophan level, short rise time after a nutritional shift, and a low number of regulatory molecules. PMID:24307084

  18. The transition between strong and weak chaos in delay systems: Stochastic modeling approach

    NASA Astrophysics Data System (ADS)

    Jüngling, Thomas; D'Huys, Otti; Kinzel, Wolfgang

    2015-06-01

    We investigate the scaling behavior of the maximal Lyapunov exponent in chaotic systems with time delay. In the large-delay limit, it is known that one can distinguish between strong and weak chaos depending on the delay scaling, analogously to strong and weak instabilities for steady states and periodic orbits. Here we show that the Lyapunov exponent of chaotic systems shows significant differences in its scaling behavior compared to constant or periodic dynamics due to fluctuations in the linearized equations of motion. We reproduce the chaotic scaling properties with a linear delay system with multiplicative noise. We further derive analytic limit cases for the stochastic model illustrating the mechanisms of the emerging scaling laws.

  19. Stochastic approach to the generalized Schrödinger equation: A method of eigenfunction expansion.

    PubMed

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2015-05-01

    Using a method of eigenfunction expansion, a stochastic equation is developed for the generalized Schrödinger equation with random fluctuations. The wave field ψ is expanded in terms of eigenfunctions: ψ = ∑_n a_n(t) ϕ_n(x), with ϕ_n being the eigenfunction that satisfies the eigenvalue equation H_0 ϕ_n = λ_n ϕ_n, where H_0 is the reference "Hamiltonian", conventionally called the "unperturbed" Hamiltonian. The Langevin equation is derived for the expansion coefficient a_n(t), and it is converted to the Fokker-Planck (FP) equation for the set {a_n} under the assumption of Gaussian white noise for the fluctuation. This procedure is carried out by a functional integral, in which the functional Jacobian plays a crucial role in determining the form of the FP equation. Analyses of the FP equation are given by adopting several approximate schemes. PMID:26066158

  20. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    PubMed Central

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-01-01

    In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for the quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter, based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220

  1. Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael; Solomon, Sorin

    2012-08-01

    We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows superlinear preferential attachment, with exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings, we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve excellent agreement with the measured citation distributions.
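
    A bare-bones growth simulation with attachment probability proportional to k^α reproduces the fat-tailed citation counts the paper starts from. The sketch below uses α = 1.25 and otherwise invented sizes, and deliberately omits the memory (self-exciting) term that is the paper's main addition.

      import numpy as np

      rng = np.random.default_rng(6)
      alpha, n_final, m = 1.25, 5_000, 3   # PA exponent, papers, references per paper

      k = np.ones(n_final)                 # citations + 1, so new papers can be cited
      for new in range(m, n_final):
          w = k[:new] ** alpha             # superlinear preferential attachment
          cited = rng.choice(new, size=m, replace=False, p=w / w.sum())
          k[cited] += 1

      counts = k - 1
      print(counts.max(), counts.mean())   # heavy right tail versus modest mean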

  2. Exponential Synchronization of Coupled Stochastic Memristor-Based Neural Networks With Time-Varying Probabilistic Delay Coupling and Impulsive Delay.

    PubMed

    Bao, Haibo; Park, Ju H; Cao, Jinde

    2016-01-01

    This paper deals with the exponential synchronization of coupled stochastic memristor-based neural networks with probabilistic time-varying delay coupling and time-varying impulsive delay. There is one probabilistic transmittal delay in the delayed coupling, described by a Bernoulli stochastic variable satisfying a conditional probability distribution. The disturbance is described by a Wiener process. Based on Lyapunov functions, the Halanay inequality, and linear matrix inequalities, sufficient conditions that depend on the probability distribution of the delay coupling and the impulsive delay are obtained. Numerical simulations are used to show the effectiveness of the theoretical results. PMID:26485723

  3. Stochastic bias correction of dynamically downscaled precipitation fields for Germany through copula-based integration of gridded observation data

    NASA Astrophysics Data System (ADS)

    Mao, G.; Vogl, S.; Laux, P.; Wagner, S.; Kunstmann, H.

    2014-07-01

    Dynamically downscaled precipitation fields from regional climate models (RCMs) often cannot be used directly for local climate change impact studies. Due to their inherent biases, i.e. systematic over- or underestimations compared to observations, several correction approaches have been developed. Most bias correction procedures, such as the quantile mapping approach, employ a transfer function based on the statistical differences between RCM output and observations. Apart from such transfer-function-based statistical correction algorithms, a stochastic bias correction technique based on the concept of Copula theory is developed here and applied to correct precipitation fields from the Weather Research and Forecasting (WRF) model. As dynamically downscaled precipitation fields we used high-resolution (7 km, daily) WRF simulations for Germany driven by ERA40 reanalysis data for 1971-2000. The REGNIE dataset from the German Weather Service is used as gridded observation data (1 km, daily) and rescaled to 7 km for this application. The 30-year time series are split into calibration (1971-1985) and validation (1986-2000) periods of equal length. Based on the dependence structure estimated between WRF and REGNIE data and the respective marginal distributions identified in the calibration period, analyzed separately for the different seasons, conditional distribution functions are derived for each time step in the validation period. This finally provides additional information about the range of the statistically possible bias-corrected values. The results show that the Copula-based approach efficiently corrects most of the errors in WRF-derived precipitation for all seasons. It is also found that the Copula-based correction performs better for wet-bias correction than for dry-bias correction. In autumn and winter, the correction introduced a small dry bias in the northwest of Germany. The average relative bias of daily mean precipitation from WRF for the
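
    For contrast with the Copula technique, the transfer-function baseline mentioned above (empirical quantile mapping) can be sketched in a few lines; the gamma-distributed toy series below stand in for the WRF and REGNIE data.

      import numpy as np

      def quantile_map(model_cal, obs_cal, model_new):
          # map each new model value through the model CDF of the calibration
          # period onto the observed quantile function
          model_sorted, obs_sorted = np.sort(model_cal), np.sort(obs_cal)
          p = np.searchsorted(model_sorted, model_new, side="right")
          p = np.clip(p / len(model_sorted), 0.0, 1.0)
          return np.quantile(obs_sorted, p)

      rng = np.random.default_rng(7)
      obs = rng.gamma(0.9, 4.0, 5000)   # pseudo-observations (one grid cell)
      wrf = rng.gamma(1.2, 3.0, 5000)   # biased model precipitation, same cell
      corrected = quantile_map(wrf[:2500], obs[:2500], wrf[2500:])

    Unlike this deterministic mapping, the Copula-based correction yields a conditional distribution for each corrected value rather than a single number, which is the extra information the abstract refers to.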

  4. A Bayesian Approach for Evaluation of Determinants of Health System Efficiency Using Stochastic Frontier Analysis and Beta Regression

    PubMed Central

    2016-01-01

    In today's world, public expenditures on health are one of the most important issues for governments. These increased expenditures are putting pressure on public budgets. Therefore, health policy makers have focused on the performance of their health systems, and many countries have introduced reforms to improve it. This study investigates the most important determinants of healthcare efficiency for OECD countries using a second-stage approach for Bayesian Stochastic Frontier Analysis (BSFA). There are two steps in this study. First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. At the second stage, we examine the multiple relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression. PMID:27118987

  5. A Bayesian Approach for Evaluation of Determinants of Health System Efficiency Using Stochastic Frontier Analysis and Beta Regression.

    PubMed

    Şenel, Talat; Cengiz, Mehmet Ali

    2016-01-01

    In today's world, public expenditures on health are one of the most important issues for governments. These increased expenditures are putting pressure on public budgets. Therefore, health policy makers have focused on the performance of their health systems, and many countries have introduced reforms to improve it. This study investigates the most important determinants of healthcare efficiency for OECD countries using a second-stage approach for Bayesian Stochastic Frontier Analysis (BSFA). There are two steps in this study. First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. At the second stage, we examine the multiple relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression. PMID:27118987

  6. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that better correlation between simulation and test is achieved. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
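
    The surrogate idea is easy to demonstrate: fit a polynomial response surface to a handful of expensive model runs, then push cheap Monte Carlo samples through the surrogate. The two-parameter model below is an invented stand-in for a finite-element analysis, and the quadratic basis is smaller than the paper's incomplete fourth-order polynomial.

      import numpy as np

      rng = np.random.default_rng(8)

      def expensive_model(k1, k2):
          # stand-in for a costly simulation (e.g. an eigenfrequency solve)
          return np.sqrt(k1 + 0.5 * k2) + 0.05 * k1 * k2

      # design of experiments: a small grid of training runs
      k1g, k2g = np.meshgrid(np.linspace(0.5, 1.5, 5), np.linspace(0.5, 1.5, 5))
      X = np.column_stack([k1g.ravel(), k2g.ravel()])
      y = expensive_model(X[:, 0], X[:, 1])

      def basis(X):
          k1, k2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(k1), k1, k2, k1*k2, k1**2, k2**2])

      coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # least-squares RSM fit

      # rapid Monte Carlo sampling through the cheap surrogate
      samples = rng.normal([1.0, 1.0], [0.10, 0.15], size=(100_000, 2))
      pred = basis(samples) @ coef
      print(pred.mean(), pred.std())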

  7. A Bloch decomposition-based stochastic Galerkin method for quantum dynamics with a random external potential

    NASA Astrophysics Data System (ADS)

    Wu, Zhizhang; Huang, Zhongyi

    2016-07-01

    In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.

  8. Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.

    PubMed

    Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto

    2008-08-01

    Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is considered as the input for the level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm). PMID:18666164

  9. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.

    PubMed

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-01-01

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS), respectively. An extensive simulation study based on a modified 9 bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in UCS and LCS. PMID:27322281

  10. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid

    PubMed Central

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-01-01

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS), respectively. An extensive simulation study based on a modified 9 bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in UCS and LCS. PMID:27322281

  11. Modeling stochasticity in biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Constantino, P. H.; Vlysidis, M.; Smadbeck, P.; Kaznessis, Y. N.

    2016-03-01

    Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts.
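
    When moment equations are unavailable or hard to close, CME trajectories can always be sampled directly with Gillespie's stochastic simulation algorithm. The birth-death sketch below is a standard toy whose stationary CME solution is Poisson, so the sampled mean and variance should both come out near k_prod/k_deg; it illustrates the general approach rather than any specific system from the review.

      import numpy as np

      rng = np.random.default_rng(9)
      k_prod, k_deg = 10.0, 1.0            # production and degradation rates

      def gillespie(t_end, x0=0):
          t, x = 0.0, x0
          while t < t_end:
              a = np.array([k_prod, k_deg * x])   # reaction propensities
              a0 = a.sum()
              t += rng.exponential(1.0 / a0)      # waiting time to next reaction
              x += 1 if rng.random() < a[0] / a0 else -1
              # x never goes below zero: at x = 0 the degradation propensity is 0
          return x

      finals = [gillespie(50.0) for _ in range(200)]
      print(np.mean(finals), np.var(finals))      # both ~ k_prod/k_deg = 10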

  12. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems are studied and controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.

  13. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO (the S-PSO and the binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP. PMID:26890944

  14. Stochastic approach to diffusion inside the chaotic layer of a resonance.

    PubMed

    Mestre, Martín F; Bazzani, Armando; Cincotta, Pablo M; Giordano, Claudia M

    2014-01-01

    We model chaotic diffusion in a symplectic four-dimensional (4D) map by using the result of a theorem that was developed for stochastically perturbed integrable Hamiltonian systems. We explicitly consider a map defined by a free rotator (FR) coupled to a standard map (SM). We focus on the diffusion process in the action I of the FR, obtaining a seminumerical method to compute the diffusion coefficient. We study two cases corresponding to a thick and a thin chaotic layer in the SM phase space and we discuss a related conjecture stated in the past. In the first case, the numerically computed probability density function for the action I is well interpolated by the solution of a Fokker-Planck (FP) equation, whereas it presents a nonconstant time shift with respect to the concomitant FP solution in the second case suggesting the presence of an anomalous diffusion time scale. The explicit calculation of a diffusion coefficient for a 4D symplectic map can be useful to understand the slow diffusion observed in celestial mechanics and accelerator physics. PMID:24580301

  15. Combined reflectance spectroscopy and stochastic modeling approach for noninvasive hemoglobin determination via palpebral conjunctiva

    PubMed Central

    Kim, Oleg; McMurdy, John; Jay, Gregory; Lines, Collin; Crawford, Gregory; Alber, Mark

    2014-01-01

    A combination of a stochastic photon propagation model in a multilayered human eyelid tissue and reflectance spectroscopy was used to study palpebral conjunctiva spectral reflectance for hemoglobin (Hgb) determination. The developed model is the first biologically relevant model of eyelid tissue, which was shown to provide a very good approximation to the measured spectra. Tissue optical parameters were defined using previous histological and microscopy studies of a human eyelid. After calibration of the model parameters, the responses of the reflectance spectra to Hgb level and blood oxygenation variations were calculated. The simulated reflectance spectra in adults with normal and low Hgb levels agreed well with experimental data for Hgb concentrations from 8.1 to 16.7 g/dL. The extracted Hgb levels were compared with in vitro Hgb measurements. The root mean square error of cross-validation was 1.64 g/dL. The method was shown to provide 86% sensitivity for clinically diagnosed anemia cases. A combination of the model with spectroscopy measurements provides a new tool for the noninvasive study of the human conjunctiva to aid in diagnosing blood disorders such as anemia. PMID:24744871

  16. PARTICLE ACCELERATION AT THE HELIOSPHERIC TERMINATION SHOCK WITH A STOCHASTIC SHOCK OBLIQUITY APPROACH

    SciTech Connect

    Arthur, Aaron D.; Le Roux, Jakobus A.

    2013-08-01

    Observations by the plasma and magnetic field instruments on board the Voyager 2 spacecraft suggest that the termination shock is weak, with a compression ratio of ≈2. However, this is contrary to the observations of accelerated particle spectra at the termination shock, where standard diffusive shock acceleration theory predicts a compression ratio closer to ≈2.9. Using our focused transport model, we investigate pickup proton acceleration at a stationary spherical termination shock with a moderately strong compression ratio of 2.8 that includes both the subshock and precursor. We show that for the particle energies observed by the Voyager 2 Low Energy Charged Particle (LECP) instrument, pickup protons will have effective length scales of diffusion that are larger than the combined subshock and precursor termination shock structure observed. As a result, the particles will experience a total effective termination shock compression ratio that is larger than the values inferred by the plasma and magnetic field instruments for the subshock and similar to the value predicted by diffusive shock acceleration theory. Furthermore, using a stochastically varying magnetic field angle, we are able to qualitatively reproduce the multiple power-law structure observed for the LECP spectra downstream of the termination shock.

  17. A stochastic approach to uncertainty in the equations of MHD kinematics

    NASA Astrophysics Data System (ADS)

    Phillips, Edward G.; Elman, Howard C.

    2015-03-01

    The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.

  18. A stochastic-dynamical approach to the study of the natural variability of the climate

    NASA Technical Reports Server (NTRS)

    Straus, D. M.; Halem, M.

    1981-01-01

    A method, suggested by Leith (1975), which employed stochastic-dynamic forecasts obtained from a general circulation model in such a way as to satisfy the definition of climatic noise, was used to validate assumptions accounting for the effects of external influences in estimating the climatic noise. Two assumptions were investigated: (1) that the weather fluctuations can be represented as a Markov process, and (2) that changing external conditions do not influence the atmosphere's statistical properties on short time scales. The general circulation model's simulation of the daily weather fluctuations was generated by performing integrations with prescribed climatological boundary conditions for random initial atmospheric states, with the resulting dynamical forecasts providing an ensemble of simulated data for the autoregressive modeling of weather fluctuations. To estimate the climatic noise from the observational data (consisting of hourly values of sea level pressure and surface temperature at 54 U.S. stations for the month of January for the years 1949-1975), the short time-scale assumption is used. The simulated and observed data were found not to be consistent with either white noise or a Markov process of weather fluctuations. Good agreement was found between the results of the hypothesis testing of the simulated and the observed surface temperatures, and only partial support was found for the short time-scale assumption, i.e., for sea level pressure.

  19. Advection-condensation of water vapor with coherent stirring: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Tsang, Yue-Kin; Vanneste, Jacques; Vallis, Geoffrey

    2015-11-01

    The dynamics of atmospheric water is an essential ingredient of weather and climate. Water vapor, in particular, is an important greenhouse gas whose distribution has a strong impact on climate. To gain insight into the factors controlling the distribution of atmospheric moisture, we study an advection-condensation model in which water vapor is passively advected by a prescribed velocity and condensation acts as a sink that maintains the specific humidity below a prescribed, spatially dependent saturation value. The velocity consists of two parts: a single vortex representing large-scale coherent flow (e.g. the Hadley cell) and a white noise component mimicking small-scale turbulence. A steady state is achieved in the presence of a moisture source at a boundary. We formulate this model as a set of stochastic differential equations. In the fast advection limit, an analytical expression for the water vapor distribution is obtained by matched asymptotics. This allows us to make various predictions, including the dependence of total precipitation on the vortex strength. These analytical results are verified by Monte Carlo simulations. This work is supported by the UK EPSRC Grant EP/I028072/1 and the Feasibility Fund from the UK EPSRC Network ReCoVER.
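
    A stripped-down Monte Carlo version of such a model is easy to simulate. The sketch below keeps only the white-noise stirring (the coherent vortex is omitted), uses an assumed exponential saturation profile, re-saturates particles at the source boundary, and applies condensation as q -> min(q, qs(x)); all parameters are illustrative.

        import random, math

        # Particles carry specific humidity q and are stirred by white noise.
        random.seed(2)
        kappa, dt, L = 0.05, 1e-3, 1.0
        qs = lambda x: math.exp(-3.0 * x)   # assumed saturation profile

        def run_particle(steps=5000):
            x, q = 0.0, qs(0.0)             # moisture source at the boundary x = 0
            for _ in range(steps):
                x += math.sqrt(2 * kappa * dt) * random.gauss(0, 1)
                if x < 0.0:                 # re-saturate at the source boundary
                    x, q = -x, qs(0.0)
                elif x > L:                 # reflect at the top
                    x = 2 * L - x
                q = min(q, qs(x))           # condensation sink
            return q

        samples = [run_particle() for _ in range(300)]
        print("steady-state mean specific humidity:",
              round(sum(samples) / len(samples), 4))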

  20. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses have recently been attracting much attention thanks to their enhanced ability to save energy by making full use of renewable energy and by achieving power grid stability despite an increased power draw from installed PV systems. Yet running a smart house's power system, with its multiple power sources and power storage units, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek the optimal power schedule with the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.
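
    The two-stage structure can be made concrete with a deliberately tiny stand-in: commit a fuel cell on/off schedule before demand is known (first stage), then buy any shortfall from the grid at a recourse price in each scenario (second stage). Exhaustive enumeration replaces a MIP solver here, and all numbers are invented.

        from itertools import product

        # Toy two-stage stochastic scheduling with recourse.
        FC_OUT, FC_COST, GRID_PRICE = 1.0, 0.6, 1.0   # kW, cost/period, cost/kWh
        scenarios = [                                 # (probability, demand per period)
            (0.3, [0.5, 1.2, 1.5, 0.8]),
            (0.5, [0.8, 1.5, 1.8, 1.0]),
            (0.2, [1.2, 2.0, 2.2, 1.5]),
        ]

        def expected_cost(schedule):
            cost = FC_COST * sum(schedule)            # first-stage commitment cost
            for prob, demand in scenarios:            # expected recourse cost
                shortfall = sum(max(d - FC_OUT * on, 0.0)
                                for d, on in zip(demand, schedule))
                cost += prob * GRID_PRICE * shortfall
            return cost

        best = min(product([0, 1], repeat=4), key=expected_cost)
        print("optimal schedule:", best,
              "expected cost:", round(expected_cost(best), 3))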

  1. Quantifying rock's structural fabric: a multi-scale hierarchical approach to natural fracture systems and stochastic modelling

    NASA Astrophysics Data System (ADS)

    Hardebol, Nico; Bertotti, Giovanni; Weltje, Gert Jan

    2014-05-01

    We propose the description of fracture-fault systems in terms of a multi-scale hierarchical network. In its most generic form, such an arrangement is referred to as a structural fabric and is applicable across the length-scale spectrum. The statistical characterisation combines the fracture length and orientation distributions and intersection-termination relationships. The aim is a parameterised description of the network that serves as input to stochastic network simulations, which should reproduce the essence of natural fracture networks and encompass their variability. The quality of the stochastically generated fabric is determined by comparison with the deterministic descriptions on which the model parameterisation is based. Both the deterministic and the stochastically derived fracture network descriptions can serve as input to fluid flow or mechanical simulations that account explicitly for the discrete features, and the responses of the system can be compared. The deterministic description of our current study, in the framework of tight gas reservoirs, is obtained from coastal pavements that expose a horizontal slice through a fracture-fault network system in fine-grained sediments in Yorkshire, UK. Fracture hierarchies have often been described at one observation scale as a two-tier hierarchy in terms of 1st order systematic joints and 2nd order cross-joints. New in our description is the bridging between km-sized faults with notable displacement down to sub-meter scale shear and opening mode fractures. This study utilized a drone to obtain cm-resolution imagery of pavements from ~30 m altitude, and large coverage up to 1 km by flying at ~80 m. This unique set of images forms the basis for the digitizing of the fracture-fault pattern and helped determine the nested nature of the network as well as intersection and abutment relationships. Fracture sets were defined from the highest to lowest hierarchical order and probability density functions were defined for the length

  2. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
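
    For reference, the classical SCS-CN relation that this theory generalizes is the standard one below (depths in mm, with the conventional initial abstraction ratio of 0.2).

        def scs_cn_runoff(P, CN, lam=0.2):
            """Classical SCS-CN event runoff (depths in mm).

            S is the potential maximum retention and Ia = lam*S the initial
            abstraction; runoff starts only once rainfall exceeds Ia.
            """
            S = 25400.0 / CN - 254.0
            Ia = lam * S
            if P <= Ia:
                return 0.0
            return (P - Ia) ** 2 / (P - Ia + S)

        # 50 mm storm on a watershed with curve number 75:
        print(round(scs_cn_runoff(50.0, 75), 1), "mm of runoff")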

  3. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
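
    The flavor of such sampling-based bounds and stopping rules can be seen in a newsvendor-style stand-in: the expected cost of a candidate solution is estimated by Monte Carlo together with a confidence interval, and a stopping rule would enlarge the sample until the interval is tight enough. The demand distribution, prices, and candidate order are assumptions for illustration.

        import random, statistics

        random.seed(3)
        PRICE, COST = 5.0, 3.0

        def scenario_cost(order, demand):
            # Purchase cost minus sales revenue; negative values are profit.
            return COST * order - PRICE * min(order, demand)

        def ci_for(order, n):
            # Monte Carlo estimate of expected cost with a 95% half-width.
            costs = [scenario_cost(order, random.expovariate(1 / 100.0))
                     for _ in range(n)]
            mean = statistics.mean(costs)
            half = 1.96 * statistics.stdev(costs) / n ** 0.5
            return mean, half

        # order = 51 is near-optimal for exponential demand with mean 100.
        for n in (100, 1000, 10000):
            mean, half = ci_for(order=51.0, n=n)
            print(f"n={n:6d}  expected cost ~ {mean:7.2f} +/- {half:5.2f}")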

  4. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
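
    Phase (1) can be sketched as a backward Dynamic Programming pass that tabulates, for every discretized state a part may arrive in at each process, the control minimizing cost-to-go; phase (2) is then a table lookup at runtime. The transformation function, costs and grids below are invented stand-ins for the paper's process models.

        # Backward DP over a chain of N simplified processes.
        STATES = [round(0.1 * k, 1) for k in range(11)]   # discretized part state
        CONTROLS = [0.0, 0.5, 1.0]
        N = 3                                             # processes in the chain
        TARGET = 0.8                                      # final quality requirement

        def transform(s, u):                              # toy process model
            return min(max(s + 0.4 * u - 0.1, 0.0), 1.0)

        def nearest(s):                                   # snap to the state grid
            return min(STATES, key=lambda g: abs(g - s))

        V = {s: abs(s - TARGET) for s in STATES}          # terminal quality penalty
        policy = []
        for _ in range(N):                                # backward over processes
            Vn, pol = {}, {}
            for s in STATES:
                costs = {u: 0.2 * u + V[nearest(transform(s, u))]
                         for u in CONTROLS}
                pol[s] = min(costs, key=costs.get)
                Vn[s] = costs[pol[s]]
            policy.insert(0, pol)
            V = Vn

        # Phase (2): at runtime, look up the control for the detected state.
        s = 0.2
        for k in range(N):
            u = policy[k][nearest(s)]
            print(f"process {k}: state {s:.2f} -> control {u}")
            s = transform(s, u)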

  5. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2015-12-01

    The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values, combined in a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  6. Microscale characterisation of stochastically reconstructed carbon fiber-based Gas Diffusion Layers; effects of anisotropy and resin content

    NASA Astrophysics Data System (ADS)

    Yiotis, Andreas G.; Kainourgiakis, Michael E.; Charalambopoulou, Georgia C.; Stubos, Athanassios K.

    2016-07-01

    A novel process-based methodology is proposed for the stochastic reconstruction and accurate characterisation of carbon fiber-based matrices, which are commonly used as Gas Diffusion Layers in Proton Exchange Membrane Fuel Cells. The modeling approach efficiently complements standard methods used for the description of the anisotropic deposition of carbon fibers with a rigorous model simulating the spatial distribution of the graphitized resin that is typically used to enhance the structural properties and thermal/electrical conductivities of the composite Gas Diffusion Layer materials. The model uses as input typical pore- and continuum-scale properties (average porosity, fiber diameter, resin content and anisotropy) of such composites, which are obtained from X-ray computed microtomography measurements on commercially available carbon papers. This information is then used for the digital reconstruction of realistic composite fibrous matrices. By solving the corresponding conservation equations at the microscale in the obtained digital domains, their effective transport properties, such as Darcy permeabilities, effective diffusivities, thermal/electrical conductivities and void tortuosity, are determined, focusing primarily on the effects of medium anisotropy and resin content. The calculated properties match very well with those of Toray carbon papers for reasonable values of the model parameters that control the anisotropy of the fibrous skeleton and the material's resin content.

  7. An Individual-Based Diploid Model Predicts Limited Conditions Under Which Stochastic Gene Expression Becomes Advantageous

    PubMed Central

    Matsumoto, Tomotaka; Mineta, Katsuhiko; Osada, Naoki; Araki, Hitoshi

    2015-01-01

    Recent studies suggest the existence of stochasticity in gene expression (SGE) in many organisms, and its non-negligible effect on their phenotype and fitness. To date, however, how SGE affects the key parameters of population genetics is not well understood. SGE can increase the phenotypic variation and act as a load for individuals, if they are at the adaptive optimum in a stable environment. On the other hand, part of the phenotypic variation caused by SGE might become advantageous if individuals at the adaptive optimum become genetically less adaptive, for example due to an environmental change. Furthermore, SGE of unimportant genes might have little or no fitness consequences. Thus, SGE can be advantageous, disadvantageous, or selectively neutral depending on its context. In addition, there might be a genetic basis that regulates the magnitude of SGE, which is often referred to as a “modifier gene,” but little is known about the conditions under which such an SGE-modifier gene evolves. In the present study, we conducted individual-based computer simulations to examine these conditions in a diploid model. In the simulations, we considered a single locus that determines organismal fitness, for simplicity, and assumed that SGE on the locus creates fitness variation in a stochastic manner. We also considered another locus that modifies the magnitude of SGE. Our results suggested that SGE was always deleterious in stable environments and increased the fixation probability of deleterious mutations in this model. Even under frequently changing environmental conditions, only very strong natural selection made SGE adaptive. These results suggest that the evolution of SGE-modifier genes requires a strict balance among the strength of natural selection, the magnitude of SGE, and the frequency of environmental changes. However, the degree of dominance affected the condition under which SGE becomes advantageous, indicating a better opportunity for the evolution of SGE in different genetic

  8. A stochastic chemical dynamic approach to correlate autoimmunity and optimal vitamin-D range.

    PubMed

    Roy, Susmita; Shrinivas, Krishna; Bagchi, Biman

    2014-01-01

    Motivated by several recent experimental observations that vitamin-D could interact with antigen-presenting cells (APCs) and T-lymphocyte cells (T-cells) to promote and to regulate different stages of the immune response, we developed a coarse-grained but general kinetic model in an attempt to capture the role of vitamin-D in immunomodulatory responses. Our kinetic model, developed using the ideas of chemical network theory, leads to a system of nine coupled equations that we solve both by direct and by stochastic (Gillespie) methods. Both analyses consistently provide detailed information on the dependence of the immune response on the variation of critical rate parameters. We find that although vitamin-D plays a negligible role in the initial immune response, it exerts a profound influence in the long term, especially in helping the system to achieve a new, stable steady state. The study explores the role of vitamin-D in preserving an observed bistability in the phase diagram (spanned by system parameters) of immune regulation, thus allowing the response to tolerate a wide range of pathogenic stimulation, which could help in resisting autoimmune diseases. We also study how vitamin-D affects the time-dependent population of dendritic cells that connect the innate and adaptive immune responses. Variations in the dose-dependent response of anti-inflammatory and pro-inflammatory T-cell populations to vitamin-D correlate well with recent experimental results. Our kinetic model allows for an estimation of the range of the optimum level of vitamin-D required for smooth functioning of the immune system and for control of both hyper-regulation and inflammation. Most importantly, the present study reveals that an overdose or toxic level of vitamin-D or any steroid analogue could give rise to too large a tolerant response, leading to an inefficacy in adaptive immune function. PMID:24971516
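
    The direct (Gillespie) method referred to above can be sketched in a few lines for a toy two-species motif (a signal A that catalyses production of a regulator R); the rates and stoichiometry below are illustrative assumptions, far simpler than the paper's nine-equation network.

        import random

        # Minimal Gillespie stochastic simulation algorithm (SSA).
        random.seed(4)
        A, R, t = 0, 0, 0.0
        k_prod, k_decA, k_act, k_decR = 2.0, 0.1, 0.05, 0.2

        while t < 50.0:
            rates = [k_prod,            # * -> A
                     k_decA * A,        # A -> *
                     k_act * A,         # A -> A + R
                     k_decR * R]        # R -> *
            total = sum(rates)
            t += random.expovariate(total)       # time to next reaction
            r, pick = random.uniform(0, total), 0
            while r > rates[pick]:               # choose which reaction fires
                r -= rates[pick]
                pick += 1
            A += (1, -1, 0, 0)[pick]
            R += (0, 0, 1, -1)[pick]
        print(f"t={t:.1f}: A={A}, R={R}")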

  9. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  10. A Markovian event-based framework for stochastic spiking neural networks.

    PubMed

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, the information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input they receive, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks and, in particular, the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical neural network models as the linear integrate-and-fire neuron with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks. PMID:21499739

  11. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of the SUPG over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  12. Ocean acoustic signal processing: A model-based approach

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1992-12-01

    A model-based approach is proposed to solve the ocean acoustic signal processing problem that is based on a state-space representation of the normal-mode propagation model. It is shown that this representation can be utilized to spatially propagate both modal (depth) and range functions given the basic parameters (wave numbers, etc.) developed from the solution of the associated boundary value problem. This model is then generalized to the stochastic case where an approximate Gauss--Markov model evolves. The Gauss--Markov representation, in principle, allows the inclusion of stochastic phenomena such as noise and modeling errors in a consistent manner. Based on this framework, investigations are made of model-based solutions to the signal enhancement, detection and related parameter estimation problems. In particular, a modal/pressure field processor is designed that allows in situ recursive estimation of the sound velocity profile. Finally, it is shown that the associated residual or so-called innovation sequence that ensues from the recursive nature of this formulation can be employed to monitor the model's fit to the data and also form the basis of a sequential detector.
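
    The innovation-monitoring idea carries over directly from the scalar Gauss-Markov case, sketched below with a standard Kalman filter: when the model fits, the normalized innovations should be zero-mean with unit variance. This is a generic illustration with assumed parameters, not the modal/pressure-field processor itself.

        import random, statistics

        random.seed(5)
        a, q, r = 0.95, 0.1, 0.5      # state transition, process/measurement noise
        x, xhat, P = 0.0, 0.0, 1.0
        innovations = []
        for _ in range(500):
            x = a * x + random.gauss(0, q ** 0.5)      # true state
            y = x + random.gauss(0, r ** 0.5)          # measurement
            xhat, P = a * xhat, a * a * P + q          # predict
            S = P + r                                  # innovation variance
            nu = y - xhat                              # innovation (residual)
            K = P / S                                  # Kalman gain
            xhat, P = xhat + K * nu, (1 - K) * P       # update
            innovations.append(nu / S ** 0.5)          # normalized residual
        print("innovation mean :", round(statistics.mean(innovations), 3))
        print("innovation stdev:", round(statistics.stdev(innovations), 3),
              "(~1 if the model fits)")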

  13. Combined Deterministic and Stochastic Approach to Determine Spatial Distribution of Drought Frequency and Duration in the Great Hungarian Plain

    NASA Astrophysics Data System (ADS)

    Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.

    2009-04-01

    Drought is one of the major weather-driven natural hazards, and it has more harmful impacts on the environment and on agricultural and hydrological systems than the other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", an area which has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently, and to provide guidance on how drought planning can be accomplished. With regard to this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. According to these aims, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it has been agreed that the most appropriate model structure for our purposes is a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm for estimating multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, vertical and horizontal distribution of soil moisture, etc.), taking the groundwater table as the lower and the hydrometeorological fields as the upper boundary condition; - and the MCMC-based stochastic component generates time series of daily weather

  14. Identifying influential nodes in dynamic social networks based on degree-corrected stochastic block model

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun

    2016-05-01

    Many real-world data can be represented as dynamic networks, which are evolutionary networks with timestamps. Analyzing dynamic attributes is important to understanding the structures and functions of these complex networks. In particular, studying influential nodes is significant for exploring and analyzing networks. In this paper, we propose a method to identify influential nodes in dynamic social networks based on identifying such nodes in the temporal communities which make up the dynamic networks. Firstly, we detect the community structures of all the snapshot networks based on the degree-corrected stochastic block model (DCBM). After getting the community structures, we capture the evolution of every community in the dynamic network by the extended Jaccard’s coefficient, which is defined to map communities among all the snapshot networks. Then we obtain the initial influential nodes of the dynamic network and aggregate them based on three widely used centrality metrics. Experiments on real-world and synthetic datasets demonstrate that our method can identify influential nodes in dynamic networks accurately; at the same time, we also observe some interesting phenomena and conclusions, some of which have been validated in complex network and social science research.
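
    The community-mapping step can be illustrated with the plain Jaccard coefficient (the paper's extended variant is not reproduced here): each community at time t is matched to the snapshot-t+1 community with maximal node-set overlap. The snapshots below are invented.

        # Map communities across two snapshots by Jaccard overlap of node sets.
        def jaccard(s1, s2):
            return len(s1 & s2) / len(s1 | s2)

        snap_t  = {"c1": {1, 2, 3, 4}, "c2": {5, 6, 7}}
        snap_t1 = {"d1": {2, 3, 4, 8}, "d2": {5, 6, 9, 10}}

        for name, comm in snap_t.items():
            match = max(snap_t1, key=lambda n: jaccard(comm, snap_t1[n]))
            print(f"{name} -> {match}  (J = {jaccard(comm, snap_t1[match]):.2f})")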

  15. Definition of efficient scarcity-based water pricing policies through stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.

    2015-01-01

    Finding ways to improve the efficiency in water usage is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares river basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during the scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
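
    The post-processing from marginal values to a step pricing policy can be sketched as follows; the storage bands, the series of states and marginal water values, and the banding rule are all invented for illustration and are not the Mijares data or the authors' exact procedure.

        import statistics

        # State and marginal-value series, as would come from a stochastic
        # hydro-economic model (invented numbers).
        storage  = [0.9, 0.7, 0.5, 0.35, 0.2, 0.15, 0.6, 0.8, 0.3, 0.1]
        marginal = [0.0, 0.02, 0.05, 0.12, 0.30, 0.45, 0.04, 0.01, 0.18, 0.6]

        # Step policy: one price per storage band, the band-average marginal value.
        bands = [(0.0, 0.25), (0.25, 0.5), (0.5, 1.01)]
        policy = {}
        for lo, hi in bands:
            vals = [m for s, m in zip(storage, marginal) if lo <= s < hi]
            policy[(lo, hi)] = statistics.mean(vals)

        for (lo, hi), price in sorted(policy.items()):
            print(f"storage in [{lo:.2f}, {hi:.2f}) -> price {price:.3f} EUR/m3")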

  16. Stochastic Models of Human Growth.

    ERIC Educational Resources Information Center

    Goodrich, Robert L.

    Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…

  17. A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise

    SciTech Connect

    Hong, Jialin; Zhang, Liying

    2014-07-01

    In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of the variational principle, we find a way to obtain the stochastic multi-symplectic structure of the three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.

  18. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In a previous work we have introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
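
    A minimal stochastic L-System of this kind is easy to write down. In the sketch below, the alphabet is a pair of arrival rates, each symbol stochastically rewrites into two symbols at the next (finer) time scale, and each rate carries its own discrete packet-size distribution; all rules, rates and sizes are invented for illustration rather than fitted to the traces above.

        import random

        random.seed(6)
        RULES = {                   # symbol -> list of (probability, production)
            "L": [(0.7, ["L", "L"]), (0.3, ["L", "H"])],
            "H": [(0.5, ["H", "H"]), (0.5, ["H", "L"])],
        }
        RATE = {"L": 100.0, "H": 1000.0}            # packets/s per symbol
        SIZES = {"L": [64, 576], "H": [576, 1500]}  # candidate packet sizes (bytes)

        def rewrite(sym):
            # Pick a production rule for sym according to its probability.
            r = random.random()
            for p, prod in RULES[sym]:
                if r < p:
                    return prod
                r -= p
            return RULES[sym][-1][1]

        string = ["L"]                              # axiom
        for _ in range(8):                          # 8 cascade levels
            string = [s for sym in string for s in rewrite(sym)]

        rates = [RATE[s] for s in string]
        sizes = [random.choice(SIZES[s]) for s in string]
        print(len(string), "intervals; mean rate",
              sum(rates) / len(rates), "pkt/s")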

  19. Stochastic contraction-based observer and controller design algorithm with application to a flight vehicle

    NASA Astrophysics Data System (ADS)

    Mohamed, Majeed; Narayan Kar, Indra

    2015-11-01

    This paper focuses on a stochastic version of contraction theory to construct an observer-controller structure for a flight dynamic system with noisy velocity measurements. A nonlinear stochastic observer is designed to estimate the pitch rate, the pitch angle, and the velocity of an aircraft example model using stochastic contraction theory. The estimated states are used to compute feedback control for solving a tracking problem. The structure and gain selection of the observer are carried out using Itô's stochastic differential equations and the contraction theory. The contraction property of the integrated observer-controller structure is derived to ensure the exponential convergence of the trajectories of the closed-loop nonlinear system. The upper bound of the state estimation error is explicitly derived, and the efficacy of the proposed observer-controller structure is shown through numerical simulations.

  1. Detailed characterization of a fractured limestone formation using stochastic inverse approaches

    SciTech Connect

    Gupta, A.D.; Vasco, D.W.; Long, J.C.S.

    1994-07-01

    We discuss here two inverse approaches to the construction of fracture flow models and their application in characterizing a fractured limestone formation. The first approach creates "equivalent discontinuum" models that conceptualize the fracture system as a partially filled lattice of conductors which are locally connected or disconnected to reproduce the observed hydrologic behavior. An alternative approach, "variable aperture lattice" models, represents the fracture system as a fully filled network composed of conductors of varying apertures. The fracture apertures are sampled from a specified distribution, usually log-normal, consistent with field data. The spatial arrangement of apertures is altered through inverse modeling so as to fit the available hydrologic data. Unlike traditional fracture network approaches, which rely on fracture geometry to reproduce flow and transport behavior, the inverse methods directly incorporate hydrologic data in deriving the fracture networks and thus naturally emphasize the underlying features that impact fluid flow and transport. However, hydrologic models derived by inversion are in general non-unique. We have addressed such non-uniqueness by examining an ensemble of models that satisfy the observational data within acceptable limits. We then determine properties which are shared by the ensemble of models, as well as their associated uncertainties, to create a conceptual model of the fracture system.

  2. A general stochastic approach to unavailability analysis of standby safety systems

    SciTech Connect

    Van Der Weide, H.; Pandey, M. D.

    2013-07-01

    The paper presents a general analytical framework to analyze unavailability caused by latent failures in standby safety systems used in nuclear plants. The proposed approach is general in the sense that it encompasses a variety of inspection and maintenance policies and relaxes restrictive assumptions regarding the distributions of time to failure (or aging) and duration of repair. A key result of the paper is a general integral equation for point unavailability, which can be tailored to any specific maintenance policy. (authors)
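
    As one concrete specialization of such a framework, the sketch below estimates point unavailability by Monte Carlo for the simplest policy: exponential latent failures revealed (and instantly repaired) only at periodic inspections. The rate, interval and instant-repair assumption are illustrative, and the familiar lam*T/2 approximation is printed for comparison.

        import random, math

        random.seed(7)
        lam, T = 1e-4, 720.0        # failure rate (1/h), inspection interval (h)

        def unavailable_at(t, n=20000):
            # Fraction of histories with an undetected failure at time t.
            tau = t % T             # time since the last inspection
            down = sum(random.expovariate(lam) < tau for _ in range(n))
            return down / n

        print("U(t) just before an inspection:", unavailable_at(719.9))
        print("analytic 1 - exp(-lam*tau)    :", 1 - math.exp(-lam * 719.9))
        print("interval-average approximation ~", lam * T / 2, "(lam*T/2)")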

  3. Link removal for the control of stochastically evolving epidemics over networks: a comparison of approaches.

    PubMed

    Enns, Eva A; Brandeau, Margaret L

    2015-04-21

    For many communicable diseases, knowledge of the underlying contact network through which the disease spreads is essential to determining appropriate control measures. When behavior change is the primary intervention for disease prevention, it is important to understand how to best modify network connectivity using the limited resources available to control disease spread. We describe and compare four algorithms for selecting a limited number of links to remove from a network: two "preventive" approaches (edge centrality, R0 minimization), where the decision of which links to remove is made prior to any disease outbreak and depends only on the network structure; and two "reactive" approaches (S-I edge centrality, optimal quarantining), where information about the initial disease states of the nodes is incorporated into the decision of which links to remove. We evaluate the performance of these algorithms in minimizing the total number of infections that occur over the course of an acute outbreak of disease. We consider different network structures, including both static and dynamic Erdős-Rényi random networks with varying levels of connectivity, a real-world network of residential hotels connected through injection drug use, and a network exhibiting community structure. We show that reactive approaches outperform preventive approaches in averting infections. Among reactive approaches, removing links in order of S-I edge centrality is favored when the link removal budget is small, while optimal quarantining performs best when the link removal budget is sufficiently large. The budget threshold above which optimal quarantining outperforms the S-I edge centrality algorithm is a function of both network structure (higher for unstructured Erdős-Rényi random networks compared to networks with community structure or the real-world network) and disease infectiousness (lower for highly infectious diseases). We conduct a value-of-information analysis of knowing which

  4. Stochastic QM/MM Models for Proton Transport in Condensed Phase: An Empirical Valence Bond (EVB) Approach

    NASA Astrophysics Data System (ADS)

    Burykin, Anton; Braun-Sand, Sonja; Warshel, Arieh

    2005-03-01

    Proton transport (PT) plays a major role in biophysics in general and bioenergetics in particular. In view of the crucial role of biological PT processes, it is important to gain a quantitative molecular understanding of the factors that control such processes. When modeling actual time-dependent PT in biological systems, one has to deal with time scales of up to microseconds, which are not accessible to QM/MM methods. In order to overcome this problem we have developed a new type of hybrid quantum/classical approach which combines an explicit QM (EVB) representation of the chain of donors and acceptors and an implicit representation (via effective coordinates) of the environment (the rest of the protein/water system). The dynamics of the whole QM/MM system is described by stochastic (Langevin) equations. This model takes into account the correct physics of proton charge delocalization and the reorganization of solvent polar groups during the PT process. A description of the QM/MM Langevin dynamics method is given, and several applications to biological systems (PT in the Gramicidin A channel and in Carbonic Anhydrase) are presented.
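
    The stochastic-environment ingredient of such an approach can be illustrated with overdamped Langevin dynamics on a double-well "proton transfer" coordinate, integrated by Euler-Maruyama; the potential, friction and temperature below are assumptions, and the EVB surface and explicit donor-acceptor chain are not reproduced.

        import random, math

        random.seed(8)
        kT, gamma, dt = 0.6, 1.0, 1e-3
        dVdx = lambda x: 4 * x * (x * x - 1)       # V(x) = (x^2 - 1)^2

        x, crossings, side = -1.0, 0, -1           # start in the left well
        for _ in range(1_000_000):
            # Euler-Maruyama step of the overdamped Langevin equation.
            x += (-dVdx(x) / gamma) * dt \
                 + math.sqrt(2 * kT / gamma * dt) * random.gauss(0, 1)
            if x * side < -0.5:                    # fully entered the other well
                crossings += 1
                side = -side
        print("well-to-well transitions:", crossings)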

  5. Evaluating the impact of built environment characteristics on urban boundary layer dynamics using an advanced stochastic approach

    NASA Astrophysics Data System (ADS)

    Song, Jiyun; Wang, Zhi-Hua

    2016-05-01

    Urban land-atmosphere interactions can be captured by a numerical modeling framework with coupled land surface and atmospheric processes, while the model performance depends largely on accurate input parameters. In this study, we use an advanced stochastic approach to quantify parameter uncertainty and model sensitivity of a coupled numerical framework for urban land-atmosphere interactions. It is found that the development of the urban boundary layer is highly sensitive to the surface characteristics of built terrains. Changes of both urban land use and geometry impose significant impacts on the overlying urban boundary layer dynamics through modifications of the bottom boundary conditions, i.e., by altering surface energy partitioning and surface aerodynamic resistance, respectively. Hydrothermal properties of conventional and green roofs have different impacts on atmospheric dynamics due to different surface energy partitioning mechanisms. Urban geometry (represented by the canyon aspect ratio), however, has a significant nonlinear impact on boundary layer structure and temperature. In addition, managing rooftop roughness provides an alternative option to change the boundary layer thermal state through modification of the vertical turbulent transport. The sensitivity analysis deepens our insight into the fundamental physics of urban land-atmosphere interactions and provides useful guidance for urban planning under the challenges of changing climate and continuous global urbanization.

  6. Stochastic Kinetics of Viral Capsid Assembly Based on Detailed Protein Structures

    PubMed Central

    Hemberg, Martin; Yaliraki, Sophia N.; Barahona, Mauricio

    2006-01-01

    We present a generic computational framework for the simulation of viral capsid assembly which is quantitative and specific. Starting from PDB files containing atomic coordinates, the algorithm builds a coarse-grained description of protein oligomers based on graph rigidity. These reduced protein descriptions are used in an extended Gillespie algorithm to investigate the stochastic kinetics of the assembly process. The association rates are obtained from a diffusive Smoluchowski equation for rapid coagulation, modified to account for water shielding and protein structure. The dissociation rates are derived by interpreting the splitting of oligomers as a process of graph partitioning akin to the escape from a multidimensional well. This modular framework is quantitative yet computationally tractable, with a small number of physically motivated parameters. The methodology is illustrated using two different viruses which are shown to follow quantitatively different assembly pathways. We also show how in this model the quasi-stationary kinetics of assembly can be described as a Markovian cascading process, in which only a few intermediates and a small proportion of pathways are present. The observed pathways and intermediates can be related a posteriori to structural and energetic properties of the capsid oligomers. PMID:16473916

  7. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution; rather, they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, and not efficient for long-time dynamics, because the statistical errors grow very fast. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems

  8. Numerical Stochastic Homogenization Method and Multiscale Stochastic Finite Element Method - A Paradigm for Multiscale Computation of Stochastic PDEs

    SciTech Connect

    X. Frank Xu

    2010-03-30

    Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work has been done on the integration of multiscale and stochastic methods, and no method was formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM). The theory of MSFEM basically decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, and it can be solved by using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches, by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will

  9. A stochastic approach to project planning in an R and D environment: Final report

    SciTech Connect

    Seyedghasemipour, S.J.

    1987-02-01

    This study describes a simulation approach to project planning in an R and D environment using a network model. GERT (Graphical Evaluation and Review Technique), a network model, was utilized for the modeling of a hypothetical research and development project. GERT is a network model capable of including randomness in activity durations, probabilistic branching, feedback loops, and multiple terminal nodes in a project plan. These capabilities make it more suitable for modeling research and development projects than previous approaches such as CPM and PERT. The SLAM II simulation language is utilized for simulation of the network model. SLAM II is a simulation language which relies heavily on GASP IV and Q-GERTS, with powerful modeling capability in a single integrated framework. The simulation is performed on a hypothetical "standard" research and development project. Two cases of project planning are considered. In the first case, the traditional simulation of the network model of the hypothetical R and D project is performed. In the second case, a learning factor is incorporated in the simulation process. The learning factor, in the context of project planning, means that the mean and variance of the probability distribution representing an activity's duration are discounted (reduced) every time that activity is repeated. The results and statistics of each case study concerning the expected duration of successful completion of the project, the probability of washouts, and the realization times of milestones are presented in detail. The differences between the two cases (i.e., with and without the learning factor) are discussed. 19 refs.

  10. A Stochastic Approach To Human Health Risk Assessment Due To Groundwater Contamination

    NASA Astrophysics Data System (ADS)

    de Barros, F. P.; Rubin, Y.

    2006-12-01

    We present a probabilistic framework for addressing adverse human health effects due to groundwater contamination. One of the main challenges in health risk assessment is relating it to subsurface data acquisition and to improvement in our understanding of human physiological responses to contamination. In this paper we propose to investigate this problem through an approach that integrates flow, transport and human health risk models with hydrogeological characterization. A human health risk cumulative distribution function is analytically developed to account for both uncertainty and variability in hydrogeological as well as human physiological parameters. With our proposed approach, we investigate under which conditions the reduction of uncertainties in flow physics, human physiology and exposure-related parameters might contribute to a better understanding of human health risk assessment. Results indicate that the human health risk cumulative distribution function is sensitive to physiological parameters at low risk values associated with longer travel times. The results show that the worth of hydrogeological characterization in human health risk is dependent on the residence time of the contaminant plume in the aquifer and on the exposure duration of the population to certain chemicals.
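
    A Monte Carlo sketch of such a risk cumulative distribution function is given below; the EPA-style intake-times-slope-factor form and every distribution (well concentration for hydrogeological uncertainty, intake rate and body weight for physiological variability) are illustrative assumptions, not the paper's analytical development.

        import random

        random.seed(9)
        SF = 0.061                                  # slope factor (kg d/mg), assumed

        def one_risk():
            conc = random.lognormvariate(-2.0, 0.8)  # mg/L at the well (uncertainty)
            ir   = random.lognormvariate(0.7, 0.3)   # L/day ingested (variability)
            bw   = random.gauss(70.0, 10.0)          # kg body weight (variability)
            ed_frac = 30.0 / 70.0                    # exposure duration / lifetime
            return conc * ir * ed_frac / max(bw, 30.0) * SF

        risks = sorted(one_risk() for _ in range(10000))
        for q in (0.5, 0.9, 0.99):
            print(f"risk quantile {q:.2f}: {risks[int(q * len(risks)) - 1]:.2e}")
        print("P(risk > 1e-4):", sum(r > 1e-4 for r in risks) / len(risks))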

  12. A simulation-based interval two-stage stochastic model for agricultural non-point source pollution control through land retirement.

    PubMed

    Luo, B; Li, J B; Huang, G H; Li, H L

    2006-05-15

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural non-point source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by developing an interval two-stage stochastic program whose random parameters are provided by statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and "off-site" water quality concerns under random effluent discharge for a land retirement scheme, through minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the cropland most sensitive to pollution can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. The obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be kept within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties. PMID:16242757

  13. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    SciTech Connect

    Hosking, John Joseph Absalom

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs an SMP for a mean-field type optimal stochastic control problem; however, our approach of using these second-order adjoint processes of a second type to deal with the terms that we refer to as the second form of quadratic-type terms represents an alternative to adapting, to our setting, the approach used in their article for their analogous type of term.

  14. Evaluation of soil characterization technologies using a stochastic, value-of-information approach

    SciTech Connect

    Kaplan, P.G.

    1993-11-01

    The US Department of Energy has initiated an integrated demonstration program to develop and compare new technologies for the characterization of uranium-contaminated soils. As part of this effort, a performance-assessment task was funded in February 1993 to evaluate the field-tested technologies. Performance assessment can be defined as the analysis that evaluates a system's, or technology's, ability to meet the criteria specified for performance. Four new technologies were field tested at the Fernald Environmental Management Restoration Co. in Ohio. In the next section, the goals of this performance-assessment task are discussed. The following section discusses issues that must be resolved if the goals are to be successfully met. The author concludes with a discussion of the potential benefits to performance assessment of the approach taken. This paper is intended to be the first in a series of documentation that describes the work. Also in this proceedings are a paper on the field demonstration at the Fernald site and a description of the technologies (Tidwell et al., 1993) and a paper on the application of advanced geostatistical techniques (Rautman, 1993). The overall approach is simply to demonstrate the applicability of concepts that are well described in the literature but are not routinely applied to problems in environmental remediation, restoration, and waste management. The basic geostatistical concepts are documented in Clark (1979) and in Isaaks and Srivastava (1989). Advanced concepts and applications, along with software, are discussed in Deutsch and Journel (1992). Integration of geostatistical modeling with a decision-analytic framework is discussed in Freeze et al. (1992). Information-theoretic and probabilistic concepts are borrowed from the work of Shannon (1948), Jaynes (1957), and Harr (1987). The author sees the task as one of introducing and applying robust methodologies with demonstrated applicability in other fields to the problem at hand.

  15. Value of Geographic Diversity of Wind and Solar: Stochastic Geometry Approach; Preprint

    SciTech Connect

    Diakov, V.

    2012-08-01

    Based on the available geographically dispersed data for the continental U.S. (excluding Alaska), we analyze to what extent the geographic diversity of wind and solar resources can offset their variability. A geometric model provides a convenient measure of resource variability and shows the synergy between wind and solar resources.
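
    A small sketch of the diversity effect the preprint quantifies: the variability of aggregated output drops as imperfectly correlated sites are pooled. The correlation value, site count and common-factor construction are illustrative assumptions, not the preprint's geometric model.

```python
# Pooling imperfectly correlated sites reduces aggregate variability.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_hours, rho = 20, 8760, 0.3

# Equal pairwise correlation rho via a common-factor model (unit variances)
common = rng.standard_normal(n_hours)
local = rng.standard_normal((n_sites, n_hours))
site_output = np.sqrt(rho) * common + np.sqrt(1 - rho) * local

single_std = site_output[0].std()
pooled_std = site_output.mean(axis=0).std()
print(f"std of one site:      {single_std:.3f}")
print(f"std of pooled output: {pooled_std:.3f}")
# Theory for this toy model: sqrt(rho + (1 - rho)/n_sites)
print(f"theory:               {np.sqrt(rho + (1 - rho) / n_sites):.3f}")
```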

  16. A Bayesian, exemplar-based approach to hierarchical shape matching.

    PubMed

    Gavrila, Dariu M

    2007-08-01

    This paper presents a novel probabilistic approach to hierarchical, exemplar-based shape matching. No feature correspondence is needed among exemplars, just a suitable pairwise similarity measure. The approach uses a template tree to efficiently represent and match the variety of shape exemplars. The tree is generated offline by a bottom-up clustering approach using stochastic optimization. Online matching involves a simultaneous coarse-to-fine approach over the template tree and over the transformation parameters. The main contribution of this paper is a Bayesian model to estimate the a posteriori probability of the object class, after a certain match at a node of the tree. This model takes into account object scale and saliency and allows for a principled setting of the matching thresholds such that unpromising paths in the tree traversal process are eliminated early on. The proposed approach was tested in a variety of application domains. Here, results are presented on one of the more challenging domains: real-time pedestrian detection from a moving vehicle. A significant speed-up is obtained when comparing the proposed probabilistic matching approach with a manually tuned nonprobabilistic variant, both utilizing the same template tree structure. PMID:17568144

  17. Enhanced detection of rolling element bearing fault based on stochastic resonance

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofei; Hu, Niaoqing; Cheng, Zhe; Hu, Lei

    2012-11-01

    Early bearing faults can generate a series of weak impacts, and all the influencing factors in measurement may degrade the vibration signal. Currently, bearing fault enhanced detection based on stochastic resonance (SR) requires expensive computation and a high sampling rate, which demands high-quality software and hardware for fault diagnosis. In order to extract the bearing characteristic frequency components, SR normalized scale transform procedures are presented and a circuit module is designed based on parameter-tuned bistable SR. In the simulation test, discrete and analog sinusoidal signals under heavy noise are enhanced by the SR normalized scale transform and the circuit module, respectively. Two bearing fault enhanced detection strategies are proposed. One is realized by pure computation with the normalized scale transform for sampled vibration signals, and the other is carried out by the designed SR hardware with the circuit module for analog vibration signals directly. The first strategy is flexible for discrete signal processing, while the second demands a much lower sampling frequency and less computational cost. The application of the two strategies to bearing inner race fault detection on a test rig shows that the local signal-to-noise ratio of the characteristic components obtained by the proposed methods is enhanced by about 50% compared with band-pass envelope analysis for the bearing with the weaker fault. In addition, helicopter transmission bearing fault detection validates the effectiveness of the enhanced detection strategy with hardware. The combination of the SR normalized scale transform and the circuit module can meet the needs of different application fields or conditions, thus providing a practical scheme for enhanced detection of bearing faults.
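
    A minimal sketch of the bistable SR mechanism the method is built on (not the authors' normalized scale transform or circuit module): a weak sinusoid plus noise drives the double-well system dx = (a·x − b·x³ + s(t)) dt + σ dW, integrated by Euler–Maruyama. All parameter values are illustrative assumptions.

```python
# Bistable stochastic resonance driven by a weak, sub-threshold sinusoid.
import numpy as np

a, b = 1.0, 1.0            # double-well parameters
f0, amp = 0.01, 0.3        # weak driving signal (below the barrier)
sigma = 0.45               # noise intensity, tuned near resonance
dt, n_steps = 0.01, 200_000

rng = np.random.default_rng(7)
t = np.arange(n_steps) * dt
x = np.empty(n_steps)
x[0] = -1.0                # start in the left well
for k in range(n_steps - 1):
    drift = a * x[k] - b * x[k]**3 + amp * np.sin(2 * np.pi * f0 * t[k])
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Spectral power at the driving frequency indicates the SR-enhanced component
spec = np.abs(np.fft.rfft(x))**2
freqs = np.fft.rfftfreq(n_steps, dt)
print("power near f0:", spec[np.argmin(np.abs(freqs - f0))])
```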

  18. A rainwater harvesting system reliability model based on nonparametric stochastic rainfall generator

    NASA Astrophysics Data System (ADS)

    Basinger, Matt; Montalto, Franco; Lall, Upmanu

    2010-10-01

    The reliability with which harvested rainwater can be used for flushing toilets, irrigating gardens, and topping off air-conditioning units serving multifamily residential buildings in New York City is assessed using a new rainwater harvesting (RWH) system reliability model. Although demonstrated with a specific case study, the model is portable because it is based on a nonparametric rainfall generation procedure utilizing a bootstrapped Markov chain. Precipitation occurrence is simulated using transition probabilities derived for each day of the year based on the historical probability of wet and dry day state changes. Precipitation amounts are selected from a matrix of historical values within a moving 15-day window that is centered on the target day. RWH system reliability is determined for user-specified catchment area and tank volume ranges using precipitation ensembles generated with the described stochastic procedure. The reliability with which NYC backyard gardens can be irrigated and air-conditioning units supplied with water harvested from local roofs exceeds 80% and 90%, respectively, for the entire range of catchment areas and tank volumes considered in the analysis. For RWH systems installed on the most commonly occurring rooftop catchment areas found in NYC (51-75 m2), toilet flushing demand can be met with 7-40% reliability, with the lower end of the range representing buildings with high-flow toilets and no storage elements, and the upper end representing buildings that feature low-flow fixtures and storage tanks of up to 5 m3. When the reliability curves developed are used to size RWH systems to flush the low-flow toilets of all multifamily buildings found in a typical residential neighborhood in the Bronx, rooftop runoff inputs to the sewer system are reduced by approximately 28% over an average rainfall year, and potable water demand is reduced by approximately 53%.
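
    A condensed sketch of the generator described above: a two-state (wet/dry) Markov chain with day-of-year transition probabilities, and wet-day amounts resampled from a moving 15-day historical window. The input array shape, window half-width and wrap-around handling are simplifying assumptions.

```python
# Nonparametric daily rainfall generator: Markov occurrence + bootstrap amounts.
import numpy as np

def fit_and_simulate(daily, n_sim_years=100, window=7, seed=0):
    """daily: assumed (n_years, 365) array of observed precipitation [mm]."""
    rng = np.random.default_rng(seed)
    wet = daily > 0.0
    n_years, n_days = daily.shape
    sim = np.zeros((n_sim_years, n_days))
    state = False                            # start dry
    for y in range(n_sim_years):
        for d in range(n_days):
            # transition probability conditioned on yesterday's state
            prev = wet[:, d - 1]             # day 0 wraps to the last day
            p_wet = (wet[prev == state, d].mean()
                     if (prev == state).any() else wet[:, d].mean())
            state = rng.random() < p_wet
            if state:
                # resample an amount from the +/- 7-day (15-day) window
                idx = np.arange(d - window, d + window + 1) % n_days
                pool = daily[:, idx][daily[:, idx] > 0.0]
                sim[y, d] = rng.choice(pool) if pool.size else 0.0
    return sim

# usage: ensemble = fit_and_simulate(observed_daily_mm)
```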

  19. A Consistent Approach To Stochastic Seeding of Simulations of Fragmenting Ductile Metals

    NASA Astrophysics Data System (ADS)

    Barham, Matthew; Stölken, James; Kumar, Mukul

    2013-06-01

    For failure by brittle fracture, the well-known weakest-link arguments have led to widespread use of a two-parameter Weibull distribution. The probability of failure by a ductile damage mechanism at small plastic strains is exceedingly small. This results in a threshold for deformation-induced damage and attendant failure that should be manifest in the statistical description. A three-parameter Weibull distribution with a lower cut-off satisfies this constraint. The three parameters are determined systematically from experiments. The Weibull modulus is estimated by examining the results of scaled experiments. The values of the most-likely failure strain were inferred from simulations of quasi-static tests. The lower cut-off failure strain was estimated from the tensile test data. This approach was applied to different microstructures of AISI 4340 steel achieved through various heat treatments to determine the three parameters and the constitutive response for each heat treatment. Exploding-pipe simulations were run to determine fragment distributions for two explosives and each heat treatment. These simulated distributions were then compared to high-fidelity experimental fragment distributions for the same heat treatments and explosives. Prepared by LLNL under Contract DE-AC52-07NA27344.
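
    A short sketch of seeding failure strains from a three-parameter Weibull distribution with a lower cut-off, by inverse-CDF sampling; the numerical parameter values are illustrative assumptions, not the calibrated values for AISI 4340.

```python
# Inverse-CDF sampling from F(eps) = 1 - exp(-((eps - eps0)/scale)**m), eps >= eps0.
import numpy as np

def sample_failure_strains(n, eps0=0.05, scale=0.30, modulus=4.0, seed=3):
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    return eps0 + scale * (-np.log(1.0 - u)) ** (1.0 / modulus)

strains = sample_failure_strains(100_000)
print(f"min (should be >= eps0): {strains.min():.4f}")
print(f"mean failure strain:     {strains.mean():.4f}")
```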

  20. Stochastic bias correction of dynamically downscaled precipitation fields for Germany through Copula-based integration of gridded observation data

    NASA Astrophysics Data System (ADS)

    Mao, G.; Vogl, S.; Laux, P.; Wagner, S.; Kunstmann, H.

    2015-04-01

    Dynamically downscaled precipitation fields from regional climate models (RCMs) often cannot be used directly for regional climate studies. Due to their inherent biases, i.e., systematic over- or underestimations compared to observations, several correction approaches have been developed. Most bias correction procedures, such as the quantile mapping approach, employ a transfer function that is based on the statistical differences between RCM output and observations. Apart from such transfer-function-based statistical correction algorithms, a stochastic bias correction technique based on the concept of Copula theory is developed here and applied to correct precipitation fields from the Weather Research and Forecasting (WRF) model. For the dynamically downscaled precipitation fields we used high-resolution (7 km, daily) WRF simulations for Germany driven by ERA40 reanalysis data for 1971-2000. The REGNIE (REGionalisierung der NIEderschlagshöhen) data set from the German Weather Service (DWD) is used as gridded observation data (1 km, daily) and aggregated to 7 km for this application. The 30-year time series are split into a calibration (1971-1985) and validation (1986-2000) period of equal length. Based on the estimated dependence structure (described by the Copula function) between WRF and REGNIE data and the identified respective marginal distributions in the calibration period, analyzed separately for the different seasons, conditional distribution functions are derived for each time step in the validation period. This finally allows one to obtain additional information about the range of the statistically possible bias-corrected values. The results show that the Copula-based approach efficiently corrects most of the errors in WRF-derived precipitation for all seasons. It is also found that the Copula-based correction performs better for wet bias correction than for dry bias correction. In autumn and winter, the correction introduced a small dry bias in the
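
    A strongly simplified sketch of the Copula idea above, with a Gaussian copula standing in for the fitted dependence structure and empirical marginals standing in for the identified distributions; wet-day handling, seasonal stratification, and the actual copula family used in the paper are all ignored here.

```python
# Gaussian-copula conditional bias correction (illustrative sketch).
import numpy as np
from scipy import stats

def gaussian_copula_correct(model_cal, obs_cal, model_val, seed=0):
    rng = np.random.default_rng(seed)
    # empirical normal scores in the calibration period
    z_mod = stats.norm.ppf(stats.rankdata(model_cal) / (len(model_cal) + 1))
    z_obs = stats.norm.ppf(stats.rankdata(obs_cal) / (len(obs_cal) + 1))
    rho = np.corrcoef(z_mod, z_obs)[0, 1]
    # map validation model values to normal scores via the calibration CDF
    u = np.searchsorted(np.sort(model_cal), model_val) / (len(model_cal) + 1)
    z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    # conditional law of the Gaussian copula: z_obs | z_mod ~ N(rho*z, 1-rho^2)
    z_cond = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(len(z))
    # back-transform through the observed marginal
    return np.quantile(obs_cal, stats.norm.cdf(z_cond))

# usage: corrected = gaussian_copula_correct(wrf_cal, regnie_cal, wrf_val)
```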

  1. Fast Computation of Ground Motion Shaking Map based on Modified Stochastic Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

    Rapid regional MMI (Modified Mercalli Intensity) mapping soon after a moderate-to-large earthquake is crucial to loss estimation, emergency services and the planning of emergency action by the government. Many countries pay differing degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has developed considerably with advances in computation technology and earthquake science. Computational simulation of the strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, due to the lack of strong-motion observations in areas where the network is sparse or entirely missing, the development of strong ground motion simulation methods has become an important means of quantitatively estimating strong motion intensity. Among the many simulation models, the stochastic finite fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In the finite fault model, a large fault is divided into N subfaults, and each subfault is considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point source method developed by Boore, and are then summed at the observation point, with proper time delays, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of dynamic corner frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 for the

  2. Materiality in a Practice-Based Approach

    ERIC Educational Resources Information Center

    Svabo, Connie

    2009-01-01

    Purpose: The paper aims to provide an overview of the vocabulary for materiality which is used by practice-based approaches to organizational knowing. Design/methodology/approach: The overview is theoretically generated and is based on the anthology Knowing in Organizations: A Practice-based Approach edited by Nicolini, Gherardi and Yanow. The…

  3. Simulation of quantum dynamics based on the quantum stochastic differential equation.

    PubMed

    Li, Ming

    2013-01-01

    The quantum stochastic differential equation derived from the Lindblad-form quantum master equation is investigated. The general formulation in terms of environment operators representing quantum state diffusion is given. A numerical simulation algorithm for the stochastic process of direct photodetection of a driven two-level system is proposed for predicting its dynamical behavior. The effectiveness and superiority of the algorithm are verified by analysis of its accuracy and computational cost in comparison with the classical Runge-Kutta algorithm. PMID:23781156

  4. Assessment of hydraulic parameters in the phreatic aquifer of Settolo (Italy): a stochastic approach

    NASA Astrophysics Data System (ADS)

    Salandin, P.; Zovi, F.; Camporese, M.

    2012-12-01

    In this work we present, and test against real field data, an inversion approach for the identification of hydraulic parameters at the aquifer scale. Our test field is the alluvial phreatic aquifer of Settolo, located along the left bank of the Piave River in a piedmont area in Northeastern Italy, with an extension of approximately 6 km2 and exhibiting heterogeneities of the geological structures at both the local and intermediate scales. The area is characterized by the alluvial sediments (mainly gravel in a sandy matrix) deposited by the Piave River during the Last Glacial Maximum; the subsurface, with an average aquifer thickness of 50 m, is crossed by paleo-riverbeds that probably represent the main hydrogeological unit from which water is withdrawn. The interactions between watercourses and the aquifer, the recharge linked to precipitation, as well as the dynamics of partially penetrating extraction wells must be properly reproduced for an effective protection and a sustainable exploitation of the water resources. In order to do so, in cooperation with Alto Trevigiano Servizi S.r.l., the local water resources management company, a careful site characterization has been in progress since 2009, with a number of different measurements and scales involved. Besides surface ERT, water quality surveys, and a tracer test, we highlight here the role of 18 continuously monitored observation wells, available in the study area for the measurement of the water table dynamics and the calibration/validation of groundwater models. A preliminary comparison with the results of a three-dimensional Richards model demonstrated that the site can be properly described by means of a two-dimensional finite element solver of the nonlinear Dupuit-Boussinesq equation, saving CPU time and computer storage. Starting from an ensemble of randomly generated and spatially correlated hydraulic conductivity (K) fields, the fit between water table observations and model predictions is measured

  5. A Stochastic-entropic Approach to Detect Persistent Low-temperature Volcanogenic Thermal Anomalies

    NASA Astrophysics Data System (ADS)

    Pieri, D. C.; Baxter, S.

    2011-12-01

    distributions over time, temperature contrast, and Shannon entropy. Preliminary analyses of Fogo Volcano and Yellowstone hotspots, among others, indicate that this is a very sensitive technique with good potential to be applied over the entire ASTER global night-time archive. We will discuss our progress in creating the global thermal anomaly catalog as well as algorithm approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.

  6. A novel approach to phylogenetic tree construction using stochastic optimization and clustering

    PubMed Central

    Qin, Ling; Chen, Yixin; Pan, Yi; Chen, Ling

    2006-01-01

    Background The problem of inferring evolutionary history and constructing a phylogenetic tree with high performance has become one of the major problems in computational biology. Results A new phylogenetic tree construction method from a given set of objects (proteins, species, etc.) is presented. As an extension of ant colony optimization, this method proposes an adaptive phylogenetic clustering algorithm based on a digraph to find a tree structure that defines the ancestral relationships among the given objects. Conclusion Our phylogenetic tree construction method is tested by comparing its results with those of the genetic algorithm (GA). Experimental results show that our algorithm converges much faster and also achieves higher quality than the GA. PMID:17217517

  7. Stochastic Modeling and Simulation of Near-Fault Ground Motions for Performance-Based Earthquake Engineering

    NASA Astrophysics Data System (ADS)

    Dabaghi, Mayssa Nabil

    A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as the natural variability of motions for a given earthquake and site scenario. By fitting the model to a database of recorded near-fault ground motions with known earthquake source and site characteristics, empirical "observations" of the model parameters are obtained. These observations are used to develop predictive equations for the model parameters in terms of a small number of earthquake source and site characteristics. Functional forms for the predictive equations that are consistent with seismological theory are employed. A site-based simulation procedure that employs the proposed stochastic model and predictive equations is developed to generate synthetic near-fault ground motions at a site. The procedure is formulated in terms of information about the earthquake design scenario that is normally available to a design engineer. Not all near-fault ground motions contain a forward directivity pulse, even when the conditions for such a pulse are favorable. The proposed procedure produces pulselike and non-pulselike motions in the same proportions as they naturally occur among recorded near-fault ground motions for a given design scenario. The proposed models and simulation procedure are validated by several means. Synthetic ground motion time series with fitted parameter values are compared with the corresponding recorded motions. The proposed empirical predictive relations are compared to similar relations available in the literature. The overall simulation procedure is

  8. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated by performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since the uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties are reflected. The modeling approach can incorporate predefined soil erosion control policies and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9, was obtained, along with the corresponding land use allocations for the three planning periods. The resulting soil erosion was found to be reduced and controlled at a tolerable level over the watershed. These results confirm that the developed model is a useful tool for implementing land use management, as it not only allows local decision makers to optimize land use allocation but can also help to answer how to accomplish land use changes.

  9. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem; Omveer; Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. In this technique, we propose a regulator based on a hybrid of the genetic algorithm (GA) and simulated annealing (SA). GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time. The feasible solution decreases the cost function rather than minimizing it.
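
    A generic sketch of a GA/SA hybrid of the kind described: a genetic algorithm whose offspring are accepted through a simulated-annealing Metropolis rule. The quadratic cost is a stand-in for the AGC performance index, and all operator choices (truncation selection, blend crossover, cooling rate) are illustrative assumptions.

```python
# Hybrid GA + SA search over a pair of PI gains (illustrative cost function).
import numpy as np

rng = np.random.default_rng(6)

def cost(k):                       # stand-in for the AGC cost J(K_P, K_I)
    return (k[0] - 1.2)**2 + (k[1] - 0.4)**2 + 0.1 * np.sin(5 * k).sum()

pop = rng.uniform(0, 2, size=(30, 2))      # population of PI gain pairs
temp = 1.0
for gen in range(200):
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:15]]          # truncation selection
    for i in range(len(pop)):
        a, b = parents[rng.integers(0, 15, 2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.05, 2)   # crossover + mutation
        # SA-style Metropolis acceptance against a random incumbent
        j = rng.integers(0, len(pop))
        delta = cost(child) - cost(pop[j])
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            pop[j] = child
    temp *= 0.98                           # annealing (cooling) schedule

best = min(pop, key=cost)
print("best PI gains:", best.round(3), "cost:", round(cost(best), 4))
```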

  10. A suboptimal stochastic controller for an N-body spacecraft

    NASA Technical Reports Server (NTRS)

    Larson, V.

    1973-01-01

    Considerable attention in the open literature is being focused on the problem of developing a suitable set of deterministic dynamical equations for a complex spacecraft. This paper considers the problem of determining a stochastic optimal controller for an n-body spacecraft. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained herein for a complicated spacecraft model uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector. It can be easily implemented, and it enables system performance to be significantly improved.

  11. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
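
    Not the authors' retrodictive algorithm, but a rejection-style sketch of the same inference task: draw initial states from a prior, run the ordinary (predictive) Gillespie SSA forward, and keep the initial states whose final state matches the observation. The model (a pure-death process A -> 0) and all rates are illustrative assumptions.

```python
# Inferring the initial state of a stochastic process from its final state.
import numpy as np

rng = np.random.default_rng(5)
c, t_final, observed_final = 0.1, 5.0, 4    # death rate, horizon, observation

def ssa_death(n0):
    """Forward Gillespie simulation of A -> 0 up to t_final."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.exponential(1.0 / (c * n))
        if t > t_final:
            break
        n -= 1
    return n

accepted = []
for _ in range(20_000):
    n0 = rng.integers(0, 31)            # uniform prior on the initial count
    if ssa_death(n0) == observed_final:
        accepted.append(n0)             # keep runs consistent with the data

print("posterior mean initial state:", np.mean(accepted))
print("posterior 90% interval:", np.percentile(accepted, [5, 95]))
```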

  12. Existence Theory for Stochastic Power Law Fluids

    NASA Astrophysics Data System (ADS)

    Breit, Dominic

    2015-06-01

    We consider the equations of motion for an incompressible non-Newtonian fluid in a bounded Lipschitz domain during the time interval (0, T) together with a stochastic perturbation driven by a Brownian motion W. The balance of momentum reads as dv = div S(ε(v)) dt − (∇v)v dt − ∇π dt + f dt + Φ(v) dW, where v is the velocity, π the pressure and f an external volume force. We assume the common power law model S(ε(v)) = ν(1 + |ε(v)|)^(p−2) ε(v) and show the existence of martingale weak solutions provided p > (2d+2)/(d+2). Our approach is based on the L^∞-truncation and a harmonic pressure decomposition which are adapted to the stochastic setting.

  13. Detecting bacteria and Determining Their Susceptibility to Antibiotics by Stochastic Confinement in Nanoliter Droplets using Plug-Based Microfluidics

    SciTech Connect

    Boedicker, J.; Li, L; Kline, T; Ismagilov, R

    2008-01-01

    This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as stochastic confinement. Using the microfluidic hybrid method, this technology was used to determine the antibiogram - or chart of antibiotic sensitivity - of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.
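
    A short sketch of the loading statistics behind stochastic confinement: when bacteria are distributed among plugs at random, the occupancy is Poisson, so the plug volume sets the fraction of plugs holding exactly one cell. The sample density and plug volumes below are illustrative assumptions.

```python
# Poisson occupancy of nanoliter plugs as a function of plug volume.
import numpy as np

conc = 1e3                                         # bacteria per microliter
plug_volumes_nl = np.array([0.1, 0.5, 1.0, 5.0])   # plug volume [nL]
lam = conc * plug_volumes_nl * 1e-3                # mean cells per plug

p_empty = np.exp(-lam)
p_single = lam * np.exp(-lam)                      # P(exactly one cell)
for v, l, p1, p0 in zip(plug_volumes_nl, lam, p_single, p_empty):
    print(f"{v:4.1f} nL: mean {l:5.2f} cells/plug, "
          f"P(single)={p1:.3f}, P(empty)={p0:.3f}")
```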

  14. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which was the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  15. A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion

    SciTech Connect

    Jenny, Patrick; Torrilhon, Manuel; Heinz, Stefan

    2010-02-20

    In this paper, a stochastic model is presented to simulate the flow of gases that are not in thermodynamic equilibrium, as in rarefied or micro-scale situations. For the interaction of a particle with others, statistical moments of the local ensemble have to be evaluated, but unlike in molecular dynamics simulations or DSMC, no collisions between computational particles are considered. In addition, a novel integration technique allows for time steps independent of the stochastic time scale. The stochastic model represents a Fokker-Planck equation in the kinetic description, which can be viewed as an approximation to the Boltzmann equation. This allows for a rigorous investigation of the relation between the new model and classical fluid and kinetic equations. The fluid dynamic equations of Navier-Stokes and Fourier are fully recovered for small relaxation times, while for larger values the new model extends into the kinetic regime. Numerical studies demonstrate that the stochastic model is consistent with Navier-Stokes in that limit, but also that the results become significantly different if the conditions for equilibrium are invalid. The application to the Knudsen paradox demonstrates the correctness and relevance of this development, and comparisons with existing kinetic equations and standard solution algorithms reveal its advantages. Moreover, results of a test case with geometrically complex boundaries are presented.

  16. Stochastic dilution effects weaken deterministic effects of niche-based processes in species-rich forests.

    PubMed

    Wang, Xugao; Wiegand, Thorsten; Kraft, Nathan J B; Swenson, Nathan G; Davies, Stuart J; Hao, Zhanqing; Howe, Robert; Lin, Yiching; Ma, Keping; Mi, Xiangcheng; Su, Sheng-Hsin; Sun, I-fang; Wolf, Amy

    2016-02-01

    Recent theory predicts that stochastic dilution effects may result in species-rich communities with statistically independent species spatial distributions, even if the underlying ecological processes structuring the community are driven by deterministic niche differences. Stochastic dilution is a consequence of the stochastic geometry of biodiversity, whereby the identities of the nearest neighbors of individuals of a given species are largely unpredictable. Under such circumstances, the outcome of deterministic species interactions may vary greatly among individuals of a given species. Consequently, nonrandom patterns in the biotic neighborhoods of species, which might be expected from coexistence or community assembly theory (e.g., individuals of a given species are neighbored by phylogenetically similar species), are weakened or do not emerge, resulting in statistical independence of species spatial distributions. We used data on the phylogenetic and functional similarity of tree species in five large forest dynamics plots located across a gradient of species richness to test predictions of the stochastic dilution hypothesis. To quantify the biotic neighborhood of a focal species we used the mean phylogenetic (or functional) dissimilarity of the individuals of the focal species to all species within a local neighborhood. We then compared the biotic neighborhood of species to predictions from stochastic null models to test whether a focal species was surrounded by more or less similar species than expected by chance. The proportion of focal species that showed spatial independence with respect to their biotic neighborhoods increased with total species richness. Locally dominant, high-abundance species were more likely to be surrounded by species that were statistically more similar or more dissimilar than expected by chance. Our results suggest that stochasticity may play a stronger role in shaping the spatial structure of species-rich tropical forest communities than it

  17. Comparing large-scale computational approaches to epidemic modeling: agent based versus structured metapopulation models

    NASA Astrophysics Data System (ADS)

    Gonçalves, Bruno; Ajelli, Marco; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José; Merler, Stefano; Vespignani, Alessandro

    2010-03-01

    We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the evolution of a baseline pandemic event in Italy. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on its socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, integrating airline travel flow data with short-range human mobility patterns at the global scale. Both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The age-breakdown analysis shows that similar attack rates are obtained for the younger age classes.

  18. A stochastic model for the polygonal tundra based on Poisson-Voronoi Diagrams

    NASA Astrophysics Data System (ADS)

    Cresto Aleina, F.; Brovkin, V.; Muster, S.; Boike, J.; Kutzbach, L.; Sachs, T.; Zuyev, S.

    2012-12-01

    Sub-grid and small-scale processes occur in various ecosystems and landscapes (e.g., periglacial ecosystems, peatlands and vegetation patterns). These local heterogeneities are often important or even fundamental for better understanding the general, large-scale properties of the system, but they are either ignored or poorly parameterized in regional and global models. Because of their small scale, the underlying generating processes can be well explained and resolved only by local mechanistic models, which, on the other hand, fail to consider the regional or global influences on those features. A challenging problem is then how to deal with these interactions across different spatial scales, and how to improve our understanding of the role played by local soil heterogeneities in the climate system. This is of particular interest in the northern peatlands, because of the huge amount of carbon stored in these regions. Land-atmosphere greenhouse gas fluxes vary dramatically within these environments; therefore, to correctly estimate the fluxes, a description of the small-scale soil variability is needed. Applications of statistical physics methods could be useful tools for upscaling local features of the landscape, relating them to large-scale properties. To test this approach we considered a case study: the polygonal tundra. Cryogenic polygons, consisting mainly of elevated dry rims and wet low centers, pattern the terrain of many subarctic regions and are generated by complex crack-and-growth processes. Methane, carbon dioxide and water vapor fluxes vary greatly within this environment, as an effect of the small-scale processes that characterize the landscape. It is then essential to consider the local heterogeneous behavior of the system components, such as the water table level inside the polygon wet centers, or the depth at which frozen soil thaws. We developed a stochastic model for this environment using Poisson-Voronoi diagrams, which is able to upscale statistical
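
    A minimal sketch of the geometric backbone of such a model: a Poisson-Voronoi tessellation whose cells stand in for tundra polygons, with a per-cell random property attached. The intensity, domain size, and water-table distribution are illustrative assumptions.

```python
# Poisson-Voronoi tessellation as a stand-in for polygonal tundra terrain.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(11)
intensity, side = 100, 1.0                 # Poisson intensity, domain side

n_points = rng.poisson(intensity * side**2)
points = rng.random((n_points, 2)) * side  # uniform seeds = Poisson process
vor = Voronoi(points)                      # one Voronoi cell per polygon

# e.g., assign each polygon a random water table depth [m] below the rim
water_table = rng.normal(loc=-0.15, scale=0.05, size=n_points)
print(len(vor.points), "polygons; mean water table:",
      water_table.mean().round(3), "m")
```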

  19. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem

    PubMed Central

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume that the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints compared to a stochastic model with nonanticipative constraints, meaning that our proposed model reduces the computational complexity. PMID:26180842

  20. Evaluation of predictions of the stochastic model of organelle production based on exact distributions

    PubMed Central

    Craven, C Jeremy

    2016-01-01

    We present a reanalysis of the stochastic model of organelle production and show that the equilibrium distributions for the organelle numbers predicted by this model can be readily calculated in three different scenarios. These three distributions can be identified as standard distributions, and the corresponding exact formulae for their mean and variance can therefore be used in further analysis. This removes the need to rely on stochastic simulations or approximate formulae (derived using the fluctuation dissipation theorem). These calculations allow for further analysis of the predictions of the model. On the basis of this we question the extent to which the model can be used to conclude that peroxisome biogenesis is dominated by de novo production when Saccharomyces cerevisiae cells are grown on glucose medium. DOI: http://dx.doi.org/10.7554/eLife.10167.001 PMID:26783763
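
    As a hedged illustration of the "exact distribution" point above, the simplest scenario of an organelle model (constant de novo synthesis at rate k and first-order decay at rate gamma, without fission or fusion) has a Poisson equilibrium distribution with mean k/gamma; a Gillespie run sampled on a regular time grid shows the agreement. The rates are illustrative assumptions.

```python
# Birth-death Gillespie simulation vs. the exact Poisson stationary law.
import numpy as np

rng = np.random.default_rng(2)
k_de_novo, gamma, t_end = 2.0, 0.1, 100_000.0

n, t = 0, 0.0
grid = np.arange(0.0, t_end, 5.0)          # regular sampling times
samples, gi = [], 0
while gi < len(grid):
    total_rate = k_de_novo + gamma * n
    dt = rng.exponential(1.0 / total_rate)
    while gi < len(grid) and grid[gi] < t + dt:
        samples.append(n)                  # state holds until the next jump
        gi += 1
    t += dt
    if rng.random() < k_de_novo / total_rate:
        n += 1                             # de novo production
    else:
        n -= 1                             # first-order decay

samples = np.array(samples[len(samples) // 2:])     # discard burn-in
print(f"simulated mean {samples.mean():.2f}, variance {samples.var():.2f}")
print(f"Poisson prediction: mean = variance = {k_de_novo / gamma:.2f}")
```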

  1. Set-based corral control in stochastic dynamical systems: Making almost invariant sets more invariant

    PubMed Central

    Forgoston, Eric; Billings, Lora; Yecko, Philip; Schwartz, Ira B.

    2011-01-01

    We consider the problem of stochastic prediction and control in a time-dependent stochastic environment, such as the ocean, where escape from an almost invariant region occurs due to random fluctuations. We determine high-probability control-actuation sets by computing regions of uncertainty, almost invariant sets, and Lagrangian coherent structures. The combination of geometric and probabilistic methods allows us to design regions of control, which provide an increase in loitering time while minimizing the amount of control actuation. We show how the loitering time in almost invariant sets scales exponentially with respect to the control actuation, causing an exponential increase in loitering times with only small changes in actuation force. The result is that the control actuation makes almost invariant sets more invariant. PMID:21456830

  2. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.

    PubMed

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume that the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints compared to a stochastic model with nonanticipative constraints, meaning that our proposed model reduces the computational complexity. PMID:26180842

  3. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
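
    A one-dimensional sketch of the collocation idea (not the report's Riemann-problem solver): the statistics of a model output f(xi), with xi ~ N(0,1), are obtained by evaluating the deterministic model only at Gauss-Hermite collocation points. The target function is an illustrative stand-in for a real solver.

```python
# Stochastic collocation with a Gauss-Hermite rule, checked against Monte Carlo.
import numpy as np

def solver(xi):
    """Deterministic model evaluated at one realization of the parameter."""
    return np.exp(0.3 * xi) + xi**2

# Gauss-Hermite rule uses weight exp(-x^2); rescale nodes/weights for N(0,1)
nodes, weights = np.polynomial.hermite.hermgauss(12)
xi = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

mean = np.sum(w * solver(xi))
var = np.sum(w * (solver(xi) - mean) ** 2)
print(f"collocation mean {mean:.6f}, variance {var:.6f}")

# Monte Carlo check (many more solver evaluations for similar accuracy)
samples = solver(np.random.default_rng(0).standard_normal(1_000_000))
print(f"Monte Carlo  mean {samples.mean():.6f}, variance {samples.var():.6f}")
```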

  4. Pareto-based efficient stochastic simulation-optimization for robust and reliable groundwater management

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine; Wolf, Leif

    2016-02-01

    Simulation-optimization methods are used to develop optimal solutions for a variety of groundwater management problems. The true optimality of these solutions often depends on the reliability of the simulation model. Therefore, where model predictions are uncertain due to parameter uncertainty, this should be accounted for within the optimization formulation to ensure that solutions are robust and reliable. In this study, we present a stochastic multi-objective formulation of the otherwise single-objective groundwater optimization problem by considering minimization of prediction uncertainty as an additional objective. The proposed method is illustrated by applying it to an injection bore field design problem. The primary objective of the optimization is maximization of the total volume of water injected into a confined aquifer, subject to the constraints that the resulting increases in hydraulic head at a set of control bores remain below specified target levels. Both bore locations and injection rates were considered as optimization variables. Prediction uncertainty is estimated using stacks of uncertain parameters and is explicitly minimized to produce robust and reliable solutions. Reliability analysis using post-optimization Monte Carlo analysis showed that while a stochastic single-objective optimization failed to provide reliable solutions with a stack size of 50, the proposed method resulted in many robust solutions with reliability close to 1.0. Results of the comparison indicate potential gains in efficiency from the stochastic multi-objective formulation in identifying robust and reliable groundwater management strategies.

  5. Investigating the Effect of Hydraulic Data and Heterogeneity on Stochastic Inversion of a Physically Based Groundwater Model

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.

    2014-12-01

    This research explores the effects of data quantity, data quality and heterogeneity resolution on stochastic inversion of a physically based model. To further investigate aquifer heterogeneity, simulations are used to examine the impact of geostatistical models on inversion quality, as well as the spatial sensitivity to heterogeneity using local and global methods. The model domain is a two-dimensional steady-state confined aquifer with lateral flows through two hydrofacies with alternating patterns. To examine general effects, the control variable method was adopted to reveal the impact of three factors on estimated hydraulic conductivity (K) and hydraulic head boundary conditions (BCs): (1) data availability, (2) data error, and (3) characterization of heterogeneity. Results show that fewer data increase model sensitivity to measurement error and heterogeneity. Extremely large data errors can cause severe model deterioration, regardless of sufficient data availability or high-resolution representation of heterogeneity. Smaller data errors can alleviate the bias caused by limited observations. For heterogeneity resolution, once the general patterns of the geological structures are captured, its influence is minimal compared to the other factors. Next, two geostatistical models (spherical and exponential variograms) were used to explore the representation of heterogeneity under the same nugget effects. The results show that stochastic inversion based on the exponential variogram improves both the precision and accuracy of the inverse model, compared to the spherical variogram. This difference is particularly important for determining accurate BCs through stochastic inversion. Last, sensitivity analysis was conducted to further investigate the effect of varying the K of each hydrofacies on model inversion. Results from the partial local method show that the inversion is more sensitive to perturbations of K in regions with high heterogeneity. Using the

  6. An extended structure-based model based on a stochastic eddy-axis evolution equation

    NASA Technical Reports Server (NTRS)

    Kassinos, S. C.; Reynolds, W. C.

    1995-01-01

    We have proposed and implemented an extension of the structure-based model for weak deformations. It was shown that the extended model correctly reduces to the form of standard k-ε models for the case of equilibrium under weak mean strain. The realizability of the extended model is guaranteed by the method of its construction. The predictions of the proposed model were very good for rotating homogeneous shear flows and for irrotational axisymmetric contraction, but were seriously deficient in the cases of plane strain and axisymmetric expansion. We have concluded that the problem behind these difficulties lies in the algebraic constitutive equation relating the Reynolds stresses to the structure parameters, rather than in the slow model developed here. In its present form, this equation assumes that under irrotational strain the principal axes of the Reynolds stresses remain locked onto those of the eddy-axis tensor. This is correct in the RDT limit, but inappropriate under weaker mean strains, when the nonlinear eddy-eddy interactions tend to misalign the two sets of principal axes and create some nonzero θ and γ.

  7. Development of Simulator Based on Stochastic Switched ARX Model for Refrigeration System with Ice Thermal Storage System

    NASA Astrophysics Data System (ADS)

    Shioya, Tsubasa; Fujimoto, Yasutaka

    In this paper, we introduce a simulator for ice thermal storage systems. The refrigeration system is modeled as a linear discrete-time system, and the least-squares method is used for system identification. However, it is difficult to accurately identify, by this method, the switching times of the electromagnetic valves of the brine pipes attached to the showcases. In order to overcome this difficulty, a simulator based on the stochastic switched ARX model is developed. The data obtained from the simulator are compared with actual data, and we verify the effectiveness of the proposed simulator.
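
    A compact sketch of a stochastic switched ARX simulator: the active mode (e.g., valve open or closed) follows a two-state Markov chain, and each mode has its own ARX dynamics. The coefficients and switching probabilities below are illustrative assumptions, not identified values from the paper.

```python
# Stochastic switched ARX: y[k+1] = a1*y[k] + b0*u[k] + noise, per active mode.
import numpy as np

rng = np.random.default_rng(4)
#            a1     b0
modes = {0: (0.95,  0.10),         # valve closed: slow warming
         1: (0.90, -0.50)}         # valve open: brine cooling
p_stay = 0.98                      # Markov self-transition probability

n = 2000
u = np.ones(n)                     # constant load input
y = np.zeros(n)
mode = 0
for k in range(n - 1):
    if rng.random() > p_stay:      # stochastic mode switch
        mode = 1 - mode
    a1, b0 = modes[mode]
    y[k + 1] = a1 * y[k] + b0 * u[k] + 0.05 * rng.standard_normal()

print("final temperature-like output:", y[-1].round(3))
```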

  8. Stochastic models of intracellular calcium signals

    NASA Astrophysics Data System (ADS)

    Rüdiger, Sten

    2014-01-01

    Cellular signaling operates in a noisy environment shaped by low molecular concentrations and cellular heterogeneity. For calcium release through intracellular channels - one of the most important cellular signaling mechanisms - feedback by liberated calcium endows fluctuations with critical functions in signal generation and formation. This review first describes under which general conditions the environment makes stochasticity relevant, and which conditions allow approximate or deterministic equations. This analysis provides a framework in which one can deduce an efficient hybrid description combining stochastic and deterministic evolution laws. Within the hybrid approach, Markov chains model the gating of channels, while the concentrations of calcium and calcium-binding molecules (buffers) are described by reaction-diffusion equations. The article further focuses on the spatial representation of subcellular calcium domains related to intracellular calcium channels. It presents analysis for single channels and clusters of channels and reviews the effects of buffers on calcium release. For clustered channels, we discuss the application and validity of coarse-graining as well as approaches based on continuous gating variables (Fokker-Planck and chemical Langevin equations). Comparison with recent experiments substantiates the stochastic and spatial approach, identifies minimal requirements for realistic modeling, and facilitates an understanding of collective channel behavior. At the end of the review, implications of stochastic and local modeling for the generation and properties of cell-wide release and the integration of calcium dynamics into cellular signaling models are discussed.

  9. [Orbitozygomatic approaches to the skull base].

    PubMed

    Cherekaev, V A; Gol'bin, D A; Belov, A I; Radchenkov, N S; Lasunin, N V; Vinokurov, A G

    2015-01-01

    The paper is written in the lecture format and dedicated to one of the main basal approaches, the orbitozygomatic approach, that has been widely used by neurosurgeons for several decades. The authors describe the historical background of the approach development and the surgical technique features and also analyze the published data about application of the orbitozygomatic approach in surgery for skull base tumors and cerebral aneurysms. PMID:26529627

  10. Efficient Stochastic Model Simulation by Using Zassenhaus Formula Approximation and Kronecker Product Analysis

    NASA Astrophysics Data System (ADS)

    Umut Caglar, Mehmet; Pal, Ranadip

    2012-10-01

    Biological systems are inherently stochastic, so probabilistic models are required to understand and simulate their behaviors. However, stochastic models are extremely complex and computationally expensive, which restricts their application to systems of smaller order. Probabilistic modeling of larger systems can help to reveal the underlying mechanisms of complex diseases, including cancer. The fine-scale stochastic behavior of genetic regulatory networks is often modeled using stochastic master equations. The inherently high computational complexity of stochastic master equation simulation presents a challenge for its application to biological system modeling, even when the model parameters can be properly estimated. In this article, we present a new approach to stochastic model simulation based on Kronecker product analysis and an approximation of the Zassenhaus formula for matrix exponentials. Simulation results illustrate the comparative performance of our modeling approach against stochastic master equations, with significantly lower computational complexity. We also provide a stochastic upper bound on the deviation of the steady-state distribution of our model from that of the stochastic master equation.
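
    A tiny sketch of the leading-order Zassenhaus splitting on which such approximations rest: exp(t(A+B)) ≈ exp(tA) exp(tB) exp(-(t²/2)[A,B]), which improves on the plain Lie-Trotter product. The matrices here are random stand-ins, not a master-equation generator or its Kronecker factors.

```python
# Zassenhaus splitting vs. Lie-Trotter, compared against the exact exponential.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4)) * 0.5
B = rng.standard_normal((4, 4)) * 0.5
t = 0.1

exact = expm(t * (A + B))
comm = A @ B - B @ A                              # commutator [A, B]
lie_trotter = expm(t * A) @ expm(t * B)           # error O(t^2)
zassenhaus = lie_trotter @ expm(-(t**2 / 2) * comm)  # error O(t^3)

print("Lie-Trotter error:", np.linalg.norm(exact - lie_trotter))
print("Zassenhaus  error:", np.linalg.norm(exact - zassenhaus))
```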

  11. Modeling of chemotactic steering of bacteria-based microrobot using a population-scale approach.

    PubMed

    Cho, Sunghoon; Choi, Young Jin; Zheng, Shaohui; Han, Jiwon; Ko, Seong Young; Park, Jong-Oh; Park, Sukho

    2015-09-01

    The bacteria-based microrobot (bacteriobot) is one of the most promising vehicles for drug delivery systems. The bacteriobot consists of a microbead containing therapeutic drugs and bacteria that act as sensors and actuators to target and guide the bacteriobot to its destination. Many researchers are developing bacteria-based microrobots and establishing models for them. In spite of these efforts, a motility model for bacteriobots steered by chemotaxis remains elusive. Because bacterial movement is random and must be described by a stochastic model, the bacterial response to a chemo-attractant is difficult to anticipate. In this research, we used a population-scale approach to overcome the main obstacle, the stochastic motion of a single bacterium. Also known as the Keller-Segel equation in chemotaxis research, the population-scale approach is not new; it is a well-established model derived from transport theory and adaptable to any chemotaxis experiment. In addition, we considered the self-propelled Brownian motion of the bacteriobot in order to represent its stochastic properties. From this perspective, we propose a new numerical modeling method that combines chemotaxis and Brownian motion to create a bacteriobot model steered by chemotaxis. To obtain the modeling parameters, we carried out motility analyses of microbeads and bacteriobots without chemotactic steering, as well as a chemotactic steering analysis of the bacteriobots. The resulting model shows sound agreement with experimental data at a significance level of <0.01. PMID:26487902
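
    A hedged sketch in the spirit of the combined model above: a self-propelled Brownian bead with a chemotactic drift toward an attractant source, integrated as an overdamped Langevin equation. The mobility, diffusivity, and attractant field are illustrative assumptions, not the paper's identified parameters.

```python
# Self-propelled Brownian motion with chemotactic drift (2-D Langevin sketch).
import numpy as np

rng = np.random.default_rng(9)
dt, n_steps = 0.05, 4000
D = 0.5            # effective translational diffusivity [um^2/s] (assumed)
chi = 2.0          # chemotactic mobility (assumed)
source = np.array([50.0, 0.0])

def grad_attractant(pos):
    """Gradient of a smooth attractant field peaked at `source`."""
    d = source - pos
    return d / (1.0 + np.dot(d, d))    # saturating attraction

pos = np.zeros(2)
for _ in range(n_steps):
    drift = chi * grad_attractant(pos)
    pos = pos + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)

print("final position:", pos.round(2), "| distance to source:",
      np.linalg.norm(source - pos).round(2))
```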

  12. Stochastic flux analysis of chemical reaction networks

    PubMed Central

    2013-01-01

    Background Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. Results We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux behavior, and to compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. Conclusions We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied not only at steady state but at arbitrary time intervals, and can be used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network. PMID:24314153
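
    As a toy version of the idea (a two-reaction network and rate constants of my choosing, not the paper's algorithm or its data structures), the sketch below runs a Gillespie simulation of A <-> B and records how many times each reaction fires per time window, a discrete analogue of the stochastic fluxes discussed above.

        import numpy as np

        rng = np.random.default_rng(2)
        k_f, k_r = 1.0, 0.5                 # rate constants, assumed
        nA, nB = 100, 0
        t, t_end, width = 0.0, 10.0, 1.0
        flux = np.zeros((int(t_end / width), 2))   # firings of [A->B, B->A]

        while True:
            props = np.array([k_f * nA, k_r * nB])
            total = props.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)      # time to next event
            if t >= t_end:
                break
            r = rng.choice(2, p=props / total)     # which reaction fires
            nA, nB = (nA - 1, nB + 1) if r == 0 else (nA + 1, nB - 1)
            flux[int(t / width), r] += 1

        print(flux)                     # per-interval firing counts
        print(flux[:, 0] - flux[:, 1])  # net A->B flux per interval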

  13. Measuring the Efficiency of a Hospital based on the Econometric Stochastic Frontier Analysis (SFA) Method

    PubMed Central

    Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad

    2016-01-01

    Introduction Hospitals are the most expensive providers of health services in the world, so evaluating their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). Methods This was a cross-sectional, retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admissions were considered as the output. The data were analyzed using Frontier 4.1 software. Results The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to active beds, the elasticity of nurses was negative, and returns to scale were increasing. Conclusion The results of this study indicated that the performance of the hospitals was not appropriate in terms of technical efficiency. In addition, compared with the most efficient hospitals studied, there was capacity to enhance hospital output by about 33%. It is suggested that the effects of various factors, such as the quality of health care and patients’ satisfaction, be considered in future studies assessing hospitals’ performance. PMID:27054014
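
    As a simplified sketch of the production-frontier idea (corrected OLS on synthetic data, rather than the maximum-likelihood SFA run in Frontier 4.1; all numbers below are assumptions), regressing log output on log inputs gives Cobb-Douglas elasticities, and shifting the fit up to the best residual yields technical-efficiency scores.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        beds = rng.uniform(50, 300, n)
        nurses = rng.uniform(20, 200, n)
        u = rng.exponential(0.2, n)            # one-sided inefficiency
        eps = rng.normal(0, 0.05, n)           # statistical noise
        log_y = 1.0 + 0.6 * np.log(beds) + 0.3 * np.log(nurses) - u + eps

        X = np.column_stack([np.ones(n), np.log(beds), np.log(nurses)])
        beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
        print(beta[1:])            # elasticities of beds and nurses
        print(beta[1] + beta[2])   # returns to scale (>1 means increasing)

        resid = log_y - X @ beta
        efficiency = np.exp(resid - resid.max())   # COLS scores in (0, 1]
        print(efficiency.mean())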

  14. An imaging-based stochastic model for simulation of tumour vasculature

    PubMed Central

    Adhikarla, Vikram; Jeraj, Robert

    2013-01-01

    A mathematical model was developed that reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the oxygen map computed from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R2 ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R2 = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship between vessel volume and tissue oxygen tension. Apart from providing an excellent framework for visualizing the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and the vasculature in a patient-specific manner, and has applications in the simulation of anti-angiogenic therapies. PMID:22971525

  15. Stochastic averaging based on generalized harmonic functions for energy harvesting systems

    NASA Astrophysics Data System (ADS)

    Jiang, Wen-An; Chen, Li-Qun

    2016-09-01

    A stochastic averaging method is proposed for nonlinear vibration energy harvesters subject to Gaussian white noise excitation. The generalized harmonic transformation scheme is applied to decouple the electromechanical equations, yielding an equivalent nonlinear system that is uncoupled from the electric circuit. The frequency function is given through the equivalent potential energy, which is independent of the total energy. The stochastic averaging method is developed using the generalized harmonic functions. The averaged Itô equations are derived via the proposed procedure, and the Fokker-Planck-Kolmogorov (FPK) equations of the decoupled system are established. The exact stationary solution of the averaged FPK equation is used to determine the probability densities of the amplitude and the power of the stationary response. The procedure is applied to three types of Duffing vibration energy harvesters under Gaussian white noise excitation. The effects of the system parameters on the mean-square voltage and the output power are examined. It is demonstrated that quadratic nonlinearity alone, and quadratic combined with appropriately chosen cubic nonlinearity, can increase the mean-square voltage and the output power, respectively. The approximate analytical outcomes are qualitatively and quantitatively supported by Monte Carlo simulations.
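
    As a minimal Monte Carlo counterpart to the analysis above (my sketch of a generic Duffing-type harvester with assumed coefficients, not the authors' three systems), Euler-Maruyama integration of the coupled electromechanical equations estimates the mean-square voltage that the averaged FPK solution predicts analytically.

        import numpy as np

        rng = np.random.default_rng(4)
        # x'' = -c x' - k1 x - k3 x^3 - kappa v + white noise,  v' = -v/tau + x'
        c, k1, k3, kappa, tau, D = 0.1, 1.0, 0.5, 0.05, 1.0, 0.2  # assumed
        dt, steps, paths = 2e-3, 50_000, 200

        x = np.zeros(paths); xd = np.zeros(paths); volt = np.zeros(paths)
        for _ in range(steps):
            dW = np.sqrt(dt) * rng.standard_normal(paths)
            x_new = x + xd * dt
            xd += (-c * xd - k1 * x - k3 * x**3 - kappa * volt) * dt \
                  + np.sqrt(2 * D) * dW
            volt += (-volt / tau + xd) * dt
            x = x_new

        print((volt**2).mean())   # Monte Carlo mean-square voltage estimate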

  16. pth moment exponential stochastic synchronization of coupled memristor-based neural networks with mixed delays via delayed impulsive control.

    PubMed

    Yang, Xinsong; Cao, Jinde; Qiu, Jianlong

    2015-05-01

    This paper concerns pth moment synchronization in an array of generally coupled memristor-based neural networks with time-varying discrete delays, unbounded distributed delays, and stochastic perturbations. Hybrid controllers are designed to cope with the uncertainties caused by the state-dependent parameters: (a) state feedback controllers combined with a delayed impulsive controller; (b) an adaptive controller combined with a delayed impulsive controller. Based on an impulsive differential inequality, the properties of random variables, the framework of Filippov solutions, and the Lyapunov functional method, sufficient conditions are derived to guarantee that the considered coupled memristor-based neural networks can be pth moment globally exponentially synchronized onto an isolated node under both classes of hybrid impulsive controllers. Finally, numerical simulations are given to show the effectiveness of the theoretical results. PMID:25703512

  17. Robust myoelectric signal detection based on stochastic resonance using multiple-surface-electrode array made of carbon nanotube composite paper

    NASA Astrophysics Data System (ADS)

    Shirata, Kento; Inden, Yuki; Kasai, Seiya; Oya, Takahide; Hagiwara, Yosuke; Kaeriyama, Shunichi; Nakamura, Hideyuki

    2016-04-01

    We investigated the robust detection of surface electromyogram (EMG) signals based on the stochastic resonance (SR) phenomenon, in which the response to weak signals is optimized by adding noise, combined with multiple surface electrodes. Flexible carbon nanotube composite paper (CNT-cp) was applied to the surface electrodes and showed performance comparable to that of conventional Ag/AgCl electrodes. The SR-based EMG detection system, integrating an 8-Schmitt-trigger network with the multiple-CNT-cp-electrode array, successfully detected weak EMG signals even when the subject's body was in motion, which was difficult to achieve using the conventional technique. The feasibility of the SR-based EMG detection technique was confirmed by demonstrating its applicability to robot hand control.
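
    As a toy demonstration of the detection principle (thresholds, amplitudes, and noise levels are assumed, not taken from the paper), a subthreshold sine flips a single Schmitt trigger reliably only when a moderate amount of noise is added, the hallmark of stochastic resonance.

        import numpy as np

        rng = np.random.default_rng(5)
        fs, f0, T = 10_000, 5.0, 2.0
        t = np.arange(0, T, 1 / fs)
        weak = 0.3 * np.sin(2 * np.pi * f0 * t)   # below the +/-1.0 threshold

        def schmitt(x, thr=1.0):
            out, state = np.empty_like(x), -1.0
            for i, xi in enumerate(x):
                if xi > thr:
                    state = 1.0
                elif xi < -thr:
                    state = -1.0
                out[i] = state
            return out

        for sigma in (0.1, 0.7, 3.0):   # too little / near-optimal / too much
            y = schmitt(weak + sigma * rng.standard_normal(t.size))
            spec = np.abs(np.fft.rfft(y))**2
            freqs = np.fft.rfftfreq(t.size, 1 / fs)
            spec, freqs = spec[1:], freqs[1:]          # drop the DC bin
            peak = spec[np.argmin(np.abs(freqs - f0))]
            print(sigma, peak / (np.median(spec) + 1e-12))  # crude SNR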

  18. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Abdulle, Assyr; Blumenthal, Adrian

    2013-10-01

    A multilevel Monte Carlo (MLMC) method for mean-square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit schemes deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC, the computational cost for stiff systems is significantly reduced, while the computational algorithm remains fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
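
    As a two-level illustration of the MLMC coupling itself (standard Euler-Maruyama on geometric Brownian motion, not the stabilized scheme of the paper), the fine and coarse paths below share Brownian increments, so their difference has small variance and needs only a few samples on the expensive levels.

        import numpy as np

        rng = np.random.default_rng(6)
        mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
        n_fine, n_samples = 64, 50_000

        dt = T / n_fine
        dW = np.sqrt(dt) * rng.standard_normal((n_samples, n_fine))
        xf = np.full(n_samples, x0)     # fine path, n_fine steps
        xc = np.full(n_samples, x0)     # coarse path, n_fine // 2 steps
        for k in range(n_fine):
            xf += mu * xf * dt + sigma * xf * dW[:, k]
            if k % 2 == 1:              # coarse step sums two fine increments
                xc += mu * xc * 2 * dt + sigma * xc * (dW[:, k - 1] + dW[:, k])

        print(np.var(xf))        # single-level estimator variance
        print(np.var(xf - xc))   # far smaller level-difference variance
        # MLMC estimator: E[X^L] = E[X^0] + sum over levels of E[X^l - X^(l-1)]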

  19. Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.

    PubMed

    Sormaz, Milos; Stamm, Tobias; Jenny, Patrick

    2010-05-01

    This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient than classical Monte Carlo (speedup factors of up to 10 were reported) while being equally accurate. To validate the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single-scattering Mueller matrix, which is required to model scattering of polarized light, was determined from Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that the stencil approach achieves a significant speedup of the simulations compared with classical Monte Carlo. PMID:20448777

  20. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    SciTech Connect

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is becoming clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the total dose actually delivered to the patient. However, uncertainties in the mapping between the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: The method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution to be mapped. Its performance is demonstrated in the context of dose mapping based on b-spline registration. It rests on evaluating the sensitivity of the dose mapping to variations of the b-spline coefficients, combined with evaluating the sensitivity of the registration metric to the same variations. The approach was evaluated on patient data deformed according to a breathing model, for which the ground truth of the deformation, and hence the actual dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from areas of the same image where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. The method was tested for dose mapping, but it may be applied to other mapping tasks as well.

  1. Variability of tsunami inundation footprints considering stochastic scenarios based on a single rupture model: Application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Goda, Katsuichiro; Yasuda, Tomohiro; Mori, Nobuhito; Mai, P. Martin

    2015-06-01

    The sensitivity and variability of spatial tsunami inundation footprints in coastal cities and towns due to a megathrust subduction earthquake in the Tohoku region of Japan are investigated by considering different fault geometries and slip distributions. Stochastic tsunami scenarios are generated based on the spectral analysis and synthesis method with respect to an inverted source model. To assess spatial inundation processes accurately, tsunami modeling is conducted using bathymetry and elevation data at 50 m grid resolution. Using the developed methodology for assessing the variability of tsunami hazard estimates, stochastic inundation depth maps can be generated for local coastal communities. These maps are important for improving disaster preparedness by clarifying the consequences of different situations/conditions and by communicating the uncertainty associated with hazard predictions. The analysis indicates that the sensitivity of inundation areas to the geometrical parameters (i.e., top-edge depth, strike, and dip) depends on the tsunami source characteristics and the site location, and is therefore complex and highly nonlinear. The variability assessment of inundation footprints indicates a significant influence of slip distributions. In particular, topographical features of the region, such as ria coasts and near-shore plains, have a major influence on the tsunami inundation footprints.

  2. Probing the anisotropies of a stochastic gravitational-wave background using a network of ground-based laser interferometers

    SciTech Connect

    Thrane, Eric; Mandic, Vuk; Ballmer, Stefan; Romano, Joseph D.; Mitra, Sanjit; Talukder, Dipongkar; Bose, Sukanta

    2009-12-15

    We present a maximum-likelihood analysis for estimating the angular distribution of power in an anisotropic stochastic gravitational-wave background using ground-based laser interferometers. The standard isotropic and gravitational-wave radiometer searches (optimal for point sources) are recovered as special limiting cases. The angular distribution can be decomposed with respect to any set of basis functions on the sky, and the single-baseline, cross-correlation analysis is easily extended to a network of three or more detectors, that is, to multiple baselines. A spherical-harmonic decomposition, which provides maximum-likelihood estimates of the multipole moments of the gravitational-wave sky, is described in detail. We also discuss (i) the covariance matrix of the estimators and its relationship to the detector response of a network of interferometers, (ii) a singular-value decomposition method for regularizing the deconvolution of the detector response from the measured sky map, (iii) the expected increase in sensitivity obtained by including multiple baselines, and (iv) the numerical results of this method when applied to simulated data consisting of both pointlike and diffuse sources. Comparisons between this general method and the standard isotropic and radiometer searches are given throughout, to make contact with the existing literature on stochastic background searches.

  3. Validation of Individual-Based Markov-Like Stochastic Process Model of Insect Behavior and a "Virtual Farm" Concept for Enhancement of Site-Specific IPM.

    PubMed

    Lux, Slawomir A; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin

    2016-01-01

    The paper reports the application of a Markov-like stochastic process agent-based model and a "virtual farm" concept for the enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a "bottom-up ethological" approach and emulates the behavior of the "primary IPM actors", large cohorts of individual insects, within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect the behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments, and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for the broader applicability of the model and the "virtual farm" approach were discussed. PMID:27602000

  4. Simulating reservoir releases to mitigate climate impacts on fish sustainability below Shasta Lake using stochastic and mechanistic modeling approaches

    NASA Astrophysics Data System (ADS)

    Sapin, J. R.; Saito, L.; Rajagopalan, B.; Caldwell, R. J.

    2013-12-01

    Preservation of the Chinook salmon fishery on the Sacramento River in California has been a major concern since the winter-run Chinook was listed as threatened in 1989. The construction of Shasta Dam and Reservoir in 1945 prevented the salmon from reaching their native cold-water spawning habitat, resulting in severe population declines. The temperature control device (TCD) installed at Shasta Dam in 1997 provides increased capability for supplying cold-water habitat downstream of the dam to stimulate salmon spawning. However, increased air temperatures due to climate change could make it more difficult to meet downstream temperature targets with the TCD. By coupling stochastic hydroclimatology generation with two-dimensional hydrodynamic modeling of the reservoir, we can simulate TCD operations under extreme climate conditions. This is accomplished by stochastically generating climate and inflow scenarios (created with historical data from NOAA, USGS and USBR) as input to a CE-QUAL-W2 model of the reservoir that can simulate TCD operations. Simulations will investigate whether selective withdrawal from multiple gates of the TCD is capable of meeting temperature targets downstream of the dam under extreme hydroclimatic conditions. Moreover, our non-parametric methods for stochastically generating climate and inflow scenarios can produce statistically representative years of extreme wet or extreme dry conditions beyond what is seen in the historical record. This allows us to simulate TCD operations for unprecedented hydroclimatic conditions, with implications for climate changes in the watershed. Preliminary results of temperature outputs from simulations of TCD operations under extreme climate conditions with CE-QUAL-W2 will be presented. The conditions chosen for simulation are grounded in real-world managerial concerns by utilizing collaborative workshops with reservoir managers to establish which hydroclimatic scenarios would be of most concern for

  5. Stochastic simulations of switching error in magneto elastic and spin-Hall effect based switching of nanomagnetic devices

    NASA Astrophysics Data System (ADS)

    Al-Rashid, Md Mamun; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha

    2015-03-01

    Electrically generated mechanical strain and spin torque due to a spin current generated via the giant spin Hall effect are two promising energy-efficient means of switching single-domain multiferroic nanomagnets in magnetic computing devices. However, switching of nanomagnets is always error-prone at room temperature owing to thermal noise. In this work, we model the strain-based and spin-Hall-effect-based switching of nanomagnetic devices using the stochastic Landau-Lifshitz-Gilbert (LLG) equation and present a quantitative comparison in terms of switching time, reliability, and energy dissipation. This work is supported by the US National Science Foundation under the SHF-Small Grant CCF-1216614, CAREER Grant CCF-1253370, NEB 2020 Grant ECCS-1124714 and SRC under NRI Task 2203.001.

  6. Study of detecting mechanism of carbon nanotubes gas sensor based on multi-stable stochastic resonance model.

    PubMed

    Jingyi, Zhu

    2015-01-01

    The detection mechanism of a carbon nanotube gas sensor based on a multi-stable stochastic resonance (MSR) model was studied in this paper. A numerical simulation model based on MSR was established, and a gas-ionizing experiment was performed in which electronic white noise was added to induce a 1.65 MHz periodic component in the carbon nanotube gas sensor. It was found that the signal-to-noise ratio (SNR) spectrum displayed two maximal values, consistent with the shape of the broken-line potential function. The results of the gas-ionizing experiment demonstrated that the 1.65 MHz periodic component exhibited multiple MSR phenomena, in accordance with the numerical simulation results. In this way, numerical simulation provides an innovative method for studying the detection mechanism of carbon nanotube gas sensors. PMID:26198910

  8. Research Note: The consequences of different methods for handling missing network data in Stochastic Actor Based Models

    PubMed Central

    Hipp, John R.; Wang, Cheng; Butts, Carter T.; Jose, Rupa; Lakon, Cynthia M.

    2015-01-01

    Although stochastic actor based models (e.g., as implemented in the SIENA software program) are growing in popularity as a technique for modeling longitudinal network data, a relatively understudied issue is the consequence of missing network data for longitudinal analysis. We explore this issue in our research note by utilizing data from four schools in an existing dataset (the AddHealth dataset) over three time points, assessing the substantive consequences of four different strategies for addressing missing network data. The results indicate that whereas some measures in such models are estimated relatively robustly regardless of the strategy chosen for addressing missing network data, some substantive conclusions will differ based on the missing-data strategy chosen. These results have important implications for this burgeoning applied research area, implying that researchers should consider more carefully how they address missing data when estimating such models. PMID:25745276

  9. Bus-based park-and-ride system: a stochastic model on multimodal network with congestion pricing schemes

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyuan; Meng, Qiang

    2014-05-01

    This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with a bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares into time units, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions introduce randomness into the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem with consideration of the interactions between auto flows and transit (bus) flows. Then, a fixed-point model with a unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested on a network example.

  10. A stochastic agent-based model of pathogen propagation in dynamic multi-relational social networks

    PubMed Central

    Khan, Bilal; Dombrowski, Kirk; Saad, Mohamed

    2015-01-01

    We describe a general framework for modeling and stochastic simulation of epidemics in realistic dynamic social networks, which incorporates heterogeneity in the types of individuals, types of interconnecting risk-bearing relationships, and types of pathogens transmitted across them. Dynamism is supported through arrival and departure processes, continuous restructuring of risk relationships, and changes to pathogen infectiousness, as mandated by natural history; dynamism is regulated through constraints on the local agency of individual nodes and their risk behaviors, while simulation trajectories are validated using system-wide metrics. To illustrate its utility, we present a case study that applies the proposed framework towards a simulation of HIV in artificial networks of intravenous drug users (IDUs) modeled using data collected in the Social Factors for HIV Risk survey. PMID:25859056

  11. Variance-based sensitivity indices for stochastic models with correlated inputs

    SciTech Connect

    Kala, Zdeněk

    2015-03-10

    The goal of this article is to formulate the principles of one possible strategy for implementing correlation between input random variables, in a form usable for algorithm development and for the evaluation of Sobol’s sensitivity indices. For the types of stochastic computational models commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for the input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used to analyze the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
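
    As a sketch of the underlying building block only (Saltelli's pick-and-freeze Monte Carlo estimator for independent inputs; the article's handling of correlated inputs via decorrelation permutations is not reproduced here), with a toy response function of my choosing:

        import numpy as np

        rng = np.random.default_rng(7)

        def response(x):                 # toy structural response, assumed
            return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A = rng.standard_normal((n, d))
        B = rng.standard_normal((n, d))
        fA, fB = response(A), response(B)
        var = np.var(fA)

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # resample only coordinate i
            S1 = np.mean(fB * (response(ABi) - fA)) / var
            print(i, S1)                 # first-order Sobol' indices
                                         # (about 1/6, 2/3, 0 here)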

  12. Stochastic extension of cellular manufacturing systems: a queuing-based analysis

    NASA Astrophysics Data System (ADS)

    Fardis, Fatemeh; Zandi, Afagh; Ghezavati, Vahidreza

    2013-07-01

    Clustering parts and machines into part families and machine cells is a major decision in the design of cellular manufacturing systems, referred to as cell formation. This paper presents a non-linear mixed integer programming model to design cellular manufacturing systems under the assumption that the arrival rates of parts into cells and the machine service rates are stochastic parameters described by exponential distributions. Uncertain situations may create a queue behind each machine; the model therefore accounts for the average waiting time of parts behind each machine in order to achieve an efficient system. The objective function minimizes the sum of the idleness cost of machines, the sub-contracting cost for exceptional parts, the cost of non-utilized machines, and the holding cost of parts in the cells. Finally, the linearized model is solved with the Cplex solver of GAMS, and a sensitivity analysis is performed to illustrate the effects of the parameters.
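
    As a small numeric illustration of the queueing ingredient (rates and costs assumed), exponential arrivals and service make each machine an M/M/1 station, whose mean queueing delay enters the holding-cost term of the objective:

        lam, mu = 4.0, 5.0            # parts/hour arriving and served, assumed
        rho = lam / mu                # machine utilization
        Wq = lam / (mu * (mu - lam))  # mean wait in queue (hours), M/M/1
        hold = 2.5                    # holding cost per part-hour, assumed
        print(rho, Wq, hold * lam * Wq)  # utilization, wait, cost rate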

  13. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    PubMed Central

    Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
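
    As a sketch of the probabilistic-computing primitive behind such designs (my illustration, not the paper's circuit), numbers in [0, 1] encoded as random bitstreams can be multiplied by a single AND gate, which is what lets the arithmetic hardware shrink:

        import numpy as np

        rng = np.random.default_rng(8)
        n_bits = 100_000
        a, b = 0.8, 0.3

        stream_a = rng.random(n_bits) < a        # P(bit = 1) = a
        stream_b = rng.random(n_bits) < b        # P(bit = 1) = b
        product = np.mean(stream_a & stream_b)   # AND gate: P(1) = a * b
        print(product)                           # ~0.24, with stochastic error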

  15. Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.

    PubMed

    Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong

    2014-12-01

    In this paper, the universal integral sliding-mode controller problem for general stochastic nonlinear systems modeled by Itô-type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely from the initial time, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that results on universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain them, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches. PMID:24718584

  16. An accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems using gradient-based diffusion and tau-leaping

    PubMed Central

    Koh, Wonryull; Blackwell, Kim T.

    2011-01-01

    Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives the most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling only the net, or observed, diffusion events from higher to lower concentrations, rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed to speed up nonspatial, homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes, while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
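
    As a minimal tau-leap step for a single reaction channel (illustrative only; the accelerated algorithm above adds gradient-based diffusive transfers and leap-condition checks on top of this), the number of firings in a window of length tau is Poisson with mean propensity * tau:

        import numpy as np

        rng = np.random.default_rng(9)
        k, nA, nB = 0.1, 1000, 0       # A -> B with rate constant k, assumed
        t, tau = 0.0, 0.05

        for _ in range(100):
            propensity = k * nA
            firings = rng.poisson(propensity * tau)
            firings = min(firings, nA)  # guard against negative populations
            nA -= firings
            nB += firings
            t += tau

        print(t, nA, nB)   # nA decays roughly like 1000 * exp(-k * t)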

  17. On impulsive integrated pest management models with stochastic effects

    PubMed Central

    Akman, Olcay; Comar, Timothy D.; Hrozencik, Daniel

    2015-01-01

    We extend existing impulsive differential equation models for integrated pest management (IPM) by including stage structure for both predator and prey as well as by adding stochastic elements in the birth rate of the prey. Based on our model, we propose an approach that incorporates various competing stochastic components. This approach enables us to select a model with optimally determined weights for maximum accuracy and precision in parameter estimation. This is significant in the case of IPM because the proposed model accommodates varying unknown environmental and climatic conditions, which affect the resources needed for pest eradication. PMID:25954144

  18. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined bit-processing structure. For large applications, a flexible way to interconnect many such chips is provided.

  19. Stochastic cooling

    SciTech Connect

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker, so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large, number of particles, there remains a residue of the single-particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of proton-antiproton colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an antiproton accumulator for the Tevatron.

  20. Stochastic fractal-based models of heterogeneity in subsurface hydrology: Origins, applications, limitations, and future research questions

    NASA Astrophysics Data System (ADS)

    Molz, Fred J.; Rajaram, Harihar; Lu, Silong

    2004-03-01

    Modern measurement techniques have shown that property distributions in natural porous and fractured media appear highly irregular and nonstationary in a spatial statistical sense. This implies that direct statistical analyses of the property distributions are not appropriate, because the statistical measures developed would be dependent on position and therefore nonunique. An alternative, which has been explored to an increasing degree during the past 20 years, is to consider the class of functions known as nonstationary stochastic processes with spatially stationary increments. When such increment distributions are described by probability density functions (PDFs) of the Gaussian, Levy, or gamma class, or by PDFs that converge to one of these classes under addition, one is dealing with a so-called stochastic fractal, the mathematical theory of which was developed during the first half of the last century. The scaling property associated with such fractals is called self-affinity, which is more general than geometric self-similarity. Herein we review the application of Gaussian and Levy stochastic fractals and multifractals in subsurface hydrology, mainly to porosity, hydraulic conductivity, and fracture roughness, along with the characteristics of flow and transport in such fields. Included are the development and application of fractal and multifractal concepts; a review of the measurement techniques, such as the borehole flowmeter and gas minipermeameter, that are motivating the use of fractal-based theories; the idea of a spatial weighting function associated with a measuring instrument; how fractal fields are generated; and descriptions of the topography and aperture distributions of self-affine fractures. In a somewhat different vein, the last part of the review deals with fractal- and fragmentation-based descriptions of fracture networks and the implications for transport in such networks. Broad conclusions include the implication that models

  1. Debris-flow risk analysis in a managed torrent based on a stochastic life-cycle performance.

    PubMed

    Ballesteros Cánovas, J A; Stoffel, M; Corona, C; Schraml, K; Gobiet, A; Tani, S; Sinabell, F; Fuchs, S; Kaitna, R

    2016-07-01

    Two key factors can affect the functional ability of protection structures in mountain torrents: (i) maintenance of existing infrastructure (a majority of existing works are in the second half of their life cycle), and (ii) changes in debris-flow activity as a result of ongoing and expected future climatic changes. Here, we explore the applicability of a stochastic life-cycle performance approach to assess debris-flow risk in the heavily managed Wartschenbach torrent (Lienz region, Austria) and to quantify the associated expected economic losses. We do so by considering the maintenance costs of restoring infrastructure in the aftermath of debris-flow events, as well as by assessing the probability of check dam failure (e.g., as a result of overload). Our analysis comprises two different management strategies and three scenarios defining future changes in debris-flow activity resulting from climatic changes. At the study site, an average debris-flow frequency of 21 events per decade was observed for the period 1950-2000; activity at the site is projected to change by +38% to -33%, according to the climate scenario used. Comparison of the different management alternatives suggests that the current mitigation strategy will reduce expected damage to infrastructure and population almost fully (by 89%). However, to guarantee a comparable level of safety, maintenance costs are expected to increase by 57-63%, with the cost of each intervention rising by ca. 50%. Our analysis therefore also highlights the importance of taking maintenance costs into account in risk assessments for managed torrent systems, as such costs arise from both progressive and event-related deterioration. We conclude that the stochastic life-cycle performance approach adopted in this study represents an integrated way to assess the long-term effects and costs of prevention structures in managed torrents. PMID:26994802

  2. Observer-based adaptive neural dynamic surface control for a class of non-strict-feedback stochastic nonlinear systems

    NASA Astrophysics Data System (ADS)

    Yu, Zhaoxu; Li, Shugang; Li, Fangfei

    2016-01-01

    The problem of adaptive output feedback stabilisation is addressed for a more general class of non-strict-feedback stochastic nonlinear systems in this paper. Neural network (NN) approximation and the variable separation technique are utilised to deal with unknown subsystem functions that depend on the whole state vector. Based on the design of a simple input-driven observer, an adaptive NN output feedback controller containing only one parameter to be updated is developed for such systems using the dynamic surface control method. The proposed control scheme ensures that all signals in the closed-loop systems are bounded in probability and that the error signals remain semi-globally uniformly ultimately bounded in fourth moment (or mean square). Two simulation examples are given to illustrate the effectiveness of the proposed control design.

  3. Evaluation of stochastic reservoir operation optimization models

    NASA Astrophysics Data System (ADS)

    Celeste, Alcigeimes B.; Billib, Max

    2009-09-01

    This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.

  4. Dike Strength Analysis on a Regional Scale Based On a Stochastic Subsoil Model

    NASA Astrophysics Data System (ADS)

    Koelewijn, A. R.; Vastenburg, E. W.

    2013-12-01

    [Figure caption: Dike sections with stochastic subsoil profiles.]

  5. Stochastic optimization algorithm selection in hydrological model calibration based on fitness landscape characterization

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc

    2014-05-01

    The process of calibrating hydrological model parameters is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created, and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe), because each algorithm is better suited to one type of optimization problem than another. This research project's aim was twofold. First, we sought to identify features of the calibration problems' fitness landscapes that map the encountered problem types to the best-suited optimization algorithm. Second, the optimal number of model evaluations, minimizing resource usage while maximizing overall model quality, was investigated. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was run using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was repeated 30 times for statistical analysis. For every parameter set tested during calibration, a validation value was computed on a separate period, making it possible to contrast the calibration skill with the validation skill of the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms), and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration scores of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this

  6. Estimation of Listeria monocytogenes and Escherichia coli O157:H7 prevalence and levels in naturally contaminated rocket and cucumber samples by deterministic and stochastic approaches.

    PubMed

    Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H

    2015-02-01

    The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium, and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite, respectively, in parallel. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, the prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.17) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed between rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples. PMID:25710146
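
    As a sketch of the simplest version of such an estimate (a Beta-Binomial posterior with a uniform prior and an assumed perfect test; the study's Bayesian model additionally accounts for the sensitivity and specificity of the culture media), the 7 positive rocket samples out of 100 give:

        from scipy.stats import beta

        positives, n = 7, 100
        posterior = beta(1 + positives, 1 + n - positives)
        print(posterior.mean())               # posterior mean prevalence ~0.078
        print(posterior.ppf([0.025, 0.975]))  # 95% credible interval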

  7. Sucrose quantitative and qualitative analysis from tastant mixtures based on Cu foam electrode and stochastic resonance.

    PubMed

    Hui, Guohua; Zhang, Jianfeng; Li, Jian; Zheng, Le

    2016-04-15

    The quantitative and qualitative determination of sucrose in complex tastant mixtures using a Cu foam electrode was investigated in this study. Cu foam was prepared, and its three-dimensional (3-D) mesh structure was characterized by scanning electron microscopy (SEM). The Cu foam was used as the working electrode in a three-electrode electrochemical system. Cyclic voltammetry (CV) scans revealed the oxidation process of sucrose on the Cu foam electrode. Amperometric i-t scans indicated that the Cu foam electrode selectively responded to sucrose in four tastant mixtures, with low limits of detection (LOD) of 35.34 μM, 49.85 μM, 45.89 μM, and 26.81 μM, respectively. The presence of quinine, NaCl, citric acid (CA) and their mixtures had no effect on sucrose detection. Furthermore, mixtures containing different tastants could be discriminated by the non-linear double-layered cascaded series stochastic resonance (DCSSR) output signal-to-noise ratio (SNR) eigen peak parameters of the CV measurement data. The proposed method provides a promising way to analyze sweeteners in commercial food. PMID:26675854

  8. A graph-based N-body approximation with application to stochastic neighbor embedding.

    PubMed

    Parviainen, Eli

    2016-03-01

    We propose a novel approximation technique, bubble approximation (BA), for repulsion forces in an N-body problem where attraction has a limited range and repulsion acts between all points. These kinds of systems occur frequently in dimension reduction and graph drawing. Like tree codes, the established N-body approximation method, BA replaces several point-to-point computations by one area-to-point computation. The novelty of BA is that it considers not only the magnitudes but also the directions of the forces from the area; therefore, its area-to-point approximations are applicable anywhere in the space. The joint effect of the forces from inside the area is calculated analytically, assuming a homogeneous mass of points inside the area. These two features free BA from the hierarchical data structures and complicated bookkeeping of interactions that plague tree codes. Instead, BA uses a simple graph to control the computations. The graph provides a sparse matrix, which, suitably weighted, replaces the full matrix of pairwise comparisons in the N-body problem. As a concrete example, we implement a sparse-matrix version of stochastic neighbor embedding (a dimension reduction method) and demonstrate its good performance by comparisons to the full-matrix method and to three different approximate versions of the same method. PMID:26690681

  9. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  11. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  12. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiation (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the server and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
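
    Two ingredients named above, exponential moving averaging of noisy RSS samples and a stochastic ascent on a link-balancing objective, can be sketched in one dimension. The path-loss model, noise level, and step size below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rss(d):
        """Assumed log-distance path-loss model with shadowing noise (dBm)."""
        return -40.0 - 20.0 * np.log10(max(d, 0.1)) + rng.normal(0.0, 2.0)

    def ema(prev, sample, alpha=0.3):
        """Exponential moving average used to smooth the raw RSS samples."""
        return alpha * sample + (1.0 - alpha) * prev

    server, client = 0.0, 10.0         # fixed node positions on a line
    x = 2.0                            # relay starts closer to the server
    f_s, f_c = rss(x - server), rss(client - x)
    step = 0.05
    for _ in range(400):
        f_s = ema(f_s, rss(abs(x - server)))
        f_c = ema(f_c, rss(abs(x - client)))
        # Stochastic ascent on the balancing objective -(f_s - f_c)^2: if the
        # server link is stronger, step toward the client, and vice versa.
        x += step * np.sign(f_s - f_c)
    print(f"relay settled near x = {x:.2f} (midpoint 5.0)")
    ```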

  13. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    PubMed Central

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiation (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the server and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734

  14. Stochastic entrainment of a stochastic oscillator.

    PubMed

    Wang, Guanyu; Peskin, Charles S

    2015-11-01

    In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs. PMID:26651734
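
    The model described above is simple to simulate directly. The sketch below realizes an N-state ring with a constant jump rate and a periodic stochastic reset to one distinguished state; entrainment appears as a non-uniform phase histogram sampled at stimulus times. Parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100                      # states on the circle
    rate = float(N)              # jump rate -> natural mean period N / rate = 1.0
    p_reset, T_stim = 0.05, 1.0  # resetting probability and stimulus period (1:1)

    state, phases = 0, []
    for _ in range(20000):
        # Jumps in one stimulus interval of a constant-rate chain: Poisson(rate * T).
        state = (state + rng.poisson(rate * T_stim)) % N
        if rng.random() < p_reset:           # stochastic resetting to state 0
            state = 0
        phases.append(state / N)

    # Entrainment shows up as a peaked phase histogram at stimulus times.
    hist, _ = np.histogram(phases, bins=10, range=(0.0, 1.0), density=True)
    print(np.round(hist, 2))
    ```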

  15. Aquifer Structure Identification Using Stochastic Inversion

    SciTech Connect

    Harp, Dylan R; Dai, Zhenxue; Wolfsberg, Andrew V; Vrugt, Jasper A

    2008-01-01

    This study presents a stochastic inverse method for aquifer structure identification using sparse geophysical and hydraulic response data. The method is based on updating structure parameters from a transition probability model to iteratively modify the aquifer structure and parameter zonation. The method is extended to the adaptive parameterization of facies hydraulic parameters by including these parameters as optimization variables. The stochastic nature of the statistical structure parameters leads to nonconvex objective functions. A multi-method genetically adaptive evolutionary approach (AMALGAM-SO) was selected to perform the inversion given its search capabilities. Results are obtained as a probabilistic assessment of facies distribution based on indicator cokriging simulation of the optimized structural parameters. The method is illustrated by estimating the structure and facies hydraulic parameters of a synthetic example with a transient hydraulic response.

  16. A subgrid based approach for morphodynamic modelling

    NASA Astrophysics Data System (ADS)

    Volp, N. D.; van Prooijen, B. C.; Pietrzak, J. D.; Stelling, G. S.

    2016-07-01

    To improve the accuracy and the efficiency of morphodynamic simulations, we present a subgrid based approach for a morphodynamic model. This approach is well suited for areas characterized by sub-critical flow, such as estuaries, coastal areas and lowland rivers. This new method uses a different grid resolution to compute the hydrodynamics and the morphodynamics. The hydrodynamic computations are carried out with a subgrid based, two-dimensional, depth-averaged model. This model uses a coarse computational grid in combination with a subgrid. The subgrid contains high resolution bathymetry and roughness information to compute volumes, friction and advection. The morphodynamic computations are carried out entirely on a high resolution grid, the bed grid. It is key to find a link between the information defined on the different grids in order to guarantee the feedback between the hydrodynamics and the morphodynamics. This link is made by using a new physics-based interpolation method. The method interpolates water levels and velocities from the coarse grid to the high resolution bed grid. The morphodynamic solution improves significantly when using the subgrid based method compared to a full coarse grid approach. The Exner equation is discretised with an upwind method based on the direction of the bed celerity. This ensures a stable solution for the Exner equation. By means of three examples, it is shown that the subgrid based approach offers a significant improvement at a minimal computational cost.
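
    The upwind Exner discretisation mentioned above can be sketched in a few lines. The sediment transport law (flux proportional to the cube of velocity), the sub-critical flow setup, and all constants below are assumptions for illustration, not the paper's subgrid configuration.

    ```python
    import numpy as np

    nx, dx, dt = 200, 5.0, 0.5               # grid cells, spacing [m], time step [s]
    porosity, a_s = 0.4, 0.05                # bed porosity; assumed flux coefficient
    x = np.arange(nx) * dx
    bed = 0.5 * np.exp(-((x - 300.0) / 50.0) ** 2)   # initial bed hump [m]
    h0, q_w = 2.0, 2.0                       # water surface datum [m], unit discharge [m^2/s]

    for _ in range(2000):
        depth = np.maximum(h0 - bed, 0.1)    # rigid-lid shortcut, flow stays sub-critical
        u = q_w / depth                      # depth-averaged velocity
        qs = a_s * u ** 3                    # sediment flux (assumed cubic law)
        # The bed celerity dqs/d(bed)/(1-p) is positive here (higher bed ->
        # shallower, faster flow -> larger flux), so upwind = backward difference.
        dqs_dx = (qs - np.roll(qs, 1)) / dx
        dqs_dx[0] = 0.0
        bed -= dt / (1.0 - porosity) * dqs_dx

    print(f"hump crest now at x = {x[np.argmax(bed)]:.0f} m (started at 300 m)")
    ```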

  17. Stochastic stage-structured modeling of the adaptive immune system

    SciTech Connect

    Chao, D. L.; Davenport, M. P.; Forrest, S.; Perelson, Alan S.

    2003-01-01

    We have constructed a computer model of the cytotoxic T lymphocyte (CTL) response to antigen and the maintenance of immunological memory. Because immune responses often begin with small numbers of cells and there is great variation among individual immune systems, we have chosen to implement a stochastic model that captures the life cycle of T cells more faithfully than deterministic models. Past models of the immune response have been differential equation based, which do not capture stochastic effects, or agent-based, which are computationally expensive. We use a stochastic stage-structured approach that has many of the advantages of agent-based modeling but is more efficient. Our model can provide insights into the effect infections have on the CTL repertoire and the response to subsequent infections.

  18. Sensitivity Analysis and Stochastic Simulations of Non-equilibrium Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2009-11-05

    We study parametric uncertainties involved in plasma flows and apply stochastic sensitivity analysis to rank the importance of all inputs to guide large-scale stochastic simulations. Specifically, we employ different gradient-based sensitivity methods, namely Morris, multi-element probabilistic collocation method (ME-PCM) on sparse grids, Quasi-Monte Carlo, and Monte Carlo methods. These approaches go beyond the standard "One-At-a-Time" sensitivity analysis and provide a measure of the nonlinear interaction effects for the uncertain inputs. The objective is to perform systematic stochastic simulations of plasma flows treating only the inputs with the highest sensitivity index as stochastic processes, hence substantially reducing the computational cost. Two plasma flow examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis. The first one is a two-fluid model in a shock tube while the second one is a one-fluid/two-temperature model in flow past a cylinder.
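
    Of the screening methods named above, the Morris method is the simplest to sketch: random one-at-a-time trajectories yield elementary effects whose mean magnitude and spread rank the inputs. The test function below is an assumed surrogate, not a plasma-flow model.

    ```python
    import numpy as np

    def model(z):
        """Assumed surrogate with unequal sensitivities and one interaction."""
        return 5.0 * z[0] + 0.5 * z[1] + 2.0 * z[0] * z[2] + 0.1 * z[3]

    rng = np.random.default_rng(3)
    k, r, delta = 4, 50, 0.25                # inputs, trajectories, step in [0, 1]
    ee = [[] for _ in range(k)]
    for _ in range(r):
        z = rng.uniform(0.0, 1.0 - delta, size=k)
        base = model(z)
        for i in rng.permutation(k):         # one-at-a-time moves in random order
            z[i] += delta
            new = model(z)
            ee[i].append((new - base) / delta)   # elementary effect of input i
            base = new

    for i in range(k):
        effects = np.array(ee[i])
        # Large mu* = influential input; large sigma = nonlinearity/interaction.
        print(f"z{i}: mu* = {np.abs(effects).mean():.2f}, sigma = {effects.std():.2f}")
    ```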

  19. Investigating the intermittency of turbulence in the stable atmospheric boundary layer - a field data and stochastic modeling approach

    NASA Astrophysics Data System (ADS)

    Vercauteren, N.; Von Larcher, T. G.; Bou-Zeid, E.; Klein, R.; Parlange, M. B.

    2013-12-01

    Intermittent turbulence is a common feature of stably stratified atmospheric flows, yet its modeling is still problematic. Mesoscale motions such as gravity waves, Kelvin-Helmholtz instabilities or density currents may trigger intermittent turbulence and greatly complicate the modeling and measurements of the stable boundary layer (SBL). In this study we investigate the intermittency of turbulence in very stable conditions by applying new statistical analysis tools to the existing SnoHATS dataset, collected in Switzerland over the Glacier de la Plaine Morte in 2006. These tools could then be used to develop stochastic parameterizations of the SBL for use in weather or climate models. The SnoHATS dataset includes measurements of atmospheric turbulence collected by horizontal arrays of sonic anemometers. This study applies time-series analysis tools developed for meteorological data to analyze the SnoHATS dataset with a new perspective. Turbulence in very stable conditions exhibits intermittency, and there is interplay between larger scale atmospheric flow features (at the so-called submesoscales) and the onset of turbulence. We investigate the use of statistical tools such as hidden Markov models (HMM) and nonstationary multivariate autoregressive factor models (VARX) as a way to define the interactions between lower-frequency modes and turbulence modes. The statistical techniques allow for separation of the data according to metastable states, such as quiet and turbulent periods in the stratified atmosphere.

  20. New Approaches to Rainfall and Flood Frequency Analysis Using High Resolution Radar Rainfall Fields and Stochastic Storm Transposition

    NASA Astrophysics Data System (ADS)

    Wright, D. B.; Smith, J. A.; Villarini, G.; Baeck, M. L.

    2012-12-01

    Conventional techniques for rainfall and flood frequency analysis in small watersheds involve a variety of assumptions regarding the spatial and temporal structure of extreme rainfall systems as well as how resulting runoff moves through the drainage network. These techniques were developed at a time when observational and computational resources were limited. They continue to be used in practice though their validity has not been fully examined. New observational and computational resources such as high-resolution radar rainfall estimates and distributed hydrologic models allow us to examine these assumptions and to develop alternative methods for estimating flood risk. We have developed a high-resolution (1 square km, 15-minute resolution) radar rainfall dataset for the 2001-2010 period using the Hydro-NEXRAD processing system, which has been bias corrected using a dense network of 71 rain gages in the Charlotte metropolitan area. The bias-corrected radar rainfall estimates compare favorably with rain gage measurements. The radar rainfall dataset is used in a stochastic storm transposition framework to estimate the frequency of extreme rainfall for urban watersheds ranging from the point/radar-pixel scale up to 240 square km, and can be combined with the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model to perform flood frequency analysis. The results of these frequency analyses can be compared against the results of conventional methods such as the NOAA Atlas 14 precipitation frequency estimates and peak discharge estimates prepared by FEMA and the North Carolina state government.

  1. Structural damage detection based on stochastic subspace identification and statistical pattern recognition: II. Experimental validation under varying temperature

    NASA Astrophysics Data System (ADS)

    Lin, Y. Q.; Ren, W. X.; Fang, S. E.

    2011-11-01

    Although most vibration-based damage detection methods can acquire satisfactory verification on analytical or numerical structures, most of them may encounter problems when applied to real-world structures under varying environments. The damage detection methods that directly extract damage features from the periodically sampled dynamic time history response measurements are desirable but relevant research and field application verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index using the forward innovation model by stochastic subspace identification of a vibrating structure proposed in the first part have been investigated against two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. Experimental verification is focused on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to the structural deterioration or state alteration. This makes it possible to detect the structural damage for the real-scale structures experiencing ambient excitations and varying environmental conditions.

  2. Physics-based approach to haptic display

    NASA Technical Reports Server (NTRS)

    Brown, J. Michael; Colgate, J. Edward

    1994-01-01

    This paper addresses the implementation of complex multiple degree of freedom virtual environments for haptic display. We suggest that a physics based approach to rigid body simulation is appropriate for hand tool simulation, but that currently available simulation techniques are not sufficient to guarantee successful implementation. We discuss the desirable features of a virtual environment simulation, specifically highlighting the importance of stability guarantees.

  3. How Initial Prevalence Moderates Network-based Smoking Change: Estimating Contextual Effects with Stochastic Actor-based Models.

    PubMed

    Adams, Jimi; Schaefer, David R

    2016-03-01

    We use an empirically grounded simulation model to examine how initial smoking prevalence moderates the effectiveness of potential interventions designed to change adolescent smoking behavior. Our model investigates the differences that result when manipulating peer influence and smoker popularity as intervention levers. We demonstrate how a simulation-based approach allows us to estimate outcomes that arise (1) when intervention effects could plausibly alter peer influence and/or smoker popularity effects and (2) across a sample of schools that match the range of initial conditions of smoking prevalence in U.S. schools. We show how these different initial conditions combined with the exact same intervention effects can produce substantially different outcomes; for example, effects that produce smoking declines in some settings can actually increase smoking in others. We explore the form and magnitude of these differences. Our model also provides a template to evaluate the potential effects of alternative intervention scenarios. PMID:26957133

  4. Advanced Approach of Multiagent Based Buoy Communication

    PubMed Central

    Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej

    2015-01-01

    Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197

  5. Advanced Approach of Multiagent Based Buoy Communication.

    PubMed

    Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej

    2015-01-01

    Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197

  6. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar - Atmosphere - Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Due to its robustness to alterations in the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is the canopy scene setup. 3-D scanning applications have been used to represent canopy structure as accurately as possible, but they are time consuming. Botanical growth functions can be used to model single-tree growth, but cannot express the interaction among trees. The L-System is also a functionally controlled tree growth simulation model, but it requires large computing memory. Additionally, it only models the current tree patterns rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more constructive to use regular solids such as ellipsoids, cones, and cylinders to represent single canopies. Considering the allelopathy phenomenon in some open forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed to generate the 3-D canopy scene in this study. The canopy coverage (%) and the tree amount (N) of the 3-D scene are declared first, similar to a random open forest image. Accordingly, we randomly generate each canopy radius (rc). We then set each circle's central coordinate on the XY-plane while keeping the circles separate from each other via the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
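
    The placement step of the circle packing algorithm described above can be sketched by rejection sampling: radii are drawn first, and each circle centre is accepted only if it stays clear of all previously placed canopies. All dimensions below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    L_plot, n_trees = 100.0, 60           # plot side [m] and declared tree amount
    radii = np.sort(rng.uniform(1.5, 4.0, n_trees))[::-1]   # canopy radii rc, large first
    placed = []                           # list of (centre, radius)
    for rc in radii:
        for _ in range(10000):            # rejection loop
            c = rng.uniform(rc, L_plot - rc, size=2)
            # Accept only if the candidate circle clears every accepted circle.
            if all(np.hypot(*(c - c2)) >= rc + r2 for c2, r2 in placed):
                placed.append((c, rc))
                break

    coverage = sum(np.pi * r ** 2 for _, r in placed) / L_plot ** 2
    print(f"placed {len(placed)} canopies, coverage = {100 * coverage:.1f}%")
    ```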

  7. Bayesian and Profile Likelihood Approaches to Time Delay Estimation for Stochastic Time Series of Gravitationally Lensed Quasars

    NASA Astrophysics Data System (ADS)

    Tak, Hyungsuk; Mandel, Kaisey; van Dyk, David A.; Kashyap, Vinay; Meng, Xiao-Li; Siemiginowska, Aneta

    2016-01-01

    The gravitational field of a galaxy can act as a lens and deflect the light emitted by a more distant object such as a quasar. If the galaxy is a strong gravitational lens, it can produce multiple images of the same quasar in the sky. Since the light in each gravitationally lensed image traverses a different path length and gravitational potential from the quasar to the Earth, fluctuations in the source brightness are observed in the several images at different times. We infer the time delay between these fluctuations in the brightness time series data of each image, which can be used to constrain cosmological parameters. Our model is based on a state-space representation for irregularly observed time series data generated from a latent continuous-time Ornstein-Uhlenbeck process. We account for microlensing variations via a polynomial regression in the model. Our Bayesian strategy adopts scientifically motivated hyper-prior distributions and a Metropolis-Hastings within Gibbs sampler. We improve the sampler by using an ancillarity-sufficiency interweaving strategy, and adaptive Markov chain Monte Carlo. We introduce a profile likelihood of the time delay as an approximation to the marginal posterior distribution of the time delay. The Bayesian and profile likelihood approaches complement each other, producing almost identical results; the Bayesian method is more principled but the profile likelihood is faster and simpler to implement. We demonstrate our estimation strategy using simulated data of doubly- and quadruply-lensed quasars from the Time Delay Challenge, and observed data of quasars Q0957+561 and J1029+2623.

  8. Stochastic variational method as quantization scheme: Field quantization of the complex Klein-Gordon equation

    NASA Astrophysics Data System (ADS)

    Koide, T.; Kodama, T.

    2015-09-01

    The stochastic variational method (SVM) is the generalization of the variational approach to systems described by stochastic variables. In this paper, we investigate the applicability of SVM as an alternative field-quantization scheme, by considering the complex Klein-Gordon equation. There, the Euler-Lagrange equation for the stochastic field variables leads to the functional Schrödinger equation, which can be interpreted as the Euler (ideal fluid) equation in the functional space. The present formulation is a quantization scheme based on commutable variables, so that there appears no ambiguity associated with the ordering of operators, e.g., in the definition of Noether charges.

  9. On Input-to-State Stability of Switched Stochastic Nonlinear Systems Under Extended Asynchronous Switching.

    PubMed

    Kang, Yu; Zhai, Di-Hua; Liu, Guo-Ping; Zhao, Yun-Bo

    2016-05-01

    An extended asynchronous switching model is investigated for a class of switched stochastic nonlinear retarded systems in the presence of both detection delay and false alarm, where the extended asynchronous switching is described by two independent and exponentially distributed stochastic processes, and further simplified as Markovian. Based on the Razumikhin-type theorem incorporated with average dwell-time approach, the sufficient criteria for global asymptotic stability in probability and stochastic input-to-state stability are given, whose importance and effectiveness are finally verified by numerical examples. PMID:26068932

  10. Stochastic calculus in physics

    SciTech Connect

    Fox, R.F.

    1987-03-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
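
    In the spirit of the simulation-algorithm discussion above, the sketch below integrates an Ornstein-Uhlenbeck process with Euler-Maruyama to generate exponentially correlated (weakly colored) noise; as the correlation time shrinks, it approaches white noise. Parameters are illustrative, not from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt, n_steps = 1e-3, 100000
    D, tau = 1.0, 0.05          # noise intensity and correlation time

    eps, trace = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        # d(eps) = -(eps / tau) dt + (sqrt(2 D) / tau) dW  (Euler-Maruyama step)
        eps += (-eps / tau) * dt + (np.sqrt(2.0 * D) / tau) * np.sqrt(dt) * rng.normal()
        trace[i] = eps

    # Stationary variance of this process is D / tau; as tau -> 0 the noise whitens.
    print(f"sample variance {trace[20000:].var():.2f} vs theory {D / tau:.2f}")
    ```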

  11. Control design for discrete-time state-multiplicative noise stochastic systems

    NASA Astrophysics Data System (ADS)

    Krokavec, Dušan; Filasová, Anna

    2015-11-01

    Design conditions for existence of the H∞ linear state feedback control for discrete-time stochastic systems with state-multiplicative noise and polytopic uncertainties are presented in the paper. Using an enhanced form of the bounded real lemma for discrete-time stochastic systems with state-multiplicative noise, the LMI-based procedure is provided for computation of the gains of linear, as well as nonlinear, state control laws. The approach is illustrated on an example demonstrating the validity of the proposed method.

  12. Interval-parameter semi-infinite fuzzy-stochastic mixed-integer programming approach for environmental management under multiple uncertainties

    SciTech Connect

    Guo, P.; Huang, G.H.

    2010-03-15

    In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with a fuzzy-interval admissible probability of constraint violation within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only would the landfill be expanded, but the CF and MRF would be expanded as well. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their

  13. Stochastic Phase Resetting: a Theory for Deep Brain Stimulation

    NASA Astrophysics Data System (ADS)

    Tass, Peter A.

    2000-03-01

    A stochastic approach to phase resetting in clusters of interacting oscillators is presented. This theory explains how a stimulus, especially a single pulse, induces synchronization and desynchronization processes. The theory is used to design a new technique for deep brain stimulation in patients suffering from Parkinson's disease or essential tremor who no longer respond to drug therapy. This stimulation mode is feedback-controlled single-pulse stimulation. The feedback signal is registered with the deep brain electrode, and the desynchronizing pulses are administered via the same electrode. The stochastic phase resetting theory is used as a starting point for the model-based design of intelligent and gentle deep brain stimulation techniques.

  14. Stochastic Modeling of Behavioral Response to Anthropogenic Sounds.

    PubMed

    Frankel, Adam S; Ellison, William T; Vigness-Raposa, Kathleen J; Giard, Jennifer L; Southall, Brandon L

    2016-01-01

    The effect of anthropogenic sounds on marine wildlife is typically assessed by convolving the spatial, temporal, and spectral properties of a modeled sound field with a representation of animal distribution within the field. Both components benefit from stochastic modeling techniques based on field observations. Recent studies have also highlighted the effect of context on the probability and severity of the animal behavioral response to sound. This paper extends the stochastic approach to three modeling scenarios, incorporating key contextual variables in aversion to a given level of sound and evaluating the effectiveness of passive acoustic monitoring. PMID:26610975

  15. Stochastic model for protein flexibility analysis

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Wei, Guo-Wei

    2013-12-01

    Protein flexibility is an intrinsic property and plays a fundamental role in protein functions. Computational analysis of protein flexibility is crucial to protein function prediction, macromolecular flexible docking, and rational drug design. Most current approaches for protein flexibility analysis are based on Hamiltonian mechanics. We introduce a stochastic model to study protein flexibility. The essential idea is to analyze the free induction decay of a perturbed protein structural probability, which satisfies the master equation. The transition probability matrix is constructed by using probability density estimators including monotonically decreasing radial basis functions. We show that the proposed stochastic model gives rise to some of the best predictions of Debye-Waller factors or B factors for three sets of protein data introduced in the literature.

  16. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning.

    PubMed

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-01-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses. PMID:27405788
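
    The behavioural core of the stochastic binary synapse described above can be sketched without the device physics: the probability that the synapse toggles decays with the pre/post spike-time difference, and the direction of the toggle follows the sign of the timing. The constants below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def switch_prob(dt_spike, p_max=0.2, tau=20e-3):
        """Assumed toggle probability of the binary synapse, decaying with the
        magnitude of the pre-minus-post spike-time difference (seconds)."""
        return p_max * np.exp(-abs(dt_spike) / tau)

    w = 0                                    # binary conductance state: 0 or 1
    potentiations = 0
    for _ in range(200):
        dt_spike = rng.normal(0.0, 30e-3)    # pre-minus-post spike timing
        if rng.random() < switch_prob(dt_spike):
            # Causal ordering (pre fires before post, dt_spike < 0) potentiates;
            # anti-causal ordering depresses.
            w = 1 if dt_spike < 0 else 0
            potentiations += w
    print(f"final weight {w}, potentiating switches {potentiations}")
    ```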

  17. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    PubMed Central

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-01-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses. PMID:27405788

  18. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    NASA Astrophysics Data System (ADS)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.

  19. Determining design gust loads for nonlinear aircraft: similarity between methods based on matched filter theory and on stochastic simulation

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1992-01-01

    This is a work-in-progress paper. It explores the similarity between the results from two different analysis methods - one deterministic, the other stochastic - for computing maximized and time-correlated gust loads for nonlinear aircraft. To date, numerical studies have been performed using two different nonlinear aircraft configurations. These studies demonstrate that results from the deterministic analysis method are realizable in the stochastic analysis method.

  20. Stochastic Control of Pharmacokinetic Systems

    PubMed Central

    Schumitzky, Alan; Milman, Mark; Katz, Darryl; D'Argenio, David Z.; Jelliffe, Roger W.

    1983-01-01

    The application of stochastic control theory to the clinical problem of designing a dosage regimen for a pharmacokinetic system is considered. This involves defining a patient-dependent pharmacokinetic model and a clinically appropriate therapeutic goal. Most investigators have attacked the dosage regimen problem by first estimating the values of the patient's unknown model parameters and then controlling the system as if those parameter estimates were in fact the true values. We have developed an alternative approach utilizing stochastic control theory in which the estimation and control phases of the problem are not separated. Mathematical results are given which show that this approach yields significant potential improvement in attaining, for example, therapeutic serum level goals over methods in which estimation and control are separated. Finally, a computer simulation is given for the optimal stochastic control of an aminoglycoside regimen which shows that this approach is feasible for practical applications.

  1. Stochastic Stability and Performance Robustness of Linear Multivariable Systems

    NASA Technical Reports Server (NTRS)

    Ryan, Laurie E.; Stengel, Robert F.

    1990-01-01

    Stochastic robustness, a simple technique used to estimate the robustness of linear, time-invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters is discussed. The application of stochastic performance robustness and the relationship between performance objectives and design parameters are demonstrated by means of an example. The results show stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
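
    The stochastic robustness technique described above amounts to Monte Carlo sampling of uncertain plant parameters and counting unstable closed loops. The second-order plant, the scatter levels, and the proportional-only gain below are illustrative assumptions, not the paper's robot-arm model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    K = np.array([[12.0, 0.0]])              # proportional-only state feedback
    n_mc, unstable = 20000, 0
    for _ in range(n_mc):
        # Scatter the plant: natural frequency and damping vary about nominal;
        # with no rate feedback, closed-loop damping rests on the uncertain zeta.
        wn = rng.normal(3.0, 0.6)
        zeta = rng.normal(0.10, 0.05)
        A = np.array([[0.0, 1.0], [-wn ** 2, -2.0 * zeta * wn]])
        B = np.array([[0.0], [wn ** 2]])
        eig = np.linalg.eigvals(A - B @ K)
        unstable += np.any(eig.real > 0.0)   # any right-half-plane pole

    print(f"estimated probability of instability: {unstable / n_mc:.4f}")
    ```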

  2. Stochastic analysis of motor-control stability, polymer based force sensing, and optical stimulation as a preventive measure for falls

    NASA Astrophysics Data System (ADS)

    Landrock, Clinton K.

    Falls are the leading cause of all external injuries. Falls are also the leading cause of traumatic brain injury, a frequent cause of bone fractures, and account for billions of dollars in direct medical costs. This work focused on developing three areas of enabling component technology to be used in postural control monitoring tools targeting the mitigation of falls. The first was an analysis tool based on stochastic fractal analysis to reliably measure levels of motor control. The second focus was on thin film wearable pressure sensors capable of relaying data for the first tool. The third was new thin film advanced optics for improving phototherapy devices targeting postural control disorders. Two populations, athletes and the elderly, were studied against control groups. The results of these studies clearly show that monitoring postural stability in at-risk groups can be achieved reliably, and an integrated wearable system can be envisioned for both monitoring and treatment purposes. Keywords: electro-active polymer, ionic polymer-metal composite, postural control, motor control, fall prevention, sports medicine, fractal analysis, physiological signals, wearable sensors, phototherapy, photobiomodulation, nano-optics.
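
    As an illustrative stand-in for the thesis's stochastic fractal analysis tool, the sketch below computes a standard detrended fluctuation analysis (DFA) exponent, one common fractal measure of physiological time series; the choice of DFA here is an assumption, not a detail from the abstract.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """First-order DFA: slope of log F(s) vs log s estimates alpha."""
        y = np.cumsum(x - np.mean(x))            # integrated profile
        flucts = []
        for s in scales:
            n_win = len(y) // s
            f2 = []
            for w in range(n_win):
                seg = y[w * s:(w + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
                f2.append(np.mean((seg - trend) ** 2))
            flucts.append(np.sqrt(np.mean(f2)))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    rng = np.random.default_rng(4)
    white = rng.normal(size=4096)
    print(f"white noise alpha ~ {dfa_exponent(white):.2f} (theory 0.5)")
    ```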

  3. Nested Sparse Approximation: Structured Estimation of V2V Channels Using Geometry-Based Stochastic Channel Model

    NASA Astrophysics Data System (ADS)

    Beygi, Sajjad; Mitra, Urbashi; Strom, Erik G.

    2015-09-01

    Future intelligent transportation systems promise increased safety and energy efficiency. Realization of such systems will require vehicle-to-vehicle (V2V) communication technology. High fidelity V2V communication is, in turn, dependent on accurate V2V channel estimation. V2V channels have characteristics differing from classical cellular communication channels. Herein, geometry-based stochastic modeling is employed to develop a characterization of V2V channels. The resultant model exhibits significant structure; specifically, the V2V channel is characterized by three distinct regions within the delay-Doppler plane. Each region has a unique combination of specular reflections and diffuse components resulting in a particular element-wise and group-wise sparsity. This joint sparsity structure is exploited to develop a novel channel estimation algorithm. A general machinery is provided to solve the jointly element/group sparse channel (signal) estimation problem using proximity operators of a broad class of regularizers. The alternating direction method of multipliers using the proximity operator is adapted to optimize the mixed objective function. Key properties of the proposed objective functions are proven which ensure that the optimal solution is found by the new algorithm. The effects of pulse shape leakage are explicitly characterized and compensated, resulting in measurably improved performance. Numerical simulation and real V2V channel measurement data are used to evaluate the performance of the proposed method. Results show that the new method can achieve significant gains over previously proposed methods.

  4. Simulating the spread of an invasive termite in an urban environment using a stochastic individual-based model.

    PubMed

    Tonini, Francesco; Hochmair, Hartwig H; Scheffrahn, Rudolf H; Deangelis, Donald L

    2013-06-01

    Invasive termites are destructive insect pests that cause billions of dollars in property damage every year. Termite species can be transported overseas by maritime vessels. However, only if the climatic conditions are suitable will the introduced species flourish. Models predicting the areas of infestation following initial introduction of an invasive species could help regulatory agencies develop successful early detection, quarantine, or eradication efforts. At present, no model has been developed to estimate the geographic spread of a termite infestation from a set of surveyed locations. In the current study, we used actual field data as a starting point, and relevant information on termite species to develop a spatially-explicit stochastic individual-based simulation to predict areas potentially infested by an invasive termite, Nasutitermes corniger (Motschulsky), in Dania Beach, FL. The Monte Carlo technique is used to assess outcome uncertainty. A set of model realizations describing potential areas of infestation were considered in a sensitivity analysis, which showed that the model results had greatest sensitivity to number of alates released from nest, alate survival, maximum pheromone attraction distance between heterosexual pairs, and mean flight distance. Results showed that the areas predicted as infested in all simulation runs of a baseline model cover the spatial extent of all locations recently discovered. The model presented in this study could be applied to any invasive termite species after proper calibration of parameters. The simulation herein can be used by regulatory authorities to define most probable quarantine and survey zones. PMID:23726049

  5. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation decouples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  6. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  7. Stochastic superparameterization in quasigeostrophic turbulence

    SciTech Connect

    Grooms, Ian; Majda, Andrew J.

    2014-08-15

    In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  8. Network-based stochastic competitive learning approach to disambiguation in collaborative networks.

    PubMed

    Christiano Silva, Thiago; Raphael Amancio, Diego

    2013-03-01

    Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns are strongly dependent on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role in a myriad of scientific contexts. In particular, we use an unsupervised technique which relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguating model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule exclusively depends on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it will force them to revisit nodes that are already dominated by them, rather than exploring rival territories. Computer simulations conducted on the networks extracted from the arXiv repository of preprint papers and also from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods. PMID:23556976
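
    A single particle's movement rule, as described above, is a convex mixture of a uniform random walk and a preferential walk weighted by the particle's own domination levels. The sketch below implements that rule on an assumed random graph; the reinforcement constant and mixing weight are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 30
    # Random undirected graph (Erdos-Renyi style) as a stand-in network.
    A = (rng.random((n, n)) < 0.15).astype(float)
    A = np.triu(A, 1)
    A = A + A.T

    domination = np.full(n, 1.0 / n)     # particle's domination level per node
    node, lam = 0, 0.6                   # current node, preferential weight
    for _ in range(1000):
        nbrs = np.flatnonzero(A[node])
        if nbrs.size == 0:               # guard against an isolated start node
            break
        uniform = np.full(nbrs.size, 1.0 / nbrs.size)
        pref = domination[nbrs] / domination[nbrs].sum()
        p = (1 - lam) * uniform + lam * pref      # convex mixture of policies
        node = rng.choice(nbrs, p=p)
        domination[node] += 0.1                   # reinforce visited territory

    print("most dominated nodes:", np.argsort(domination)[-5:])
    ```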

  9. Effect of irrigation on the Budyko curve: a process-based stochastic approach

    NASA Astrophysics Data System (ADS)

    Vico, Giulia; Destouni, Georgia

    2015-04-01

    Currently, 40% of food production is provided by irrigated agriculture. Irrigation ensures higher and less variable yields, but such water input alters the balance of transpiration and other losses from the soil. Thus, accounting for the impact of irrigation is crucial for the understanding of the local water balance. A probabilistic model of the soil water balance is employed to explore the effects of different irrigation strategies within the Budyko framework. Shifts in the Budyko curve are explained in a mechanistic way. At the field level and assuming unlimited irrigation water, irrigation shifts the Budyko curve upward towards the upper limit imposed by energy availability, even in dry climates. At the watershed scale and assuming that irrigation water is obtained from sources within the same watershed, the application of irrigation over a fraction of the watershed area allows a more efficient use of water resources made available through precipitation. In this case, however, mean transpiration remains upper-bounded by rainfall over the whole watershed.
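
    The Budyko framework used above relates the evaporative index E/P to the aridity index PET/P. The sketch below evaluates the classic Budyko curve and a crude upward shift toward the energy limit under field-scale irrigation; the shift rule is a simplification of the paper's mechanistic treatment, and the 0.6 supply factor is an assumption.

    ```python
    import numpy as np

    def budyko(ai):
        """Classic Budyko curve: evaporative index E/P versus aridity index PET/P."""
        return np.sqrt(ai * np.tanh(1.0 / ai) * (1.0 - np.exp(-ai)))

    ai = np.linspace(0.2, 4.0, 9)
    rainfed = budyko(ai)
    # With irrigation, evaporation moves toward the energy limit E/P -> PET/P
    # even in dry climates; E/P may exceed 1 because water is imported to the field.
    irrigated = np.minimum(ai, rainfed + 0.6 * (ai - rainfed))

    for a, e0, e1 in zip(ai, rainfed, irrigated):
        print(f"PET/P = {a:.2f}: E/P rainfed {e0:.2f} -> irrigated {e1:.2f}")
    ```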

  10. Network-based stochastic competitive learning approach to disambiguation in collaborative networks

    NASA Astrophysics Data System (ADS)

    Christiano Silva, Thiago; Raphael Amancio, Diego

    2013-03-01

    Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns are strongly dependent on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role in a myriad of scientific contexts. In particular, we use an unsupervised technique which relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguating model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule exclusively depends on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it will force them to revisit nodes that are already dominated by them, rather than exploring rival territories. Computer simulations conducted on the networks extracted from the arXiv repository of preprint papers and also from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods.

  11. Stochastic models of neuronal dynamics

    PubMed Central

    Harrison, L.M; David, O; Friston, K.J

    2005-01-01

    already been established for deterministic systems. The potential importance of modelling density dynamics (as opposed to more conventional neural mass models) is that they include interactions among the moments of neuronal states (e.g. the mean depolarization may depend on the variance of synaptic currents through nonlinear mechanisms). Here, we formulate a population model, based on biologically informed model-neurons with spike-rate adaptation and synaptic dynamics. Neuronal sub-populations are coupled to form an observation model, with the aim of estimating and making inferences about coupling among sub-populations using real data. We approximate the time-dependent solution of the system using a bi-orthogonal set and first-order perturbation expansion. For didactic purposes, the model is developed first in the context of deterministic input, and then extended to include stochastic effects. The approach is demonstrated using synthetic data, where model parameters are identified using a Bayesian estimation scheme we have described previously. PMID:16087449

  12. Ligand-based virtual screening and in silico design of new antimalarial compounds using nonstochastic and stochastic total and atom-type quadratic maps.

    PubMed

    Marrero-Ponce, Yovani; Iyarreta-Veitía, Maité; Montero-Torres, Alina; Romero-Zaldivar, Carlos; Brandt, Carlos A; Avila, Priscilla E; Kirchgatter, Karin; Machado, Yanetsy

    2005-01-01

    and stochastic atom-based quadratic fingerprints were 93.93% and 92.77%, respectively. The quadratic-maps-based TOMOCOMD-CARDD approach implemented in this work was successfully compared with four of the most useful models for antimalarial selection reported to date. The developed models were then used in a simulation of a virtual search for Ras FTase (FTase = farnesyltransferase) inhibitors with antimalarial activity; 70% and 100% of the 10 inhibitors used in this virtual search were correctly classified, showing the ability of the models to identify new lead antimalarials. Finally, these two QSAR models were used in the identification of previously unknown antimalarials: three synthetic intermediates of quinolinic compounds were evaluated as active/inactive using the developed models. The synthesis and biological evaluation of these chemicals against two malaria strains, using chloroquine as a reference, were performed, and the theoretical predictions achieved an accuracy of 100%. Compound 3 showed antimalarial activity, this being the first report of an arylaminomethylenemalonate having such behavior. This result opens the door to a virtual study considering greater variability of the structural core already evaluated, as well as other chemicals not included in this study. We conclude that the approach described here is a promising QSAR tool for the molecular discovery of novel classes of antimalarial drugs, which may meet the dual challenges posed by drug-resistant parasites and the rapid progression of malaria illnesses. PMID:16045304
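
    The "quadratic maps" underlying these fingerprints have the general form x^T A^k x, with A an adjacency matrix of the molecular pseudograph and x a vector of atomic properties. A toy sketch of that general form only; the four-atom graph and property values are invented, and real TOMOCOMD-CARDD descriptors involve further refinements.

```python
import numpy as np

# Toy molecular graph (atoms as nodes, bonds as edges) and an atomic-property
# vector x (here Pauling electronegativities); structure is illustrative.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], float)     # adjacency matrix of the pseudograph
x = np.array([2.55, 2.55, 3.44, 3.04])  # C, C, O, N

def quadratic_indices(A, x, k_max=5):
    """Total quadratic indices q_k = x^T A^k x for k = 0..k_max."""
    out, Ak = [], np.eye(len(x))
    for _ in range(k_max + 1):
        out.append(x @ Ak @ x)
        Ak = Ak @ A
    return np.array(out)

print(np.round(quadratic_indices(A, x), 2))
```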

  13. Heutagogy: An alternative practice based learning approach.

    PubMed

    Bhoyrub, John; Hurley, John; Neilson, Gavin R; Ramsay, Mike; Smith, Margaret

    2010-11-01

    Education has explored and utilised multiple approaches in attempts to enhance the learning and teaching opportunities available to adult learners. Traditional pedagogy has been both directly and indirectly affected by andragogy and transformational learning, consequently widening our understanding of, and approaches toward, teaching and learning. Within the context of nurse education, a major challenge has been to effectively apply these educational approaches to the complex, unpredictable and challenging environment of practice based learning. While not offered as a panacea to such challenges, heutagogy is offered in this discussion paper as an emerging and potentially highly congruent educational framework to place around practice based learning. Being an emergent theory, its conceptual underpinnings and possible applications to nurse education need to be explored and theoretically applied. By placing the adult learner at the foreground of grasping learning opportunities as they unpredictably emerge from a sometimes chaotic environment, heutagogy can be argued as offering the potential to minimise many of the well-publicised difficulties of coordinating practice with faculty teaching and learning. PMID:20554249

  14. Fluctuation complexity of agent-based financial time series model by stochastic Potts system

    NASA Astrophysics Data System (ADS)

    Hong, Weijia; Wang, Jun

    2015-03-01

    The financial market is a complex, evolving dynamic system with high volatility and noise, and the modeling and analysis of financial time series are regarded as rather challenging tasks in financial research. In this work, by applying the Potts dynamic system, a random agent-based financial time series model is developed in an attempt to uncover empirical laws in finance, where the Potts model is introduced to imitate the trading interactions among the investing agents. Based on computer simulation, in conjunction with statistical and nonlinear analysis, we present numerical research to investigate the fluctuation behaviors of the proposed time series model. Furthermore, in order to reach a robust conclusion, we consider the daily returns of the Shanghai Composite Index and the Shenzhen Component Index, and a comparison of return behaviors between the simulated data and the actual data is exhibited.
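
    A minimal sketch of the ingredient named above: Metropolis dynamics on a q-state Potts lattice, with the buy/sell imbalance mapped to a daily return. The state coding, lattice size and temperature are illustrative, not the calibration of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, q, beta, n_days = 24, 3, 0.8, 300   # lattice size, Potts states, inverse temp.
spin = rng.integers(0, q, size=(L, L)) # 0 = sell, 1 = hold, 2 = buy (toy coding)

def metropolis_sweep(spin):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        new = int(rng.integers(0, q))
        nbrs = [spin[(i + 1) % L, j], spin[(i - 1) % L, j],
                spin[i, (j + 1) % L], spin[i, (j - 1) % L]]
        # Potts energy rewards agreeing neighbours; dE for the proposed flip.
        dE = sum(int(spin[i, j] == s) - int(new == s) for s in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spin[i, j] = new

returns = np.empty(n_days)
for day in range(n_days):
    metropolis_sweep(spin)
    # Map the buyer/seller imbalance to a daily return of the synthetic asset.
    returns[day] = (np.sum(spin == 2) - np.sum(spin == 0)) / spin.size

excess = returns - returns.mean()
print("std:", returns.std(), "kurtosis:", (excess**4).mean() / returns.var()**2)
```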

  15. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    PubMed Central

    Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun

    2014-01-01

    One of the major challenges for content-based image retrieval (CBIR) is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with the support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and to minimize the user's interaction with the system by reducing the number of RF rounds. PSO-SVM-RF was tested on the Corel photo gallery containing 10,908 images. The experimental results showed that the proposed PSO-SVM-RF achieved 100% accuracy within 8 feedback iterations for the top 10 retrievals, and 80% accuracy within 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique reaches a high accuracy rate within a small number of iterations. PMID:25121136
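
    A sketch of the PSO layer, assuming a plain particle swarm searching the SVM hyperparameters (log C, log gamma). For a self-contained example, the relevance-feedback fitness is replaced by cross-validation accuracy on synthetic data; in the actual system the fitness would come from user feedback rounds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(p):
    # A particle encodes log10(C) and log10(gamma) of an RBF-kernel SVM.
    C, gamma = 10.0 ** p
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_part, n_iter, w, c1, c2 = 8, 15, 0.7, 1.5, 1.5
pos = rng.uniform(-3, 3, size=(n_part, 2))        # search in log space
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "| cv accuracy:", pbest_f.max())
```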

  16. A Stochastic Individual-Based Model of the Progression of Atrial Fibrillation in Individuals and Populations

    PubMed Central

    Galla, Tobias; Clayton, Richard H.

    2016-01-01

    Models that represent the mechanisms that initiate and sustain atrial fibrillation (AF) in the heart are computationally expensive to simulate, and therefore only capture short time scales of a few heart beats. It is therefore difficult to embed biophysical mechanisms into both policy-level disease models, which consider populations of patients over multiple decades, and guidelines that recommend treatment strategies for patients. The aim of this study is to link these modelling paradigms using a stylised population-level model that both represents AF progression over a long time-scale and retains a description of biophysical mechanisms. We develop a non-Markovian binary switching model incorporating three different aspects of AF progression: genetic disposition, disease/age related remodelling, and AF-related remodelling. This approach allows us to simulate individual AF episodes as well as the natural progression of AF in patients over a period of decades. Model parameters are derived, where possible, from the literature, and the model development has highlighted a need for quantitative data that describe the progression of AF in populations of patients. The model produces time series data of AF episodes over the lifetimes of simulated patients. These are analysed to quantitatively describe progression of AF in terms of several underlying parameters. Overall, the model has potential to link mechanisms of AF to progression, and to be used as a tool to study clinical markers of AF or as training data for AF classification algorithms. PMID:27070920
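
    A hedged sketch of a binary switching process of the kind described: a two-state (sinus/AF) chain whose onset and termination probabilities depend on age and on a running AF burden, which makes it non-Markovian. All rates are invented for illustration; the paper derives its parameters from the literature.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_patient(days=365 * 30, beta_age=0.15, beta_burden=4.0):
    """State 0 = sinus rhythm, 1 = AF episode; all rates invented.

    Onset probability grows with age-related remodelling and with a running
    AF burden ("AF begets AF"), so the switching process is non-Markovian.
    """
    state, burden = 0, 0.0
    trace = np.empty(days, dtype=np.int8)
    for day in range(days):
        years = day / 365.0
        if state == 0:
            p_onset = 5e-4 * np.exp(beta_age * years + beta_burden * burden)
            state = int(rng.random() < min(p_onset, 1.0))
        else:
            p_end = 0.3 * np.exp(-beta_burden * burden)  # episodes lengthen
            state = 0 if rng.random() < p_end else 1
        burden += (state - burden) / (365.0 * 5)  # ~5-year exponential average
        trace[day] = state
    return trace

trace = simulate_patient()
print("lifetime AF burden:", round(float(trace.mean()), 3),
      "| episodes:", int((np.diff(trace.astype(int)) == 1).sum()))
```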

  17. Stochastic switching of TiO2-based memristive devices with identical initial memory states

    PubMed Central

    2014-01-01

    In this work, we show that identical TiO2-based memristive devices that possess the same initial resistive states are only phenomenologically similar, as their internal structures may vary significantly, which can render quite dissimilar switching dynamics. We experimentally demonstrate that the resistive switching of practical devices with similar initial states can occur at different programming stimulus cycles. We argue that similar memory states can be transcribed via numerous distinct active-core states arising from dissimilar reduced TiO2-x filamentary distributions. Our hypothesis is finally verified via simulated results of the memory-state evolution that take into account dissimilar initial filamentary distributions. PMID:24994953

  18. Live imaging-based model selection reveals periodic regulation of the stochastic G1/S phase transition in vertebrate axial development.

    PubMed

    Sugiyama, Mayu; Saitou, Takashi; Kurokawa, Hiroshi; Sakaue-Sawano, Asako; Imamura, Takeshi; Miyawaki, Atsushi; Iimura, Tadahiro

    2014-12-01

    In multicellular organism development, a stochastic cellular response is observed even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in this heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescent cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather constituted a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed model selection by comparing the results obtained from the models with the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of the
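
    The reported regulatory mode can be caricatured as follows: each G1 cell switches stochastically with a probability that peaks inside a window travelling posteriorly and relaunched periodically. Window speed, width and switch probability below are illustrative, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_steps = 200, 400          # cells indexed anterior -> posterior
PERIOD, WIDTH, P_MAX = 50, 8.0, 0.25 # wave period (steps), window width, peak prob.
phase = np.zeros(n_cells, dtype=np.int8)   # 0 = G1 (red), 1 = past G1/S (green)
x = np.arange(n_cells)

for t in range(n_steps):
    # Centre of the G1/S transition window travels posteriorly and is
    # relaunched every PERIOD steps (a periodic travelling wave).
    centre = (t % PERIOD) / PERIOD * n_cells
    p = P_MAX * np.exp(-0.5 * ((x - centre) / WIDTH) ** 2)
    # Each G1 cell switches independently -> a mixed red/green window.
    switch = (phase == 0) & (rng.random(n_cells) < p)
    phase[switch] = 1

print("fraction of cells past the G1/S transition:", float(phase.mean()))
```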

  19. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data.

    PubMed

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-01-01

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063
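
    A minimal sketch of a characteristic-point detector of the kind described: a step is declared at a local maximum of the acceleration magnitude above a threshold, with a refractory gap to suppress double counts. Thresholds and the synthetic signal are illustrative placeholders, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def detect_steps(ax, ay, az, fs=50.0, thresh=1.2, refractory=0.3):
    """Count a step at each local maximum of the acceleration magnitude that
    exceeds `thresh` (in g) and lies at least `refractory` seconds after the
    previous detection. Parameter values are illustrative."""
    mag = np.sqrt(ax**2 + ay**2 + az**2)
    min_gap = int(refractory * fs)
    steps, last = [], -min_gap
    for i in range(1, len(mag) - 1):
        if (mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]
                and mag[i] > thresh and i - last >= min_gap):
            steps.append(i)
            last = i
    return steps

# Synthetic walk at 50 Hz: gravity plus ~2 Hz heel-strike spikes and noise.
t = np.arange(0, 10, 1 / 50.0)
az = (1.0 + 0.4 * np.clip(np.sin(2 * np.pi * 2.0 * t), 0, None) ** 4
      + 0.02 * rng.standard_normal(t.size))
ax = 0.05 * rng.standard_normal(t.size)
ay = 0.05 * rng.standard_normal(t.size)
print("steps detected:", len(detect_steps(ax, ay, az)))
```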

  20. Integrating multiple scales of hydraulic conductivity measurements in training image-based stochastic models

    NASA Astrophysics Data System (ADS)

    Mahmud, K.; Mariethoz, G.; Baker, A.; Sharma, A.

    2015-01-01

    Hydraulic conductivity is one of the most critical and at the same time one of the most uncertain parameters in many groundwater models. One problem commonly faced is that the data are usually not collected at the same scale as the discretized elements used in a numerical model. Moreover, it is common that different types of hydraulic conductivity measurements, corresponding to different spatial scales, coexist in a studied domain and have to be integrated simultaneously. Here we address this issue in the context of Image Quilting, one of the recently developed multiple-point geostatistics methods. Based on a training image that represents fine-scale spatial variability, we use the simplified renormalization upscaling method to obtain a series of upscaled training images that correspond to the different scales at which measurements are available. We then apply Image Quilting with such a multiscale training image, so as to incorporate conditioning data at several spatial scales of heterogeneity simultaneously. The realizations obtained satisfy the conditioning data exactly across all scales, but this can come at the expense of a small approximation in the representation of the physical scale relationships. In order to mitigate this approximation, we iteratively apply a kriging-based correction to the finest scale that ensures local conditioning at the coarsest scales. The method is tested on a series of synthetic examples, where it gives good results and shows potential for the integration of different measurement methods in real-case hydrogeological models.
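
    A sketch of the multiscale training-image construction, using geometric-mean block averaging as a stand-in for the simplified renormalization rule named above (the actual scheme differs in detail):

```python
import numpy as np

rng = np.random.default_rng(6)

def upscale(K, factor=2):
    """One level of block upscaling: geometric mean over factor x factor
    blocks, a common simple choice for 2-D isotropic hydraulic conductivity
    (a stand-in for the paper's simplified renormalization)."""
    ny, nx = K.shape
    K = K[: ny - ny % factor, : nx - nx % factor]
    ny, nx = K.shape
    blocks = K.reshape(ny // factor, factor, nx // factor, factor)
    return np.exp(np.log(blocks).mean(axis=(1, 3)))

# Synthetic fine-scale training image: a lognormal conductivity field.
K_fine = np.exp(rng.standard_normal((64, 64)))

# Pyramid of training images, one per measurement support scale.
pyramid = [K_fine]
while min(pyramid[-1].shape) >= 8:
    pyramid.append(upscale(pyramid[-1]))

for level, K in enumerate(pyramid):
    print(f"scale {level}: {K.shape}, mean log-K = {np.log(K).mean():+.3f}")
```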

  1. Bayesian Gibbs Markov chain: MRF-based Stochastic Joint Inversion of Hydrological and Geophysical Datasets for Improved Characterization of Aquifer Heterogeneities.

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2015-12-01

    Modeling aquifer heterogeneities (AH) is a complex, multidimensional problem that mostly requires stochastic imaging strategies for tractability. While the traditional Bayesian Markov chain Monte Carlo (McMC) provides a powerful framework to model AH, the generic McMC is computationally prohibitive and thus unappealing for large-scale problems. An innovative variant of the McMC scheme that imposes a priori spatial statistical constraints on model parameter updates, for improved characterization in a computationally efficient manner, is proposed. The proposed algorithm (PA) is based on Markov random field (MRF) modeling, an image processing technique that infers the global behavior of a random field from its local properties, making the MRF approach well suited for imaging AH. MRF-based modeling leverages the equivalence of the Gibbs (or Boltzmann) distribution (GD) and MRFs to identify the local properties of an MRF in terms of the easily quantifiable Gibbs energy. The PA employs a two-step approach to model the lithological structure of the aquifer and the hydraulic properties within the identified lithologies simultaneously. It performs local Gibbs energy minimizations along a random path, which requires the parameters of the GD (spatial statistics) to be specified. A PA that implicitly infers site-specific GD parameters within a Bayesian framework is also presented. The PA is illustrated with a synthetic binary-facies aquifer with lognormal heterogeneity simulated within each facies. GD parameters of 2.6, 1.2, -0.4, and -0.2 were estimated for the horizontal, vertical, NE-SW, and NW-SE directions, respectively. Most of the high hydraulic conductivity zones (facies 2) were fairly well resolved, with facies identification accuracy rates of 81%, 89%, and 90% for the inversions conditioned on concentration (R1), resistivity (R2), and joint data (R3), respectively. The incorporation of the conditioning datasets improved the root mean square error (RMSE
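
    A sketch of the core update: local Gibbs-energy minimisation along a random path over a binary facies grid, with the four directional interaction weights set to the values quoted above. The neighbourhood definition, grid and update rule are otherwise illustrative, not the paper's full two-step algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
N, N_FACIES = 64, 2
# Directional interaction weights, ordered horizontal, vertical, NE-SW, NW-SE,
# set to the GD parameters quoted in the abstract (2.6, 1.2, -0.4, -0.2).
BETA = {(0, 1): 2.6, (1, 0): 1.2, (-1, 1): -0.4, (1, 1): -0.2}

facies = rng.integers(0, N_FACIES, size=(N, N))

def local_energy(f, i, j, k):
    """Gibbs energy of putting facies k at (i, j): minus the summed weights
    of agreeing neighbours, over four directions and both orientations."""
    e = 0.0
    for (di, dj), b in BETA.items():
        for s in (1, -1):
            ni, nj = i + s * di, j + s * dj
            if 0 <= ni < N and 0 <= nj < N and f[ni, nj] == k:
                e -= b
    return e

# One sweep of local Gibbs-energy minimisation along a random path.
for idx in rng.permutation(N * N):
    i, j = divmod(int(idx), N)
    facies[i, j] = int(np.argmin([local_energy(facies, i, j, k)
                                  for k in range(N_FACIES)]))

print("fraction labelled facies 2:", float(facies.mean()))
```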

  2. Stochastic analysis of transport in tubes with rough walls

    SciTech Connect

    Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)

    2006-09-01

    Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to insufficient data, measurement errors, or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework based on stochastic mappings, which transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively, and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes flow and to quantify the corresponding predictive uncertainty.
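
    The two UQ machines named above can be contrasted on a scalar toy: a lubrication-style tube flux scaling as R^4 with a random wall perturbation. Monte Carlo sampling and a Hermite (gPC) quadrature should agree closely; the quantity of interest is an invented stand-in, not the Stokes transport problem of the paper.

```python
import numpy as np

rng = np.random.default_rng(14)

# Quantity of interest: flux ~ R^4 with R = R0 * (1 + delta * xi), xi ~ N(0,1).
R0, delta = 1.0, 0.1
qoi = lambda xi: (R0 * (1.0 + delta * xi)) ** 4

# Monte Carlo estimate of mean and variance.
samples = qoi(rng.standard_normal(100_000))
print(f"MC : mean {samples.mean():.5f}, var {samples.var():.5f}")

# Generalized polynomial chaos (Hermite) via Gauss-Hermite quadrature;
# numpy's hermgauss uses weight exp(-x^2), so rescale to the standard normal.
nodes, weights = np.polynomial.hermite.hermgauss(10)
xi, w = np.sqrt(2.0) * nodes, weights / np.sqrt(np.pi)
mean = np.sum(w * qoi(xi))
var = np.sum(w * (qoi(xi) - mean) ** 2)
print(f"gPC: mean {mean:.5f}, var {var:.5f}")
```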

  3. Turing patterns and a stochastic individual-based model for predator-prey systems

    NASA Astrophysics Data System (ADS)

    Nagano, Seido

    2012-02-01

    Reaction-diffusion theory has played a very important role in the study of pattern formation in biology. In reaction-diffusion models, however, a group of individuals is described by a single state variable representing population density, and interactions between individuals can be included only phenomenologically. Recently, we have seamlessly combined individual-based models with elements of reaction-diffusion theory. To include animal migration in the scheme, we have adopted a relationship between diffusion and random numbers generated according to a two-dimensional bivariate normal distribution. We have thus observed the transition of population patterns from an extinction mode, a stable mode, or an oscillatory mode to a chaotic mode as the population growth rate increases. We show our phase diagram of predator-prey systems and discuss the microscopic mechanism for stable lattice formation in detail.
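
    A sketch of the migration rule described above: individual positions updated with draws from a two-dimensional bivariate normal whose covariance encodes the diffusion coefficient, wrapped in a toy predator-prey birth-death loop. All rates, radii and probabilities are invented; this is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(15)
DT, L = 0.1, 20.0                       # time step and periodic domain size
D_PREY, D_PRED, REACH = 0.5, 1.0, 0.5   # diffusion coefficients, capture radius
prey = rng.uniform(0, L, size=(200, 2))
pred = rng.uniform(0, L, size=(40, 2))

def diffuse(pos, D):
    # Migration rule: displacement ~ bivariate normal N(0, 2 D dt I).
    step = rng.multivariate_normal([0.0, 0.0], 2.0 * D * DT * np.eye(2),
                                   size=len(pos))
    return np.mod(pos + step, L)

for _ in range(200):
    prey, pred = diffuse(prey, D_PREY), diffuse(pred, D_PRED)
    if len(prey) and len(pred):
        d = np.linalg.norm(prey[None, :, :] - pred[:, None, :], axis=2)
        fed = d.min(axis=1) < REACH                 # predators that caught prey
        eaten = np.unique(d.argmin(axis=1)[fed])    # prey removed
        prey = np.delete(prey, eaten, axis=0)
        survive = rng.random(len(pred)) > np.where(fed, 0.0, 0.05)  # starvation
        offspring = pred[fed][rng.random(int(fed.sum())) < 0.3]     # reproduction
        pred = np.concatenate([pred[survive], offspring])
    births = rng.random(len(prey)) < 0.05 * max(1 - len(prey) / 400.0, 0.0)
    prey = np.concatenate([prey, prey[births]])

print("final counts: prey =", len(prey), "| predators =", len(pred))
```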

  4. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  5. An approach to rescheduling activities based on determination of priority and disruptivity

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.; Johnston, Mark D.

    1990-01-01

    A constraint-based scheduling system called SPIKE is being used to create long-term schedules for the Hubble Space Telescope. Feedback from the spacecraft or from other ground support systems may invalidate some scheduling decisions, and the activities concerned must then be reconsidered. A rescheduling-priority function is defined which, for a given activity, performs a heuristic analysis and produces a relative numerical value used to rank all such activities in the order in which they should be rescheduled. A disruptivity function is also defined, which places a relative numerical value on how much a pre-existing schedule would have to change in order to reschedule an activity. Using these functions, two algorithms (a stochastic neural network approach and an exhaustive search approach) are proposed to find the best place to reschedule an activity. Prototypes were implemented, and preliminary testing reveals that the exhaustive technique produces only marginally better results at much greater computational cost.
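
    A toy rendering of the two functions and the exhaustive algorithm, with invented weightings and slot semantics (the paper's heuristic analysis is richer):

```python
def rescheduling_priority(activity):
    """Heuristic rank: higher for important activities with few options.
    The weights are invented placeholders for the paper's analysis."""
    return 2.0 * activity["importance"] + 1.0 / max(len(activity["windows"]), 1)

def disruptivity(schedule, activity, slot):
    """Relative cost of placing `activity` at `slot`: here simply the number
    of already-scheduled activities that would be displaced (toy definition)."""
    return sum(1 for name, s in schedule.items()
               if s == slot and name != activity["name"])

def reschedule_exhaustive(schedule, activity):
    # Exhaustive search: try every allowed slot, keep the least disruptive.
    best = min(activity["windows"],
               key=lambda slot: disruptivity(schedule, activity, slot))
    schedule[activity["name"]] = best
    return best

# Toy run: three activities already placed, two invalidated by feedback.
schedule = {"A": 3, "B": 5, "C": 5}
invalidated = [
    {"name": "D", "importance": 0.9, "windows": [3, 5, 7]},
    {"name": "E", "importance": 0.4, "windows": [5]},
]
for act in sorted(invalidated, key=rescheduling_priority, reverse=True):
    print(act["name"], "-> slot", reschedule_exhaustive(schedule, act))
```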

  6. A Kalman-Filter-Based Approach to Combining Independent Earth-Orientation Series

    NASA Technical Reports Server (NTRS)

    Gross, Richard S.; Eubanks, T. M.; Steppe, J. A.; Freedman, A. P.; Dickey, J. O.; Runge, T. F.

    1998-01-01

    An approach, based upon the use of a Kalman filter, that is currently employed at the Jet Propulsion Laboratory (JPL) for combining independent measurements of the Earth's orientation is presented. Since changes in the Earth's orientation can be described as a randomly excited stochastic process, the uncertainty in our knowledge of the Earth's orientation grows rapidly in the absence of measurements. The Kalman-filter methodology allows for an objective accounting of this uncertainty growth, thereby facilitating the intercomparison of measurements taken at different epochs (not necessarily uniformly spaced in time) and with different precisions. As an example of this approach to combining Earth-orientation series, a description is given of a combination, SPACE95, that has recently been generated at JPL.
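
    A minimal sketch of the machinery: a scalar random-walk state (the "Earth-orientation" parameter) observed by two series with different precisions and epochs, fused by prediction/update steps so that uncertainty grows between measurements. Series labels, variances and epochs are invented; this is not the JPL combination itself.

```python
import numpy as np

rng = np.random.default_rng(8)

# Truth: a randomly excited (random-walk) Earth-orientation parameter.
n, q = 200, 1e-3                     # epochs and process-noise variance per step
truth = np.cumsum(np.sqrt(q) * rng.standard_normal(n))

# Two independent series with different precisions and observing epochs.
r = {"series_A": 4e-3, "series_B": 1e-3}
epochs = {"series_A": np.arange(0, n, 7), "series_B": np.arange(3, n, 1)}
obs = {k: truth[e] + np.sqrt(r[k]) * rng.standard_normal(e.size)
       for k, e in epochs.items()}

x, P = 0.0, 1.0
combined = np.empty(n)
for t in range(n):
    P += q                           # prediction: uncertainty grows without data
    for k, e in epochs.items():      # update with every series observing epoch t
        hit = np.where(e == t)[0]
        if hit.size:
            K = P / (P + r[k])       # scalar Kalman gain
            x += K * (obs[k][hit[0]] - x)
            P *= 1.0 - K
    combined[t] = x

print("rms error of the combined series:",
      float(np.sqrt(np.mean((combined - truth) ** 2))))
```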

  7. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    NASA Astrophysics Data System (ADS)

    Abdallh, A.; Crevecoeur, G.; Dupré, L.

    2012-03-01

    Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem in which measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem: the more accurate the forward model is, the more accurate the recovered data are. However, more accurate ‘fine’ models demand high computational time and memory storage. Alternatively, less accurate ‘coarse’ models can be used, at the cost of higher expected recovery errors. This paper uses the Bayesian approximation error approach to improve the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. Two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time.
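
    A sketch of the Bayesian approximation error idea on a scalar toy problem: sample the fine-coarse discrepancy over the prior offline, then shift the data misfit by its mean and weight it by the inverse of its covariance (plus noise). The forward models and all numbers are invented, not the paper's electromagnetic models.

```python
import numpy as np

rng = np.random.default_rng(9)

def fine(theta, t):    # "accurate" forward model (expensive in practice)
    return np.sin(theta * t) + 0.05 * np.sin(5.0 * theta * t)

def coarse(theta, t):  # cheap surrogate that drops the high-order term
    return np.sin(theta * t)

t = np.linspace(0.0, 1.0, 40)
theta_true = 2.0
data = fine(theta_true, t) + 0.01 * rng.standard_normal(t.size)

# Offline stage: sample the discrepancy eps = fine - coarse over the prior.
prior_draws = rng.uniform(1.0, 3.0, size=300)
eps = np.array([fine(th, t) - coarse(th, t) for th in prior_draws])
eps_mean = eps.mean(axis=0)
W = np.linalg.inv(np.cov(eps, rowvar=False) + 0.01**2 * np.eye(t.size))

def objective(th, corrected):
    # Corrected misfit: shift by the discrepancy mean, weight by its covariance.
    r = data - coarse(th, t) - (eps_mean if corrected else 0.0)
    return r @ (W @ r) if corrected else r @ r

grid = np.linspace(1.0, 3.0, 2001)
for corrected in (False, True):
    best = grid[np.argmin([objective(th, corrected) for th in grid])]
    print(f"corrected={corrected}: recovered theta = {best:.4f} (true {theta_true})")
```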

  8. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  9. Stochastic Optimally Tuned Range-Separated Hybrid Density Functional Theory.

    PubMed

    Neuhauser, Daniel; Rabani, Eran; Cytter, Yael; Baer, Roi

    2016-05-19

    We develop a stochastic formulation of the optimally tuned range-separated hybrid density functional theory that enables significant reduction of the computational effort and scaling of the nonlocal exchange operator at the price of introducing a controllable statistical error. Our method is based on stochastic representations of the Coulomb convolution integral and of the generalized Kohn-Sham density matrix. The computational cost of the approach is similar to that of usual Kohn-Sham density functional theory, yet it provides a much more accurate description of the quasiparticle energies for the frontier orbitals. This is illustrated for a series of silicon nanocrystals up to sizes exceeding 3000 electrons. Comparison with the stochastic GW many-body perturbation technique indicates excellent agreement for the fundamental band gap energies, good agreement for the band edge quasiparticle excitations, and very low statistical errors in the total energy for large systems. The present approach has a major advantage over one-shot GW by providing a self-consistent Hamiltonian that is central for additional postprocessing, for example, in the stochastic Bethe-Salpeter approach. PMID:26651840

  10. Lunar base CELSS: A bioregenerative approach

    NASA Technical Reports Server (NTRS)

    Easterwood, G. W.; Street, J. J.; Sartain, J. B.; Hubbell, D. H.; Robitaille, H. A.

    1992-01-01

    During the twenty-first century, human habitation of a self-sustaining lunar base could become a reality. To achieve this goal, the occupants will have to have food, water, and an adequate atmosphere within a carefully designed environment. Advanced technology will be employed to support terrestrial life-sustaining processes on the Moon. One approach to a life support system based on food production, waste management and utilization, and product synthesis is outlined. Inputs include an atmosphere, water, plants, biodegradable substrates, and manufactured materials, such as fiberglass containment vessels, from lunar resources. Outputs include purification of air and water, food, and hydrogen (H2) generated from methane (CH4). Important criteria are as follows: (1) minimize resupply from Earth; and (2) recycle as efficiently as possible.

  11. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

    The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines to be used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model contains the semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts defined in a subset of the information security domain.

  12. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, whose feasible region is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem which combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
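
    For the stable marriage side, the LP can be written with assignment equalities plus one stability inequality per pair; integrality of the polytope means an LP solver returns a stable matching directly at a vertex. A sketch with an invented 3 x 3 instance, following the standard formulation rather than necessarily the paper's exact roommates LP:

```python
import numpy as np
from scipy.optimize import linprog

n = 3
men_pref   = np.array([[0, 1, 2], [1, 0, 2], [0, 2, 1]])  # row i: man i's list
women_pref = np.array([[1, 0, 2], [0, 1, 2], [2, 1, 0]])  # row j: woman j's list

rank_m = np.empty((n, n), int)   # rank_m[i, j]: position of woman j for man i
rank_w = np.empty((n, n), int)   # rank_w[j, i]: position of man i for woman j
for k in range(n):
    rank_m[k, men_pref[k]] = np.arange(n)
    rank_w[k, women_pref[k]] = np.arange(n)

# Variables x[i*n + j] = fraction of man i matched to woman j; the objective
# (sum of mutual ranks) picks an "egalitarian" stable matching.
c = np.array([rank_m[i, j] + rank_w[j, i] for i in range(n) for j in range(n)],
             float)

A_eq = np.zeros((2 * n, n * n)); b_eq = np.ones(2 * n)
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # each man matched exactly once
for j in range(n):
    A_eq[n + j, j::n] = 1.0               # each woman matched exactly once

A_ub, b_ub = [], []                       # stability: one inequality per pair
for i in range(n):
    for j in range(n):
        row = np.zeros(n * n)
        row[i * n + j] = 1.0
        for jp in range(n):               # women man i strictly prefers to j
            if rank_m[i, jp] < rank_m[i, j]:
                row[i * n + jp] = 1.0
        for ip in range(n):               # men woman j strictly prefers to i
            if rank_w[j, ip] < rank_w[j, i]:
                row[ip * n + j] = 1.0
        A_ub.append(-row); b_ub.append(-1.0)   # i.e. row @ x >= 1

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
match = res.x.reshape(n, n).round().astype(int)
print("stable matching (man -> woman):",
      {i: int(match[i].argmax()) for i in range(n)})
```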

  13. Optimization of observation plan based on the stochastic characteristics of the geodetic network

    NASA Astrophysics Data System (ADS)

    Pachelski, Wojciech; Postek, Paweł

    2016-06-01

    Optimal design of a geodetic network is a basic part of many engineering projects, and an observation plan is the concluding stage of that process. Through adjustment, each observation within the network makes a different contribution to, and has a different impact on, the values and accuracy characteristics of the unknowns. The problem of optimal design can be solved by means of computer simulation. This paper presents a new simulation method based on the sequential estimation of individual observations in a step-by-step manner, by means of the so-called filtering equations. The algorithm aims at satisfying different accuracy criteria according to various interpretations of the covariance matrix. Apart from these, the optimization criterion also includes the amount of effort, defined as the minimum number of observations required. A numerical example of a 2-D network illustrates the effectiveness of the presented method. The results show a 66% decrease in the number of observations with respect to the non-optimized observation plan, while still satisfying the assumed accuracy.
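
    A sketch of the sequential (filtering-equation) idea: each candidate observation updates the covariance of the unknowns one at a time, and a greedy loop keeps adding the observation that most reduces the total variance until the accuracy criterion is met. Design rows, noise levels and the trace criterion are illustrative choices, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(10)
n_params, n_candidates = 4, 40

# Candidate observations: linearised design rows a_i and noise std sigma_i.
A = rng.standard_normal((n_candidates, n_params))
sigma = rng.uniform(0.5, 2.0, size=n_candidates)

P = np.eye(n_params) * 100.0   # diffuse prior covariance of the unknowns
chosen = []
TARGET_TRACE = 0.5             # accuracy criterion on the covariance matrix

while np.trace(P) > TARGET_TRACE and len(chosen) < n_candidates:
    best, best_trace, best_P = None, np.inf, None
    for i in range(n_candidates):
        if i in chosen:
            continue
        a = A[i : i + 1]                      # 1 x n design row
        # Filtering (sequential) update for a single scalar observation:
        K = P @ a.T / (a @ P @ a.T + sigma[i] ** 2)
        P_new = (np.eye(n_params) - K @ a) @ P
        if np.trace(P_new) < best_trace:
            best, best_trace, best_P = i, np.trace(P_new), P_new
    chosen.append(best)
    P = best_P

print(f"observations needed: {len(chosen)} of {n_candidates}; "
      f"trace(P) = {np.trace(P):.3f}")
```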

  14. A stochastic model for tropical cyclone tracks based on Reanalysis data and GCM output

    NASA Astrophysics Data System (ADS)

    Ito, K.; Nakano, S.; Ueno, G.

    2014-12-01

    In the present study, we attempt to express the probability distribution of tropical cyclone (TC) trajectories estimated from GCM output. TC tracks are mainly controlled by the atmospheric circulation, such as the trade winds and the Westerlies, and are also pushed northward by the beta effect. TC tracks calculated with trajectory analysis would thus correspond to the movement of TCs due to the atmospheric circulation. Comparing the result of trajectory analysis from reanalysis data with the Best Track (BT) of TCs in the present climate, the structure of the computed trajectories appears similar to the BT. However, a significant problem arises when calculating a trajectory in the reanalysis wind field, because the reanalysis data contain many rotational elements, including the TCs themselves. We assume that a TC moves along the steering current and that the rotations do not greatly influence the direction of motion. We are designing a state-space model based on the trajectory analysis, with an adjustment parameter for the motion vector. Here, a simple track generation model is developed. This model makes it possible to obtain probability distributions of the calculated TC tracks by fitting them to the BT using data assimilation. This work was conducted under the framework of the "Development of Basic Technology for Risk Information on Climate Change" supported by the SOUSEI Program of the Ministry of Education, Culture, Sports, Science, and Technology.
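
    A toy version of such a track generator: positions advected by a synthetic steering flow scaled by an adjustment parameter, plus a constant beta-drift vector and stochastic spread. The flow field and all constants are invented; the paper estimates its parameters by data assimilation against the Best Track.

```python
import numpy as np

rng = np.random.default_rng(13)
DT, N_STEPS, DEG = 6 * 3600.0, 40, 111e3   # 6-hourly steps, 10 days, m/degree
BETA_U, BETA_V = -1.0, 1.5                 # beta-drift vector in m/s (tunable)

def steering_wind(lon, lat):
    """Idealised large-scale flow: easterlies at low latitudes turning into
    westerlies further north (synthetic, standing in for reanalysis winds)."""
    return -8.0 + 0.6 * (lat - 10.0), 1.0   # (u, v) in m/s

def simulate_track(lon0, lat0, alpha=0.8, noise=0.5):
    """`alpha` scales the steering current (the adjustment parameter on the
    motion vector); `noise` adds stochastic track spread."""
    lon, lat = lon0, lat0
    track = [(lon, lat)]
    for _ in range(N_STEPS):
        u, v = steering_wind(lon, lat)
        lon += (alpha * u + BETA_U + noise * rng.standard_normal()) * DT / DEG
        lat += (alpha * v + BETA_V + noise * rng.standard_normal()) * DT / DEG
        track.append((lon, lat))
    return track

tracks = [simulate_track(150.0, 12.0) for _ in range(5)]
print("ensemble end points:",
      [(round(lo, 1), round(la, 1)) for lo, la in (tr[-1] for tr in tracks)])
```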

  15. Comparison of deterministic and stochastic approaches for isotopic concentration and decay heat uncertainty quantification on elementary fission pulse

    NASA Astrophysics Data System (ADS)

    Lahaye, S.; Huynh, T. D.; Tsilanizara, A.

    2016-03-01

    Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term waste deposits. Most of those outputs are functions of the isotopic density vector, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate the uncertainty from nuclear data inputs to isotopic concentrations and decay heat by two different methods. This paper presents comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also inspected. All results show good agreement between the two codes and methods, supporting the reliability of both approaches for a given evaluation.
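
    What such codes compute can be illustrated, in drastically simplified form, by independent decays after a fission pulse (no decay chains, no uncertainty propagation): h(t) = sum_i lambda_i N_i(0) E_i exp(-lambda_i t). The three-nuclide inventory below is hypothetical.

```python
import numpy as np

# Hypothetical three-nuclide inventory after a U-235 fission pulse:
# (decay constant [1/s], initial atoms, mean decay energy [MeV]); no chains.
nuclides = [(np.log(2) / 30.0,    1e15, 1.5),
            (np.log(2) / 600.0,   5e14, 0.8),
            (np.log(2) / 86400.0, 2e14, 0.3)]
MEV_TO_J = 1.602e-13

def decay_heat(t):
    # h(t) = sum_i lambda_i * N_i(0) * exp(-lambda_i * t) * E_i
    return sum(lam * n0 * np.exp(-lam * t) * e * MEV_TO_J
               for lam, n0, e in nuclides)

for t in (1.0, 60.0, 3600.0, 86400.0):
    print(f"t = {t:>8.0f} s: decay heat = {decay_heat(t):.3e} W")
```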

  16. Synthetic aperture elastography: a GPU based approach

    NASA Astrophysics Data System (ADS)

    Verma, Prashant; Doyley, Marvin M.

    2014-03-01

    Synthetic aperture (SA) ultrasound imaging produces highly accurate axial and lateral displacement estimates; however, low frame rates and large data volumes can hamper its clinical use. This paper describes a real-time SA-imaging-based ultrasound elastography system that we have recently developed to overcome this limitation. In this system, we implemented both beamforming and 2D cross-correlation echo tracking on an Nvidia GTX 480 graphics processing unit (GPU). We used one thread per pixel for beamforming, whereas one block per pixel was used for echo tracking. We compared the quality of elastograms computed with our real-time system to those computed using our standard single-threaded elastographic imaging methodology, using conventional measures of image quality such as the elastographic signal-to-noise ratio (SNRe). Specifically, the SNRe of axial and lateral strain elastograms computed with the real-time system were 36 dB and 23 dB, respectively, numerically equal to those computed with our standard approach. We achieved a frame rate of 6 frames per second with our GPU-based approach for 16 transmits and a kernel size of 60 × 60 pixels, which is 400 times faster than our standard protocol.
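
    The echo-tracking step offloaded to the GPU is, per pixel, a search for the displacement maximising normalised cross-correlation between pre- and post-deformation kernels. A single-kernel NumPy sketch on synthetic speckle; kernel and search sizes are illustrative, and on the GPU this would run with one block per pixel rather than serially.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic pre/post-deformation frames: speckle shifted by a known amount.
pre = rng.standard_normal((128, 64))
true_shift = (3, 1)                          # (axial, lateral) in pixels
post = np.roll(pre, true_shift, axis=(0, 1))

def track(pre, post, y0=60, x0=30, kernel=(16, 16), search=6):
    """Displacement of one kernel found by exhaustive normalised
    cross-correlation search over a +/- `search` pixel window."""
    ky, kx = kernel
    ref = pre[y0:y0 + ky, x0:x0 + kx]
    best, best_cc = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[y0 + dy:y0 + dy + ky, x0 + dx:x0 + dx + kx]
            cc = np.sum(ref * cand) / np.sqrt(np.sum(ref**2) * np.sum(cand**2))
            if cc > best_cc:
                best, best_cc = (dy, dx), cc
    return best

print("estimated displacement:", track(pre, post), "| true:", true_shift)
```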

  17. A stochastic model for immunotherapy of cancer

    PubMed Central

    Baar, Martina; Coquille, Loren; Mayer, Hannah; Hölzel, Michael; Rogava, Meri; Tüting, Thomas; Bovier, Anton

    2016-01-01

    We propose an extension of a standard stochastic individual-based model in population dynamics which broadens the range of biological applications. Our primary motivation is modelling of immunotherapy of malignant tumours. In this context the different actors, T-cells, cytokines or cancer cells, are modelled as single particles (individuals) in the stochastic system. The main expansions of the model are distinguishing cancer cells by phenotype and genotype, including environment-dependent phenotypic plasticity that does not affect the genotype, taking into account the effects of therapy and introducing a competition term which lowers the reproduction rate of an individual in addition to the usual term that increases its death rate. We illustrate the new setup by using it to model various phenomena arising in immunotherapy. Our aim is twofold: on the one hand, we show that the interplay of genetic mutations and phenotypic switches on different timescales as well as the occurrence of metastability phenomena raise new mathematical challenges. On the other hand, we argue why understanding purely stochastic events (which cannot be obtained with deterministic models) may help to understand the resistance of tumours to therapeutic approaches and may have non-trivial consequences on tumour treatment protocols. This is supported through numerical simulations. PMID:27063839
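
    The model extension highlighted above, competition lowering the reproduction rate rather than only raising mortality, can be sketched with a one-type Gillespie birth-death process (rates invented; the paper's model tracks many interacting cell and immune types):

```python
import numpy as np

rng = np.random.default_rng(11)

# One-type Gillespie birth-death process in which competition suppresses the
# per-capita birth rate instead of only raising the death rate.
b0, d0, c, K = 1.0, 0.2, 0.5, 1000.0   # birth, death, competition, scaling
n, t, T_END = 20, 0.0, 50.0
events = 0

while t < T_END and n > 0:
    birth = n * max(b0 - c * n / K, 0.0)   # competition lowers reproduction
    death = n * d0
    total = birth + death
    t += rng.exponential(1.0 / total)      # waiting time to the next event
    n += 1 if rng.random() < birth / total else -1
    events += 1

print(f"population at t = {T_END}: {n} (after {events} events)")
```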

  18. Data assimilative twin-experiment in a high-resolution Bay of Biscay configuration: 4DEnOI based on stochastic modeling of the wind forcing

    NASA Astrophysics Data System (ADS)

    Vervatis, V.; Testut, C. E.; De Mey, P.; Ayoub, N.; Chanut, J.; Quattrocchi, G.

    2016-04-01

    A twin-experiment is carried out introducing elements of an Ensemble Kalman Filter (EnKF) to assess and correct ocean uncertainties in a high-resolution Bay of Biscay configuration. Initially, an ensemble of 102 members is generated by applying stochastic modeling of the wind forcing. The target of this step is to simulate the envelope of possible realizations and to explore the robustness of the method in building ensemble covariances. Our second step integrates the ensemble-based error estimates into a data assimilative system adopting a 4D Ensemble Optimal Interpolation (4DEnOI) approach. In the twin-experiment context, synthetic observations are simulated from a perturbed member not used in the subsequent analyses; the condition of an unbiased probability distribution function relative to the ensemble is verified with a rank histogram. We evaluate the assimilation performance on short-term predictability, focusing on the ensemble size, the observational network, and the enrichment of the ensemble by inexpensive time-lagged techniques. The results show that variations in performance are linked to intrinsic oceanic processes, such as the spring shoaling of the thermocline, in combination with external forcing modulated by river runoffs and time-variable wind patterns, constantly reshaping the error regimes. Ensemble covariances are able to capture high-frequency processes associated with coastal density fronts, slope currents and upwelling events near the Armorican and Galician shelf break. Further improvement is gained when enriching model covariances by including pattern phase errors, with the help of time-neighbor states augmenting the ensemble spread.
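
    A sketch of the first step, stochastic modeling of the wind forcing: each ensemble member scales the wind by (1 + p), with p an AR(1)-in-time random perturbation, and the resulting spread feeds the ensemble covariances. The amplitude, decorrelation time and synthetic wind field are illustrative, not the configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(16)
n_members, n_steps, nx = 20, 100, 64
tau, sigma = 10.0, 0.15            # decorrelation (steps), relative amplitude
alpha = np.exp(-1.0 / tau)         # AR(1) coefficient

base_wind = 8.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, nx))  # synthetic

# Each member scales the wind forcing by (1 + p), p an AR(1) process in time;
# the stationary standard deviation of p equals sigma by construction.
p = np.zeros((n_members, nx))
spread = np.empty(n_steps)
for t in range(n_steps):
    p = alpha * p + np.sqrt(1.0 - alpha**2) * sigma * rng.standard_normal(p.shape)
    spread[t] = (base_wind * (1.0 + p)).std(axis=0).mean()

print(f"mean ensemble spread of the wind forcing: {spread.mean():.3f} m/s")
```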

  19. Stochastic Evolution Dynamic of the Rock–Scissors–Paper Game Based on a Quasi Birth and Death Process

    PubMed Central

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-01-01

    Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical experiment results are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix, and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process, while simultaneously reducing the influence of the noise factor. PMID:27346701
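
    A minimal imitation chain in the spirit of the model: the state is the vector of strategy counts, a random reviewer imitates the currently best-paying strategy, and a noise factor injects random switches; time-averaging the visited states approximates the limiting distribution. The update rule is a simplification, not the paper's exact QBD transition structure.

```python
import numpy as np

rng = np.random.default_rng(12)
N, STEPS, NOISE = 60, 20000, 0.05
# Payoff matrix: rows = own strategy, columns = opponent (R, S, P).
PAYOFF = np.array([[ 0.0,  1.0, -1.0],
                   [-1.0,  0.0,  1.0],
                   [ 1.0, -1.0,  0.0]])

counts = np.array([N, 0, 0])              # chain state: strategy counts
visits = np.zeros((N + 1, N + 1))

for _ in range(STEPS):
    freq = counts / N
    payoff = PAYOFF @ freq                # expected payoff of each strategy
    i = rng.choice(3, p=freq)             # a random individual reviews its play
    if rng.random() < NOISE:
        j = int(rng.integers(0, 3))       # noise factor: random switch
    else:
        j = int(np.argmax(payoff))        # imitate the most successful strategy
    if i != j:
        counts[i] -= 1
        counts[j] += 1
    visits[counts[0], counts[1]] += 1     # empirical limiting distribution

avg_R = visits.sum(axis=1) @ np.arange(N + 1) / (STEPS * N)
avg_S = visits.sum(axis=0) @ np.arange(N + 1) / (STEPS * N)
print(f"time-averaged frequencies: R={avg_R:.3f}, S={avg_S:.3f}, "
      f"P={1 - avg_R - avg_S:.3f}")
```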
