Science.gov

Sample records for stochastic approach based

  1. Stochastic Turing patterns: analysis of compartment-based approaches.

    PubMed

    Cao, Yang; Erban, Radek

    2014-12-01

    Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.
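
    A minimal sketch of the compartment-based (lattice-based) modelling that such studies start from, assuming pure diffusion of a single species on a reflective 1-D lattice simulated with the Gillespie algorithm; all parameter values below are hypothetical, and the paper's two-grid scheme and reaction terms are not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

K, h, D = 40, 0.025, 1e-3          # compartments, compartment width, diffusion constant
d = D / h**2                       # per-molecule jump rate to each neighbour
n = rng.integers(40, 60, size=K)   # molecule numbers per compartment
t, t_end = 0.0, 10.0

while t < t_end:
    right = d * n.astype(float); right[-1] = 0.0   # reflective boundaries
    left = d * n.astype(float); left[0] = 0.0
    a = np.concatenate([right, left])              # jump propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)                 # Gillespie: time to next event
    j = int(rng.choice(2 * K, p=a / a0))           # which jump fires
    i, step = (j, 1) if j < K else (j - K, -1)
    n[i] -= 1
    n[i + step] += 1

print(n)   # noisy profile; reactions would enter as additional event channels
```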

  2. Stochastic Functional Data Analysis: A Diffusion Model-based Approach

    PubMed Central

    Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.

    2011-01-01

    This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data and allow straightforward and meaningful interpretation. The likelihood of the models is derived with an Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on prostate-specific antigen data, where we also show how the models can be used for forecasting. PMID:21418053
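
    As a hedged illustration of the Euler approximation the likelihood derivation relies on, the sketch below discretizes an Ornstein-Uhlenbeck process as a stand-in for the paper's diffusion model; under the Euler scheme each increment is conditionally Gaussian, so the likelihood takes a closed form (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler(-Maruyama) discretization of a generic diffusion dX = mu(X) dt + sigma dW;
# an Ornstein-Uhlenbeck process stands in for the latent smooth function.
theta, sigma, dt, n = 0.5, 0.3, 0.1, 200
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Each Euler increment is conditionally Gaussian, giving a closed-form likelihood:
def euler_loglik(x, theta, sigma, dt):
    mean = x[:-1] - theta * x[:-1] * dt
    var = sigma**2 * dt
    resid = x[1:] - mean
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

print(euler_loglik(x, theta, sigma, dt))
```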

  3. Detecting Abnormal Vehicular Dynamics at Intersections Based on an Unsupervised Learning Approach and a Stochastic Model

    PubMed Central

    Jiménez-Hernández, Hugo; González-Barbosa, Jose-Joel; Garcia-Ramírez, Teresa

    2010-01-01

    This investigation demonstrates an unsupervised approach for modeling traffic flow and detecting abnormal vehicle behaviors at intersections. In the first stage, the approach reveals and records the different states of the system. These states are the result of coding and grouping the historical motion of vehicles as long binary strings. In the second stage, using sequences of the recorded states, a stochastic graph model based on a Markovian approach is built. A behavior is labeled abnormal when the current motion pattern cannot be recognized as any state of the system or when a particular sequence of states cannot be parsed with the stochastic model. The approach is tested with several sequences of images acquired from a vehicular intersection where the traffic flow and the timing of the traffic lights change continuously throughout the day. Finally, the low complexity and the flexibility of the approach make it reliable for use in real-time systems. PMID:22163616
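
    A toy version of the second stage, assuming the system states have already been extracted in stage one: estimate a transition matrix from historical state sequences and flag a new sequence whose per-transition log-likelihood falls below a threshold (data, threshold, and the first-order simplification are hypothetical stand-ins for the paper's stochastic graph model):

```python
import numpy as np

# States are indices of previously recorded motion patterns; a transition
# matrix is estimated from history, and a sequence is flagged abnormal when
# it cannot be parsed with adequate probability.
def fit_transitions(sequences, n_states, alpha=1e-6):
    counts = np.full((n_states, n_states), alpha)   # small prior avoids zeros
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def is_abnormal(seq, P, threshold=-8.0):
    logp = sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))
    return logp / (len(seq) - 1) < threshold        # per-transition log-likelihood

history = [[0, 1, 2, 2, 3], [0, 1, 1, 2, 3], [0, 2, 2, 3, 3]]
P = fit_transitions(history, n_states=4)
print(is_abnormal([0, 1, 2, 3, 3], P))   # False: consistent with history
print(is_abnormal([3, 0, 3, 0, 3], P))   # True: transitions never observed
```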

  4. Data-based stochastic subgrid-scale parametrization: an approach using cluster-weighted modelling.

    PubMed

    Kwasniok, Frank

    2012-03-13

    A new approach for data-based stochastic parametrization of unresolved scales and processes in numerical weather and climate prediction models is introduced. The subgrid-scale model is conditional on the state of the resolved scales, consisting of a collection of local models. A clustering algorithm in the space of the resolved variables is combined with statistical modelling of the impact of the unresolved variables. The clusters and the parameters of the associated subgrid models are estimated simultaneously from data. The method is implemented and explored in the framework of the Lorenz '96 model using discrete Markov processes as local statistical models. Performance of the cluster-weighted Markov chain scheme is investigated for long-term simulations as well as ensemble prediction. It clearly outperforms simple parametrization schemes and compares favourably with another recently proposed subgrid modelling scheme also based on conditional Markov chains.
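
    A hedged sketch of the cluster-weighted idea on synthetic data (not the Lorenz '96 setup): cluster the resolved variable, fit one Markov transition matrix for the discretized subgrid state per cluster, and switch matrices at run time according to the current cluster:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: resolved variable X_t and a discretized subgrid tendency s_t.
X = rng.standard_normal(5000)
s = (X + 0.3 * rng.standard_normal(5000) > 0).astype(int)   # 2 subgrid states

# 1) cluster the resolved variable (1-D k-means with two centres)
c = np.array([-1.0, 1.0])
for _ in range(20):
    lab = np.argmin(np.abs(X[:, None] - c[None, :]), axis=1)
    c = np.array([X[lab == k].mean() for k in range(2)])

# 2) per cluster, estimate a Markov transition matrix for the subgrid state
P = np.full((2, 2, 2), 1e-6)                 # [cluster, from-state, to-state]
for k, (a, b) in zip(lab[:-1], zip(s[:-1], s[1:])):
    P[k, a, b] += 1
P /= P.sum(axis=2, keepdims=True)

# At run time the subgrid state advances with the matrix of the current cluster:
def step(s_now, x_now):
    k = int(np.abs(x_now - c).argmin())
    return rng.choice(2, p=P[k, s_now])

print(step(0, 0.7))
```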

  5. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    NASA Astrophysics Data System (ADS)

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

    An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of failure-free drone operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account adverse factors leading to system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.

  6. Stochastic simulation of biochemical reactions with partial-propensity and rejection-based approaches.

    PubMed

    Thanh, Vo Hong

    2017-10-01

    We present in this paper a new exact algorithm for improving the performance of the exact stochastic simulation algorithm. The algorithm is developed on concepts of the partial-propensity and rejection-based approaches. It factorizes the propensity bounds of reactions and groups factors by common reactant species for selecting the next reaction firings. Our algorithm provides favorable computational advantages for simulating biochemical reaction networks by reducing the cost of selecting the next reaction firing to scale with the number of chemical species and by avoiding expensive propensity updates during the simulation. We present the details of our new algorithm and benchmark it on concrete biological models to demonstrate its applicability and efficiency.
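
    A sketch of the rejection (thinning) idea underlying such algorithms: candidate firings are generated from precomputed propensity upper bounds and accepted with probability a_j(x)/B_j, so exact propensities are evaluated only for the candidate. The toy network and bounds below are hypothetical, and the paper's factorization and species grouping are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy network: A + B -> C (k1), C -> A + B (k2); state x = [A, B, C]
x = np.array([100, 100, 0])
k = np.array([0.002, 0.1])
stoich = np.array([[-1, -1, +1], [+1, +1, -1]])

def propensity(j, x):
    return k[0] * x[0] * x[1] if j == 0 else k[1] * x[2]

# Upper bounds from crude interval bounds on the state; they remain valid
# while the state stays inside the interval (here guaranteed by conservation).
B = np.array([k[0] * 120 * 120, k[1] * 120])
B0 = B.sum()

t, t_end = 0.0, 5.0
while t < t_end:
    t += rng.exponential(1.0 / B0)               # candidate times use the bounds
    j = int(rng.choice(2, p=B / B0))             # candidate reaction from bounds
    if rng.random() < propensity(j, x) / B[j]:   # accept with a_j(x)/B_j
        x += stoich[j]                           # only accepted firings change state

print(x)
```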

  7. Spacecraft base-sine vibration test data uncertainties investigation based on stochastic scatter approach

    NASA Astrophysics Data System (ADS)

    Laborde, S.; Calvi, A.

    2012-10-01

    This article describes some results of the study "DYNAMITED". The study is funded by the European Space Agency (ESA) and performed by a consortium of European industries and university, led by EADS Astrium Satellites. One of the main objectives of the study is to assess and quantify the uncertainty in the spacecraft sine vibration test data. For a number of reasons as for example robustness and confidence in the notching of the input spectra and validation of the finite element model, it is important to study the effect of the sources of uncertainty on the test data including the frequency response functions and the modal parameters. In particular the paper provides an overview on the estimation of the scatter on the spacecraft dynamic response due to identified sources of test uncertainties and the calculation of a "notched" sine test input spectrum based on a stochastic methodology. By means of Monte Carlo simulation, a stochastic cloud of the output of interest can be generated and this provides an estimate of the global error on the test results. The cloud is generated by characterizing the assumed sources of test uncertainties by parameters of the structure finite element model and by quantifying the scatter of the parameters. The uncertain parameters are the input random variables of the Monte Carlo simulation. Some results on the application of the methods to telecom spacecraft sine vibration tests are illustrated.
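
    A minimal illustration of the stochastic-cloud idea with a single uncertain mode standing in for the spacecraft finite element model: scatter the modal parameters, collect the frequency-response cloud over Monte Carlo trials, and read off percentile bands (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# One uncertain SDOF mode: scatter is placed on natural frequency and damping,
# and the cloud of frequency response functions is collected.
f = np.linspace(5, 100, 400)                    # sine sweep band [Hz]
runs = []
for _ in range(1000):                           # Monte Carlo trials
    fn = 40.0 * (1 + 0.05 * rng.standard_normal())         # +/-5% frequency scatter
    zeta = abs(0.02 * (1 + 0.20 * rng.standard_normal()))  # damping scatter
    r = f / fn
    H = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)  # amplification
    runs.append(H)

cloud = np.array(runs)
p5, p95 = np.percentile(cloud, [5, 95], axis=0)  # global error band on the response
print(float(p95.max()))                          # worst-case peak amplification
```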

  8. Channel-based Langevin approach for the stochastic Hodgkin-Huxley neuron

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2013-01-01

    Stochasticity in ion channel gating is the major source of intrinsic neuronal noise, which can induce many important effects in neuronal dynamics. Several numerical implementations of the Langevin approach have been proposed to approximate the Markovian dynamics of the Hodgkin-Huxley neuronal model. In this work an improved channel-based Langevin approach is proposed by introducing a truncation procedure to limit the state fractions to the range [0, 1]. The truncated fractions are put back into the state fractions at the next time step for the channel noise calculation. Our simulations show that the bounded Langevin approaches combined with the restoration process give better approximations to the statistics of action potentials obtained with the Markovian method. As a result, in our approach the channel state fractions are disturbed by two noise terms: an uncorrelated Gaussian noise and a time-correlated noise obtained from the truncated fractions. We suggest that the restoration of truncated fractions is a critical process for a bounded Langevin method.
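
    A generic sketch of the truncate-and-restore step for a single gating fraction, with assumed rate constants and channel number; the full channel-based Langevin Hodgkin-Huxley model is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

# The Langevin update may leave [0, 1]; the clipped excess is stored and fed
# back into the next step as a time-correlated noise contribution.
x, carry = 0.5, 0.0
alpha, beta, dt, N = 1.2, 0.8, 0.01, 600     # rates, time step, channel number

for _ in range(1000):
    drift = alpha * (1 - x) - beta * x
    var = (alpha * (1 - x) + beta * x) / N    # channel-noise variance ~ 1/N
    x_new = x + drift * dt + np.sqrt(max(var, 0.0) * dt) * rng.standard_normal()
    x_new += carry                            # restore last step's truncated part
    clipped = min(max(x_new, 0.0), 1.0)
    carry = x_new - clipped                   # truncated fraction, returned later
    x = clipped

print(x)
```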

  9. Channel-based Langevin approach for the stochastic Hodgkin-Huxley neuron.

    PubMed

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2013-01-01

    Stochasticity in ion channel gating is the major source of intrinsic neuronal noise, which can induce many important effects in neuronal dynamics. Several numerical implementations of the Langevin approach have been proposed to approximate the Markovian dynamics of the Hodgkin-Huxley neuronal model. In this work an improved channel-based Langevin approach is proposed by introducing a truncation procedure to limit the state fractions to the range [0, 1]. The truncated fractions are put back into the state fractions at the next time step for the channel noise calculation. Our simulations show that the bounded Langevin approaches combined with the restoration process give better approximations to the statistics of action potentials obtained with the Markovian method. As a result, in our approach the channel state fractions are disturbed by two noise terms: an uncorrelated Gaussian noise and a time-correlated noise obtained from the truncated fractions. We suggest that the restoration of truncated fractions is a critical process for a bounded Langevin method.

  10. A stochastic based approach for a new site classification method: application to the Algerian seismic code

    NASA Astrophysics Data System (ADS)

    Beneldjouzi, Mohamed; Laouami, Nasser

    2015-12-01

    Building codes have widely adopted the shear wave velocity for reliable subsoil seismic classification, based on knowledge of the mechanical properties of material deposits down to bedrock. This approach has limitations because geophysical data are often very expensive to obtain. Recently, other alternatives have been proposed based on measurements of background noise and estimation of the H/V amplification curve. However, this technique needs a regulatory framework before it can become a realistic site classification procedure. This paper proposes a new formulation for characterizing design sites in accordance with the Algerian seismic building code (RPA99/ver.2003) through transfer functions, following a stochastic approach combined with a statistical study. For each soil type, the deterministic calculation of the average transfer function is performed over a wide sample of 1-D soil profiles, where the average shear wave (S-W) velocity, Vs, in soil layers is simulated using random field theory. Average transfer functions are also used to calculate average site factors and normalized acceleration response spectra to highlight the amplification potential of each site type, since the frequency content of the transfer function is significantly similar to that of the H/V amplification curve. Comparisons are made with the RPA99/ver.2003 and Eurocode 8 (EC8) design response spectra, respectively. In the absence of geophysical data, the proposed classification approach, together with microtremor measurements, can be used toward better soil classification.

  11. Combining Particle Filters and Consistency-Based Approaches for Monitoring and Diagnosis of Stochastic Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

    2004-01-01

    Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.

  12. Holistic irrigation water management approach based on stochastic soil water dynamics

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Recognizing the essential gap between fundamental unsaturated-zone transport processes and soil and water management, due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing the irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the soil unsaturated and saturated zones is treated in a semi-analytical approach in two steps. At the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components, assuming a fixed level for the shallow water table, while a numerical Newton-Raphson procedure is employed at the second step to modify the value of the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance compared to a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing-season time scale, one can use the developed model in large-scale systems such as irrigation districts and agricultural catchments. Accordingly

  13. Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach

    SciTech Connect

    Kaya, Yavuz; Safak, Erdal

    2008-07-08

    Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.

  14. Stochastic approach to equilibrium and nonequilibrium thermodynamics.

    PubMed

    Tomé, Tânia; de Oliveira, Mário J

    2015-04-01

    We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) or continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself, and the second is the definition of the entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and show how the macroscopic laws are derived from the stochastic dynamics. These studies include quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
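
    As a concrete instance of the second assumption: for a master equation dp_i/dt = sum_j (w_ij p_j - w_ji p_i), the entropy production rate sigma = (1/2) sum_ij (w_ij p_j - w_ji p_i) ln[(w_ij p_j)/(w_ji p_i)] is non-negative and vanishes exactly under detailed balance. A small numerical check on a hypothetical three-state chain:

```python
import numpy as np

W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])          # W[i, j]: rate of jump j -> i

L = W - np.diag(W.sum(axis=0))           # generator; columns sum to zero
vals, vecs = np.linalg.eig(L)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()                             # stationary distribution

sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            flux = W[i, j] * p[j] - W[j, i] * p[i]
            sigma += 0.5 * flux * np.log((W[i, j] * p[j]) / (W[j, i] * p[i]))

print(sigma)   # > 0: this cycle-carrying chain is out of equilibrium
```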

  15. Stochastic Modeling based on Dictionary Approach for the Generation of Daily Precipitation Occurrences

    NASA Astrophysics Data System (ADS)

    Panu, U. S.; Ng, W.; Rasmussen, P. F.

    2009-12-01

    The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov chain and Alternating Renewal Process (ARP)) of precipitation occurrence processes generally assume the existence of short-term temporal dependency between neighboring states while implying long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by fixed-length memory (e.g., the order of a Markov chain model) or by the reigning states in segments (e.g., persistency of homogeneous states within dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrences comprising wet or dry states. Such flexibility provides a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using a Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of models to preserve the short-term temporal-dependency in
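
    For reference, a minimal version of one of the assessed baselines, a second-order Markov chain for daily wet/dry occurrences, where the wet probability is conditioned on the previous two days (the transition probabilities below are hypothetical, not fitted values; the Dictionary approach itself is not sketched):

```python
import numpy as np

rng = np.random.default_rng(6)

# Wet(1)/dry(0) occurrence generator conditioned on the previous two days.
p_wet = {(0, 0): 0.15, (0, 1): 0.45, (1, 0): 0.30, (1, 1): 0.65}

days = [0, 0]
for _ in range(365):
    prev = (days[-2], days[-1])
    days.append(int(rng.random() < p_wet[prev]))

seq = np.array(days[2:])
print(seq.mean())                       # wet-day frequency
print(np.mean(seq[:-1] & seq[1:]))      # wet-wet persistence, cf. wet-spell lengths
```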

  16. A risk-based interactive multi-stage stochastic programming approach for water resources planning under dual uncertainties

    NASA Astrophysics Data System (ADS)

    Wang, Y. Y.; Huang, G. H.; Wang, S.; Li, W.; Guan, P. B.

    2016-08-01

    In this study, a risk-based interactive multi-stage stochastic programming (RIMSP) approach is proposed through incorporating the fractile criterion method and chance-constrained programming within a multi-stage decision-making framework. RIMSP is able to deal with dual uncertainties expressed as random boundary intervals that exist in the objective function and constraints. Moreover, RIMSP is capable of reflecting the dynamics of uncertainties, as well as the trade-off between the total net benefit and the associated risk. A water allocation problem is used to illustrate the applicability of the proposed methodology. A set of decision alternatives with different combinations of risk levels applied to the objective function and constraints can be generated for planning the water resources allocation system. The results can help decision makers examine potential interactions between risks related to the stochastic objective function and constraints. Furthermore, a number of solutions can be obtained under different water policy scenarios, which are useful for decision makers to formulate an appropriate policy under uncertainty. The performance of RIMSP is analyzed and compared with an inexact multi-stage stochastic programming (IMSP) method. Results of the comparison experiment indicate that RIMSP is able to provide more robust water management alternatives with lower system risks than IMSP.

  17. Stochastic sampling for deterministic structural topology optimization with many load cases: Density-based and ground structure approaches

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojia Shelly; de Sturler, Eric; Paulino, Glaucio H.

    2017-10-01

    We propose an efficient probabilistic method to solve a deterministic problem: a randomized optimization approach that drastically reduces the enormous computational cost of optimizing designs under many load cases for both continuum and truss topology optimization. Practical structural designs by topology optimization typically involve many load cases, possibly hundreds or more. The optimal design minimizes a, possibly weighted, average of the compliance under each load case (or some other objective). This means that in each optimization step a large finite element problem must be solved for each load case, leading to an enormous computational effort. By contrast, the proposed randomized optimization method with stochastic sampling requires the solution of only a few (e.g., 5 or 6) finite element problems (large linear systems) per optimization step. Based on simulated annealing, we introduce a damping scheme for the randomized approach. Through numerical examples in two and three dimensions, we demonstrate that the stochastic algorithm drastically reduces the computational cost while obtaining similar final topologies and results (e.g., compliance) compared with the standard algorithms. The results indicate that the damping scheme is effective and leads to rapid convergence of the proposed algorithm.
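
    The identity behind this kind of stochastic sampling can be checked directly: with independent random weights xi_i = +/-1, E[xi_i xi_j] = delta_ij, so the compliance summed over all load cases equals the expected compliance of a single randomly combined load (a Hutchinson-type estimator). A hedged numerical check with a diagonal stand-in stiffness matrix and no optimization loop:

```python
import numpy as np

rng = np.random.default_rng(7)

# Unbiasedness: E[(sum_i xi_i f_i)^T K^{-1} (sum_j xi_j f_j)] = sum_i f_i^T K^{-1} f_i.
ndof, nloads, nsamples = 50, 200, 6
K = np.diag(np.linspace(1.0, 3.0, ndof))          # stand-in stiffness matrix
F = rng.standard_normal((ndof, nloads))           # many load cases, one per column

exact = sum(F[:, i] @ np.linalg.solve(K, F[:, i]) for i in range(nloads))

est = 0.0
for _ in range(nsamples):                         # only a few solves per step
    xi = rng.choice([-1.0, 1.0], size=nloads)
    f = F @ xi                                    # one random combined load
    est += f @ np.linalg.solve(K, f) / nsamples

print(exact, est)   # unbiased estimate; variance shrinks with nsamples
```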

  18. Extended local equilibrium approach to stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    De Decker, Y.; Garcia Cantú Ros, A.; Nicolis, G.

    2015-07-01

    A new approach to stochastic thermodynamics is developed, in which the local equilibrium hypothesis is extended to incorporate the effect of fluctuations. A fluctuating entropy, in the form of a random functional of the fluctuating state variables, is introduced, whose balance equation allows one to identify the stochastic entropy flux and stochastic entropy production. The statistical properties of these quantities are analyzed and illustrated on representative examples.

  19. Simulation of earthquake ground motions in the eastern U.S. using deterministic physics-based and stochastic approaches

    USGS Publications Warehouse

    Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos

    2015-01-01

    Earthquake ground motion recordings are scarce in the central and eastern U.S. (CEUS) for large magnitude events and at close distances. We use two different simulation approaches, a deterministic physics-based model and a stochastic model, to simulate recordings from the 2011 magnitude 5.8 Mineral, Virginia, earthquake in the CEUS. We then use the 2001 magnitude 7.6 Bhuj, India, earthquake as a tectonic analog for a large CEUS earthquake and modify our simulations to develop models for the generation of large magnitude earthquakes in the CEUS. Both models show a good fit to the observations from 0.1 to 10 Hz and show a faster fall-off of the acceleration spectra with distance beyond 500 km compared to ground motion prediction equations (GMPEs) for a magnitude 7.6 event.

  20. Approach to Equilibrium for the Stochastic NLS

    NASA Astrophysics Data System (ADS)

    Lebowitz, J. L.; Mounaix, Ph.; Wang, W.-M.

    2013-07-01

    We study the approach to equilibrium, described by a Gibbs measure, for a system on a d-dimensional torus evolving according to a stochastic nonlinear Schrödinger equation (SNLS) with a high frequency truncation. We prove exponential approach to the truncated Gibbs measure both for the focusing and defocusing cases when the dynamics is constrained via suitable boundary conditions to regions of the Fourier space where the Hamiltonian is convex. Our method is based on establishing a spectral gap for the non-self-adjoint Fokker-Planck operator governing the time evolution of the measure, which is uniform in the frequency truncation N. The limit N → ∞ is discussed.

  1. Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system.

    PubMed

    Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng

    2016-03-04

    Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation, but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potentials propagating along axons or other types of noisy signal transduction.

  2. Channel based generating function approach to the stochastic Hodgkin-Huxley neuronal system

    NASA Astrophysics Data System (ADS)

    Ling, Anqi; Huang, Yandong; Shuai, Jianwei; Lan, Yueheng

    2016-03-01

    Internal and external fluctuations, such as channel noise and synaptic noise, contribute to the generation of spontaneous action potentials in neurons. Many different Langevin approaches have been proposed to speed up the computation, but with waning accuracy especially at small channel numbers. We apply a generating function approach to the master equation for the ion channel dynamics and further propose two accelerating algorithms, with an accuracy close to the Gillespie algorithm but with much higher efficiency, opening the door for expedited simulation of noisy action potentials propagating along axons or other types of noisy signal transduction.

  3. A stochastic simulation-optimization approach for estimating highly reliable soil tension threshold values in sensor-based deficit irrigation

    NASA Astrophysics Data System (ADS)

    Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.

    2012-04-01

    In arid and semi-arid regions where water is scarce, farmers rely heavily on irrigation in order to grow crops and to produce agricultural commodities. The variable and often severely limited water supply thereby poses a serious challenge for farmers and demands sophisticated irrigation strategies that allow an efficient management of the available water resources. The general aim is to increase water productivity (WP), and one strategy to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is by defining soil-water-status-specific threshold values (either in soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance, and yet thresholds are likely chosen by trial and error and are thus unreliable. Hence, for CDI to be effective, systematic investigations for deriving reliable threshold values that account for different CDI strategies are needed. In this contribution, a method is presented that uses a simulation-based stochastic approach for estimating threshold values with a high reliability. The approach consists of a weather generator offering statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under limited water supply, and a crop model for simulating plant growth and water consumption. The study focuses on threshold values of soil tension for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based ones lies in their universal and soil-type-independent applicability. The investigated CDI strategies comprised schedules of constant threshold values, crop development stage dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as reference. The obtained threshold values were then tested in field

  4. Master-equation approach to stochastic neurodynamics

    NASA Astrophysics Data System (ADS)

    Ohira, Toru; Cowan, Jack D.

    1993-09-01

    A master-equation approach to the stochastic neurodynamics proposed by Cowan [in Advances in Neural Information Processing Systems 3, edited by R. P. Lippman, J. E. Moody, and D. S. Touretzky (Morgan Kaufmann, San Mateo, 1991), p. 62] is investigated in this paper. We deal with a model neural network that is composed of two-state neurons obeying elementary stochastic transition rates. We show that such an approach yields concise expressions for multipoint moments and an equation of motion. We apply the formalism to a (1+1)-dimensional system. Exact and approximate expressions for various statistical parameters are obtained and compared with Monte Carlo simulations.

  5. Stochastic approach based salient moving object detection using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Tang, Peng; Liu, Zhifang; Gao, Lin; Sheng, Peng

    2007-11-01

    Background modeling techniques are important for object detection and tracking in video surveillance. Traditional background subtraction approaches suffer from problems such as persistent dynamic backgrounds, quick illumination changes, occlusions, and noise. In this paper, we address the problem of detection and localization of moving objects in a video stream without prior knowledge of the background statistics. Three major contributions are presented. First, introducing sequential Monte Carlo sampling techniques greatly reduces the computational complexity at an acceptable cost in accuracy. Second, robust salient motion is obtained when resampling the feature points by removing those that do not move at a relatively constant velocity and emphasizing those in consistent motion. Finally, the proposed joint feature model enforces spatial consistency. Promising results demonstrate the potential of the proposed algorithm.
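
    A minimal sketch of the per-pixel kernel density estimation such background models rely on (bandwidth and sample history hypothetical; the paper's joint feature model and resampling scheme are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(8)

# Gaussian kernel density estimate over recent intensity samples of one pixel;
# the pixel is declared foreground when its current value has low density.
def kde_prob(value, samples, h=5.0):
    z = (value - samples) / h
    return np.mean(np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi)))

background = 120 + 3 * rng.standard_normal(50)    # recent history of one pixel

print(kde_prob(122.0, background))   # high density: background
print(kde_prob(200.0, background))   # near zero: salient moving object
```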

  6. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography

    PubMed Central

    2013-01-01

    Background The conventional strain-based algorithm has been widely utilized in clinical practice. It can provide only relative information of tissue stiffness, whereas the exact information of tissue stiffness should be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as the process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimation of the kinematic functions (i.e., the full displacement and velocity field) and the distribution of Young's modulus are computed simultaneously through an extended Kalman filter (EKF). Results In the following experiments the accuracy and robustness of this filtering framework are first evaluated on synthetic data in controlled conditions, and the performance of the framework is then evaluated on real data collected from an elastography phantom and patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential of providing the distribution of elasticity under a proper biomechanical model constraint. We address the model
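
    A generic extended-Kalman-filter iteration of the kind this filtering framework builds on, with a tiny linear stand-in for the biomechanical model and the ultrasound measurement map (all matrices hypothetical):

```python
import numpy as np

# Augmented state holds kinematics plus material parameters; f and h stand in
# for the biomechanical state transition and the measurement map, F and H for
# their Jacobians.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    # predict with the (linearized) model
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q          # Q: process noise = model-data discrepancy
    # update from the measurement according to the MMSE criterion
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny 1-D illustration: estimate a displacement u and a stiffness-like
# parameter E jointly from a noisy displacement measurement.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([0.12]),
                lambda s: F @ s, F, lambda s: H @ s, H,
                0.01 * np.eye(2), 0.05 * np.eye(1))
print(x)
```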

  7. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography.

    PubMed

    Lu, Minhua; Zhang, Heye; Wang, Jun; Yuan, Jinwei; Hu, Zhenghui; Liu, Huafeng

    2013-08-10

    The conventional strain-based algorithm has been widely utilized in clinical practice. It can provide only relative information of tissue stiffness, whereas the exact information of tissue stiffness should be valuable for clinical diagnosis and treatment. In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as the process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimation of the kinematic functions (i.e., the full displacement and velocity field) and the distribution of Young's modulus are computed simultaneously through an extended Kalman filter (EKF). In the following experiments the accuracy and robustness of this filtering framework are first evaluated on synthetic data in controlled conditions, and the performance of the framework is then evaluated on real data collected from an elastography phantom and patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential of providing the distribution of elasticity under a proper biomechanical model constraint. We address the model-data discrepancy and measurement noise by introducing

  8. Anti-synchronization for stochastic memristor-based neural networks with non-modeled dynamics via adaptive control approach

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Xiao, Jinghua; Yang, Yixian

    2015-05-01

    In this paper, exponential anti-synchronization in mean square of an uncertain memristor-based neural network is studied. The uncertain terms include non-modeled dynamics with boundary and stochastic perturbations. Based on differential inclusions theory, linear matrix inequalities, Gronwall's inequality and the adaptive control technique, an adaptive controller with update laws is developed to realize the exponential anti-synchronization. The adaptive controller can adjust its own behavior to obtain the best performance as the environment changes, and thus has the ability to adapt to environmental change. Furthermore, a numerical example is provided to validate the effectiveness of the proposed method.

  9. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.

  10. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.

  11. Stochastic approach to irreversible thermodynamics

    NASA Astrophysics Data System (ADS)

    Nicolis, Grégoire; De Decker, Yannick

    2017-10-01

    An extension of classical irreversible thermodynamics pioneered by Ilya Prigogine is developed, in which fluctuations of macroscopic observables accounting for microscopic-scale processes are incorporated. The contribution of the fluctuations to the entropy production is derived from a generalized entropy balance equation and expressed in terms of the fluctuating variables, via an extended local equilibrium Ansatz and in terms of the probability distributions of these variables. The approach is illustrated on reactive systems involving linear and nonlinear steps, and the role of the distance from equilibrium and of the nonlinearities is assessed.

  12. Municipal solid waste management planning for Xiamen City, China: a stochastic fractional inventory-theory-based approach.

    PubMed

    Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua

    2017-09-09

    In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied for supporting long-term planning of municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory modeling, stochastic linear fractional programming, and mixed-integer linear programming were integrated in one framework. Issues of waste inventory in the MSW management system were addressed, and the system efficiency was maximized through considering maximum net-diverted wastes under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The obtained results showed that about 4.24 × 10^6 t of waste would be diverted from landfills when p_i is 0.01, which accounted for 93% of the waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10^3 t per $10^6. The capacities of MSW management facilities, including incinerators, composting facilities, and landfills, would be expanded due to the increasing waste generation rate.

  13. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
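
    For concreteness, a hedged sketch of estimating the expected execution cost and CVaR from simulated costs; the cost distribution below is a stand-in, and the search over parametric strategy coefficients is not implemented:

```python
import numpy as np

rng = np.random.default_rng(9)

# CVaR at level alpha: the mean cost over the worst (1 - alpha) fraction of
# Monte Carlo scenarios.
costs = 100 + 15 * rng.standard_normal(100_000)   # simulated execution costs
alpha = 0.95

var = np.quantile(costs, alpha)                   # Value-at-Risk
cvar = costs[costs >= var].mean()                 # Conditional Value-at-Risk

print(costs.mean(), var, cvar)
# A parametric strategy would choose coefficients minimizing a blend such as
# mean + lam * cvar, re-simulating the cost distribution for each candidate.
```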

  14. A stochastic approach to model validation

    NASA Astrophysics Data System (ADS)

    Luis, Steven J.; McLaughlin, Dennis

    This paper describes a stochastic approach for assessing the validity of environmental models. In order to illustrate basic concepts we focus on the problem of modeling moisture movement through an unsaturated porous medium. We assume that the modeling objective is to predict the mean distribution of moisture content over time and space. The mean moisture content describes the large-scale flow behavior of most interest in many practical applications. The model validation process attempts to determine whether the model's predictions are acceptably close to the mean. This can be accomplished by comparing small-scale measurements of moisture content to the model's predictions. Differences between these two quantities can be attributed to three distinct 'error sources': (1) measurement error, (2) spatial heterogeneity, and (3) model error. If we adopt appropriate stochastic descriptions for the first two sources of error we can view model validation as a hypothesis testing problem where the null hypothesis states that model error is negligible. We illustrate this concept by comparing the predictions of a simple two-dimensional deterministic model to measurements collected during a field experiment carried out near Las Cruces, New Mexico. Preliminary results from this field test indicate that a stochastic approach to validation can identify model deficiencies and provide objective standards for model performance.

  15. Permutation approach to finite-alphabet stationary stochastic processes based on the duality between values and orderings

    NASA Astrophysics Data System (ADS)

    Haruna, T.; Nakajima, K.

    2013-06-01

    The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
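
    A minimal sketch of the basic ingredient of such permutation analogues: the empirical distribution of ordinal patterns and its Shannon entropy (real-valued noise is used here, so the ties that motivate the modified permutation partition do not occur):

```python
import numpy as np

rng = np.random.default_rng(10)

# Empirical distribution of ordinal patterns (orderings of m consecutive
# values), followed by its Shannon entropy.
def permutation_entropy(x, m=3):
    patterns = {}
    for i in range(len(x) - m + 1):
        key = tuple(np.argsort(x[i:i + m]))       # rank pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

x = rng.standard_normal(10_000)
print(permutation_entropy(x))    # close to log2(3!) ~ 2.58 for i.i.d. noise
```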

  16. Time-dependent stochastic Bethe-Salpeter approach

    NASA Astrophysics Data System (ADS)

    Rabani, Eran; Baer, Roi; Neuhauser, Daniel

    2015-06-01

    A time-dependent formulation for electron-hole excitations in extended finite systems, based on the Bethe-Salpeter equation (BSE), is developed using a stochastic wave function approach. The time-dependent formulation builds on the connection between time-dependent Hartree-Fock (TDHF) theory and the configuration-interaction with single substitution (CIS) method. This results in a time-dependent Schrödinger-like equation for the quasiparticle orbital dynamics based on an effective Hamiltonian containing direct Hartree and screened exchange terms, where screening is described within the random-phase approximation (RPA). To solve for the optical-absorption spectrum, we develop a stochastic formulation in which the quasiparticle orbitals are replaced by stochastic orbitals to evaluate the direct and exchange terms in the Hamiltonian as well as the RPA screening. This leads to an overall quadratic scaling, a significant improvement over the equivalent symplectic eigenvalue representation of the BSE. Application of the time-dependent stochastic BSE (TDsBSE) approach to silicon and CdSe nanocrystals with up to ≈3000 electrons is presented and discussed.

  17. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software Crystal Ball® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for the MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management.
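
    A hedged miniature of the MC/LCI setup: normal distributions for the main production figures propagated to an inventory total; all means, standard deviations and emission factors below are hypothetical illustrations, not MSP data:

```python
import numpy as np

rng = np.random.default_rng(11)

# Normal distributions for annual production (hypothetical means in kt, 5%
# standard deviations), propagated to an LCI total by MC simulation.
products = {"steel": 5000.0, "coke": 1200.0, "pig iron": 4400.0, "sinter": 6100.0}
co2_factor = {"steel": 1.8, "coke": 0.5, "pig iron": 1.5, "sinter": 0.2}  # t CO2/t

trials = 10_000
total = np.zeros(trials)
for name, mean in products.items():
    amount = rng.normal(mean, 0.05 * mean, size=trials)
    total += co2_factor[name] * amount

# the "frequency chart and statistical report" of the simulation:
print(total.mean(), total.std(), np.percentile(total, [5, 50, 95]))
```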

  18. Stochastic bias in multidimensional excursion set approaches

    NASA Astrophysics Data System (ADS)

    Castorina, Emanuele; Sheth, Ravi K.

    2013-08-01

    We describe a simple fully analytic model of the excursion set approach associated with two Gaussian random walks: the first walk represents the initial overdensity around a protohalo, and the second is a crude way of allowing for other factors which might influence halo formation. This model is richer than that based on a single walk, because it yields a distribution of heights at first crossing. We provide explicit expressions for the unconditional first crossing distribution which is usually used to model the halo mass function, the progenitor distributions from which merger rates are usually estimated, and the conditional distributions from which correlations with environment are usually estimated. These latter exhibit perhaps the simplest form of what is often called non-local bias, and which we prefer to call stochastic bias, since the new bias effects arise from 'hidden variables' other than density, but these may still be defined locally. We provide explicit expressions for these new bias factors. We also provide formulae for the distribution of heights at first crossing in the unconditional and conditional cases. In contrast to the first crossing distribution, these are exact, even for moving barriers, and for walks with correlated steps. The conditional distributions yield predictions for the distribution of halo concentrations at fixed mass and formation redshift. They also exhibit assembly-bias-like effects, even when the steps in the walks themselves are uncorrelated. Our formulae show that without prior knowledge of the physical origin of the second walk, the naive estimate of the critical density required for halo formation which is based on the statistics of the first crossing distribution will be larger than that based on the statistical distribution of walk heights at first crossing; both will be biased low compared to the value associated with the physics. Finally, we show how the predictions are modified if we add the requirement that haloes form

  19. Graph Theory-Based Approach for Stability Analysis of Stochastic Coupled Systems With Lévy Noise on Networks.

    PubMed

    Zhang, Chunmei; Li, Wenxue; Wang, Ke

    2015-08-01

    In this paper, a novel class of stochastic coupled systems with Lévy noise on networks (SCSLNNs) is presented. Both white noise and Lévy noise are considered in the networks. By exploiting graph theory and Lyapunov stability theory, criteria ensuring pth moment exponential stability and stability in probability of these SCSLNNs are established, respectively. These principles are closely related to the topology of the network and the perturbation intensity of the white noise and Lévy noise. Moreover, to verify the theoretical results, stochastic coupled oscillators with Lévy noise on a network and a stochastic Volterra predator-prey system with Lévy noise are worked out as examples. Finally, a numerical example about an oscillators' network is provided to illustrate the feasibility of our analytical results.

  20. Geometrically consistent approach to stochastic DBI inflation

    SciTech Connect

    Lorenz, Larissa; Martin, Jerome; Yokoyama, Jun'ichi

    2010-07-15

    Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.
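
    A small illustration of the method of images invoked for the walls, in the simplest case of free Brownian motion between two absorbing walls (the DBI drift, warp factor, and volume effects of the paper are omitted):

```python
import numpy as np

# PDF of free Brownian motion started at x0 between absorbing walls at 0 and L,
# built by the method of images: a truncated alternating sum of Gaussian
# kernels reflected across both walls.
def pdf_absorbing(x, x0, t, L, D=1.0, n_images=50):
    g = lambda y: np.exp(-y**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    total = np.zeros_like(x, dtype=float)
    for n in range(-n_images, n_images + 1):
        total += g(x - x0 - 2 * n * L) - g(x + x0 - 2 * n * L)
    return total

x = np.linspace(0.0, 1.0, 201)
p = pdf_absorbing(x, x0=0.3, t=0.05, L=1.0)
print(p[0], p[-1])                 # ~0 at both walls, as absorption requires
print(p.sum() * (x[1] - x[0]))     # < 1: mass already absorbed at the walls
```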

  1. Symmetries of stochastic differential equations: A geometric approach

    SciTech Connect

    De Vecchi, Francesco C.; Ugolini, Stefania; Morando, Paola

    2016-06-15

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two dimensional symmetric ordinary differential equation and to the case of two dimensional Brownian motion.

  2. Multiscale stochastic approach for phase screens synthesis

    NASA Astrophysics Data System (ADS)

    Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea

    2011-07-01

    Simulating the turbulence effect on ground telescope observations is of fundamental importance for the design and test of suitable control algorithms for adaptive optics systems. In this paper we propose a multiscale approach for efficiently synthesizing turbulent phases at very high resolution. First, the turbulence is simulated at low resolution, taking advantage of a previously developed method for generating phase screens [J. Opt. Soc. Am. A 25, 515 (2008)]. Then, high-resolution phase screens are obtained as the output of a multiscale linear stochastic system. The multiscale approach significantly improves the computational efficiency of turbulence simulation with respect to recently developed methods [Opt. Express 14, 988 (2006); J. Opt. Soc. Am. A 25, 515 (2008); J. Opt. Soc. Am. A 25, 463 (2008)]. Furthermore, the proposed procedure ensures good accuracy in reproducing the statistical characteristics of the turbulent phase.

  3. Two Different Approaches to Nonzero-Sum Stochastic Differential Games

    SciTech Connect

    Rainer, Catherine

    2007-06-15

    We make the link between two approaches to Nash equilibria for nonzero-sum stochastic differential games: the first one using backward stochastic differential equations and the second one using strategies with delay. We prove that, when both exist, the two notions of Nash equilibria coincide.

  4. A stochastic collocation approach for efficient integrated gear health prognosis

    NASA Astrophysics Data System (ADS)

    Zhao, Fuqiong; Tian, Zhigang; Zeng, Yong

    2013-08-01

Uncertainty quantification in damage growth is critical in equipment health prognosis and condition-based maintenance. Integrated health prognostics has recently drawn growing attention due to its capability to produce more accurate predictions through integrating physical models and real-time condition monitoring data. In the existing literature, simulation is commonly used to account for the uncertainty in prognostics, which is inefficient. In this paper, instead of using simulation, a stochastic collocation approach is developed for efficient integrated gear health prognosis. Based on generalized polynomial chaos expansion, the approach is utilized to evaluate the uncertainty in gear remaining useful life prediction as well as the likelihood function in Bayesian inference. The collected condition monitoring data are incorporated into prognostics via Bayesian inference to update the distributions of uncertainties at given inspection times. Accordingly, the distribution of the remaining useful life is updated. Compared to conventional simulation methods, the stochastic collocation approach is much more efficient, and is capable of dealing with high dimensional probability space. An example is used to demonstrate the effectiveness and efficiency of the proposed approach.
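
    As a concrete illustration of the collocation idea, the sketch below (Python/NumPy; the Paris-law crack-growth model, its constants, and the lognormal uncertainty on the growth constant C are illustrative stand-ins, not the paper's gear model) propagates a single uncertain parameter through a closed-form damage-growth law using a handful of Gauss-Hermite nodes instead of thousands of Monte Carlo samples.

```python
import numpy as np

def paris_life(C, dsigma=100.0, m=3.0, a0=1e-3, ac=2e-2):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C * (dsigma * sqrt(pi * a))**m (closed form, m != 2)."""
    k = C * (dsigma * np.sqrt(np.pi)) ** m
    e = 1.0 - m / 2.0
    return (ac**e - a0**e) / (k * e)

# lognormal uncertainty on the growth constant: ln C ~ N(mu, sd^2)
mu, sd = np.log(1e-12), 0.3

# stochastic collocation: Gauss-Hermite nodes for a standard normal input
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / np.sqrt(2.0 * np.pi)       # normalize for N(0, 1)
life = paris_life(np.exp(mu + sd * nodes))     # model run at each node

mean = np.sum(weights * life)
std = np.sqrt(np.sum(weights * (life - mean) ** 2))
print(f"remaining life: mean {mean:.3e} cycles, std {std:.3e} cycles")
```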

  5. The stochastic system approach for estimating dynamic treatments effect.

    PubMed

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of highly active antiretroviral therapy (HAART) on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows biological knowledge to be incorporated naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  6. Functional integral approach for multiplicative stochastic processes.

    PubMed

    Arenas, Zochil González; Barci, Daniel G

    2010-05-01

We present a functional formalism to derive a generating functional for correlation functions of a multiplicative stochastic process represented by a Langevin equation. We deduce a path integral over a set of fermionic and bosonic variables without performing any time discretization. The usual prescriptions to define the Wiener integral appear in our formalism in the definition of Green's functions in the Grassmann sector of the theory. We also study nonperturbative constraints imposed by Becchi-Rouet-Stora (BRS) symmetry and supersymmetry on correlation functions. We show that the specific prescription to define the stochastic process is wholly contained in tadpole diagrams. Therefore, in a supersymmetric theory, the stochastic process is uniquely defined, since tadpole contributions cancel at all orders of perturbation theory.

  7. Three-dimensional Stochastic Estimation of Porosity Distribution: Benefits of Using Ground-penetrating Radar Velocity Tomograms in Simulated-annealing-based or Bayesian Sequential Simulation Approaches

    DTIC Science & Technology

    2012-05-30

Only fragments of this record are available. The report addresses three-dimensional stochastic estimation of porosity distributions by combining ground-penetrating radar velocity tomograms with crosshole seismic tomography and borehole logging information, using simulated-annealing-based and Bayesian sequential simulation approaches [e.g., Deutsch and Journel, 1998; Chen et al., 2001; Gelman et al., 2003].

  8. Stochastic modelling of evaporation based on copulas

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko

    2015-04-01

Evapotranspiration is an important process in the water cycle that represents a considerable amount of moisture lost through evaporation from the soil surface and transpiration from plants in a watershed. Therefore, an accurate estimate of evapotranspiration rates is necessary, along with precipitation data, for running hydrological models. Often, daily reference evapotranspiration is modelled based on the Penman, Priestley-Taylor or Hargreaves equation. However, each of these models requires extensive input data, such as daily mean temperature, wind speed, relative humidity and solar radiation. Yet, in design studies, such data are unavailable when stochastically generated time series of precipitation are used to force a hydrologic model. In the latter case, an alternative model approach is needed that allows for generating evapotranspiration data that are consistent with the accompanying precipitation data. This contribution presents such an approach, in which the statistical dependence between evapotranspiration, temperature and precipitation is described by three- and four-dimensional vine copulas. Based on a case study of 72 years of evapotranspiration, temperature and precipitation data observed in Uccle, Belgium, it was found that canonical vine copulas (C-Vines) in which bivariate Frank copulas are employed perform very well in preserving the dependencies between variables. While 4-dimensional C-Vine copulas performed best in simulating time series of evapotranspiration, a 3-dimensional C-Vine copula (relating evapotranspiration, daily precipitation depth and temperature) still allows for modelling evapotranspiration, though with larger error statistics.
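
    The building block of such a vine is the bivariate copula. A minimal sketch of sampling from a bivariate Frank copula by inverting its conditional CDF is given below (Python/NumPy/SciPy; the copula parameter and the gamma/normal margins are illustrative choices, not values fitted to the Uccle data).

```python
import numpy as np
from scipy import stats

def sample_frank(n, theta, rng):
    """Draw n pairs (u, v) from a bivariate Frank copula by
    inverting the conditional CDF C(v | u)."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    num = t * (np.exp(-theta) - 1.0)
    den = t + (1.0 - t) * np.exp(-theta * u)
    v = -np.log1p(num / den) / theta
    return u, v

rng = np.random.default_rng(42)
u, v = sample_frank(10_000, theta=5.0, rng=rng)

# map the uniform margins to physical variables (margins are illustrative)
et = stats.gamma(a=4.0, scale=1.0).ppf(u)       # evapotranspiration [mm/d]
temp = stats.norm(loc=15.0, scale=8.0).ppf(v)   # daily mean temperature [C]
print(f"induced rank correlation: {stats.spearmanr(et, temp)[0]:.3f}")
```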

  9. Phase unwrapping as an ill-posed problem: performance comparison between a neural-network-based approach and a stochastic search method

    NASA Astrophysics Data System (ADS)

    Chiaradia, Maria T.; Guerriero, Luciano; Refice, Alberto; Pasquariello, Guido; Satalino, Giuseppe; Stramaglia, Sebastiano

    1998-10-01

2D phase unwrapping, a problem common to signal processing, optics, and interferometric radar topographic applications, consists in retrieving an absolute phase field from principal, noisy measurements. In this paper, we analyze the application of neural networks to this complex mathematical problem, formulating it as a learning-by-examples strategy, by training a multilayer perceptron to associate a proper correction pattern to the principal phase gradient configuration on a local window. In spite of the high dimensionality of this problem, the proposed MLP, trained on examples from simulated phase surfaces, proves able to correctly remove more than half the original number of pointlike inconsistencies on real noisy interferograms. Better efficiencies could be achieved by enlarging the processing window size, so as to exploit a greater amount of information. By pushing this change of perspective further, one passes from a local to a global point of view; problems of this kind are more effectively solved, rather than through learning strategies, by minimization procedures, for which we propose a powerful algorithm based on a stochastic approach.

  10. A 3D geological model for the Ruiz-Tolima Volcanic Massif (Colombia): Assessment of geological uncertainty using a stochastic approach based on Bézier curve design

    NASA Astrophysics Data System (ADS)

    González-Garcia, Javier; Jessell, Mark

    2016-09-01

The Ruiz-Tolima Volcanic Massif (RTVM) is an active volcanic complex in the Northern Andes, and understanding its geological structure is critical for hazard mitigation and guiding future geothermal exploration. However, the sparsity of data available to constrain the interpretation of this volcanic system hinders the application of standard 3D modelling techniques. Furthermore, some features related to the volcanic system are not entirely understood, such as the connectivity between the plutons present in its basement (i.e. Manizales Stock, El Bosque Batholith). We have developed a methodology where two independent working hypotheses were formulated and modelled independently (i.e. a case where both plutons constitute distinct bodies, and an alternative case where they form one single batholith). A Monte Carlo approach was used to characterise the geological uncertainty in each case. Bézier curve design was used to represent geological contacts on input cross sections. Systematic variations in the control points of these curves allow us to generate multiple realisations of geological interfaces, resulting in stochastic models that were grouped into suites used to apply quantitative estimators of uncertainty. This process results in a geological representation based on fuzzy logic and in maps of model uncertainty distribution. The results are consistent with expected regions of high uncertainty near under-constrained geological contacts, while the non-unique nature of the conceptual model indicates that the dominant source of uncertainty in the area is the nature of the batholith structure.
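
    A minimal sketch of the control-point perturbation idea follows (Python/NumPy; the contact geometry, jitter magnitude, and ensemble size are invented for illustration): each Monte Carlo realisation jitters the interior control points of a cubic Bézier contact, and the suite of curves yields a per-location uncertainty envelope of the kind used to map uncertainty.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve (Bernstein form) at parameters t."""
    b = np.array([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])
    return b.T @ ctrl          # shape (len(t), 2)

# nominal control points of one geological contact on a cross section
ctrl0 = np.array([[0.0, 2.0], [3.0, 2.8], [6.0, 1.5], [9.0, 2.2]])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
depths = []
for _ in range(500):                      # Monte Carlo realisations
    ctrl = ctrl0.copy()
    ctrl[1:3, 1] += rng.normal(scale=0.4, size=2)  # jitter interior points
    depths.append(bezier(ctrl, t)[:, 1])
depths = np.stack(depths)

# per-abscissa uncertainty envelope of the contact position
lo, hi = np.percentile(depths, [5, 95], axis=0)
print(f"mean width of the 5-95% envelope: {(hi - lo).mean():.3f}")
```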

  11. Stochastic approach to efficient design of belt conveyor networks

    SciTech Connect

    Sevim, H.

    1985-07-01

Currently, the design of belt conveyor networks is based either on deterministic production assumptions or on simulation models. In this research project, the stochastic process at the coal face is expressed and formulated by a Semi-Markovian technique, and the result is used as input to a computerized heuristic design model. The author previously has used a Semi-Markovian process to analyze longwall and room-and-pillar production operations. Results indicated that a coal flow in the section belt of a room-and-pillar operation would be expected only 20% of the time in a steady-state operation mode. Similarly, longwall face operations indicated a 35 to 40% coal flow under steady-state conditions. In the present study, similar data from several production sections are used to compute the probabilities of different quantities of coal flowing at any given time during a shift on the belt in the submain entries. Depending upon the probabilities of coal flows on belts in sections and submain and main entries, the appropriate haulage units such as belt width, motor horsepower, idlers, etc., and belt speed are selected by a computerized model. After the development of this algorithm, now in progress, results of a case study undertaken at an existing coal mine in the Illinois Coal Basin using this stochastic approach will be compared with those obtained using an existing belt haulage system design approach.

  12. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  13. Stochastic inflation: Quantum phase-space approach

    NASA Astrophysics Data System (ADS)

    Habib, Salman

    1992-09-01

In this paper, a quantum-mechanical phase-space picture is constructed for coarse-grained free quantum fields in an inflationary universe. The appropriate stochastic quantum Liouville equation is derived. Explicit solutions for the phase-space quantum distribution function are found for the cases of power-law and exponential expansions. The expectation values of dynamical variables with respect to these solutions are compared to the corresponding cutoff regularized field-theoretic results (we do not restrict ourselves only to ⟨Φ²⟩). Fair agreement is found provided the coarse-graining scale is kept within certain limits. By focusing on the full phase-space distribution function rather than a reduced distribution, it is shown that the thermodynamic interpretation of the stochastic formalism faces several difficulties (e.g., there is no fluctuation-dissipation theorem). The coarse graining does not guarantee an automatic classical limit, as quantum correlations turn out to be crucial in order to get results consistent with standard quantum field theory. Therefore, the method does not by itself constitute an explanation of the quantum-to-classical transition in the early Universe. In particular, we argue that the stochastic equations do not lead to decoherence.

  14. Stochastic Optimal Control and Linear Programming Approach

    SciTech Connect

    Buckdahn, R.; Goreac, D.; Quincampoix, M.

    2011-04-15

    We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper lies in the fact that this optimal control problem is shown to have the same value as a linear optimization problem stated on some appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub) solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss relation with long-time average problems.

  15. Up-scaling spatial heterogeneity of microbial turnover in soil using a stochastic approach

    NASA Astrophysics Data System (ADS)

    Pagel, Holger; Streck, Thilo

    2017-04-01

Rates of microbial processes in soils show considerable spatial and temporal variability emerging from small-scale microbial-physicochemical interactions. The complexity and variability of microbial processes might be captured as stochastic system behaviour, which provides a way to upscale small-scale dynamics. To test this approach, we modeled pesticide degradation in a soil pedon as a test case. A spatially explicit approach (based on partial differential equations) is compared to a stochastic approach (based on stochastic ordinary differential equations). Scenario simulations for multiple realizations of different spatial distributions of microbes at the mm scale are performed using the spatially explicit model. These simulation results are then used as reference data to which the stochastic model is fitted via approximate Bayesian computation, to identify the parameters controlling the stochastic dynamics of the state variables. The use of multiple different summary statistics allows us to analyze and evaluate limitations of the stochastic model with respect to upscaling. We will present the modeling framework and show first results.
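
    The fitting step can be illustrated under strong simplifications (Python/NumPy; a geometric-noise degradation SDE and two summary statistics stand in for the paper's pesticide model and its richer statistics): a "reference" ensemble plays the role of the spatially explicit simulations, and rejection-style approximate Bayesian computation recovers the noise intensity of the stochastic model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, sigma, n_real, c0=1.0, T=10.0, dt=0.05):
    """Euler-Maruyama ensemble for dC = -k*C dt + sigma*C dW."""
    c = np.full(n_real, c0)
    for _ in range(int(T / dt)):
        c += -k * c * dt + sigma * c * np.sqrt(dt) * rng.normal(size=n_real)
    return c

# 'reference' ensemble standing in for the spatially explicit simulations
ref = simulate(k=0.3, sigma=0.15, n_real=200)
s_ref = np.array([ref.mean(), ref.std()])

# rejection ABC over the noise intensity sigma (decay rate k assumed known)
accepted = []
for _ in range(2000):
    sigma = rng.uniform(0.0, 0.5)                     # prior draw
    sim = simulate(k=0.3, sigma=sigma, n_real=50)
    s = np.array([sim.mean(), sim.std()])
    if np.linalg.norm((s - s_ref) / s_ref) < 0.2:     # acceptance tolerance
        accepted.append(sigma)
print(f"ABC posterior mean sigma: {np.mean(accepted):.3f} "
      f"({len(accepted)} of 2000 accepted)")
```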

  16. Langevin approach for stochastic Hodgkin-Huxley dynamics with discretization of channel open fraction

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2013-12-01

The random opening and closing of ion channels establishes channel noise, which can be approximated and included in stochastic differential equations (the Langevin approach). The Langevin approach is often used to model stochastic ion channel dynamics for systems with a large number of channels. Here, we introduce a discretization procedure for a channel-based Langevin approach to simulate the stochastic channel dynamics with small and intermediate numbers of channels. We show that our Langevin approach with discrete channel open fractions can give a good approximation of the original Markov dynamics even for only 10 potassium channels. We suggest that the better approximation by the discretized Langevin approach originates from the improved representation of events that trigger action potentials.
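
    A simplified single-gate version of the idea is sketched below (Python/NumPy; Hodgkin-Huxley potassium rate functions, a clamped voltage, and N = 10 channels, with the full four-gate channel structure omitted): the open fraction evolves under a Langevin equation and is then discretized to multiples of 1/N, mimicking the discrete channel count.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(V):   # HH potassium activation rate [1/ms] at voltage V [mV]
    return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))

def beta(V):
    return 0.125 * np.exp(-(V + 65.0) / 80.0)

N = 10            # channel count: small, where plain Langevin usually fails
V = -40.0         # clamped membrane voltage for this illustration
dt = 0.01         # time step [ms]
n = 0.3           # initial open fraction
trace = []
for _ in range(50_000):
    a, b = alpha(V), beta(V)
    drift = a * (1.0 - n) - b * n
    diff = np.sqrt(max(a * (1.0 - n) + b * n, 0.0) / N)
    n += drift * dt + diff * np.sqrt(dt) * rng.normal()
    n = min(max(n, 0.0), 1.0)         # keep the fraction in [0, 1]
    trace.append(round(n * N) / N)    # discretize to multiples of 1/N
print(f"mean open fraction: {np.mean(trace):.3f} "
      f"(deterministic n_inf: {alpha(V) / (alpha(V) + beta(V)):.3f})")
```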

  17. Stochastic approach to analyzing the uncertainties and possible changes in the availability of water in the future based on a climate change scenario

    NASA Astrophysics Data System (ADS)

    Oliveira, G. G.; Pedrollo, O. C.; Castro, N. M. R.

    2015-04-01

The objective of this study was to analyze the changes and uncertainties related to water availability in the future (for the purposes of this study, the period between 2011 and 2040 was adopted), using a stochastic approach, taking as reference a climate projection from the climate model Eta CPTEC/HadCM3. The study was applied to the Ijuí river basin in the south of Brazil. The set of methods adopted involved, among others, correcting the climatic variables projected for the future, hydrological simulation using Artificial Neural Networks to define a number of monthly flows and stochastic modeling to generate 1000 hydrological series with equal probability of occurrence. A multiplicative-type stochastic model was developed in which monthly flow is the result of the product of four components: (i) long-term trend component; (ii) cyclic or seasonal component; (iii) time-dependency component; (iv) random component. In general, the results showed a trend toward increased flows. The mean flow for a long period, for instance, presented an alteration from 141.6 m3 s-1 (1961-1990) to 200.3 m3 s-1 (2011-2040). An increment in mean flow and in the monthly standard deviation was also observed between the months of January and October. Between the months of February and June, the percentage of mean monthly flow increase was more marked, surpassing the 100 % index. Considering the confidence intervals in the flow estimates for the future, it can be concluded that there is a tendency to increase the hydrological variability during the period between 2011 and 2040, which indicates the possibility of occurrence of time series with more marked periods of droughts and floods.

  18. Stochastic approach to analyzing the uncertainties and possible changes in the availability of water in the future based on scenarios of climate change

    NASA Astrophysics Data System (ADS)

    Oliveira, G. G.; Pedrollo, O. C.; Castro, N. M. R.

    2015-08-01

    The objective of this study was to analyze the changes and uncertainties related to water availability in the future (for the purposes of this study, the period between 2011 and 2040 was adopted), using a stochastic approach, taking as reference a climate projection from climate model Eta CPTEC/HadCM3. The study was applied to the Ijuí River basin in the south of Brazil. The set of methods adopted involved, among others, correcting the climatic variables projected for the future, hydrological simulation using artificial neural networks (ANNs) to define a number of monthly flows and stochastic modeling to generate 1000 hydrological series with equal probability of occurrence. A multiplicative type stochastic model was developed in which monthly flow is the result of the product of four components: (i) long-term trend component; (ii) cyclic or seasonal component; (iii) time-dependency component; and (iv) random component. In general, the results showed a trend to increased flows. The mean flow for a long period, for instance, presented an alteration from 141.6 m3 s-1 (1961-1990) to 200.3 m3 s-1 (2011-2040). An increment in mean flow and in the monthly standard deviation was also observed between the months of January and October. Between the months of February and June, the percentage of mean monthly flow increase was more marked, surpassing the 100 % index. Considering the confidence intervals in the flow estimates for the future, it can be concluded that there is a tendency to increase the hydrological variability during the period between 2011 and 2040, which indicates the possibility of occurrence of time series with more marked periods of droughts and floods.
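
    A minimal generator following the four-component multiplicative structure described above is sketched below (Python/NumPy; the trend slope, seasonal amplitude, AR(1) coefficient, and lognormal noise level are invented for illustration, with only the 141.6 m3 s-1 base flow taken from the abstract).

```python
import numpy as np

rng = np.random.default_rng(7)
months = np.arange(360)                   # 30 years of monthly flows

# (i) long-term trend and (ii) seasonal cycle (shapes are illustrative)
trend = 1.0 + 0.001 * months
season = 1.0 + 0.35 * np.sin(2.0 * np.pi * months / 12.0)

def one_series():
    # (iii) AR(1) time-dependency component fluctuating around 1.0
    p = np.empty(months.size)
    p[0] = 1.0
    for i in range(1, months.size):
        p[i] = 1.0 + 0.5 * (p[i - 1] - 1.0) + rng.normal(scale=0.1)
    # (iv) multiplicative lognormal random component
    eps = rng.lognormal(mean=0.0, sigma=0.2, size=months.size)
    return 141.6 * trend * season * p * eps     # m3/s, scaled to base flow

ensemble = np.stack([one_series() for _ in range(1000)])
q5, q95 = np.percentile(ensemble.mean(axis=1), [5, 95])
print(f"90% band for the long-term mean flow: {q5:.1f}-{q95:.1f} m3/s")
```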

19. A stochastic bioenergetics model based approach to translating large river flow and temperature into fish population responses: the pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Dey, Rima; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.

    2015-01-01

In managing fish populations, especially at-risk species, realistic mathematical models are needed to help predict population response to potential management actions in the context of environmental conditions and changing climate, while effectively incorporating the stochastic nature of real-world conditions. We provide a key component of such a model for the endangered pallid sturgeon (Scaphirhynchus albus) in the form of an individual-based bioenergetics model influenced not only by temperature but also by flow. This component is based on modification of a known individual-based bioenergetics model through incorporation of: the observed ontogenetic shift in pallid sturgeon diet from macroinvertebrates to fish; the energetic costs of swimming under flowing-water conditions; and stochasticity. We provide an assessment of how differences in environmental conditions could potentially alter pallid sturgeon growth estimates, using observed temperature and velocity from channelized portions of the Lower Missouri River mainstem. We do this using separate relationships between the proportion of maximum consumption and fork length, and swimming cost standard error estimates, for fish captured above and below the Kansas River in the Lower Missouri River. Critical to matching observed growth in the field with predicted growth based on observed environmental conditions was a two-step shift in diet from macroinvertebrates to fish.

  20. Effect of adsorption on solute dispersion: a microscopic stochastic approach.

    PubMed

    Hlushkou, Dzmitry; Gritti, Fabrice; Guiochon, Georges; Seidel-Morgenstern, Andreas; Tallarek, Ulrich

    2014-05-06

    We report on results obtained with a microscopic modeling approach to Taylor-Aris dispersion in a tube coupled with adsorption-desorption processes at its inner surface. The retention factor of an adsorbed solute is constructed by independent adjustment of the adsorption probability and mean adsorption sojourn time. The presented three-dimensional modeling approach can realize any microscopic model of the adsorption kinetics based on a distribution of adsorption sojourn times expressed in analytical or numerical form. We address the impact of retention factor, adsorption probability, and distribution function for adsorption sojourn times on solute dispersion depending on the average flow velocity. The approach is general and validated at all stages (no sorption; sorption with fast interfacial mass transfer; sorption with slow interfacial mass transfer) using available analytical results for transport in Poiseuille flow through simple geometries. Our results demonstrate that the distribution function for adsorption sojourn times is a key parameter affecting dispersion and show that models of advection-diffusion-sorption cannot describe mass transport without specifying microscopic details of the sorption process. In contrast to previous one-dimensional stochastic models, the presented simulation approach can be applied as well to study systems where diffusion is a rate-controlling process for adsorption.

1. A Comparison of stochastic and hybrid-based weather generators

    NASA Astrophysics Data System (ADS)

    Fatehi, Iman; Mirdar Soltani, Shiva; Bárdossy, András

    2017-04-01

Climate change modeling is one of the fundamental bases for further environmental studies such as hydrological modeling, flood forecasting and watershed planning. Global Circulation Models (GCMs) provide possible climate change scenarios, but even when they are run at high resolution (which they usually are not), their results must still be downscaled before being employed for local impact studies. Downscaling approaches are typically categorized into four types: dynamical, weather typing, stochastic weather generators and transfer-function-based approaches. The accuracy of two types of weather generators is evaluated in this study: the Long Ashton Research Station Weather Generator (LARS-WG), a stochastic weather generator, and the Statistical DownScaling Model (SDSM), a hybrid of the transfer-function and stochastic approaches. These weather generators were employed to simulate three daily climate parameters (precipitation, minimum and maximum temperature) between 1990 and 2010 in the Guilan province of Iran. Subsequently, modeling performance was evaluated using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). According to the calculated AIC and BIC values, LARS-WG performed slightly more reliably in simulating daily precipitation and significantly better in simulating minimum and maximum daily temperatures. Are these results sufficient to conclude that stochastic weather generators should be preferred? Indeed, more considerations are required to identify the preferable downscaling approach. Hence, statistical coefficients, i.e. the coefficient of determination (R-squared) and the correlation coefficient, were also employed to evaluate the simulation performance in more detail by investigating the correlation between individual daily simulated data, variances and daily maxima and minima in comparison to the

  2. Computational approaches to stochastic systems in physics and biology

    NASA Astrophysics Data System (ADS)

    Jeraldo Maldonado, Patricio Rodrigo

In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and a dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2, I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples their dynamics with the dynamics of the condensate. In Chapter 3, I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the

  3. Evaluating the role of soil variability on groundwater pollution and recharge at regional scale by integrating a process-based vadose zone model in a stochastic approach

    NASA Astrophysics Data System (ADS)

    Coppola, Antonio; Comegna, Alessandro; Dragonetti, Giovanna; Lamaddalena, Nicola; Zdruli, Pandi

    2013-04-01

modelling approaches have been developed at small space scales. Their extension to the applicative macroscale of the regional model is not a simple task, mainly because of the heterogeneity of vadose zone properties, as well as the non-linearity of hydrological processes. Besides, one of the problems when applying distributed models is that the spatial and temporal scales for data to be used as input in the models vary over a wide range and are not always consistent with the model structure. Under these conditions, a strictly deterministic response to questions about the fate of a pollutant in the soil is impossible. At best, one may answer "this is the average behaviour within this uncertainty band". Consequently, the extension of these equations to account for regional-scale processes requires that the uncertainties of the outputs be taken into account if the pollution vulnerability maps that may be drawn are to be used as agricultural management tools. A map generated without a corresponding map of associated uncertainties has no real utility. The stochastic stream tube approach is frequently used to model water flux and solute transport through the vadose zone at applicative scales. This approach considers the field soil as an ensemble of parallel and statistically independent tubes, assuming only vertical flow. The stream tubes approach is generally used in a probabilistic framework. Each stream tube defines local flow properties that are assumed to vary randomly between the different stream tubes. Thus, the approach allows average water and solute behaviour to be described, along with the associated uncertainty bands. These stream tubes are usually considered to have parameters that are vertically homogeneous. This would be justified by the large difference between the horizontal and vertical extent of the spatial applicative scale. Vertical variability is generally overlooked. Obviously, all the model outputs are conditioned by this assumption. The latter, in turn, is more dictated by
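
    In its simplest form the stream tube idea reduces to an ensemble of independent 1-D columns with randomly drawn properties. The sketch below (Python/NumPy; piston flow with first-order decay and lognormal parameters, all values illustrative) shows how such an ensemble yields an average leached fraction together with an uncertainty band.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tubes = 2000        # independent vertical soil columns (stream tubes)
depth = 2.0           # depth to groundwater [m]

# per-tube random properties: pore-water velocity and degradation rate
v = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n_tubes)   # m/yr
k = rng.lognormal(mean=np.log(0.4), sigma=0.3, size=n_tubes)   # 1/yr

# piston flow: travel time to groundwater, first-order decay en route
travel_time = depth / v
leached = np.exp(-k * travel_time)     # mass fraction reaching groundwater

print(f"ensemble mean leached fraction: {leached.mean():.3f}")
print(f"5-95% uncertainty band: {np.percentile(leached, 5):.3f}"
      f"-{np.percentile(leached, 95):.3f}")
```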

  4. A spectral approach for damage quantification in stochastic dynamic systems

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Santos, J. M. C. Dos

    2017-05-01

Intrinsic to all real structures, parameter uncertainty can be found in material properties and geometries. Many structural parameters, such as elastic modulus, Poisson's ratio, thickness, density, etc., are spatially distributed by nature. The Karhunen-Loève expansion is a method used to model a random field through a spectral decomposition. Since many structural parameters cannot be modelled by a Gaussian distribution, a memoryless nonlinear transformation is used to translate a Gaussian random field into a non-Gaussian one. Thus, stochastic methods have been used to include these uncertainties in the structural model. The Spectral Element Method (SEM) is a wave-based numerical approach used to model structures; it has also been developed to express parameters as spatially correlated random fields in its formulation. In this paper, the problem of structural damage detection in the presence of a spatially distributed random parameter is addressed. Explicit equations to localize and assess damage are proposed based on the SEM formulation. Numerical examples of an axially vibrating undamaged and damaged structure with distributed parameters are analysed.
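
    A minimal discrete Karhunen-Loève construction is sketched below (Python/NumPy; a 1-D exponential covariance with an illustrative correlation length): the covariance matrix is eigen-decomposed, truncated at 95% of the variance, and a memoryless exponential map turns the Gaussian realization into a non-Gaussian (lognormal) field, as the abstract describes.

```python
import numpy as np

# grid and exponential covariance of the underlying Gaussian field
x = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # correlation length 0.2

# discrete Karhunen-Loeve expansion = eigen-decomposition of C
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]              # descending eigenvalues
m = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95)) + 1
print(f"{m} KL modes capture 95% of the variance")

# one realization from m modes, then a memoryless transformation
rng = np.random.default_rng(0)
gauss_field = vecs[:, :m] @ (np.sqrt(vals[:m]) * rng.normal(size=m))
lognormal_field = np.exp(gauss_field)   # e.g. a strictly positive modulus
```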

  5. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    SciTech Connect

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M; Nutaro, James J; Kuruganti, Teja

    2015-01-01

The key goal in energy-efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with the disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.

  6. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed into estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed into a Bayesian network structure learning problem. • Examples are given for flows in random media.

  7. Implications of a stochastic approach to air-quality regulations

    SciTech Connect

    Witten, A.J.; Kornegay, F.C.; Hunsaker, D.B. Jr.; Long, E.C. Jr.; Sharp, R.D.; Walsh, P.J.; Zeighami, E.A.; Gordon, J.S.; Lin, W.L.

    1982-09-01

This study explores the viability of a stochastic approach to air quality regulations. The stochastic approach considered here is one which incorporates the variability which exists in sulfur dioxide (SO2) emissions from coal-fired power plants. Emission variability arises from a combination of many factors including variability in the composition of as-received coal such as sulfur content, moisture content, ash content, and heating value, as well as variability which is introduced in power plant operations. The stochastic approach as conceived in this study addresses variability by taking the SO2 emission rate to be a random variable with specified statistics. Given the statistical description of the emission rate and known meteorological conditions, it is possible to predict the probability of a facility exceeding a specified emission limit or violating an established air quality standard. This study also investigates the implications of accounting for emissions variability by allowing compliance to be interpreted as an allowable probability of occurrence of given events. For example, compliance with an emission limit could be defined as the probability of exceeding a specified emission value, such as 1.2 lbs SO2/MMBtu, being less than 1%. In contrast, compliance is currently taken to mean that this limit shall never be exceeded, i.e., no exceedance probability is allowed. The focus of this study is on the economic benefits offered to facilities through the greater flexibility of the stochastic approach as compared with possible changes in air quality and health effects which could result.
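
    The compliance test described here reduces to an exceedance probability for an assumed emission-rate distribution. A minimal sketch follows (Python/SciPy; the lognormal mean and coefficient of variation are invented, only the 1.2 lbs SO2/MMBtu limit and the 1% criterion come from the abstract).

```python
import numpy as np
from scipy import stats

# emission rate as a random variable (lognormal; moments are illustrative)
mean_rate, cv = 0.9, 0.25       # lbs SO2/MMBtu and coefficient of variation
sigma = np.sqrt(np.log(1.0 + cv**2))
mu = np.log(mean_rate) - 0.5 * sigma**2
emission = stats.lognorm(s=sigma, scale=np.exp(mu))

# stochastic compliance test: is P(emission > limit) below 1%?
limit = 1.2
p_exceed = emission.sf(limit)
print(f"P(emission > {limit}) = {p_exceed:.4f} ->",
      "compliant" if p_exceed < 0.01 else "not compliant")
```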

  8. A Spatial Clustering Approach for Stochastic Fracture Network Modelling

    NASA Astrophysics Data System (ADS)

    Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.

    2014-07-01

Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach.
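
    A stripped-down version of the annealing step is sketched below (Python/NumPy; one 2-D line-segment fracture fitted to a synthetic point cloud, whereas the paper handles many fractures, ellipses in 3-D, and richer modification moves): the objective is the sum of point-to-segment distances, as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic 'seismic point cloud' scattered around one true fracture
p0, p1 = np.array([1.0, 1.0]), np.array([8.0, 6.0])
s = rng.uniform(size=(300, 1))
cloud = p0 + s * (p1 - p0) + rng.normal(scale=0.3, size=(300, 2))

def cost(ends):
    """Sum of distances from all points to the candidate segment."""
    a, b = ends[:2], ends[2:]
    ab = b - a
    t = np.clip((cloud - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(cloud - (a + t[:, None] * ab), axis=1).sum()

# simulated annealing over the two segment end points
ends = np.array([0.0, 0.0, 10.0, 10.0])        # initial guess
c, temp = cost(ends), 1.0
for _ in range(20_000):
    cand = ends + rng.normal(scale=0.2, size=4)
    c_cand = cost(cand)
    if c_cand < c or rng.uniform() < np.exp((c - c_cand) / temp):
        ends, c = cand, c_cand                 # accept the move
    temp *= 0.9997                             # geometric cooling
print("fitted end points:", ends.round(2), "| total distance:", round(c, 1))
```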

  9. Stochastic model updating utilizing Bayesian approach and Gaussian process model

    NASA Astrophysics Data System (ADS)

    Wan, Hua-Ping; Ren, Wei-Xin

    2016-03-01

Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problems solved with optimization usually bring about the issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two ways. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
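
    The surrogate-plus-MCMC pattern can be shown in a few lines (Python/NumPy/scikit-learn; a one-parameter toy "model", a flat prior, and plain random-walk Metropolis in place of DRAM, all stand-ins for the paper's FEM, GPM, and adaptive sampler).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)

def slow_model(theta):
    """Stand-in for an expensive finite element run (monotone in theta)."""
    return theta**3 + theta

# train the fast surrogate on a handful of expensive runs
theta_train = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
gp = GaussianProcessRegressor().fit(theta_train,
                                    slow_model(theta_train).ravel())

y_obs = slow_model(0.7) + 0.05        # one noisy 'measurement'
noise_sd = 0.1

def log_post(theta):                  # flat prior on [-2, 2]
    if not -2.0 <= theta <= 2.0:
        return -np.inf
    resid = y_obs - gp.predict(np.array([[theta]]))[0]
    return -0.5 * (resid / noise_sd) ** 2

# random-walk Metropolis on the surrogate posterior (DRAM omitted)
chain, theta, lp = [], 0.0, log_post(0.0)
for _ in range(20_000):
    cand = theta + rng.normal(scale=0.3)
    lp_cand = log_post(cand)
    if np.log(rng.uniform()) < lp_cand - lp:
        theta, lp = cand, lp_cand
    chain.append(theta)
print(f"posterior mean of theta: {np.mean(chain[2000:]):.3f} (true value 0.7)")
```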

  10. A new stochastic systems approach to structural integrity

    NASA Technical Reports Server (NTRS)

    Provan, James W.; Farhangdoost, Khalil

    1994-01-01

    This paper develops improved stochastic models for the description of a large variety of fatigue crack growth phenomena that occur in components of considerable importance to the functionality and reliability of complex engineering structures. In essence, the models are based on the McGill-Markov and Closure-Lognormal stochastic processes. Not only do these models have the capability of predicting the statistical dispersion of crack growth rates, they also, by incorporating the concept of crack closure, have the capability of transferring stochastic crack growth properties measured under ideal laboratory conditions to situations of industrial significance, such as those occurring under adverse loading and/or environmental conditions. The primary data required in order to be in a position to estimate the pertinent parameters of these stochastic models are obtained from a statistically significant number of replicate tests. In this paper, both the theory and the experimental technique are illustrated using a Ti-6Al-4V alloy. Finally, important structural integrity, reliability, availability and maintainability concepts are developed and illustrated.

  11. Stochastic physical ecohydrologic-based model for estimating irrigation requirement

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods that can take this kind of uncertainty into account are of primary importance for the management of ecosystems, especially agricultural ecosystems. One well-known problem in these ecosystems is crop water requirement estimation under climatic uncertainty. Both deterministic physically-based methods and stochastic time series modeling have been utilized in the literature. As in other fields of the hydroclimatic sciences, there is ample scope in irrigation process modeling for developing approaches that integrate the physics of the process with its statistical aspects. This study derives closed-form expressions for the probability density function (p.d.f.) of the irrigation water requirement using a stochastic physically-based model, which considers important aspects of plant, soil, atmosphere and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics at the root zone, is employed as a basis for deriving the expressions, considering the temporal stochasticity of rainfall. Due to the distinct nature of the stochastic processes involved in micro- and traditional irrigation applications, two different methodologies have been used. Micro-irrigation application has been modeled through a dichotomic process. The Chapman-Kolmogorov equation of the time integral of the dichotomic process for the transient condition has been solved to derive analytical expressions for the probability density function of the seasonal irrigation requirement. For traditional irrigation, irrigation application during the growing season has been modeled using a marked point process. Using renewal theory, the probability mass function of the seasonal irrigation requirement, which is a discrete-value quantity, has been analytically derived. The methodology deals with estimation of statistical properties of the total water requirement in a growing season that

  12. Langevin approach with rescaled noise for stochastic channel dynamics in Hodgkin-Huxley neurons

    NASA Astrophysics Data System (ADS)

    Huang, Yan-Dong; Xiang, Li; Shuai, Jian-Wei

    2015-12-01

The Langevin approach has been applied to model the random open and closing dynamics of ion channels. It has long been known that the gate-based Langevin approach is not sufficiently accurate to reproduce the statistics of stochastic channel dynamics in Hodgkin-Huxley neurons. Here, we introduce a modified gate-based Langevin approach with rescaled noise strength to simulate stochastic channel dynamics. The rescaled independent gate and identical gate Langevin approaches improve the statistical results for the mean membrane voltage, inter-spike interval, and spike amplitude.

  13. An approach to the residence time distribution for stochastic multi-compartment models.

    PubMed

    Yu, Jihnhee; Wehrly, Thomas E

    2004-10-01

    Stochastic compartmental models are widely used in modeling processes such as drug kinetics in biological systems. This paper considers the distribution of the residence times for stochastic multi-compartment models, especially systems with non-exponential lifetime distributions. The paper first derives the moment generating function of the bivariate residence time distribution for the two-compartment model with general lifetimes and approximates the density of the residence time using the saddlepoint approximation. Then, it extends the distributional approach to the residence time for multi-compartment semi-Markov models combining the cofactor rule for a single destination and the analytic approach to the two-compartment model. This approach provides a complete specification of the residence time distribution based on the moment generating function and thus facilitates an easier calculation of high-order moments than the approach using the coefficient matrix. Applications to drug kinetics demonstrate the simplicity and usefulness of this approach.

  14. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between these two approaches are their impact on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor would fall into place in each.

  15. Stochasticity in physiologically based kinetics models: implications for cancer risk assessment.

    PubMed

    Péry, Alexandre Roger Raymond; Bois, Frederic Yves

    2009-08-01

In the case of low-dose exposure to a substance, its concentration in cells is likely to be stochastic. Assessing the consequences of this stochasticity in toxicological risk assessment requires the coupling of macroscopic dynamics models describing whole-body kinetics with microscopic tools designed to simulate stochasticity. In this article, we propose an approach to approximate the stochastic cell concentration of butadiene in the cells of diverse organs. We adapted the dynamics equations of a physiologically based pharmacokinetic (PBPK) model and used a stochastic simulator for the system of equations that we derived. We then coupled kinetics simulations with a deterministic hockey stick model of carcinogenicity. Stochasticity induced substantial modifications of the dose-response curve compared with the deterministic situation. In particular, there was nonlinearity in the response and the stochastic apparent threshold was lower than the deterministic one. The approach that we developed could easily be extended to other biological studies to assess the influence of stochasticity at macroscopic scale for compound dynamics at the cell level.
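
    At low copy numbers the kinetics are naturally simulated event by event. A minimal Gillespie-type sketch is given below (Python/NumPy; a single-cell first-order elimination reaction with an invented rate constant and initial molecule count, standing in for the full butadiene PBPK equations).

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_elimination(n0, k_el, t_end):
    """Exact stochastic simulation of first-order elimination X -> 0,
    appropriate when only a few molecules are present in a cell."""
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0 and t < t_end:
        t += rng.exponential(1.0 / (k_el * n))  # waiting time to next event
        n -= 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

t, n = gillespie_elimination(n0=20, k_el=0.5, t_end=50.0)
t_half = t[np.argmax(n <= 10)]          # first time the count halves
print(f"time to 50% elimination: {t_half:.2f} "
      f"(deterministic value: {np.log(2) / 0.5:.2f})")
```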

  16. Neural Network-Based Solutions for Stochastic Optimal Control Using Path Integrals.

    PubMed

    Rajagopal, Karthikeyan; Balakrishnan, Sivasubramanya Nadar; Busemeyer, Jerome R

    2017-03-01

In this paper, an offline approximate dynamic programming approach using neural networks is proposed for solving a class of finite horizon stochastic optimal control problems. There are two approaches available in the literature, one based on the stochastic maximum principle (SMP) formalism and the other based on solving the stochastic Hamilton-Jacobi-Bellman (HJB) equation. However, in the presence of noise, the SMP formalism becomes complex and results in having to solve a couple of backward stochastic differential equations. Hence, current solution methodologies typically ignore the noise effect. On the other hand, the inclusion of noise in the HJB framework is very straightforward. Furthermore, the stochastic HJB equation of a control-affine nonlinear stochastic system with a quadratic control cost function and an arbitrary state cost function can be formulated as a path integral (PI) problem. However, due to the curse of dimensionality, it might not be possible to utilize the PI formulation for obtaining comprehensive solutions over the entire operating domain. A neural network structure called the adaptive critic design paradigm is used to effectively handle this difficulty. In this paper, a novel adaptive critic approach using the PI formulation is proposed for solving stochastic optimal control problems. The potential of the algorithm is demonstrated through simulation results from a couple of benchmark problems.

  17. Barkhausen discontinuities and hysteresis of ferromagnetics: New stochastic approach

    SciTech Connect

    Vengrinovich, Valeriy

    2014-02-18

The magnetization of a ferromagnetic material is considered as a periodically inhomogeneous Markov process. The theory assumes both statistically independent and correlated Barkhausen discontinuities. The model, based on the chain evolution-type process theory, assumes that the domain structure of a ferromagnet passes successively through the stages of linear growth, exponential acceleration, and domain annihilation to zero density at magnetic saturation. The solution of the stochastic Kolmogorov differential equation enables calculation of the hysteresis loop.

  18. Nonlinear Aeroelastic Analysis of UAVs: Deterministic and Stochastic Approaches

    NASA Astrophysics Data System (ADS)

    Sukut, Thomas Woodrow

Aeroelastic aspects of unmanned aerial vehicles (UAVs) are analyzed by treatment of a typical section containing geometrical nonlinearities. Equations of motion are derived and numerical integration of these equations subject to quasi-steady aerodynamic forcing is performed. Model properties are tailored to a high-altitude long-endurance unmanned aircraft. A harmonic balance approximation is employed based on the steady-state oscillatory response of the aerodynamic forcing. Comparisons are made between time integration results and the harmonic balance approximation. Close agreement between forcing and displacement oscillatory frequencies is found, although the amplitudes differ by a considerable margin. Additionally, stochastic forcing effects are examined. Turbulent flow velocities generated from the von Karman spectrum are applied to the same nonlinear structural model. Similar qualitative behavior is found between the quasi-steady and stochastic forcing models, illustrating the importance of considering the non-steady nature of atmospheric turbulence when operating near the critical flutter velocity.

  19. GIS-based approach for optimal siting and sizing of renewables considering techno-environmental constraints and the stochastic nature of meteorological inputs

    NASA Astrophysics Data System (ADS)

    Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2016-04-01

Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (wind speed and solar radiation, respectively), which is an essential component both for planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and for management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to the road and electricity grid networks), environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system extended at the regional scale.

  20. Removal of muscle artifact from EEG data: comparison between stochastic (ICA and CCA) and deterministic (EMD and wavelet-based) approaches

    NASA Astrophysics Data System (ADS)

    Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle

    2012-12-01

Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnosis tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches to blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the better denoising result for less noisy data. These results suggest that

  1. Stochastic pumping of heat: approaching the Carnot efficiency.

    PubMed

    Segal, Dvira

    2008-12-31

    Random noise can generate a unidirectional heat current across asymmetric nano-objects in the absence (or against) a temperature gradient. We present a minimal model for a molecular-level stochastic heat pump that may operate arbitrarily close to the Carnot efficiency. The model consists of a fluctuating molecular unit coupled to two solids characterized by distinct phonon spectral properties. Heat pumping persists for a broad range of system and bath parameters. Furthermore, by filtering the reservoirs' phonons the pump efficiency can approach the Carnot limit.

  2. Analytic approaches to stochastic gene expression in multicellular systems.

    PubMed

    Boettiger, Alistair Nicol

    2013-12-17

    Deterministic thermodynamic models of the complex systems that control gene expression in metazoa are helping researchers identify fundamental themes in the regulation of transcription. However, quantitative single-cell studies are increasingly identifying regulatory mechanisms that control variability in expression. Such behaviors cannot be captured by deterministic models and are poorly suited to contemporary stochastic approaches that rely on continuum approximations, such as Langevin methods. Fortunately, theoretical advances in the modeling of transcription have yielded some general results that can be readily applied to systems being explored only through a deterministic approach. Here, I review some of the recent experimental evidence for the importance of genetically regulating stochastic effects during embryonic development and discuss key results from Markov theory that can be used to model this regulation. I then discuss several pairs of regulatory mechanisms recently investigated through a Markov approach. In each case, a deterministic treatment predicts no difference between the mechanisms, but the statistical treatment reveals the potential for substantially different distributions of transcriptional activity. In this light, features of gene regulation that seemed needlessly complex evolutionary baggage may be appreciated for their key contributions to reliability and precision of gene expression.

  3. A simple approach for stochastic generation of spatial rainfall patterns

    NASA Astrophysics Data System (ADS)

    Tarpanelli, A.; Franchini, M.; Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.

    2012-11-01

    Rainfall scenarios are of considerable interest for design flood and flood risk analysis. To this end, the stochastic generation of continuous rainfall sequences is often coupled with continuous hydrological modelling. In this context, the spatial and the temporal rainfall variability represents a significant issue, especially for basins in which the rainfall field cannot be approximated through the use of a single station. Therefore, methodologies for generating spatially and temporally correlated rainfall are welcome. An example of such a methodology is the well-established Spatial-Temporal Neyman-Scott Rectangular Pulse (STNSRP) model, a modification of the single-site Neyman-Scott Rectangular Pulse (NSRP) approach, designed to incorporate specific features to reproduce the rainfall spatial cross-correlation. In order to provide a simple alternative to the STNSRP, a new method of generating synthetic rainfall time series with pre-set spatial-temporal correlation is proposed herein. This approach relies on the single-site NSRP model, which is used to generate synthetic hourly independent rainfall time series at each rain gauge station with the required temporal autocorrelation (and several other appropriately selected statistics). The rank correlation method of Iman and Conover (IC) is then applied to these synthetic rainfall time series in order to introduce the same spatial cross-correlation that exists between the observed time series. This combination of the NSRP model with the IC method permits the reproduction of the observed spatial-temporal variability of a rainfall field. In order to verify the proposed procedure, four sub-basins of the Upper Tiber River basin, whose areas range from 165 km² to 2040 km², are investigated. Results show that the procedure is able to preserve both the rainfall temporal autocorrelation at a single site and the rainfall spatial cross-correlation at the basin scale, and its performance is comparable with that of the
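
    The Iman-Conover reordering step at the core of this method can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes the independent NSRP series are the columns of a NumPy array and that a positive-definite target correlation matrix has been estimated from the observed gauges; all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def iman_conover(X, target_corr, rng=None):
    """Reorder each column of X so that the rank correlation between columns
    approximates target_corr, leaving every marginal distribution untouched.

    X           : (n, k) array of independent synthetic series (e.g. NSRP
                  output, one column per rain gauge).
    target_corr : (k, k) positive-definite target correlation matrix.
    """
    rng = np.random.default_rng(rng)
    n, k = X.shape
    # Van der Waerden scores: an approximately normal reference sample.
    scores = norm.ppf(np.arange(1, n + 1) / (n + 1))
    M = np.column_stack([rng.permutation(scores) for _ in range(k)])
    # Strip the accidental correlation of M, then impose the target one.
    C = np.linalg.cholesky(np.corrcoef(M, rowvar=False))
    T = np.linalg.cholesky(target_corr)
    M_star = M @ np.linalg.inv(C).T @ T.T
    # Rearrange each column of X to follow the ranks of the matching M_star column.
    Y = np.empty_like(X)
    for j in range(k):
        ranks = np.argsort(np.argsort(M_star[:, j]))
        Y[:, j] = np.sort(X[:, j])[ranks]
    return Y
```

    Because only the order of values within each column changes, the single-site statistics reproduced by the NSRP model are preserved exactly, while the cross-correlation between gauges approaches the target.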

  4. Approaching complexity by stochastic methods: From biological systems to turbulence

    NASA Astrophysics Data System (ADS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-09-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
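
    The first of these reconstruction steps can be illustrated with a short sketch. Under the assumption of a one-dimensional, stationary series sampled at a small time step, the drift and diffusion coefficients of a Langevin model can be estimated from conditional moments of the increments (a finite-dt Kramers-Moyal estimate); this is a generic textbook version, not the authors' code, and the binning choices are illustrative.

```python
import numpy as np

def kramers_moyal_1d(x, dt, n_bins=50, min_count=10):
    """Estimate the drift D1(x) and diffusion D2(x) of a Langevin model
    dx = D1(x) dt + sqrt(2 D2(x)) dW from a time series, using the
    finite-dt conditional moments of the increments."""
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(xc, edges) - 1
    D1 = np.full(n_bins, np.nan)
    D2 = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = idx == b
        if m.sum() >= min_count:           # skip poorly populated bins
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2.0 * dt)
    return centers, D1, D2

# For Ornstein-Uhlenbeck data the estimates should recover a linear drift
# D1(x) = -x and a constant diffusion D2(x).
```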

  5. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of the reigns of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  6. A simple approach for stochastic generation of spatial rainfall patterns

    NASA Astrophysics Data System (ADS)

    Tarpanelli, Angelica; Franchini, Marco; Camici, Stefania; Brocca, Luca; Melone, Florisa; Moramarco, Tommaso

    2010-05-01

    The high floods that have occurred in recent years in many regions of the world have increased the interest of local, national and international authorities in flood risk assessment. In this context, the estimation of the design flood to be adopted represents a crucial factor, mainly for ungauged or poorly gauged catchments where sufficiently long discharge time series are missing. Due to the wider availability of rainfall data, rainfall-runoff models represent a possible tool to reduce the relevant uncertainty involved in the flood frequency analysis. Recently, new methodologies based on the stochastic generation of rainfall and temperature data have been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of discharge and, hence, the flood frequency distribution at a given site. As far as the rainfall generation is concerned, for catchments of limited size, a single-site model, such as the Neyman-Scott Rectangular Pulses (NSRP) model, can be applied. It is characterized by a flexible structure in which the model parameters are broadly related to the underlying physical features observed in the rainfall field, and the statistical properties of rainfall time series over a range of time scales are preserved. However, when larger catchments are considered, an extension into two-dimensional space is required. This issue can be addressed by using the Spatial-Temporal Neyman-Scott Rectangular Pulses (STNSRP) model which, however, is not easy to apply and requires a high computational effort. Therefore, simple techniques to obtain a spatial rainfall pattern starting from the simpler single-site NSRP are welcome. In this study, in order to account for the spatial correlation that is needed when spatial rainfall patterns are to be generated, the practical rank correlation method proposed by Iman and Conover (IC) was applied. The method is able to introduce a desired level of correlation

  7. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen-Loève-based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as the Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch in the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
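
    The Karhunen-Loève decomposition at the heart of this technique is easy to sketch in one dimension. The following toy example, which is not the authors' code, samples a zero-mean log-conductivity field from a truncated eigendecomposition of an assumed exponential covariance; the grid, variance and correlation length are illustrative.

```python
import numpy as np

def kl_sample(grid, variance, corr_len, n_modes, rng):
    """Sample a zero-mean ln K field on `grid` from a truncated Karhunen-Loeve
    expansion of the exponential covariance
    C(x, y) = variance * exp(-|x - y| / corr_len)."""
    C = variance * np.exp(-np.abs(grid[:, None] - grid[None, :]) / corr_len)
    w, v = np.linalg.eigh(C)                            # ascending eigenvalues
    w, v = w[::-1][:n_modes], v[:, ::-1][:, :n_modes]   # keep the leading modes
    xi = rng.standard_normal(n_modes)                   # independent N(0,1) coefficients
    return v @ (np.sqrt(np.maximum(w, 0.0)) * xi)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
lnK = kl_sample(x, variance=0.5, corr_len=2.0, n_modes=20, rng=rng)
```

    In the moment-equation setting the random coefficients xi become the expansion variables of the perturbation series; here they are simply sampled to generate one realization of the field.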

  8. Fokker-Planck approach to stochastic delay differential equations

    NASA Astrophysics Data System (ADS)

    Guillouzic, Steve

    2001-10-01

    Models written in terms of stochastic delay differential equations (SDDE's) have recently appeared in a number of fields, such as physiology, optics, and climatology. Unfortunately, the development of a Fokker-Planck approach for these equations is being hampered by their non-Markovian nature. In this thesis, an exact Fokker-Planck equation (FPE) is formulated for univariate SDDE's involving Gaussian white noise. Although this FPE is not self-sufficient, it is found to be helpful in at least two different contexts: with a short delay approximation and under an appropriate separation of time scales. In the short delay approximation, a Taylor expansion is applied to an SDDE with nondelayed diffusion and yields a nondelayed stochastic differential equation. The aforementioned FPE then allows the derivation of an alternate and complementary approximation of the original SDDE. This method is illustrated with linear and logistic SDDE's. Under the separation of time scales assumption, the FPE of a bistable system is reduced to a form that is uniquely determined by the steady-state probability density when the diffusion term of the SDDE is nondelayed. In the context of an overdamped particle with delayed coupling to a symmetrical and stochastically driven potential, the resulting FPE is used with standard techniques to express the transition rate between wells in terms of the noise amplitude and of the steady-state probability density. The same is also accomplished for the mean first passage time from one point to another. This whole approach is then applied to the case of a quartic potential, for which all realisations eventually stabilise on an oscillatory trajectory with an ever-increasing amplitude. Although this latter phenomenon prevents the existence of a steady-state limit, a pseudo-steady-state probability density can be defined and used instead of the non-existent steady-state one when the transition rate to these unbounded oscillatory trajectories is

  9. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out by tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations performed showed several variations in the stress and strain at failure and in the failure modes of the yarn.

  10. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-01-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out by tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations performed showed several variations in the stress and strain at failure and in the failure modes of the yarn.

  11. Stochastic control approaches for sensor management in search and exploitation

    NASA Astrophysics Data System (ADS)

    Hitchings, Darin Chester

    new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of an RH controller for sensor allocation in the presence of moving objects. We also consider an adaptive search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve it. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor who partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on the quality of supervisory control.

  12. Majorana approach to the stochastic theory of line shapes

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Coleman, Piers

    2016-08-01

    Motivated by recent Mössbauer experiments on strongly correlated mixed-valence systems, we revisit the Kubo-Anderson stochastic theory of spectral line shapes. Using a Majorana representation for the nuclear spin we demonstrate how to recast the classic line-shape theory in a field-theoretic and diagrammatic language. We show that the leading contribution to the self-energy can reproduce most of the observed line-shape features, including splitting and line-shape narrowing, while the vertex and the self-consistency corrections can be systematically included in the calculation. This approach permits us to predict the line shape produced by an arbitrary bulk charge fluctuation spectrum, providing a model-independent way to extract the local charge fluctuation spectrum of the surrounding medium. We also derive an inverse formula to extract the charge fluctuation from the measured line shape.

  13. A Model of Bone Remodelling Based on Stochastic Resonance

    NASA Astrophysics Data System (ADS)

    Rusconi, M.; Zaikin, A.; Marwan, N.; Kurths, J.

    2008-06-01

    One of the most crucial medical challenges for long-term space flights is the prevention of the bone loss affecting astronauts and its dramatic consequences on their return to a gravitational field. Recently, a new noise-induced phenomenon in bone formation has been reported experimentally [1]. With this contribution we propose a model for these findings based on Stochastic Resonance [2]. Our simulations suggest new countermeasures for bone degeneration during long space flights using the effect of Stochastic Resonance.
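
    The underlying mechanism can be demonstrated with a generic toy system rather than the authors' bone-remodelling model: an overdamped particle in a double-well potential driven by a weak periodic signal plus noise. The response at the forcing frequency peaks at an intermediate noise intensity, which is the signature of stochastic resonance; all parameters below are illustrative.

```python
import numpy as np

def response_amplitude(D, A=0.1, omega=0.05, dt=0.01, n_steps=50_000, seed=1):
    """Euler-Maruyama for dx = (x - x^3 + A cos(omega t)) dt + sqrt(2D) dW.
    Returns the spectral amplitude of x at the forcing frequency."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    noise = rng.standard_normal(n_steps - 1) * np.sqrt(2.0 * D * dt)
    x = np.empty(n_steps)
    x[0] = 1.0
    for i in range(n_steps - 1):
        drift = x[i] - x[i] ** 3 + A * np.cos(omega * t[i])
        x[i + 1] = x[i] + drift * dt + noise[i]
    return 2.0 / n_steps * np.abs(np.sum(x * np.exp(-1j * omega * t)))

# Scanning D (e.g. 0.05 ... 1.0) yields a response that rises, peaks at an
# intermediate noise level and falls again - the stochastic resonance curve.
```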

  14. A microprocessor-based multichannel subsensory stochastic resonance electrical stimulator.

    PubMed

    Chang, Gwo-Ching

    2013-01-01

    Stochastic resonance electrical stimulation is a novel intervention which provides potential benefits for improving postural control ability in the elderly, those with diabetic neuropathy, and stroke patients. In this paper, a microprocessor-based subsensory white-noise electrical stimulator for applications of stochastic resonance stimulation is developed. The proposed stimulator provides four independent programmable stimulation channels with constant-current output, possesses a linear voltage-to-current relationship, and has two stimulation modes: pulse amplitude modulation and pulse width modulation.

  15. The meso-structured magnetic atmosphere. A stochastic polarized radiative transfer approach

    NASA Astrophysics Data System (ADS)

    Carroll, T. A.; Kopf, M.

    2007-06-01

    We present a general radiative transfer model which allows the Zeeman diagnostics of complex and unresolved solar magnetic fields. Present modeling techniques still rely to a large extent on a priori assumptions about the geometry of the underlying magnetic field. In an effort to obtain a more flexible and unbiased approach, we pursue a rigorous statistical description of the underlying atmosphere. Based on a Markov random field model, the atmospheric structures are characterized in terms of probability densities and spatial correlations. This approach allows us to derive a stochastic transport equation for polarized light valid in a regime with an arbitrarily fluctuating magnetic field on finite scales. One of the key ingredients of the derived stochastic transfer equation is the correlation length, which provides an additional degree of freedom to the transport equation and can be used as a diagnostic parameter to estimate the characteristic length scale of the underlying magnetic field. It is shown that the stochastic transfer equation represents a natural extension of (polarized) line formation under the micro- and macroturbulent assumptions and contains both approaches as limiting cases. In particular, we show how asymmetric Stokes profiles develop in an inhomogeneous atmosphere and that the correlation length directly controls the degree of asymmetry and net circular polarization (NCP). In a number of simple numerical model calculations we demonstrate the importance of a finite correlation length for the polarized line formation and its impact on the resulting Stokes line profiles. Appendices are only available in electronic form at http://www.aanda.org

  16. ENISI SDE: A New Web-Based Tool for Modeling Stochastic Processes.

    PubMed

    Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep

    2015-01-01

    Modeling and simulation approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool based on stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) discussion of SDEs as a generic approach for stochastic modeling in computational biology; (2) development of ENISI SDE, a web-based, user-friendly SDE modeling tool that closely resembles regular ODE-based modeling; (3) application of the ENISI SDE modeling tool to a use case studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differentiation ODE model has been published [8] and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model and shows the effectiveness of SDEs as a stochastic modeling approach in biology in general and immunology in particular, as well as the power of ENISI SDE.
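
    The generic SDE workflow that such a tool automates can be sketched in a few lines. The following minimal Euler-Maruyama integrator illustrates the approach, and is not ENISI SDE's implementation; the logistic-growth drift, the square-root noise term and all parameters are assumptions chosen for the example.

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, dt, seed=0):
    """Integrate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme,
    clipping at zero so the state stays interpretable as a population."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)
        x[i + 1] = max(x[i] + f(x[i]) * dt + g(x[i]) * dw, 0.0)
    return x

# Logistic growth with demographic-style noise (all parameters illustrative):
traj = euler_maruyama(f=lambda x: 0.5 * x * (1.0 - x / 100.0),
                      g=lambda x: np.sqrt(max(x, 0.0)),
                      x0=10.0, t_end=50.0, dt=0.01)
```

    Repeated runs with different seeds produce an ensemble of trajectories whose spread captures the cell-to-cell heterogeneity that a single deterministic ODE solution cannot.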

  17. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models

    NASA Astrophysics Data System (ADS)

    Kemper, A.; Gendiar, A.; Nishino, T.; Schadschneider, A.; Zittartz, J.

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps, more than 10^5, shows a considerable improvement over the old stochastic TMRG algorithm.

  18. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    SciTech Connect

    Samin, Adib J.

    2016-05-15

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
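
    The random-walk ingredient of such a model can be illustrated with a minimal sketch: walkers diffuse on a 1D lattice toward an electrode at the origin, where they react with some probability per encounter (a crude stand-in for redox kinetics), and the count of reaction events per time step plays the role of the current. This is a generic illustration, not the author's model; adsorption and electrode-morphology effects are omitted and all parameters are arbitrary.

```python
import numpy as np

def random_walk_current(n_walkers=20_000, n_steps=2_000, p_react=0.3, seed=0):
    """Walkers hop +/-1 on a 1D lattice; the electrode sits at site 0.
    A walker at the electrode reacts with probability p_react per step;
    the number of reaction events per step mimics a current trace."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(1, 200, size=n_walkers)        # initial positions in solution
    alive = np.ones(n_walkers, dtype=bool)
    current = np.zeros(n_steps)
    for t in range(n_steps):
        steps = rng.choice((-1, 1), size=n_walkers)
        pos = np.where(alive, pos + steps, pos)
        at_electrode = alive & (pos <= 0)
        reacts = at_electrode & (rng.random(n_walkers) < p_react)
        current[t] = reacts.sum()                     # electrons transferred this step
        alive &= ~reacts
        pos = np.maximum(pos, 0)                      # unreacted walkers are reflected
    return current
```

    Sweeping p_react with the applied potential, as in a cyclic voltammetry protocol, would turn this diffusion-limited transient into the familiar current-potential loop.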

  19. Stochastic approach to modelling of near-periodic jumping loads

    NASA Astrophysics Data System (ADS)

    Racic, V.; Pavic, A.

    2010-11-01

    A mathematical model has been developed to generate stochastic synthetic vertical force signals induced by a single person jumping. The model is based on a unique database of experimentally measured individual jumping loads which covers the most extensive range of possible jumping frequencies. The ability to replicate many of the temporal and spectral features of real jumping loads gives this model a definite advantage over the conventional half-sine models coupled with Fourier series analysis. This includes modelling of the omnipresent lack of symmetry of individual jumping pulses and of jump-by-jump variations in amplitude and timing. The model therefore belongs to a new generation of synthetic narrow-band jumping loads which simulate reality better. The proposed mathematical concept for the characterisation of near-periodic jumping pulses may be utilised in the vibration serviceability assessment of civil engineering assembly structures, such as grandstands, spectator galleries, footbridges and concert or gym floors, to estimate more realistically the dynamic structural response due to people jumping.
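
    The flavour of such a generator can be conveyed with a toy sketch that is not calibrated to the authors' database: each jump is an asymmetric half-sine-like contact pulse followed by a zero-force flight phase, with period, amplitude and contact ratio drawn afresh for every jump. All distributions and parameters below are invented for illustration.

```python
import numpy as np

def synthetic_jumping_force(n_jumps=50, f_jump=2.0, fs=200.0, seed=0):
    """Near-periodic jumping force record, normalised by body weight.
    Each jump: an asymmetric contact pulse followed by a zero-force flight
    phase; period, peak and contact ratio vary jump by jump."""
    rng = np.random.default_rng(seed)
    signal = []
    for _ in range(n_jumps):
        period = rng.normal(1.0 / f_jump, 0.02)             # timing jitter
        peak = rng.normal(4.0, 0.4)                         # peak force variation
        alpha = np.clip(rng.normal(0.55, 0.05), 0.3, 0.8)   # contact ratio
        n_contact = max(int(alpha * period * fs), 2)
        n_flight = max(int((1.0 - alpha) * period * fs), 1)
        s = np.linspace(0.0, np.pi, n_contact)
        skew = 1.0 + 0.2 * (rng.random() - 0.5)             # mild pulse asymmetry
        signal.extend(peak * np.sin(s) ** skew)             # contact phase
        signal.extend(np.zeros(n_flight))                   # flight phase
    return np.asarray(signal)
```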

  20. Stabilizing stochastically-forced oscillation generators with hard excitement: a confidence-domain control approach

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Chen, Guanrong; Ryashko, Lev

    2013-10-01

    In this paper, noise-induced destruction of self-sustained oscillations is studied for a stochastically-forced generator with hard excitement. The problem is to design a feedback regulator that can stabilize a limit cycle of the closed-loop system and provide a required dispersion of the generated oscillations. The approach is based on the stochastic sensitivity function (SSF) technique and the confidence domain method. A theory for the synthesis of an assigned SSF is developed. For the case when this control problem is ill-posed, a regularization method is constructed. The effectiveness of the new confidence-domain method is demonstrated by stabilizing auto-oscillations in a randomly-forced generator with hard excitement.

  1. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured

  2. A stochastic approach to the solution of magnetohydrodynamic equations

    SciTech Connect

    Floriani, E.; Vilela Mendes, R.

    2013-06-01

    The construction of stochastic solutions is a powerful method to obtain localized solutions in configuration or Fourier space and for parallel computation with domain decomposition. Here a stochastic solution is obtained for the magnetohydrodynamics equations. Some details are given concerning the numerical implementation of the solution, which is illustrated by an example of the generation of long-range magnetic fields by a velocity source.

  3. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach.

    PubMed

    Plehn, Thomas; May, Volkhard

    2017-01-21

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large-scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave-function-based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a rigorous analytic derivation is presented. It starts with the finite-temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limits and can be properly related to two existing approaches in the literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation, which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from the literature. Subsequently, a strategy for the proper computational handling of the linear stochastic Schrödinger equation is presented, particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  4. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach

    NASA Astrophysics Data System (ADS)

    Plehn, Thomas; May, Volkhard

    2017-01-01

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large-scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave-function-based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a rigorous analytic derivation is presented. It starts with the finite-temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limits and can be properly related to two existing approaches in the literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation, which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from the literature. Subsequently, a strategy for the proper computational handling of the linear stochastic Schrödinger equation is presented, particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  5. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  6. K-means algorithm based on stochastic distances for polarimetric synthetic aperture radar image classification

    NASA Astrophysics Data System (ADS)

    Negri, Rogério Galante; da Silva, Wagner Barreto; Mendes, Tatiana Sussel Gonçalves

    2016-10-01

    The availability of polarimetric synthetic aperture radar (PolSAR) images has increased, and consequently, the classification of such images has received immense attention. Classification methods in the literature can be distinguished according to their learning paradigm and approach. Unsupervised methods have the advantage of not depending on labeled data for training. Regarding the approach, image classification can be performed based on individual pixels or on previously identified regions in the image. Previous studies verified that region-based classification of PolSAR images using stochastic distances can produce better results than pixel-based classification. Given the independence of unsupervised methods from training data and the potential of the region-based approach with stochastic distances, this study proposes a version of the unsupervised K-means algorithm for PolSAR region-based classification based on stochastic distances. The Bhattacharyya stochastic distance between Wishart distributions was adopted to measure the dissimilarity among regions of the PolSAR image. Additionally, a measure was proposed to compare unsupervised classification results. Two case studies that consider real and simulated images were conducted, and the results showed that the proposed version of K-means achieves higher accuracy values in comparison with the classic version.
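
    The core idea, K-means with a statistical dissimilarity between region models instead of a Euclidean distance between pixels, can be sketched generically. The toy code below summarizes each region by a sample covariance matrix and uses the zero-mean Gaussian form of the Bhattacharyya distance as a stand-in for the Wishart-based distance of the paper; the mean-of-covariances centroid update is likewise a simplification.

```python
import numpy as np

def bhattacharyya_gauss(S1, S2):
    """Bhattacharyya distance between zero-mean Gaussians with covariances S1, S2."""
    Sm = 0.5 * (S1 + S2)
    return 0.5 * (np.linalg.slogdet(Sm)[1]
                  - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))

def kmeans_regions(covs, k, n_iter=50, seed=0):
    """K-means over region covariance matrices with a stochastic distance.
    covs : (n, d, d) array, one sample covariance per image region."""
    rng = np.random.default_rng(seed)
    centers = covs[rng.choice(len(covs), size=k, replace=False)]
    labels = np.zeros(len(covs), dtype=int)
    for _ in range(n_iter):
        d = np.array([[bhattacharyya_gauss(c, m) for m in centers] for c in covs])
        labels = d.argmin(axis=1)
        centers = np.stack([covs[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers
```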

  7. Solving the problem of negative populations in approximate accelerated stochastic simulations using the representative reaction approach.

    PubMed

    Kadam, Shantanu; Vanka, Kumar

    2013-02-15

    Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. Methods which make use of binomial variables in place of Poisson random numbers have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), is discussed. The new methods endeavor to solve the problem of negative numbers by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations in resolving the problem of negative populations.
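
    The failure mode being addressed is easy to reproduce. In the hypothetical decay reaction X -> 0 below, a Poisson-leap update can draw more firings than there are molecules, whereas a binomial draw is bounded by the current population. This sketch illustrates the generic Poisson/binomial contrast, not the RRA methods themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def leap_step(x, rate, dt, scheme):
    """One accelerated step for the decay reaction X -> 0 (propensity rate*x)."""
    if scheme == "poisson":
        # Unbounded count: can exceed x and push the population negative.
        k = rng.poisson(max(rate * x * dt, 0.0))
    else:
        # Binomial: each molecule reacts with prob. 1 - exp(-rate*dt), so k <= x.
        k = rng.binomial(x, 1.0 - np.exp(-rate * dt))
    return x - k

x_p = x_b = 10
for _ in range(5):                     # a deliberately coarse time step
    x_p = leap_step(x_p, rate=1.0, dt=0.8, scheme="poisson")   # may go negative
    x_b = leap_step(x_b, rate=1.0, dt=0.8, scheme="binomial")  # stays >= 0
```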

  8. Links between deterministic and stochastic approaches for invasion in growth-fragmentation-death models.

    PubMed

    Campillo, Fabien; Champagnat, Nicolas; Fritsch, Coralie

    2016-12-01

    We present two approaches to study invasion in growth-fragmentation-death models. The first one is based on a stochastic individual based model, which is a piecewise deterministic branching process with a continuum of types, and the second one is based on an integro-differential model. The invasion of the population is described by the survival probability for the former model and by an eigenproblem for the latter one. We study these two notions of invasion fitness, giving different characterizations of the growth of the population, and we make links between these two complementary points of view. In particular we prove that the two approaches lead to the same criterion of possible invasion. Based on Krein-Rutman theory, we also give a proof of the existence of a solution to the eigenproblem, which satisfies the conditions needed for our study of the stochastic model, hence providing a set of assumptions under which both approaches can be carried out. Finally, we motivate our work in the context of adaptive dynamics in a chemostat model.

  9. Stochastic population forecasts based on conditional expert opinions

    PubMed Central

    Billari, F C; Graziani, R; Melilli, E

    2012-01-01

    The paper develops and applies an expert-based stochastic population forecasting method, which can also be used to obtain a probabilistic version of scenario-based official forecasts. The full probability distribution of population forecasts is specified by starting from expert opinions on the future development of demographic components. Expert opinions are elicited as conditional on the realization of scenarios, in a two-step (or multiple-step) fashion. The method is applied to develop a stochastic forecast for the Italian population, starting from official scenarios from the Italian National Statistical Office. PMID:22879704

  10. An intensity-based stochastic model for terrestrial laser scanners

    NASA Astrophysics Data System (ADS)

    Wujanz, D.; Burger, M.; Mettenleiter, M.; Neitzel, F.

    2017-03-01

    Up until now, no appropriate models have been proposed that are capable of describing the stochastic characteristics of reflectorless rangefinders - the key component of terrestrial laser scanners. This state of affairs has to be rated as unsatisfactory, especially from the perspective of geodesy, where comprehensive knowledge about the precision of measurements is of vital importance, for instance to weight individual observations or to reveal outliers. In order to tackle this problem, a novel intensity-based stochastic model for the reflectorless rangefinder of a Zoller + Fröhlich Imager 5006h is experimentally derived. This model accommodates the influence on distance measurements of the interaction between the emitted signal and the object surface, as well as of the acquisition configuration. Based on two different experiments, the stochastic model has been successfully verified for three chosen sampling rates.
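
    A common way to build such a model, sketched here as an assumption rather than the paper's calibrated procedure, is to group repeated range measurements by recorded intensity, compute the empirical standard deviation per group, and fit an intensity-dependent precision law; the form sigma(I) = a + b/I below is one plausible choice, not the instrument's actual law.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_intensity_model(intensity, range_std):
    """Fit the empirical precision law sigma(I) = a + b / I to per-intensity
    standard deviations of repeated range measurements."""
    model = lambda I, a, b: a + b / I
    (a, b), _ = curve_fit(model, intensity, range_std, p0=(1e-3, 1.0))
    return a, b

# The fitted sigma(I) then supplies weights 1 / sigma(I)**2 for a
# least-squares adjustment, or thresholds for outlier screening.
```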

  11. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.
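
    For orientation, the kind of functional involved can be made concrete with the standard Onsager-Machlup weight for an overdamped Langevin dynamics with additive noise; this textbook expression is an assumed illustration, not necessarily the exact functional derived in the paper.

```latex
% Path weight for  dx = f(x)\,dt + \sqrt{2D}\,dW  (Onsager--Machlup form):
P[x(\cdot)] \;\propto\; \exp\!\left(-\int_0^T
    \left[\frac{\bigl(\dot{x}(t)-f(x(t))\bigr)^{2}}{4D}
          + \tfrac{1}{2}\,f'(x(t))\right] dt\right)
```

    The probability of finding paths inside a tube around a reference path is then obtained by integrating this weight over all paths confined to the tube.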

  12. Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Carlen, Eric Anders

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.

  13. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately, the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
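
    A minimal numerical example of the MDP machinery might look as follows. The state and action structure (implementation-fidelity states, a routine-support versus intensive-coaching action) and all transition probabilities and rewards are invented for illustration; this is a generic value-iteration sketch, not the authors' model.

```python
import numpy as np

# Toy EBP-implementation MDP: three fidelity states (low, medium, high) and
# two actions (0 = routine support, 1 = intensive coaching).
P = np.array([  # P[a, s, s']: transition probabilities under action a
    [[0.7, 0.3, 0.0], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]],   # routine support
    [[0.4, 0.5, 0.1], [0.1, 0.5, 0.4], [0.0, 0.2, 0.8]],   # intensive coaching
])
R = np.array([[0.0, 1.0, 2.0],      # reward of each state, net of the
              [-0.5, 0.5, 1.5]])    # (higher) cost of coaching
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):                    # value iteration to a fixed point
    Q = R + gamma * (P @ V)              # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
policy = Q.argmax(axis=0)                # best action per fidelity state
```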

  14. A comparison of deterministic and stochastic approaches for regional scale inverse modelling. Application to the Mar del Plata aquifer.

    NASA Astrophysics Data System (ADS)

    Pool, Maria; Carrera, Jesus; Alcolea, Andres

    2014-05-01

    Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in the scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for large aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of model fits to head and concentration data (available for nearly a century), the plausibility of the estimated T fields and their ability to predict transport. We also address the impact of using T data from large-scale (i.e., pumping test) and small-scale (i.e., specific capacity) measurements on the calibration of this regional coastal aquifer. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels) and yield better fits to calibration data than the much simpler geology-based deterministic model. However, the latter demonstrates much greater robustness for predicting seawater intrusion and for incorporating concentrations as calibration data. We conclude that qualitative geological information is extremely rich in identifying variability patterns and should be explicitly included in the calibration of stochastic models.

  15. A stochastic modeling methodology based on weighted Wiener chaos and Malliavin calculus.

    PubMed

    Wan, Xiaoliang; Rozovskii, Boris; Karniadakis, George Em

    2009-08-25

    In many stochastic partial differential equations (SPDEs) involving random coefficients, modeling the randomness by spatial white noise may lead to ill-posed problems. Here we consider an elliptic problem with spatial Gaussian coefficients and present a methodology that resolves this issue. It is based on stochastic convolution implemented via generalized Malliavin operators in conjunction with weighted Wiener spaces that ensure the ellipticity condition. We present theoretical and numerical results that demonstrate the fast convergence of the method in the proper norm. Our approach is general and can be extended to other SPDEs and other types of multiplicative noise.

  16. Stochastic Coloured Petrinet Based Healthcare Infrastructure Interdependency Model

    NASA Astrophysics Data System (ADS)

    Nukavarapu, Nivedita; Durbha, Surya

    2016-06-01

    The Healthcare Critical Infrastructure (HCI) protects all sectors of society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or manmade disaster. However, for its continuity of operations and service delivery, HCI is dependent on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply Systems. During a mass casualty event caused by disasters such as floods, a major challenge for the HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue, the HCI should be disaster-prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and provide an opportunity for all the stakeholders to work together in a coordinated response to a disaster. This paper presents the interdependencies related to HCI using a Stochastic Coloured Petri Net (SCPN) modelling and simulation approach, given a flood scenario as the disaster that disrupts the infrastructure nodes. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behaviour of the interdependent healthcare and related CI network in a geographically based environment.

  17. Mode-of-Action Uncertainty for Dual-Mode Carcinogens: A Bounding Approach for Naphthalene-Induced Nasal Tumors in Rats Based on PBPK and 2-Stage Stochastic Cancer Risk Models

    SciTech Connect

    Bogen, K T

    2007-05-11

    A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in the carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen, naphthalene, specifically for the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage

  18. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    PubMed Central

    2012-01-01

    Background Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN·2^(2n)), or O(nN·2^n) for a sparse matrix. Results This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL·2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy increases approximately polynomially with the number of genes, n, while the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, rather than directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a network inferred from a T
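
    The stochastic-computation idea underlying an SBN can be conveyed with a toy example: a probability is encoded as a random bitstream whose fraction of ones equals that probability, so a probabilistic AND reduces to a bitwise AND of streams, with accuracy improving as the sequence length L grows. This is a generic illustration of stochastic logic, not the SBN construction itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, L):
    """Encode probability p as a random bitstream with a fraction p of ones."""
    return rng.random(L) < p

L = 10_000                       # longer streams give better accuracy
a = to_stream(0.8, L)
b = to_stream(0.6, L)
est = np.mean(a & b)             # bitwise AND multiplies the probabilities
print(est)                       # close to 0.48; error shrinks as L grows
```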

  19. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    PubMed

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR studies by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity

  20. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    DOE PAGES

    Kim, Changho; Nonaka, Andy; Bell, John B.; ...

    2017-03-24

    Here, we develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a

  1. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    NASA Astrophysics Data System (ADS)

    Kim, Changho; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.; Donev, Aleksandar

    2017-03-01

    We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two
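
    A minimal one-species sketch of the ingredients named above (Crank-Nicolson diffusion, a fluctuating diffusive flux, and Poisson-distributed reaction sources) on a 1D periodic grid with linear birth-death chemistry. This is not the authors' two-stage predictor-corrector scheme; the noise scaling and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D periodic grid; A <-> 0 chemistry with birth rate b and death rate kd.
# n is a number density; V*dx converts density to molecule counts per cell.
N, dx, dt = 64, 1.0, 0.1
D, b, kd, V = 1.0, 5.0, 0.1, 100.0
n = np.full(N, b / kd)

lap = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx ** 2
A_impl = np.eye(N) - 0.5 * dt * D * lap      # Crank-Nicolson operators
A_expl = np.eye(N) + 0.5 * dt * D * lap

def step(n):
    # Fluctuating diffusive flux on cell faces; variance scales with 2*D*n
    # per molecule count (an illustrative FHD-style scaling).
    n_face = 0.5 * (n + np.roll(n, -1))
    flux = np.sqrt(2 * D * np.maximum(n_face, 0) / (V * dx * dt)) * rng.normal(size=N)
    div = (flux - np.roll(flux, 1)) / dx
    # Reactions as Poisson sources (a tau-leap-like treatment)
    births = rng.poisson(b * V * dx * dt, N) / (V * dx)
    deaths = rng.poisson(np.maximum(kd * n, 0) * V * dx * dt, N) / (V * dx)
    return np.linalg.solve(A_impl, A_expl @ n + dt * div + births - deaths)

for _ in range(2000):
    n = step(n)
print("mean density:", n.mean(), "  deterministic value:", b / kd)
```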

  2. Multi-objective reliability-based optimization with stochastic metamodels.

    PubMed

    Coelho, Rajan Filomeno; Bouillard, Philippe

    2011-01-01

    This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is built by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
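
    The two-step surrogate described above can be sketched as a least-squares polynomial chaos expansion per design point, followed by interpolation of each PCE coefficient across the design space. A plain Gaussian-kernel interpolator stands in for kriging here (a real kriging model would also estimate a trend and a prediction variance), and the toy response is hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)

# Toy response with one design variable x and one standard-normal input xi.
def response(x, xi):
    return np.sin(3 * x) + (0.5 + x) * xi + 0.1 * (xi ** 2 - 1)

# Step 1: per design point, least-squares PCE in probabilists' Hermite basis.
deg, n_mc = 3, 400
X = np.linspace(0.0, 1.0, 8)
coeffs = []
for x in X:
    xi = rng.normal(size=n_mc)
    Psi = hermevander(xi, deg)               # HermE basis evaluated at xi
    c, *_ = np.linalg.lstsq(Psi, response(x, xi), rcond=None)
    coeffs.append(c)
coeffs = np.array(coeffs)                    # shape (8, deg+1)

# Step 2: interpolate each PCE coefficient over x with a Gaussian kernel
# (a kriging stand-in without trend or variance estimation).
def kernel(a, c, ell=0.25):
    return np.exp(-0.5 * ((a[:, None] - c[None, :]) / ell) ** 2)

K = kernel(X, X) + 1e-10 * np.eye(len(X))
alpha = np.linalg.solve(K, coeffs)

def surrogate_coeffs(x_new):
    return kernel(np.atleast_1d(x_new), X) @ alpha

# The surrogate mean response is the 0th PCE coefficient.
print("predicted mean at x=0.35:", surrogate_coeffs(0.35)[0, 0])
print("reference mean at x=0.35:", np.sin(3 * 0.35))
```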

  3. A Benders decomposition approach to multiarea stochastic distributed utility planning

    NASA Astrophysics Data System (ADS)

    McCusker, Susan Ann

    Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and in sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed-integer linear program. Central system investment decisions are made by an LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude the use of LP if transmission capacity is available in a looped transmission system. This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three

  4. Stochastic approach to municipal solid waste landfill life based on the contaminant transit time modeling using the Monte Carlo (MC) simulation.

    PubMed

    Bieda, Bogusław

    2013-01-01

    The paper is concerned with the application and benefits of MC simulation, proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was used to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in the MATLAB language, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining parameter uncertainty by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of a frequency chart, a percentiles summary and a statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data used include available published figures as well as data concerning Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable to the design of any MSW landfill compacted clay liner thickness.
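
    A hedged sketch of the Monte Carlo idea: sample the uncertain liner parameters and propagate them through a transit-time expression. For brevity the sketch uses the purely advective transit time t = POROS·TH/(HC·HG) rather than the paper's full ADE-based Z2 expression, and all distributions are illustrative, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo over liner parameters; the advective transit-time formula
# ignores dispersion/diffusion, so EDC is omitted in this simplification.
n = 100_000
HG    = rng.uniform(0.1, 0.3, n)              # hydraulic gradient [-]
HC    = rng.lognormal(np.log(1e-9), 0.5, n)   # hydraulic conductivity [m/s]
POROS = rng.uniform(0.3, 0.5, n)              # porosity [-]
TH    = rng.normal(1.0, 0.05, n)              # liner thickness [m]

t_years = POROS * TH / (HC * HG) / (365.25 * 24 * 3600)

print("median transit time [yr]:", np.median(t_years))
print("5th / 95th percentiles  :", np.percentile(t_years, [5, 95]))
```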

  5. Revisiting the Cape Cod Bacteria Injection Experiment Using a Stochastic Modeling Approach

    SciTech Connect

    Maxwell, R M; Welty, C; Harvey, R W

    2006-11-22

    Bromide and resting-cell bacteria tracer tests carried out in a sand and gravel aquifer at the USGS Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach and Lagrangian particle-tracking numerical methods. Bacteria transport was strongly coupled to colloid filtration through the functional dependence of local-scale colloid transport parameters on hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Information on the geostatistical characterization of the hydraulic conductivity (K) field from a nearby plot was utilized as input that was unavailable when the original analysis was carried out. A finite difference model for groundwater flow and a particle-tracking model of conservative solute transport were calibrated to the bromide-tracer breakthrough data using the aforementioned geostatistical parameters. An optimization routine was utilized to adjust the mean and variance of the lnK field over 100 realizations such that a best fit of a simulated, average bromide breakthrough curve was achieved. Once the optimal bromide fit was accomplished (based on adjusting the lnK statistical parameters in unconditional simulations), a stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of the mean bacteria breakthrough data were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech [1] equation for estimating single-collector efficiency were compared to those using the Rajagopalan and Tien [2] model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions, with the Rajagopalan and Tien model yielding an approximately 30% lower peak concentration and less tailing than the Tufenkji and Elimelech formulation. Simulations using a distribution
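
    The coupling of attachment to hydraulic conductivity mentioned above follows classical colloid filtration theory; a minimal sketch under our own parameter choices is given below, with a constant single-collector efficiency eta0 standing in for the Tufenkji-Elimelech or Rajagopalan-Tien correlations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Colloid-filtration-theory attachment rate coupled to a lognormal K field:
#   k_att = 3*(1-porosity)/(2*d_c) * alpha * eta0 * v,   v = K*i/porosity
# All parameter values below are illustrative, not the Cape Cod values.
n_cells = 1000
K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=n_cells)  # [m/s]
porosity, grad, d_c = 0.35, 0.01, 5e-4   # [-], [-], grain diameter [m]
alpha, eta0 = 0.01, 0.05                 # sticking and collector efficiency

v = K * grad / porosity                                      # seepage velocity
k_att = 3 * (1 - porosity) / (2 * d_c) * alpha * eta0 * v    # [1/s]

print("median attachment rate [1/s]:", np.median(k_att))
print("95th percentile        [1/s]:", np.percentile(k_att, 95))
```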

  6. A stochastic XFEM model for the tensile strength prediction of heterogeneous graphite based on microstructural observations

    NASA Astrophysics Data System (ADS)

    Bansal, Manik; Singh, I. V.; Mishra, B. K.; Sharma, Kamal; Khan, I. A.

    2017-04-01

    A stochastic XFEM model based on microstructural observations has been developed to evaluate the tensile strength of NBG-18 nuclear graphite. The nuclear graphite consists of a pitch matrix, filler particles, pores and micro-cracks. The numerical simulations are performed at two length scales due to the large difference in the average sizes of filler particles and pores. Both deterministic and stochastic approaches have been implemented. The study intends to illustrate the variation in tensile strength due to stochastically modeled heterogeneities. The properties of the pitch matrix and filler particles are assumed to be known at the constituent level. The material models for both pitch and fillers are assumed to be linear elastic. The stochastic size and spatial distributions of the pores and filler particles have been modeled during the micro- and macro-analyses, respectively. The strength of the equivalent porous pitch matrix evaluated at the micro level has been distributed stochastically in the elemental domain, along with the filler particles, for the macro analysis. The effect of micro-cracks has been incorporated indirectly by considering a fracture plane in each filler particle. The tensile strength of nuclear graphite is obtained by performing the simulations at the macro level. Statistical parameters evaluated using the numerical tensile strength data agree well with the experimentally obtained statistical parameters available in the literature.

  7. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    SciTech Connect

    Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.

    2014-08-01

    In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower-triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered, and it is shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.

  8. Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach

    ERIC Educational Resources Information Center

    Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro

    2005-01-01

    Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…

  10. A comparison of deterministic and stochastic approaches for regional scale inverse modeling on the Mar del Plata aquifer

    NASA Astrophysics Data System (ADS)

    Pool, M.; Carrera, J.; Alcolea, A.; Bocanegra, E. M.

    2015-12-01

    Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in the scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for regional aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of (i) model fits to head and concentration data (available for nearly a century), (ii) geological plausibility of the estimated T fields, and (iii) their ability to predict transport. We also address the impact of conditioning the estimated fields on T data coming from either pumping tests interpreted with the Theis method or specific capacity values from step-drawdown tests. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels and low-transmissivity regions associated with quartzite outcrops) and yield better fits to calibration data than the much simpler geology-based deterministic model, which cannot properly address model structure uncertainty. However, the latter demonstrates much greater robustness for predicting seawater intrusion and for incorporating concentrations as calibration data. We attribute the poor performance, and underestimated uncertainty, of the stochastic simulations to estimation bias introduced by model errors. Qualitative geological information is extremely rich in identifying large-scale variability patterns, which are identified by stochastic models only in data-rich areas, and should be explicitly included in the calibration process.

  11. Mapping Rule-Based And Stochastic Constraints To Connection Architectures: Implication For Hierarchical Image Processing

    NASA Astrophysics Data System (ADS)

    Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.

    1988-10-01

    Essential to the solution of ill-posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements configured in a Single-Instruction, Multiple-Data stream architecture.
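
    The Langevin-update idea described above (simultaneous updates of every site of a Gibbs field) can be sketched as unadjusted Langevin dynamics on a small periodic lattice; the double-well-plus-smoothness energy standing in for a rule-derived constraint energy is our own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(6)

# Langevin sampling of pi(x) ~ exp(-U(x)) with all lattice sites updated
# simultaneously:  x <- x - grad U(x) dt + sqrt(2 dt) * noise.
N, dt, n_steps = 32, 1e-3, 20_000
x = rng.normal(size=(N, N))

def grad_U(x):
    # Double-well on-site term plus nearest-neighbour smoothness coupling
    onsite = 4 * x * (x ** 2 - 1)
    nbrs = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return onsite + 2.0 * (4 * x - nbrs)

for _ in range(n_steps):
    x += -grad_U(x) * dt + np.sqrt(2 * dt) * rng.normal(size=x.shape)

print("site mean:", x.mean(), "  mean |x|:", np.abs(x).mean())
```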

  12. Stochastic identification of recharge, transmissivity, and storativity in aquifer transient flow: A quasi-steady approach

    NASA Astrophysics Data System (ADS)

    Dagan, Gedeon; Rubin, Yoram

    1988-10-01

    In this paper, a stochastic method to identify aquifer natural recharge, storativity, and transmissivity under transient conditions is developed. Four main assumptions are adopted: Y, the log transmissivity, is a normal random space function; the aquifer is unbounded; a first-order approximation of the flow equation is adopted; and the transients are slowly varying. Based on these assumptions, the expected values of Y and of the head H, as well as their covariances and cross-covariances, are expressed by analytical equations which depend on a parameter vector θ. A major part of the first paper is devoted to the development of these expressions, based on the two-dimensional flow equation. The proposed solution of the inverse problem is a two-stage procedure. First, θ is identified stochastically, by a maximum likelihood procedure applied to the measurements of Y and H. Then, θ serves to estimate the spatial distributions of Y and H through their conditional means and estimation variances. The three main new features of the approach are the identification of the spatial distributions of Y and H through their first two statistical moments based on transient head data and in the presence of pumping-recharging wells; the identification of the storativity; and the stochastic identification of natural recharge. Since the proposed method makes use of the analytic solution of the flow equation, it avoids the need for laborious numerical schemes. Application of the method to a section of the Israeli Coastal Aquifer illustrates its potential in a real-life case.

  13. Inversion of Robin coefficient by a spectral stochastic finite element approach

    SciTech Connect

    Jin, Bangti; Zou, Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  14. Implications for water use of a shift from annual to perennial crops - A stochastic modelling approach based on a trait meta-analysis

    NASA Astrophysics Data System (ADS)

    Vico, Giulia; Brunsell, Nathaniel

    2017-04-01

    The projected population growth and changes in climate and dietary habits will further increase the pressure on water resources globally. Within precision farming, a host of technical solutions has been developed to reduce water consumption for agricultural uses. The next frontier for a more sustainable agriculture is the combination of reduced water requirements with enhanced ecosystem services. Currently, staple grains are obtained from annual crops. A shift from annual to perennial crops has been suggested as a way to enhance ecosystem services. In fact, perennial plants, with their continuous soil cover and higher allocation of resources below ground, contribute to the reduction of soil erosion and nutrient losses, while enhancing carbon sequestration in the root zone. Nevertheless, the net effect of a shift to perennial crops on water use for agriculture is still unknown, despite its relevance for the sustainability of such a shift. We explore here the implications for water management at the field to farm scale of a shift from annual to perennial crops, under rainfed and irrigated agriculture. A probabilistic description of the soil water balance and crop development is employed to quantify water requirements and yields and their inter-annual variability, as a function of rainfall patterns and soil and crop features. Optimal irrigation strategies are thus defined in terms of maximization of yield and minimization of the required irrigation volumes and their inter-annual variability. The probabilistic model is parameterized based on an extensive meta-analysis of traits of congeneric annual and perennial species to explore the consequences for water requirements of shifting from annual to perennial crops under current and future climates. We show that the larger and more developed roots of perennial crops may allow better exploitation of soil water resources and a reduction of yield variability with respect to annual species. At the same time, perennial

  15. Exploring stochasticity and imprecise knowledge based on linear inequality constraints.

    PubMed

    Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf

    2016-09-01

    This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge.
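
    The Hit-and-Run step named above is simple enough to sketch directly: from the current interior point, draw a random direction, intersect the line with the polytope {x : Ax <= b}, and sample uniformly on the feasible segment. The two-dimensional constraint set below is illustrative, not the paper's foodweb polytope.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hit-and-Run sampler over {x : A x <= b}; constraints are illustrative.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, -1.0]])
b = np.array([1.0, 0.0, 0.0, 0.5])

def hit_and_run(x, n_samples):
    out = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        # Feasible interval for x + t*d: each row gives A_i.(x+t d) <= b_i
        ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[ad > 0] / ad[ad > 0]) if np.any(ad > 0) else np.inf
        t_lo = np.max(slack[ad < 0] / ad[ad < 0]) if np.any(ad < 0) else -np.inf
        x = x + rng.uniform(t_lo, t_hi) * d
        out.append(x)
    return np.array(out)

samples = hit_and_run(np.array([0.2, 0.2]), 5000)
print("sample mean:", samples.mean(axis=0))
print("all feasible:", bool(np.all(samples @ A.T <= b + 1e-9)))
```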

  16. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    DTIC Science & Technology

    2011-09-01

    OCR fragments from the report record: Steven M. Ross, "Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization," dissertation, AFIT/DS/ENY/11-24 (a work of the U.S. Government, not subject to copyright protection in the United States); cited reference [5]: A.V. Savkin, P.N. Pathirana, and F. Faruqi, "The problem of precision missile guidance: LQR and H∞ control frameworks," IEEE.

  17. An integrated fuzzy-stochastic modeling approach for risk assessment of groundwater contamination.

    PubMed

    Li, Jianbing; Huang, Gordon H; Zeng, Guangming; Maqsood, Imran; Huang, Yuefei

    2007-01-01

    An integrated fuzzy-stochastic risk assessment (IFSRA) approach was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with site conditions, environmental guidelines, and health impact criteria. The contaminant concentrations in groundwater predicted from a numerical model were associated with probabilistic uncertainties due to the randomness in modeling input parameters, while the consequences of contaminant concentrations violating relevant environmental quality guidelines and health evaluation criteria were linked with fuzzy uncertainties. The contaminant of interest in this study was xylene. The environmental quality guideline was divided into three different strictness categories: "loose", "medium" and "strict". The environmental-guideline-based risk (ER) and health risk (HR) due to xylene ingestion were systematically examined to obtain the general risk levels through a fuzzy rule base. The ER and HR risk levels were divided into five categories of "low", "low-to-medium", "medium", "medium-to-high" and "high", respectively. The general risk levels included six categories ranging from "low" to "very high". The fuzzy membership functions of the related fuzzy events and the fuzzy rule base were established based on a questionnaire survey. Thus the IFSRA integrated fuzzy logic, expert involvement, and stochastic simulation within a general framework. The robustness of the modeling processes was enhanced through the effective reflection of the two types of uncertainties as compared with the conventional risk assessment approaches. The developed IFSRA was applied to a petroleum-contaminated groundwater system in western Canada. Three scenarios with different environmental quality guidelines were analyzed, and reasonable results were obtained. The risk assessment approach developed in this study offers a unique tool for systematically quantifying various uncertainties in contaminated site management, and it also

  18. Partition-free approach to open quantum systems in harmonic environments: An exact stochastic Liouville equation

    NASA Astrophysics Data System (ADS)

    McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.

    2017-03-01

    We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.

  19. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    PubMed

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which include multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of means and/or variances and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  20. Vibrational deactivation of a highly excited diatomic - a stochastic approach

    NASA Astrophysics Data System (ADS)

    Sceats, Mark G.

    1988-10-01

    A formula for the average energy transfer from a highly excited Morse oscillator is derived from linear-coupling stochastic theory. The results are in reasonable agreement with the simulations of Nesbitt and Hynes for I2 in He, Ar and Xe, and can be improved over the entire oscillator energy range by including the Kelley-Wolfsberg kinematic factor to account for non-linear coupling at low oscillator energies.

  1. Stochastic Approach for Modeling of DNAPL Migration in Heterogeneous Aquifers: Model Development and Experimental Data Generation

    NASA Astrophysics Data System (ADS)

    Dean, D. W.; Illangasekare, T. H.; Turner, A.; Russell, T. F.

    2004-12-01

    Modeling the complex behavior of DNAPLs in naturally heterogeneous subsurface formations poses many challenges. Even though considerable progress has been made in developing improved numerical schemes to solve the governing partial differential equations, most of these methods still rely on a deterministic description of the processes. This research explores the use of stochastic differential equations to model multiphase flow in heterogeneous aquifers, specifically the flow of DNAPLs in saturated soils. The models developed are evaluated using experimental data generated in two-dimensional test systems. A fundamental assumption used in the model formulation is that the movement of a fluid particle in each phase is described by a stochastic process and that the positions of all fluid particles over time are governed by a specific law. It is this law that we seek to determine. The approach results in a nonlinear stochastic differential equation describing the position of the non-wetting-phase fluid particle. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the positions of the fluid particles in the problem domain. The concept of a fluid particle is central to the development of the proposed model. Expressions for both saturation and volumetric fraction are developed using this concept. Darcy's law and the continuity equation are used to derive a Fokker-Planck equation governing flow. The Ito calculus is then applied to derive a stochastic differential equation (SDE) for the non-wetting phase. This SDE has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDEs are typically used to model diffusion processes. However, these models, in their usual form
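
    A minimal Euler-Maruyama sketch of the kind of nonlinear SDE described above, in which both drift and diffusion depend on a locally estimated volumetric fraction. The kernel density estimate and the particular coefficient forms are our own stand-ins, not the drift and diffusion derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# dX = a(phi(X)) dt + sqrt(2 b(phi(X))) dW, with phi a crude local fraction.
n_particles, dt, n_steps = 2000, 1e-3, 500
x = rng.normal(0.0, 0.1, n_particles)        # initial particle positions

def phi(x, h=0.2):
    """Local volumetric fraction via a subsampled Gaussian kernel estimate."""
    ref = x[::20]
    return np.exp(-0.5 * ((x[:, None] - ref[None, :]) / h) ** 2).mean(axis=1)

for _ in range(n_steps):
    p = phi(x)
    drift = 1.0 * (1.0 - p)                  # drift slows as fraction grows
    diff = 0.05 * (1.0 + p)                  # diffusion grows with fraction
    x = x + drift * dt + np.sqrt(2 * diff * dt) * rng.normal(size=x.size)

print("mean position:", round(x.mean(), 3), "  spread:", round(x.std(), 3))
```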

  2. An offline approach for output-only Bayesian identification of stochastic nonlinear systems using unscented Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erazo, Kalil; Nagarajaiah, Satish

    2017-06-01

    In this paper, an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization, the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing the state augmentation required by existing online methods. In applications, expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.

  3. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for the operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to the Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which does not provide an effective solution to the problem and has adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  4. Fisher waves: An individual-based stochastic model

    NASA Astrophysics Data System (ADS)

    Houchmandzadeh, B.; Vallade, M.

    2017-07-01

    The propagation of a beneficial mutation in a spatially extended population is usually studied using the phenomenological stochastic Fisher-Kolmogorov-Petrovsky-Piscounov (SFKPP) equation. We derive here an individual-based, stochastic model founded on the spatial Moran process where fluctuations are treated exactly. The mean-field approximation of this model leads to an equation that is different from the phenomenological FKPP equation. At small selection pressure, the front behavior can be mapped into a Brownian motion with drift, the properties of which can be derived from the microscopic parameters of the Moran model. Finally, we generalize the model to take into account dispersal kernels beyond migration to nearest neighbors. We show how the effective population size (which controls the noise amplitude) and the diffusion coefficient can both be computed from the dispersal kernel.

  5. Toward Stochastic Parameterization Based on Profiler Measurements of Vertical Velocity

    NASA Astrophysics Data System (ADS)

    Penland, C.; Koepke, A.; Williams, C. R.

    2016-12-01

    Parameterizations in General Circulation Models (GCMs) that account for uncertainty due to both unresolved, sub-grid scale processes and errors in assumptions made in the formulation of the parameterization itself are needed to represent the full probability distribution function of resolved processes in the model. In this study, we develop a probabilistic description of vertical velocity based on profiler data collected at Darwin during the time period November 2005 to February 2006. Data collected at one-minute resolution are analyzed at the one-minute, ten-minute and hourly timescales, including fits to the Stochastically-Generated Skew (SGS) distributions. The SGS distributions are associated with linear dynamics, including correlated additive and multiplicative noise. As expected, we find that the stochastic approximation to nonlinear dynamics becomes more appropriate as the timescale is increased by coarse-graining.

  6. Intervention-Based Stochastic Disease Eradication

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira

    2013-03-01

    Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
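
    The comparison described above (randomly timed versus strictly periodic intervention) can be sketched with a Gillespie-type SIS simulation in which vaccination pulses arrive as a Poisson process with mean period T; the SIS structure itself and all rates and sizes below are illustrative simplifications of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(8)

def extinction_time(N=400, R0=1.2, T=1.0, frac=0.1, random_pulses=True):
    """SIS epidemic with vaccination pulses removing a fraction of S."""
    beta, gamma = R0 / N, 1.0
    S, I, t = N - 10, 10, 0.0
    next_pulse = rng.exponential(T) if random_pulses else T
    while I > 0 and t < 5e4:
        rates = np.array([beta * S * I, gamma * I])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        if t + dt >= next_pulse:                 # vaccination event fires first
            t = next_pulse
            S -= rng.binomial(S, frac)
            next_pulse += rng.exponential(T) if random_pulses else T
            continue
        t += dt
        if rng.random() < rates[0] / total:
            S, I = S - 1, I + 1                  # infection
        else:
            S, I = S + 1, I - 1                  # recovery back to S
    return t

for mode in (True, False):
    times = [extinction_time(random_pulses=mode) for _ in range(20)]
    print("random pulses" if mode else "periodic     ", np.mean(times))
```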

  7. Signal denoising using stochastic resonance and bistable circuit for acoustic emission-based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Jinki; Harne, Ryan L.; Wang, K. W.

    2017-04-01

    Noise is unavoidable and ever-present in measurements. As a result, signal denoising is a necessity for many scientific and engineering disciplines. In particular, structural health monitoring applications aim to detect often weak anomaly responses generated by incipient damage (such as acoustic emission signals) from the background noise that contaminates the signals. Among various approaches, stochastic resonance has been widely studied and adopted for denoising and weak signal detection to enhance the reliability of structural health monitoring. On the other hand, many of the advancements have focused on detecting useful information in the frequency domain, generally in a postprocessing environment, such as identifying damage-induced frequency changes that become more prominent by utilizing stochastic resonance in bistable systems, rather than on recovering the original time-domain responses. In this study, a new adaptive signal conditioning strategy is presented for on-line signal denoising and recovery, utilizing stochastic resonance in a bistable circuit sensor. The input amplitude to the bistable system is adaptively adjusted to favorably activate the stochastic resonance based on the noise level of the given signal, which is one of the few quantities that can be readily assessed from noise-contaminated signals in practical situations. Numerical investigations conducted by employing a theoretical model of a double-well Duffing analog circuit demonstrate the operational principle and confirm the denoising performance of the new method. This study exemplifies the promising potential of the new denoising strategy for enhancing on-line acoustic emission-based structural health monitoring.
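
    A sketch of the principle: pass a weak burst buried in noise through an overdamped double-well (Duffing-type) element and adapt the input gain to the measured noise level so that inter-well hopping is favorably activated. The paper drives an analog bistable circuit; the discrete-time model, the burst, and the gain rule below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

# Weak 'acoustic-emission-like' burst buried in broadband noise.
fs, dur = 10_000, 2.0
t = np.arange(0, dur, 1 / fs)
burst = 0.1 * np.sin(2 * np.pi * 40 * t) * (np.abs(t - 1.0) < 0.1)
noisy = burst + 0.5 * rng.normal(size=t.size)

def bistable(u, a=1.0, b=1.0, gain=1.0, dt=1 / fs):
    """Overdamped double-well element: x' = a*x - b*x^3 + gain*u."""
    x, out = 0.0, np.empty(u.size)
    for i, ui in enumerate(u):
        x += (a * x - b * x ** 3 + gain * ui) * dt
        out[i] = x
    return out

# Adapt input gain to the measured noise level (the resonance condition).
gain = 2.0 / noisy.std()
y = bistable(noisy, gain=gain)
snr_in = burst.var() / (noisy - burst).var()
print("input SNR:", round(snr_in, 4), "  output swing:", round(np.ptp(y), 2))
```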

  8. A stochastic approach to the hadron spectrum. III

    SciTech Connect

    Aron, J.C.

    1986-12-01

    The connection with the quarks of the stochastic model proposed in the two preceding papers is studied; the slopes of the baryon trajectories are calculated with reference to the quarks. Suggestions are made for the interpretation of the model (quadratic or linear addition of the contributions to the mass, dependence of the decay on the quantum numbers of the hadrons involved, etc.) and concerning its link with the quarkonium model, which describes the mesons with charm or beauty. The controversial question of the "subquantum level" is examined.

  9. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based techniques. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.

  10. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach since their purpose is to find out an exact solution. However, such methods have initial condition dependence and the risk of falling into local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experiences. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that performance of the method is sufficient for practical use.

  11. Intervention-Based Stochastic Disease Eradication

    PubMed Central

    Billings, Lora; Mier-y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira B.

    2013-01-01

    Disease control is of paramount importance in public health, with infectious disease extinction as the ultimate goal. Although diseases may go extinct due to random loss of effective contacts where the infection is transmitted to new susceptible individuals, the time to extinction in the absence of control may be prohibitively long. Intervention controls are typically defined on a deterministic schedule. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and rate of infection spread. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. PMID:23940548

  12. Modified stochastic variational approach to non-Hermitian quantum systems

    NASA Astrophysics Data System (ADS)

    Kraft, Daniel; Plessas, Willibald

    2016-08-01

    The stochastic variational method has proven to be a very efficient and accurate tool for calculating especially bound states of quantum-mechanical few-body systems. It relies on the Rayleigh-Ritz variational principle for minimizing real eigenenergies of Hermitian Hamiltonians. From molecular to atomic, nuclear, and particle physics there is a great demand for describing resonant states with a high degree of reliability as well. This is especially true with regard to hadron resonances, which have to be treated in a relativistic framework. So far, standard methods of dealing with quantum chromodynamics have not succeeded in describing hadron resonances in a realistic manner. Resonant states can be handled by non-Hermitian quantum Hamiltonians. These states correspond to poles in the lower half of the unphysical sheet of the complex energy plane and are therefore intimately connected with complex eigenvalues. Consequently, the Rayleigh-Ritz variational principle cannot be employed in the usual manner. We have studied alternative selection principles for the choice of test functions to treat resonances within the stochastic variational method. We have found that a stationarity principle for the complex energy eigenvalues provides a viable method for selecting test functions for resonant states in a constructive manner. We discuss several variants thereof and exemplify their practical efficiency.

  13. Integral-based event triggering controller design for stochastic LTI systems via convex optimisation

    NASA Astrophysics Data System (ADS)

    Mousavi, S. H.; Marquez, H. J.

    2016-07-01

    The presence of measurement noise in event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event-triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise, effectively reduces the flow of communication between plant and controller, and improves output performance. Using a Lyapunov approach, stability in the mean-square sense is proved. A simulated example illustrates the properties of our approach.
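
    The integral triggering rule can be sketched in a few lines: accumulate the squared deviation between the measurement and the controller's held copy of the state, and transmit only when the integral exceeds a threshold, so isolated noise spikes do not fire events. The scalar plant and all constants below are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(10)

# Unstable scalar plant x' = a x + u with held-state feedback u = -k x_hat.
dt, n = 0.01, 5000
a, k = 0.5, 2.0
x, x_hat, integ, events = 1.0, 1.0, 0.0, 0
threshold = 0.005

for _ in range(n):
    y = x + 0.05 * rng.normal()           # noisy measurement
    integ += (y - x_hat) ** 2 * dt        # integral triggering variable
    if integ > threshold:                 # event: update controller's copy
        x_hat, integ = y, 0.0
        events += 1
    u = -k * x_hat
    x += (a * x + u) * dt                 # Euler step of the plant

print("final |x|:", abs(x), "  events used:", events, "of", n, "samples")
```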

  14. Stochastic modeling and generation of random fields of elasticity tensors: A unified information-theoretic approach

    NASA Astrophysics Data System (ADS)

    Staber, Brian; Guilleminot, Johann

    2017-06-01

    In this Note, we present a unified approach to the information-theoretic modeling and simulation of a class of elasticity random fields, for all physical symmetry classes. The new stochastic representation builds upon a Walpole tensor decomposition, which allows the maximum entropy constraints to be decoupled in accordance with the tensor (sub)algebras associated with the class under consideration. In contrast to previous works where the construction was carried out on the scalar-valued Walpole coordinates, the proposed strategy involves both matrix-valued and scalar-valued random fields. This enables, in particular, the construction of a generation algorithm based on a memoryless transformation, hence improving the computational efficiency of the framework. Two applications involving weak symmetries and sampling over spherical and cylindrical geometries are subsequently provided. These numerical experiments are relevant to the modeling of elastic interphases in nanocomposites, as well as to the simulation of spatially dependent wood properties for instance.

  15. Nuclear quadrupole resonance lineshape analysis for different motional models: stochastic Liouville approach.

    PubMed

    Kruk, D; Earle, K A; Mielczarek, A; Kubica, A; Milewska, A; Moscicki, J

    2011-12-14

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory" [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000)], originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow-motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  16. A stochastic control approach to Slotted-ALOHA random access protocol

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio

    2013-12-01

    ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while maintaining a degree of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
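
    For context, the quantity being optimized has a familiar closed form in the symmetric, always-backlogged case: a slot succeeds when exactly one of N nodes transmits, with probability N·p·(1-p)^(N-1), maximized at p = 1/N. The sketch below checks this by simulation; the paper's actual contribution, optimizing heterogeneous probabilities under queue stability and fairness constraints, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

def throughput(N, p, slots=200_000):
    """Fraction of slots with exactly one transmitter (a success)."""
    tx = rng.random((slots, N)) < p
    return np.mean(tx.sum(axis=1) == 1)

N = 10
for p in (0.05, 1 / N, 0.2):
    sim = throughput(N, p)
    theory = N * p * (1 - p) ** (N - 1)
    print(f"p={p:.2f}  simulated={sim:.4f}  analytic={theory:.4f}")
```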

  17. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies: the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.

  18. Learning stochastic process-based models of dynamical systems from knowledge and data.

    PubMed

    Tanevski, Jovan; Todorovski, Ljupčo; Džeroski, Sašo

    2016-03-22

    Identifying a proper model structure, using methods that address both structural and parameter uncertainty, is a crucial problem within the systems approach to biology, and yet it has only a marginal presence in the recent literature. While many existing approaches integrate methods for simulation and parameter estimation of a single model to address parameter uncertainty, only a few of them address structural uncertainty at the same time. The methods for handling structural uncertainty often oversimplify the problem by allowing the human modeler to explicitly enumerate a relatively small number of alternative model structures. On the other hand, process-based modeling methods provide flexible modular formalisms for specifying large classes of plausible model structures, but their scope is limited to deterministic models. Here, we aim at extending the scope of process-based modeling methods to inductively learn stochastic models from knowledge and data. We combine the flexibility of process-based modeling in terms of addressing structural uncertainty with the benefits of stochastic modeling. The proposed method combines search through the space of plausible model structures, the parsimony principle and parameter estimation to identify a model with optimal structure and parameters. We illustrate the utility of the proposed method on four stochastic modeling tasks in two domains: gene regulatory networks and epidemiology. Within the first domain, using synthetically generated data, the method successfully recovers the structure and parameters of known regulatory networks from simulations. In the epidemiology domain, the method successfully reconstructs previously established models of epidemic outbreaks from real, sparse and noisy measurement data. The method represents a unified approach to modeling dynamical systems that allows for flexible formalization of the space of candidate model structures, deterministic and stochastic interpretation of model dynamics, and automated

  19. Prediction of Building Floorplans Using Logical and Stochastic Reasoning Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Loch-Dehbi, S.; Dehbi, Y.; Gröger, G.; Plümer, L.

    2016-10-01

    This paper introduces a novel method for the automatic derivation of building floorplans and indoor models. Our approach is based on a logical and stochastic reasoning using sparse observations such as building room areas. No further sensor observations like 3D point clouds are needed. Our method benefits from an extensive prior knowledge of functional dependencies and probability density functions of shape and location parameters of rooms depending on their functional use. The determination of posterior beliefs is performed using Bayesian Networks. Stochastic reasoning is complex since the problem is characterized by a mixture of discrete and continuous parameters that are in turn correlated by non-linear constraints. To cope with this kind of complexity, the proposed reasoner combines statistical methods with constraint propagation. It generates a limited number of hypotheses in a model-based top-down approach. It predicts floorplans based on a-priori localised windows. The use of Gaussian mixture models, constraint solvers and stochastic models helps to cope with the a-priori infinite space of the possible floorplan instantiations.

  20. Stochastic queueing-theory approach to human dynamics

    NASA Astrophysics Data System (ADS)

    Walraevens, Joris; Demoor, Thomas; Maertens, Tom; Bruneel, Herwig

    2012-02-01

    Recently, numerous studies have shown that human dynamics cannot be described accurately by exponential laws. For instance, Barabási [Nature (London) 435, 207 (2005); doi:10.1038/nature03459] demonstrates that waiting times of tasks to be performed by a human are more suitably modeled by power laws. He presumes that these power laws are caused by a priority selection mechanism among the tasks. Priority models are well-developed in queueing theory (e.g., for telecommunication applications), and this paper demonstrates the (quasi-)immediate applicability of such a stochastic priority model to human dynamics. By calculating generating functions and by studying them in their dominant singularity, we prove that nonexponential tails result naturally. Contrary to popular belief, however, these are not necessarily triggered by the priority selection mechanism.
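
    A toy Python reproduction of the priority-selection mechanism cited above (Barabási's fixed-length task-list model); the list length, selection probability, and step count are assumptions:

```python
import random
from collections import Counter

# Fixed-length task list: at each step, execute the highest-priority task
# with probability q, otherwise a random one, then replace it with a fresh
# task. Waiting times develop a heavy tail as q -> 1.
random.seed(0)
L, q, steps = 2, 0.9, 200_000
tasks = [(random.random(), 0) for _ in range(L)]   # (priority, arrival step)
waits = Counter()
for t in range(steps):
    if random.random() < q:
        i = max(range(L), key=lambda j: tasks[j][0])
    else:
        i = random.randrange(L)
    waits[t - tasks[i][1] + 1] += 1                # record waiting time
    tasks[i] = (random.random(), t)                # new task arrives
for w in (1, 2, 4, 8, 16):
    print(f"P(wait = {w}) ~ {waits[w] / steps:.4f}")
```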

  1. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.

  2. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    NASA Astrophysics Data System (ADS)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The experimental results show significant differences among the implemented techniques in how the plant components are used, mainly in terms of energy consumption. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.

  3. Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Yezhe

    This work is about simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. Our interest is in the primordial, metal-poor interstellar medium (ISM), in which the so-called “Population II” stars could have formed during the “Epoch of Reionization” in the early universe. We build a gas phase model using the RE scheme to describe the ionization-powered interstellar chemistry. We demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain surface reactions play an important role in the studies of astrochemistry, but the lack of an accurate yet efficient simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in the interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations to treat the surface chemistry and deterministic rate equations to treat the gas phase chemistry, to simulate astrochemical systems such as the ISM in the Milky Way, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations and present a discussion about how moment equations improve our theoretical modeling and how the abundances of the assorted species are changed by varied metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observation than the pure RE method.

  4. Two-state approach to stochastic hair bundle dynamics

    NASA Astrophysics Data System (ADS)

    Clausznitzer, Diana; Lindner, Benjamin; Jülicher, Frank; Martin, Pascal

    2008-04-01

    Hair cells perform the mechanoelectrical transduction of sound signals in the auditory and vestibular systems of vertebrates. The part of the hair cell essential for this transduction is the so-called hair bundle. In vitro experiments on hair cells from the sacculus of the American bullfrog have shown that the hair bundle comprises active elements capable of producing periodic deflections like a relaxation oscillator. Recently, a continuous nonlinear stochastic model of the hair bundle motion [Nadrowski et al., Proc. Natl. Acad. Sci. U.S.A. 101, 12195 (2004)] has been shown to reproduce the experimental data in stochastic simulations faithfully. Here, we demonstrate that a binary filtering of the hair bundle's deflection (experimental data and continuous hair bundle model) does not significantly change the spectral statistics of the spontaneous as well as the periodically driven hair bundle motion. We map the continuous hair bundle model to the FitzHugh-Nagumo model of neural excitability and discuss the bifurcations between different regimes of the system in terms of the latter model. Linearizing the nullclines and assuming perfect time-scale separation between the variables we can map the FitzHugh-Nagumo system to a simple two-state model in which each of the states corresponds to the two possible values of the binary-filtered hair bundle trajectory. For the two-state model, analytical expressions for the power spectrum and the susceptibility can be calculated [Lindner and Schimansky-Geier, Phys. Rev. E 61, 6103 (2000)] and show the same features as seen in the experimental data as well as in simulations of the continuous hair bundle model.

  5. Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach.

    PubMed

    Zunino, L; Soriano, M C; Rosso, O A

    2012-10-01

    In this paper we introduce a multiscale symbolic information-theory approach for discriminating nonlinear deterministic and stochastic dynamics from time series associated with complex systems. More precisely, we show that the multiscale complexity-entropy causality plane is a useful representation space to identify the range of scales at which deterministic or noisy behaviors dominate the system's dynamics. Numerical simulations obtained from the well-known and widely used Mackey-Glass oscillator operating in a high-dimensional chaotic regime were used as test beds. The effect of an increased amount of observational white noise was carefully examined. The results obtained were contrasted with those derived from correlated stochastic processes and continuous stochastic limit cycles. Finally, several experimental and natural time series were analyzed in order to show the applicability of this scale-dependent symbolic approach in practical situations.

  6. Network capacity with probit-based stochastic user equilibrium problem

    PubMed Central

    Lu, Lili; Wang, Jian; Zheng, Pengjun; Wang, Wei

    2017-01-01

    Among the different stochastic user equilibrium (SUE) traffic assignment models, the Logit-based SUE model is the most extensively investigated. It is routinely formulated as the lower-level problem describing drivers' route choice in bi-level problems such as network design and toll optimization. The Probit-based SUE model has received far less attention, although its assignment results are more consistent with drivers' behavior. It is well known that, owing to the independence of irrelevant alternatives (IIA) assumption, the Logit-based SUE model cannot deal with the route overlapping problem and cannot account for perception variance with respect to trips. This paper aims to explore network capacity under the Probit-based traffic assignment model and to investigate how it differs from the Logit-based SUE case. Network capacity is formulated as a bi-level program in which the upper level maximizes network capacity by optimizing input parameters (O-D multipliers and signal splits), while the lower level is the Logit-based or Probit-based SUE problem modeling drivers' route choice. A heuristic algorithm based on sensitivity analysis of the SUE problem is presented in detail to solve the proposed bi-level program. Three numerical example networks are used to discuss the differences in network capacity between the Logit-based and Probit-based SUE constraints. This study finds that, although network capacity differs between the Probit-based and Logit-based SUE constraints, the variation of network capacity with increasing levels of traveler information follows the same pattern for general networks under the two types of SUE problems, and with a certain level of traveler information both can achieve the same maximum network capacity. PMID:28178284

  7. Stochastic Model-Based Control of Multi-Robot Systems

    DTIC Science & Technology

    2009-06-30

    …dual [6]. For example, we use optimal control theory to derive a linear quadratic regulator (LQR), and in the same theoretical framework we can derive… [abstract fragment; the remainder of this record is unrecoverable report-form boilerplate. Record metadata: Final Technical Report, 23-09-2008 to 22-06-2009, Stochastic Model-Based Control of Multi-Robot Systems, grant W911NF-08-1-0503, Dejan Milutinovic and Devendra P.]

  8. Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2014-10-07

    We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds and using them to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and only for a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
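
    A minimal Python sketch in the spirit of rejection-based SSA (propensity upper bounds over a state interval plus thinning acceptance); the birth-death model and the bound strategy are illustrative assumptions, and the paper's delay handling is omitted:

```python
import math, random

# Thinning-based rejection SSA for a birth-death process: candidates fire at
# the upper-bound rate and are accepted with probability a_true/a_bound,
# which is exact; bounds are refreshed only when the state leaves [lo, hi].
random.seed(0)
k_birth, k_death = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0

def bounds(x, delta=0.2):
    lo, hi = math.floor(x*(1 - delta)), math.ceil(x*(1 + delta)) + 1
    return lo, hi, [k_birth, k_death*hi]   # bounds valid while lo <= x <= hi

lo, hi, a_hi = bounds(x)
while t < t_end:
    a0_hi = sum(a_hi)
    t += random.expovariate(a0_hi)         # candidate firing time
    r = random.random() * a0_hi            # pick candidate reaction
    j = 0 if r < a_hi[0] else 1
    a_true = k_birth if j == 0 else k_death*x
    if random.random() < a_true / a_hi[j]: # thinning acceptance test
        x += 1 if j == 0 else -1
    if not (lo <= x <= hi):                # infrequent bound refresh
        lo, hi, a_hi = bounds(x)
print("t =", round(t, 1), " x =", x, " (stationary mean ~ k_birth/k_death = 100)")
```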

  9. Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches

    NASA Astrophysics Data System (ADS)

    Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.

    2009-12-01

    We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method for long-period (LP) events and tremor based on their amplitudes. The deterministic approach of the waveform inversion is useful to constrain the source mechanism and location, but is basically only applicable to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism better explained the VLP signals than an isotropic mechanism. These results indicated that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may be categorized as a stochastic approach based on the nature of scattering waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis

  10. Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging.

    PubMed

    Sansalone, Vittorio; Gagliardi, Davide; Desceliers, Christophe; Bousson, Valérie; Laredo, Jean-Denis; Peyrin, Françoise; Haïat, Guillaume; Naili, Salah

    2016-02-01

    Accurate and reliable assessment of bone quality requires predictive methods which could probe bone microstructure and provide information on bone mechanical properties. Multiscale modelling and simulation represent a fast and powerful way to predict bone mechanical properties based on experimental information on bone microstructure as obtained through X-ray-based methods. However, technical limitations of experimental devices used to inspect bone microstructure may produce blurry data, especially in in vivo conditions. Uncertainties affecting the experimental data (input) may question the reliability of the results predicted by the model (output). Since input data are uncertain, deterministic approaches are limited and new modelling paradigms are required. In this paper, a novel stochastic multiscale model is developed to estimate the elastic properties of bone while taking into account uncertainties on bone composition. Effective elastic properties of cortical bone tissue were computed using a multiscale model based on continuum micromechanics. Volume fractions of bone components (collagen, mineral, and water) were considered as random variables whose probabilistic description was built using the maximum entropy principle. The relevance of this approach was proved by analysing a human bone sample taken from the inferior femoral neck. The sample was imaged using synchrotron radiation micro-computed tomography. 3-D distributions of Haversian porosity and tissue mineral density extracted from these images supplied the experimental information needed to build the stochastic models of the volume fractions. Thus, the stochastic multiscale model provided reliable statistical information (such as mean values and confidence intervals) on bone elastic properties at the tissue scale. Moreover, the existence of a simpler "nominal model", accounting for the main features of the stochastic model, was investigated. It was shown that such a model does exist, and its relevance

  11. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Planning of urban water resources is thus in need of not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change.

  12. A stochastic process approach of the drake equation parameters

    NASA Astrophysics Data System (ADS)

    Glade, Nicolas; Ballet, Pascal; Bastien, Olivier

    2012-04-01

    The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step to quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate for the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that will be able to provide both a temporal structure to the Drake equation (i.e. introduce time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
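
    A toy Python sketch of a time-dependent N(t) in this spirit: civilization emergence as a Poisson process and lifetimes as exponential random variables, making N(t) an immigration-death process. The rates are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Immigration-death model: births ~ Poisson(lam per year), lifetimes ~
# Exp(mean tau years); N(t) counts civilizations alive at time t, giving
# both a mean and a standard error rather than a single number.
rng = np.random.default_rng(0)
lam, tau, t_end, runs = 0.01, 500.0, 20_000.0, 200

def n_at(t_end):
    births = np.cumsum(rng.exponential(1/lam, int(2*lam*t_end) + 50))
    births = births[births < t_end]
    deaths = births + rng.exponential(tau, births.size)
    return np.sum(deaths > t_end)          # civilizations still "alive"

samples = [n_at(t_end) for _ in range(runs)]
print("mean N:", np.mean(samples), "+/-", np.std(samples))
print("stationary theory lam*tau =", lam*tau)   # M/M/inf mean
```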

  13. Parameter-induced stochastic resonance based on spectral entropy and its application to weak signal detection

    SciTech Connect

    Zhang, Jinjing; Zhang, Tao

    2015-02-15

    The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing employing the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost reduces from O(N^2) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal made by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm.
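
    An illustrative Python demo of parameter-induced stochastic resonance (not the PSRSE algorithm itself): a weak sinusoid buried in strong noise drives a bistable system, and sweeping a system parameter changes the strength of the spectral line at the signal frequency. All parameter values are assumptions:

```python
import numpy as np

# Euler-Maruyama integration of dx/dt = a*x - b*x**3 + s(t) + sqrt(2D)*xi(t),
# followed by a power readout at the signal frequency f0.
rng = np.random.default_rng(0)
fs, T, f0 = 200.0, 200.0, 0.1            # sample rate [Hz], duration [s], f0 [Hz]
t = np.arange(0, T, 1/fs)
s = 0.1*np.sin(2*np.pi*f0*t)             # weak periodic input

def bistable_output(a, b=1.0, D=0.5):
    dt, x = 1/fs, 0.0
    out = np.empty(t.size)
    for i in range(t.size):
        x += dt*(a*x - b*x**3 + s[i]) + np.sqrt(2*D*dt)*rng.normal()
        out[i] = x
    return out

def line_power(x):
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(x.size, 1/fs)
    return np.abs(X[np.argmin(np.abs(f - f0))])**2

for a in (0.2, 1.0, 5.0):                # parameter-induced tuning
    print(f"a = {a}: power at f0 ~ {line_power(bistable_output(a)):.3g}")
```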

  14. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed considering some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, and especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  15. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    NASA Astrophysics Data System (ADS)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    The fire data come from the Portuguese Institute for the Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. The first is the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and of the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. The second, non-linear method is the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. The results obtained by applying both methodologies for wildfire susceptibility mapping, as well as the wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
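
    A minimal Python sketch of the stochastic (Random Forest) side of this comparison; the synthetic predictors stand in for real covariates such as slope or land cover, and all names are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Fit a Random Forest to fire presence/absence and read out variable
# importances, the quantities used above to rank conditioning factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))            # e.g. slope, aspect, fuel, dist_road
y = (X[:, 0] + 0.5*X[:, 2] + rng.normal(0, 1, 2000)) > 0.5   # synthetic fires

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy        :", rf.oob_score_)
print("variable importances:", rf.feature_importances_)
# A susceptibility map is then rf.predict_proba(grid_cells)[:, 1] over the area.
```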

  16. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
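
    A Python sketch of the underlying Kirman herding model with a constant time scale (i.e., without the paper's variable event-time extension); the idiosyncratic rate sigma and herding rate h are assumptions:

```python
import numpy as np

# Kirman's ant/herding model: k of N agents hold opinion A; switching rates
# combine an idiosyncratic term (sigma) and a herding term (h). We simulate
# the embedded jump chain (holding times are ignored for this illustration).
rng = np.random.default_rng(0)
N, sigma, h, steps = 100, 0.2, 1.0, 200_000
k, path = N // 2, []
for _ in range(steps):
    up = (N - k) * (sigma + h * k / N)        # rate of k -> k + 1
    dn = k * (sigma + h * (N - k) / N)        # rate of k -> k - 1
    k += 1 if rng.random() < up / (up + dn) else -1
    path.append(k / N)
x = np.array(path)
print("mean fraction:", x.mean(), " std:", x.std())  # bimodal when sigma/h is small
```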

  17. Stochastic analysis of bounded unsaturated flow in heterogeneous aquifers: Spectral/perturbation approach

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2009-01-01

    This paper describes a stochastic analysis of steady state flow in a bounded, partially saturated heterogeneous porous medium subject to distributed infiltration. The presence of boundary conditions leads to non-uniformity in the mean unsaturated flow, which in turn causes non-stationarity in the statistics of velocity fields. Motivated by this, our aim is to investigate the impact of boundary conditions on the behavior of field-scale unsaturated flow. Within the framework of spectral theory based on Fourier-Stieltjes representations for the perturbed quantities, the general expressions for the pressure head variance, variance of log unsaturated hydraulic conductivity and variance of the specific discharge are presented in the wave number domain. Closed-form expressions are developed for the simplified case of statistical isotropy of the log hydraulic conductivity field with a constant soil pore-size distribution parameter. These expressions allow us to investigate the impact of the boundary conditions, namely the vertical infiltration from the soil surface and a prescribed pressure head at a certain depth below the soil surface. It is found that the boundary conditions are critical in predicting uncertainty in bounded unsaturated flow. Our analytical expression for the pressure head variance in a one-dimensional, heterogeneous flow domain, developed using a nonstationary spectral representation approach [Li S-G, McLaughlin D. A nonstationary spectral method for solving stochastic groundwater problems: unconditional analysis. Water Resour Res 1991;27(7):1589-605; Li S-G, McLaughlin D. Using the nonstationary spectral method to analyze flow through heterogeneous trending media. Water Resour Res 1995; 31(3):541-51], is precisely equivalent to the published result of Lu et al. [Lu Z, Zhang D. Analytical solutions to steady state unsaturated flow in layered, randomly heterogeneous soils via Kirchhoff transformation. Adv Water Resour 2004;27:775-84].

  18. Image-based histologic grade estimation using stochastic geometry analysis

    NASA Astrophysics Data System (ADS)

    Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.

    2011-03-01

    Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.

  19. Linking agent-based models and stochastic models of financial markets

    PubMed Central

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene

    2012-01-01

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086

  1. Stochastic structural and reliability based optimization of tuned mass damper

    NASA Astrophysics Data System (ADS)

    Mrabet, E.; Guedri, M.; Ichchou, M. N.; Ghanmi, S.

    2015-08-01

    The purpose of the current work is to present and discuss a technique for optimizing the parameters of a vibration absorber in the presence of uncertain bounded structural parameters. The technique used in the optimization is an interval extension based on a Taylor expansion of the objective function. The technique permits the transformation of the problem, initially non-deterministic, into two independent deterministic sub-problems. Two optimization strategies are considered: Stochastic Structural Optimization (SSO) and Reliability Based Optimization (RBO). It has been demonstrated on two different structures that the technique is valid for the SSO problem, even for high levels of uncertainty, and that it is less suitable for the RBO problem, especially when considering high levels of uncertainty.

  2. HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Brownston, Lee

    2012-01-01

    Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model-based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/subsystem models) and linking them through shared variables/parameters.

  3. An LMI approach to discrete-time observer design with stochastic resilience

    NASA Astrophysics Data System (ADS)

    Yaz, Edwin Engin; Jeong, Chung Seop; Yaz, Yvonne Ilke

    2006-04-01

    Much of the recent work on robust control or observer design has focused on preservation of the stability of the controlled system or the convergence of the observer in the presence of parameter perturbations in the plant or the measurement model. The present work addresses the important problem of stochastic resilience or non-fragility of a discrete-time Luenberger observer, which is the maintenance of convergence and/or performance when the observer is erroneously implemented, possibly due to computational errors (e.g., round-off errors in digital implementation) or sensor errors. A common linear matrix inequality framework is presented to address the stochastic resilient design problem for various performance criteria in the implementation, based on the knowledge of an upper bound on the variance of the random error in the observer gain. Present results are compared to earlier designs for stochastic robustness. Illustrative examples are given to complement the theoretical results.
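
    A hedged Python sketch of the nominal (non-resilient) part of such a design: a discrete-time Luenberger observer gain obtained from a standard stability LMI via cvxpy. The plant matrices are assumptions, and the paper's stochastic-resilience bound on the gain error is omitted:

```python
import numpy as np
import cvxpy as cp

# With the substitution Y = P L, Schur-complement stability of (A - L C)
# becomes the linear matrix inequality M >> 0 below.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))
M = cp.bmat([[P, (P @ A - Y @ C).T],
             [P @ A - Y @ C, P]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> np.eye(n), M >> 1e-6 * np.eye(2 * n)])
prob.solve()
L = np.linalg.solve(P.value, Y.value)      # observer gain L = P^{-1} Y
print("gain L       :", L.ravel())
print("|eig(A - LC)|:", np.abs(np.linalg.eigvals(A - L @ C)))  # all < 1
```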

  4. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    PubMed Central

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided. PMID:22412334

  5. Bed Capacity Planning Using Stochastic Simulation Approach in Cardiac-surgery Department of Teaching Hospitals, Tehran, Iran

    PubMed Central

    TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza

    2016-01-01

    Background: To determine the number of hospital beds required in cardiac surgery departments using a stochastic simulation approach. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: first, data collection from 649 patients in the cardiac surgery departments of two large teaching hospitals (in Tehran, Iran); second, statistical analysis and formulation of a multivariate linear regression model to determine the factors that affect patients' length of stay; third, development of a stochastic simulation system (from admission to discharge) based on key parameters to estimate the required bed capacity. Results: The current cardiac surgery department, with 33 beds, can admit all patients on 90.7% of days (4535 d); more than 33 beds are required on only 9.3% of days (the efficient cut-off point). According to the simulation, the studied cardiac surgery department will require 41–52 beds to admit all patients over the next 12 years. Finally, a one-day reduction in length of stay decreases the number of beds needed by two annually. Conclusion: Variation in length of stay and its affecting factors influence the number of required beds. Statistical and stochastic simulation models are applicable and useful methods for estimating and managing hospital beds based on key hospital parameters. PMID:27957466
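
    A Monte Carlo Python sketch in the spirit of this study: daily admissions as a Poisson process, lognormal lengths of stay, and required capacity read off as a high percentile of daily occupancy. All rates are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

# Simulate daily bed occupancy from random admissions and lengths of stay,
# then report the bed counts that cover given fractions of days.
rng = np.random.default_rng(0)
days, lam = 5000, 4.0                      # horizon, mean admissions per day
occupancy = np.zeros(days + 60)            # buffer for stays past the horizon
for d in range(days):
    for _ in range(rng.poisson(lam)):
        los = max(1, int(round(rng.lognormal(mean=1.9, sigma=0.4))))
        occupancy[d:d + los] += 1          # this patient occupies one bed
occ = occupancy[:days]
print("mean beds in use         :", occ.mean())
print("beds covering 90.7% days :", int(np.percentile(occ, 90.7)))
print("beds covering 99% of days:", int(np.percentile(occ, 99)))
```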

  7. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.

  8. The sequence relay selection strategy based on stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection does not consider statistical channel state information or the selection cost. This shortcoming limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is also analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
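
    A simplified Python sketch of threshold determination by backward stochastic dynamic programming for sequential relay probing; the i.i.d. Exp(1) channel-gain model and the probing cost are assumptions standing in for the paper's setup:

```python
import math

# Relays are probed one by one at cost c each; stopping at a relay yields its
# gain. Backward induction: V_{K+1} = 0 and V_k = -c + E[max(g, V_{k+1})],
# so the stage-k acceptance threshold is simply V_{k+1}.
K, c = 10, 0.05                            # relays in the sequence, probe cost

def e_max_exp(v):
    """E[max(g, v)] for g ~ Exp(1); closed form v + exp(-v)."""
    return v + math.exp(-v)

V = 0.0                                    # value after the last relay
thresholds = []
for k in range(K, 0, -1):                  # backward induction
    thresholds.append(V)                   # accept relay k iff its gain >= V
    V = -c + e_max_exp(V)
thresholds.reverse()
print("value of the policy        :", round(V, 3))
print("accept thresholds per stage:", [round(th, 3) for th in thresholds])
```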

  9. On the Performance of Stochastic Model-Based Image Segmentation

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1989-11-01

    A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a Finite Normal Mixture. The technique utilizes the information theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over- or under-estimate the number of region images? What is the probability of error in the segmentation produced by this technique? This paper addresses these two problems and is a continuation of [1,2].
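
    A minimal Python sketch of the core machinery described above: EM estimation of a finite normal mixture followed by Bayesian (MAP) classification. The two-region synthetic "image" and the initial guesses are illustrative assumptions:

```python
import numpy as np

# EM for a two-component normal mixture over pixel intensities, then MAP
# labeling of each pixel; a stand-in for the segmentation step above.
rng = np.random.default_rng(0)
pix = np.concatenate([rng.normal(50, 8, 6000), rng.normal(120, 15, 4000)])

K = 2
w, mu, sd = np.full(K, 1/K), np.array([30.0, 150.0]), np.array([20.0, 20.0])
for _ in range(100):
    # E-step: responsibilities of each component for each pixel
    d = np.exp(-0.5*((pix[:, None] - mu)/sd)**2)/(sd*np.sqrt(2*np.pi))
    r = w*d
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = r.sum(axis=0)
    w = nk/pix.size
    mu = (r*pix[:, None]).sum(axis=0)/nk
    sd = np.sqrt((r*(pix[:, None] - mu)**2).sum(axis=0)/nk)

d = np.exp(-0.5*((pix[:, None] - mu)/sd)**2)/(sd*np.sqrt(2*np.pi))
labels = np.argmax(w*d, axis=1)            # Bayesian (MAP) segmentation
print("weights:", w.round(3), "means:", mu.round(1), "sds:", sd.round(1))
print("pixels per region:", np.bincount(labels))
```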

  10. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  11. A Stochastic Approach For Extending The Dimensionality Of Observed Datasets

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas

    2002-01-01

    This paper addresses the problem that in many cases, observations cannot provide complete fields of the measured quantities, because they yield data only along a single cross-section through the examined fields. The paper describes a new Fourier-adjustment technique that allows existing fractal models to build realistic surroundings to the measured cross-sections. This new approach allows more representative calculations of cloud radiative processes and may be used in other areas as well.

  12. Advanced Computational Approaches for Characterizing Stochastic Cellular Responses to Low Dose, Low Dose Rate Exposures

    SciTech Connect

    Scott, Bobby, R., Ph.D.

    2003-06-27

    OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on

  13. A wavelet-based computational method for solving stochastic Itô–Volterra integral equations

    SciTech Connect

    Mohammadi, Fakhrodin

    2015-10-01

    This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix are applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method some numerical examples are included.
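
    As a point of reference for solvers like the one above, a hedged Python sketch of a baseline Euler-type discretization of a stochastic Itô–Volterra integral equation; the kernels k1, k2 and the initial value are illustrative assumptions, and this is not the paper's wavelet method:

```python
import numpy as np

# Discretize X(t) = X0 + int_0^t k1(t,s) X(s) ds + int_0^t k2(t,s) X(s) dW(s)
# on a uniform grid, approximating both integrals with left-point sums.
rng = np.random.default_rng(0)
n, T, x0 = 1000, 1.0, 1.0
dt = T/n
t = np.linspace(0, T, n + 1)
dW = rng.normal(0, np.sqrt(dt), n)         # Brownian increments

k1 = lambda ti, s: 0.5*np.cos(ti - s)      # drift kernel (assumed)
k2 = lambda ti, s: 0.2*np.ones_like(s)     # diffusion kernel (assumed)

X = np.empty(n + 1)
X[0] = x0
for i in range(1, n + 1):
    s = t[:i]                              # past grid points
    X[i] = x0 + np.sum(k1(t[i], s)*X[:i])*dt + np.sum(k2(t[i], s)*X[:i]*dW[:i])
print("X(T) along one sample path:", X[-1])
```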

  14. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.

  15. Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.

    PubMed

    Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A

    2007-12-01

    By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifiers gaps are found in the transition between the continuous processes and their associated noises.
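
    A Python sketch of the Bandt-Pompe symbolization underlying such quantifiers: the normalized permutation entropy of a time series. The embedding dimension and delay are illustrative assumptions:

```python
import math
from itertools import permutations
import numpy as np

# Count ordinal patterns of length m in the series, then compute the
# Shannon entropy of the pattern distribution, normalized by log(m!).
def permutation_entropy(x, m=4, delay=1):
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1)*delay):
        window = x[i:i + m*delay:delay]
        patterns[tuple(np.argsort(window))] += 1
    counts = np.array([c for c in patterns.values() if c > 0])
    p = counts / counts.sum()
    return -(p*np.log(p)).sum() / math.log(math.factorial(m))  # in [0, 1]

rng = np.random.default_rng(0)
print("white noise:", permutation_entropy(rng.normal(size=5000)))        # ~ 1
print("pure sine  :", permutation_entropy(np.sin(0.1*np.arange(5000))))  # << 1
```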

  16. A stochastic damage model for the rupture prediction of a multi-phase solid. I - Parametric studies. II - Statistical approach

    NASA Technical Reports Server (NTRS)

    Lua, Yuan J.; Liu, Wing K.; Belytschko, Ted

    1992-01-01

    A stochastic damage model for predicting the rupture of a brittle multiphase material is developed, based on the microcrack-macrocrack interaction. The model, which incorporates uncertainties in locations, orientations, and numbers of microcracks, characterizes damage by microcracking and fracture by macrocracking. A parametric study is carried out to investigate the change of the stress intensity at the macrocrack tip by the configuration of microcracks. The inherent statistical distribution of the fracture toughness arising from the intrinsic random nature of microcracks is explored using a statistical approach. For this purpose, a computer simulation model is introduced, which incorporates a statistical characterization of geometrical parameters of a random microcrack array.

  17. Stochastic Approach to Phonon-Assisted Optical Absorption

    NASA Astrophysics Data System (ADS)

    Zacharias, Marios; Patrick, Christopher E.; Giustino, Feliciano

    2015-10-01

    We develop a first-principles theory of phonon-assisted optical absorption in semiconductors and insulators which incorporates the temperature dependence of the electronic structure. We show that the Hall-Bardeen-Blatt theory of indirect optical absorption and the Allen-Heine theory of temperature-dependent band structures can be derived from the present formalism by retaining only one-phonon processes. We demonstrate this method by calculating the optical absorption coefficient of silicon using an importance sampling Monte Carlo scheme, and we obtain temperature-dependent line shapes and band gaps in good agreement with experiment. The present approach opens the way to predictive calculations of the optical properties of solids at finite temperature.

  18. Deterministic and stochastic approaches in the clinical application of mesenchymal stromal cells (MSCs)

    PubMed Central

    Pacini, Simone

    2014-01-01

    Mesenchymal stromal cells (MSCs) have enormous intrinsic clinical value due to their multi-lineage differentiation capacity, support of hemopoiesis, immunoregulation and growth factor/cytokine secretion. MSCs have thus been the object of extensive research for decades. After completion of many pre-clinical and clinical trials, MSC-based therapy is now facing a challenging phase. Several clinical trials have reported moderate, non-durable benefits, which caused initial enthusiasm to wane, and indicated an urgent need to optimize the efficacy of the therapeutic platforms underlying MSC-based treatment. Recent investigations suggest the presence of multiple in vivo MSC ancestors in a wide range of tissues, which contribute to the heterogeneity of the starting material for the expansion of MSCs. This variability in the MSC culture-initiating cell population, together with the different types of enrichment/isolation and cultivation protocols applied, is hampering progress in the definition of MSC-based therapies. International regulatory statements require a precise risk/benefit analysis, ensuring the safety and efficacy of treatments. GMP validation allows for quality certification, but the prediction of a clinical outcome after MSC-based therapy is correlated not only to the possible morbidity derived from the cell production process, but also to the biology of the MSCs themselves, which is highly sensitive to unpredictable fluctuations in isolation and culture conditions. Risk exposure and efficacy of MSC-based therapies should be evaluated by pre-clinical studies, but the batch-to-batch variability of the final medicinal product could significantly limit the predictability of these studies. The future success of MSC-based therapies could lie not only in rational optimization of therapeutic strategies, but also in a stochastic approach during the assessment of benefit and risk factors. PMID:25364757

  19. Efficient entropy estimation based on doubly stochastic models for quantized wavelet image data.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-04-01

    Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several of state-of-the-art coders.
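
    A small Python sketch of the estimation idea described above: gather (step size, rate) data once, fit a low-degree polynomial in the logarithm of the step size, then evaluate estimates instantly. The synthetic rate curve is an illustrative assumption:

```python
import numpy as np

# Fit rate as a polynomial in log(step size), then read off an instant
# estimate at any new step size by evaluating the polynomial.
q = np.logspace(0, 6, 12, base=2.0)            # candidate quantizer step sizes
rate = 6.0*np.exp(-0.45*np.log(q)) + 0.05      # stand-in measured R-Q data

coeffs = np.polyfit(np.log(q), rate, deg=3)    # one-time data-gathering step
est = np.polyval(coeffs, np.log(37.0))         # instant estimate at q = 37
print("estimated rate at step 37:", round(float(est), 4))
```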

  20. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models

    PubMed Central

    Zimmer, Christoph; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics. PMID:28095403

  1. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models.

    PubMed

    Zimmer, Christoph; Yaesoubi, Reza; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics.
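
    The belief-update step at the heart of MSS is not spelled out in the abstract; the following is a minimal sketch, under an assumed Gaussian (linear-noise-style) belief, of how a new weekly case count could update an estimate of the unobserved number of infectious individuals. The reporting fraction rho, the observation variance, and all numbers are hypothetical.

        def update_belief(mean_i, var_i, y_obs, rho=0.6, obs_var=50.0):
            """Scalar Kalman-style update: the observed weekly count y_obs is an
            imperfect proxy (reporting fraction rho) of the infectious count I."""
            innovation = y_obs - rho * mean_i          # observed minus expected
            s = rho ** 2 * var_i + obs_var             # innovation variance
            gain = rho * var_i / s                     # Kalman gain
            mean_post = mean_i + gain * innovation
            var_post = (1.0 - gain * rho) * var_i
            return mean_post, var_post

        # Hypothetical prior from the linear noise approximation, then one update.
        print(update_belief(mean_i=120.0, var_i=400.0, y_obs=60))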

  2. Comparing stochastic differential equations and agent-based modelling and simulation for early-stage cancer.

    PubMed

    Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe

    2014-01-01

    There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of aggregates rather than individual behaviours, and the absence of individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than to provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm.
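
    As a concrete reference point for the comparison discussed above, the Gillespie algorithm itself fits in a few lines. This is a minimal sketch for a toy birth-death tumour-cell population; the rates are hypothetical and are not taken from the three case studies.

        import math
        import random

        def gillespie_birth_death(n0=50, birth=0.12, death=0.10, t_end=100.0):
            """Exact stochastic simulation of a birth-death population process."""
            t, n = 0.0, n0
            history = [(t, n)]
            while t < t_end and n > 0:
                rate_birth, rate_death = birth * n, death * n
                total = rate_birth + rate_death
                t += -math.log(1.0 - random.random()) / total  # exponential wait
                n += 1 if random.random() < rate_birth / total else -1
                history.append((t, n))
            return history

        print(gillespie_birth_death()[-1])  # final (time, population) pair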

  3. Runoff modelling using radar data and flow measurements in a stochastic state space approach.

    PubMed

    Krämer, S; Grum, M; Verworn, H R; Redder, A

    2005-01-01

    In urban drainage the estimation of runoff with the help of models is a complex task. This is in part due to the fact that rainfall, the most important input to urban drainage modelling, is highly uncertain. Added to the uncertainty of rainfall is the complexity of performing accurate flow measurements. In terms of deterministic modelling techniques these are needed for calibration and evaluation of the applied model. Therefore, the uncertainties of rainfall and flow measurements have a severe impact on the model parameters and results. To overcome these problems a new methodology has been developed which is based on simple rain plane and runoff models that are incorporated into a stochastic state space model approach. The state estimation is done by using the extended Kalman filter in combination with a maximum likelihood criterion and an off-line optimization routine. This paper presents the results of this new methodology with respect to the combined consideration of uncertainties in distributed rainfall derived from radar data and uncertainties in measured flows in an urban catchment within the Emscher river basin, Germany.
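
    The filtering step of the methodology can be illustrated with a scalar stand-in. The sketch below runs one extended Kalman filter predict/update cycle for a hypothetical linear-reservoir runoff state; the reservoir model and all constants are placeholders for the paper's rain plane and runoff models.

        # Hypothetical linear reservoir: storage s (mm), outflow q = s / k.
        K_RES, DT = 2.0, 0.25  # reservoir constant (h) and time step (h)

        def ekf_step(s, p, rain, q_obs, q_noise=0.5, r_noise=0.2):
            """One extended Kalman filter predict/update cycle."""
            # Predict: ds/dt = rain - s/k, discretized with forward Euler.
            s_pred = s + DT * (rain - s / K_RES)
            f = 1.0 - DT / K_RES                    # state-transition Jacobian
            p_pred = f * p * f + q_noise
            # Update with the observed flow q_obs = s/k + measurement noise.
            h = 1.0 / K_RES
            gain = p_pred * h / (h * p_pred * h + r_noise)
            s_new = s_pred + gain * (q_obs - s_pred / K_RES)
            p_new = (1.0 - gain * h) * p_pred
            return s_new, p_new

        print(ekf_step(s=4.0, p=1.0, rain=3.0, q_obs=2.4))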

  4. Profit efficiency of physician practices: a stochastic frontier approach using panel data.

    PubMed

    Kwietniewski, Lukas; Schreyögg, Jonas

    2016-08-30

    While determinants of efficiency have been the subject of a large number of studies in the inpatient sector, relatively little is known about factors influencing the efficiency of physician practices in the outpatient sector. Our study is the first to estimate physician practice profit efficiency and its determinants. We base our analysis on a unique panel data set of 4964 physician practices for the years 2008 to 2010. The data contain information on practice costs and revenues, services provided, as well as physician and practice characteristics. We specify the profit function of the physician practice as a translog functional form and estimate the stochastic frontier using the comprehensive one-step panel data approach of Battese and Coelli (1995). For the estimation of the profit function, we regressed yearly profit on several inputs, outputs and input/output price relationships, while controlling for a range of variables such as patients' case-mix or the share of patients covered by statutory health insurance. We find that participation in disease management programs and the degree of physician practice specialization are associated with significantly higher profit efficiency. In addition, our analyses show that group practices perform significantly better than single practices.

  5. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    SciTech Connect

    Kolevatov, R. S.; Boreskov, K. G.

    2013-04-15

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections, taking into account all Pomeron loops, are obtained.

  6. New Methods of Three-Dimensional Images Recognition Based on Stochastic Geometry and Functional Analysis

    NASA Astrophysics Data System (ADS)

    Fedotov, N. G.; Moiseev, A. V.; Syemov, A. A.; Lizunkov, V. G.; Kindaev, A. Y.

    2017-02-01

    A new approach to 3D object recognition based on modern methods of stochastic geometry and functional analysis is proposed in the paper, together with a detailed mathematical description of the method developed from this approach. The 3D trace transform allows the creation of an invariant description of spatial objects that is more resistant to distortion and coordinate noise than one obtained through an object normalization procedure. The ability to control the properties of the developed features significantly increases the descriptive power of the 3D trace transform, which is a clear advantage. The proposed theory and mathematical model are supported by a variety of worked theoretical examples of hypertriplet features with the stated properties. The paper considers in detail scanning techniques for the hypertrace transform and its mathematical model, as well as approaches to constructing and distinguishing informative features.

  7. On a stochastic approach to a code performance estimation

    SciTech Connect

    Gorshenin, Andrey K.; Frenkel, Sergey L.

    2016-06-08

    The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from the randomness of variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, using the function surf to plot 3D surfaces. The idea can be used for estimating program efficiency.

  8. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from the randomness of variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, using the function surf to plot 3D surfaces. The idea can be used for estimating program efficiency.
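
    A minimal sketch of the modelling idea in this record: fit a finite normal mixture to the differences between repeated profiler timings of the same code. The data here are synthetic, and scikit-learn's GaussianMixture stands in for the authors' method of moving separation of mixtures.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic timing differences: a narrow "clean" component plus a wider
        # component mimicking cache misses and other context-dependent overhead.
        diffs = np.concatenate([rng.normal(0.0, 0.5, 800),
                                rng.normal(2.0, 1.5, 200)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(diffs)
        print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()))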

  9. A discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Lu, F.; Chorin, A. J.

    2015-12-01

    Prediction for a high-dimensional nonlinear dynamic system often encounters difficulties: the system may be too complicated to solve in full, and initial data may be missing because only a small subset of variables is observed. However, only a small subset of the variables may be of interest and need to be predicted. We present a solution by developing a discrete stochastic reduced system for the variables of interest, in which one formulates discrete solvable approximate equations for these variables and uses data and statistical methods to account for the impact of the other variables. The stochastic reduced system can capture the long-time statistical properties of the full system as well as the short-time dynamics, and hence make reliable predictions. A key ingredient in the construction of the stochastic reduced system is a discrete-time stochastic parametrization based on the NARMAX (nonlinear autoregression moving average with exogenous input) model. As an example, this construction is applied to the Lorenz 96 system.
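
    A minimal sketch of a discrete-time parametrization in this spirit, under strong simplifying assumptions: the next value of a resolved scalar variable is regressed on lagged values and one nonlinear term, and the residual standard deviation gives the amplitude of the stochastic forcing. (A full NARMAX fit would also include moving-average and exogenous terms; the series here is a synthetic placeholder, not Lorenz 96 output.)

        import numpy as np

        def fit_reduced_model(x):
            """Least-squares fit of x[t+1] ~ a*x[t] + b*x[t-1] + c*x[t]**2."""
            design = np.column_stack([x[1:-1], x[:-2], x[1:-1] ** 2])
            target = x[2:]
            coef, *_ = np.linalg.lstsq(design, target, rcond=None)
            resid = target - design @ coef
            return coef, resid.std()  # parameters and noise amplitude

        rng = np.random.default_rng(1)
        x = rng.standard_normal(1000).cumsum() * 0.1  # placeholder observed series
        print(fit_reduced_model(x))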

  10. Stochastic dynamics of electric dipole in external electric fields: A perturbed nonlinear pendulum approach

    NASA Astrophysics Data System (ADS)

    Kapranov, Sergey V.; Kouzaev, Guennadi A.

    2013-06-01

    The motion of a dipole in external electric fields is considered in the framework of nonlinear pendulum dynamics. A stochastic layer is formed near the separatrix of the dipole pendulum in a restoring static electric field under the periodic perturbation by plane-polarized electric fields. The width of the stochastic layer depends on the direction of the forcing field variation, and this width can be evaluated as a function of perturbation frequency, amplitude, and duration. A numerical simulation of the approximate stochastic layer width of a perturbed pendulum yields a multi-peak frequency spectrum. It is described well enough at high perturbation amplitudes by an analytical estimation based on the separatrix map with an introduced expression of the most effective perturbation phase. The difference in the fractal dimensions of the phase spaces calculated geometrically and using the time-delay reconstruction is attributed to the predominant development of periodic and chaotic orbits, respectively. The correlation of the stochastic layer width with the phase space fractal dimensions is discussed.

  11. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in

  12. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.

  13. Stochastic switching in slow-fast systems: a large-fluctuation approach.

    PubMed

    Heckman, Christoffer R; Schwartz, Ira B

    2014-02-01

    In this paper we develop a perturbation method to predict the rate of occurrence of rare events for singularly perturbed stochastic systems using a probability density function approach. In contrast to a stochastic normal form approach, we model rare event occurrences due to large fluctuations probabilistically and employ a WKB ansatz to approximate their rate of occurrence. This results in the generation of a two-point boundary value problem that models the interaction of the state variables and the most likely noise force required to induce a rare event. The resulting equations of motion describing the phenomenon are shown to be singularly perturbed. Vastly different time scales among the variables are leveraged to reduce the dimension and predict the dynamics on the slow manifold in a deterministic setting. The resulting constrained equations of motion may be used to directly compute an exponent that determines the probability of rare events. To verify the theory, a stochastic damped Duffing oscillator with three equilibrium points (two sinks separated by a saddle) is analyzed. The predicted switching time between states is computed using the optimal path that resides in an expanded phase space. We show that the exponential scaling of the switching rate as a function of system parameters agrees well with numerical simulations. Moreover, the dynamics of the original system and the reduced system via center manifolds are shown to agree in an exponentially scaling sense.
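
    The verification system described above is easy to simulate directly. A minimal Euler-Maruyama sketch of a stochastically forced, damped Duffing oscillator with two sinks, recording the first switch between wells; all parameter values are hypothetical.

        import numpy as np

        def switching_time(noise=0.08, gamma=0.2, dt=1e-3, t_max=1e4, seed=0):
            """First passage from the sink at x = -1 to the sink at x = +1 for
            x'' = -gamma*x' + x - x**3 + noise (two wells separated by a saddle)."""
            rng = np.random.default_rng(seed)
            x, v, t = -1.0, 0.0, 0.0
            while t < t_max:
                v += (-gamma * v + x - x ** 3) * dt \
                     + noise * np.sqrt(dt) * rng.standard_normal()
                x += v * dt
                t += dt
                if x > 0.9:  # effectively captured by the other sink
                    return t
            return None      # no switch observed within t_max

        print(switching_time())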

  14. Relative frequencies of constrained events in stochastic processes: An analytical approach

    NASA Astrophysics Data System (ADS)

    Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.

  15. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at +20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
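
    For a first-order Gauss-Markov process, the correlation time can be read off the lag-1 autocorrelation of static data. A minimal sketch under the AR(1) assumption x[k] = exp(-dt/T) * x[k-1] + w[k]; the record here is synthetic, whereas the paper fits higher-order AR-based GM models to real ADIS16364 data.

        import numpy as np

        def gm_correlation_time(x, dt):
            """Estimate the Gauss-Markov correlation time from lag-1 correlation."""
            x = x - x.mean()
            phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # AR(1) coeff.
            return -dt / np.log(phi)

        rng = np.random.default_rng(2)
        dt, tau = 0.01, 0.5                 # sample interval and true corr. time
        phi = np.exp(-dt / tau)
        x = np.zeros(100_000)
        for k in range(1, x.size):          # synthetic static sensor record
            x[k] = phi * x[k - 1] + rng.standard_normal()
        print(gm_correlation_time(x, dt))   # should be close to 0.5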

  16. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  17. Multi-choice stochastic bi-level programming problem in cooperative nature via fuzzy programming approach

    NASA Astrophysics Data System (ADS)

    Maiti, Sumit Kumar; Roy, Sankar Kumar

    2016-05-01

    In this paper, a Multi-Choice Stochastic Bi-Level Programming Problem (MCSBLPP) is considered in which all constraint parameters follow normal distributions. The cost coefficients of the objective functions are of multi-choice type. At first, all the probabilistic constraints are transformed into deterministic constraints using a stochastic programming approach. Further, a general transformation technique with the help of binary variables is used to transform the multi-choice cost coefficients of the objective functions of the Decision Makers (DMs). The transformed problem is then treated as a deterministic multi-choice bi-level programming problem. Finally, a numerical example is presented to illustrate the usefulness of the approach.
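
    The chance-constrained transformation used in the first step can be made explicit. For a single constraint whose right-hand side is normally distributed (a sketch of the simplest case; the paper's setting, with all constraint parameters random, needs the more general form):

        \Pr\{\, a_i^{\top} x \le b_i \,\} \ge 1 - \alpha_i,
        \qquad b_i \sim \mathcal{N}(\mu_{b_i}, \sigma_{b_i}^{2})
        \quad\Longleftrightarrow\quad
        a_i^{\top} x \le \mu_{b_i} + \Phi^{-1}(\alpha_i)\,\sigma_{b_i},

    where \Phi is the standard normal cumulative distribution function; for \alpha_i < 0.5 the term \Phi^{-1}(\alpha_i) is negative, so the deterministic equivalent is tighter than the nominal constraint.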

  18. Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.

    2009-01-01

    Deterministic pathway models that describe the biochemical interactions of a group of related proteins (their complexes, activation through kinases, etc.) are often the basis for many systems biology models. Low-dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF beta-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low-dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb the pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.

  19. Non-perturbative approach for curvature perturbations in stochastic δ N formalism

    SciTech Connect

    Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro E-mail: kawasaki@icrr.u-tokyo.ac.jp

    2014-10-01

    In our previous paper [1], we proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in the inflationary universe using the stochastic approach. Since this algorithm does not need a perturbative expansion with respect to the inflaton fields on super-horizon scales, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or their non-Gaussianities are sizable, the perturbative expansion may break down, but our algorithm still enables us to calculate the curvature perturbations. In this paper we apply it to two well-known inflation models, chaotic and hybrid inflation. For hybrid inflation in particular, where the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.

  1. A non-stationary stochastic ensemble generator for radar rainfall fields based on the short-space Fourier transform

    NASA Astrophysics Data System (ADS)

    Nerini, Daniele; Besic, Nikola; Sideris, Ioannis; Germann, Urs; Foresti, Loris

    2017-06-01

    In this paper we present a non-stationary stochastic generator for radar rainfall fields based on the short-space Fourier transform (SSFT). The statistical properties of rainfall fields often exhibit significant spatial heterogeneity due to variability in the involved physical processes and influence of orographic forcing. The traditional approach to simulate stochastic rainfall fields based on the Fourier filtering of white noise is only able to reproduce the global power spectrum and spatial autocorrelation of the precipitation fields. Conceptually similar to wavelet analysis, the SSFT is a simple and effective extension of the Fourier transform developed for space-frequency localisation, which allows for using windows to better capture the local statistical structure of rainfall. The SSFT is used to generate stochastic noise and precipitation fields that replicate the local spatial correlation structure, i.e. anisotropy and correlation range, of the observed radar rainfall fields. The potential of the stochastic generator is demonstrated using four precipitation cases observed by the fourth generation of Swiss weather radars that display significant non-stationarity due to the coexistence of stratiform and convective precipitation, differential rotation of the weather system and locally varying anisotropy. The generator is verified in its ability to reproduce both the global and the local Fourier power spectra of the precipitation field. The SSFT-based stochastic generator can be applied and extended to improve the probabilistic nowcasting of precipitation, design storm simulation, stochastic numerical weather prediction (NWP) downscaling, and also for other geophysical applications involving the simulation of complex non-stationary fields.
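
    For contrast with the localized SSFT generator, the "traditional" global approach mentioned above fits in a few lines: white noise is filtered in the Fourier domain so that the output field inherits the power spectrum of an observed field. This is only the stationary baseline sketched on synthetic data, not the authors' SSFT method.

        import numpy as np

        def fourier_filtered_noise(observed, seed=0):
            """Stochastic field with the global power spectrum of `observed`."""
            rng = np.random.default_rng(seed)
            amplitude = np.abs(np.fft.fft2(observed))   # target power spectrum
            noise_hat = np.fft.fft2(rng.standard_normal(observed.shape))
            noise_hat /= np.abs(noise_hat)              # keep random phases only
            return np.real(np.fft.ifft2(amplitude * noise_hat))

        rain = np.random.default_rng(1).gamma(2.0, 1.0, (128, 128))  # placeholder
        print(fourier_filtered_noise(rain).shape)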

  2. Stochastic and Deterministic Approaches to Gas-grain Modeling of Interstellar Sources

    NASA Astrophysics Data System (ADS)

    Vasyunin, Anton; Herbst, Eric; Caselli, Paola

    During the last decade, our understanding of the chemistry on the surfaces of interstellar grains has been significantly enhanced. Extensive laboratory studies have revealed complex structure and dynamics in interstellar ice analogues, making our knowledge much more detailed. In addition, the first qualitative investigations of new processes were made, such as non-thermal chemical desorption of species from dust grains into the gas. Not surprisingly, the rapid growth of knowledge about the physics and chemistry of interstellar ices has led to the development of a new generation of astrochemical models, typically characterized by more detailed treatments of ice physics and chemistry than before. The numerical approaches utilized vary greatly, from microscopic models, in which every single molecule is traced, to "mean field" macroscopic models, which simulate the evolution of averaged characteristics of interstellar ices, such as overall bulk composition. While microscopic models based on a stochastic Monte Carlo approach are potentially able to simulate the evolution of interstellar ices while accounting for the most subtle effects found in the laboratory, their use is often impractical due to limited knowledge about star-forming regions and huge computational demands. On the other hand, deterministic macroscopic models, which often utilize kinetic rate equations, are computationally efficient but have difficulty incorporating such potentially important effects as ice segregation or the discreteness of surface chemical reactions. In my talk, I will review the state of the art in the development of gas-grain astrochemical models. I will discuss how to incorporate key features of ice chemistry and dynamics in gas-grain astrochemical models, and how the incorporation of recent laboratory findings into gas-grain models helps to better match observations.

  3. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    NASA Astrophysics Data System (ADS)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while ignoring water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision-makers' degree of optimism.

  4. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage of discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and the Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
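
    A minimal sketch of the two competing approaches on a generic monthly series; the model orders, lag features, and synthetic data are placeholders rather than the settings used for the Chungju dam inflows.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        t = np.arange(360)                            # 30 years of monthly data
        y = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
        train, test = y[:-12], y[-12:]

        # Traditional stochastic model: seasonal ARIMA.
        sarima = SARIMAX(train, order=(1, 0, 0),
                         seasonal_order=(1, 0, 0, 12)).fit(disp=False)
        pred_sarima = sarima.forecast(steps=12)

        # Machine learning model: random forest on the previous 12 months.
        lags = 12
        X = np.array([train[i:i + lags] for i in range(len(train) - lags)])
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X, train[lags:])
        window = list(train[-lags:])
        pred_rf = []
        for _ in range(12):                           # recursive multi-step ahead
            nxt = rf.predict([window])[0]
            pred_rf.append(nxt)
            window = window[1:] + [nxt]

        for name, p in (("SARIMA", pred_sarima), ("RF", np.array(pred_rf))):
            print(name, "RMSE:", np.sqrt(np.mean((p - test) ** 2)))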

  5. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Weak-signal detection based on the stochastic resonance of bistable Duffing oscillator and its application in incipient fault diagnosis

    NASA Astrophysics Data System (ADS)

    Lai, Zhi-hui; Leng, Yong-gang

    2016-12-01

    Stochastic resonance (SR) is an important approach to detect weak vibration signals from heavy background noise and further realize mechanical incipient fault diagnosis. The stochastic resonance of a bistable Duffing oscillator is limited by strict small-parameter conditions, i.e., SR can only take place under small values of signal parameters (signal amplitude, frequency, and noise intensity). We propose a method to treat the large-parameter SR for this oscillator. The linear amplitude-transformed, time/frequency scale-transformed, and parameter-adjusted methods are presented and used to produce SR for signals with large-amplitude, large-frequency and/or large-intensity noise. Furthermore, we propose the weak-signal detection approach based on large-parameter SR in the oscillator. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in incipient fault diagnosis.

  7. Stochastic resonance-enhanced laser-based particle detector.

    PubMed

    Dutta, A; Werner, C

    2009-01-01

    This paper presents a laser-based particle detector whose response was enhanced by modulating the laser diode with a white-noise generator. A laser sheet was generated to cast a shadow of the object onto a 200 dots-per-inch, 512 x 1 pixel linear sensor array. The laser diode was modulated with a white-noise generator to achieve stochastic resonance. The white-noise generator essentially amplified the wide-bandwidth (several hundred MHz) noise produced by a reverse-biased Zener diode operating in junction-breakdown mode. The gain of the amplifier in the white-noise generator was set such that the receiver operating characteristic plot provided the best discriminability. A monofiber 40 AWG (approximately 80 μm) wire was detected with an approximately 88% true positive rate and an approximately 19% false positive rate in the presence of white-noise modulation, and with an approximately 71% true positive rate and an approximately 15% false positive rate in its absence.

  8. Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; Silander, Tomi

    A new information-theoretically justified approach to missing data estimation for multivariate categorical data was studied. The approach is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in this case is the set of multinomial models with…

  9. Backward-stochastic-differential-equation approach to modeling of gene expression.

    PubMed

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  10. Backward-stochastic-differential-equation approach to modeling of gene expression

    NASA Astrophysics Data System (ADS)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  11. A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.

    PubMed

    Hung, Shao-Ming; Givigi, Sidney N

    2017-01-01

    In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
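
    The abstract's learner is Peng's Q(lambda) with eligibility traces; as a simplified sketch, the one-step Q-learning backup underlying it looks as follows. States, actions, and rewards are abstract placeholders with none of the UAV dynamics.

        import random
        from collections import defaultdict

        Q = defaultdict(float)  # Q[(state, action)] -> value estimate

        def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
            """One-step Q-learning backup; Peng's Q(lambda) additionally
            propagates this error backwards along an eligibility trace."""
            best_next = max(Q[(s_next, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

        def epsilon_greedy(s, actions, eps=0.1):
            """Exploration policy used while learning."""
            if random.random() < eps:
                return random.choice(actions)
            return max(actions, key=lambda a: Q[(s, a)])

        # Hypothetical usage with a toy two-state environment:
        q_update(s=0, a="left", r=1.0, s_next=1, actions=["left", "right"])
        print(dict(Q))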

  12. Assessment of BTEX-induced health risk under multiple uncertainties at a petroleum-contaminated site: An integrated fuzzy stochastic approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Huang, Guo H.

    2011-12-01

    Groundwater pollution has attracted more and more attention in the past decades. An assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. The objective of this study is therefore to develop an integrated fuzzy stochastic approach to evaluate the risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered uncertain with known fuzzy membership functions, and intrinsic permeability is considered an interval number with unknown distribution information. A factorial design is conducted to evaluate the interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic, and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site in a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.

  13. Statistical Downscaling of Seasonal Forecasts and Climate Change Scenarios using Generalized Linear Modeling Approach for Stochastic Weather Generators

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Katz, R. W.; Rajagopalan, B.; Podesta, G. P.

    2009-12-01

    Climate forecasts and climate change scenarios are typically provided in the form of monthly or seasonally aggregated totals or means. But time series of daily weather (e.g., precipitation amount, minimum and maximum temperature) are commonly required for use in agricultural decision-making. Stochastic weather generators constitute one technique to temporally downscale such climate information. The recently introduced approach to stochastic weather generators based on generalized linear modeling (GLM) is convenient for this purpose, especially with covariates to account for seasonality and teleconnections (e.g., with the El Niño phenomenon). Yet one important limitation of stochastic weather generators is a marked tendency to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this “overdispersion” phenomenon, we incorporate time series of seasonal total precipitation and seasonal mean minimum and maximum temperature into the GLM weather generator as covariates. These seasonal time series are smoothed using locally weighted scatterplot smoothing (LOESS) to avoid introducing underdispersion. Because the aggregate variables appear explicitly in the weather generator, downscaling to daily sequences can be readily implemented. The proposed method is applied to time series of daily weather at Pergamino and Pilar in the Argentine Pampas. Seasonal precipitation and temperature forecasts produced by the International Research Institute for Climate and Society (IRI) are used as prototypes. In conjunction with the GLM weather generator, a resampling scheme is used to translate the uncertainty in the seasonal forecasts (the IRI format only specifies probabilities for three categories: below normal, near normal, and above normal) into the corresponding uncertainty for the daily weather statistics. The method is able to generate potentially useful shifts in the probability distributions of seasonally aggregated precipitation and
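
    A minimal sketch of the GLM weather-generator idea for precipitation occurrence alone: logistic regression of a wet/dry indicator on seasonal harmonics plus a seasonal-total covariate of the kind the authors add to curb overdispersion. All data and coefficients are synthetic.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        doy = np.arange(3650) % 365                   # ten years of daily data
        harm = np.column_stack([np.sin(2 * np.pi * doy / 365),
                                np.cos(2 * np.pi * doy / 365)])
        season_total = rng.normal(300, 60, 10).repeat(365)  # covariate (mm)
        X = np.column_stack([harm, season_total / 100.0])

        # Synthetic wet/dry sequence from a known logistic model.
        p_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * X[:, 0] + 0.3 * X[:, 2])))
        wet = rng.random(doy.size) < p_true

        glm = LogisticRegression().fit(X, wet)        # occurrence model
        p_wet = glm.predict_proba(X)[:, 1]
        simulated = rng.random(doy.size) < p_wet      # stochastic daily series
        print(simulated[:14].astype(int))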

  14. Nonlinear Kalman filter based on duality relations between continuous and discrete-state stochastic processes.

    PubMed

    Ohkubo, Jun

    2015-10-01

    An alternative application of the duality relations of stochastic processes is demonstrated. While conventional usages of duality relations require analytical solutions for the dual processes, here I employ numerical solutions of the dual processes and investigate their usefulness. As a demonstration, estimation problems for hidden variables in stochastic differential equations are discussed. Employing algebraic probability theory, a somewhat complicated birth-death process is derived from the stochastic differential equations, and an estimation method based on the ensemble Kalman filter is proposed. As a result, the possibility of faster computational algorithms based on duality concepts is shown.

  15. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    SciTech Connect

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.

  16. Stochastic boundary approaches to many-particle systems coupled to a particle reservoir

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tooru; Sawada, Shin-ichi

    2017-01-01

    Stochastic boundary conditions for interactions with a particle reservoir are discussed in many-particle systems. We introduce the boundary conditions with the injection rate and the momentum distribution of particles coming from a particle reservoir in terms of the pressure and the temperature of the reservoir. It is shown that equilibrium ideal gases and hard-disk systems with these boundary conditions reproduce statistical-mechanical properties based on the corresponding grand canonical distributions. We also apply the stochastic boundary conditions to a hard-disk model with a steady particle current escaping from a particle reservoir in an open tube, and discuss its nonequilibrium properties such as a chemical potential dependence of the current and deviations from the local equilibrium hypothesis.

  17. Dynamic response of mechanical systems to impulse process stochastic excitations: Markov approach

    NASA Astrophysics Data System (ADS)

    Iwankiewicz, R.

    2016-05-01

    Methods for determining the response of mechanical dynamic systems to Poisson and non-Poisson impulse process stochastic excitations are presented. Stochastic differential and integro-differential equations of motion are introduced. For systems driven by a Poisson impulse process, the tools of the theory of non-diffusive Markov processes are used. These are: the generalized Itô differential rule, which allows one to derive the differential equations for the response moments, and the forward integro-differential Chapman-Kolmogorov equation, from which the equation governing the probability density of the response is obtained. The relation of Poisson impulse process problems to the theory of diffusive Markov processes is given. For systems driven by a class of non-Poisson (Erlang renewal) impulse processes, an exact conversion of the original non-Markov problem into a Markov one is based on an appended Markov chain corresponding to an introduced auxiliary pure-jump stochastic process. The derivation of the set of integro-differential equations for the response probability density, and also a moment equations technique, are based on the forward integro-differential Chapman-Kolmogorov equation. An illustrative numerical example is also included.

  18. Statistical material parameters identification based on artificial neural networks for stochastic computations

    NASA Astrophysics Data System (ADS)

    Novák, Drahomír; Lehký, David

    2017-07-01

    A general methodology for obtaining statistical material model parameters is presented. The procedure is based on the coupling of stochastic simulation and an artificial neural network. The identified parameters play the role of basic random variables with a scatter reflecting the physical range of possible values. The efficient small-sample simulation method Latin Hypercube Sampling is used for the stochastic preparation of the training set utilized in training the neural network. Once the network has been trained, it represents an approximation that is subsequently utilized to provide the best possible set of model parameters for the given experimental data. The paper focuses on the statistical inverse analysis of material model parameters, where statistical moments (usually means and standard deviations) of the input parameters have to be identified from experimental data. A hierarchical statistical parameter database within the framework of reliability software is presented. The efficiency of the approach is verified using a numerical example: the determination of fracture-mechanical parameters of fiber-reinforced and plain concretes.
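
    A minimal sketch of the coupling described above: scipy's Latin Hypercube sampler provides the training set and a small scikit-learn network learns the inverse map from model response back to parameters. The two-parameter forward "model" is a hypothetical stand-in, not a fracture-mechanics solver.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.neural_network import MLPRegressor

        def forward_model(params):
            """Placeholder forward model: a response curve from two parameters."""
            a, b = params[:, 0], params[:, 1]
            t = np.linspace(0.0, 1.0, 20)
            return a[:, None] * np.exp(-b[:, None] * t)

        # Latin Hypercube sample of the parameter space (efficient small sample).
        sampler = qmc.LatinHypercube(d=2, seed=0)
        params = qmc.scale(sampler.random(n=100), [0.5, 0.1], [2.0, 5.0])
        responses = forward_model(params)

        # Train the inverse mapping: response -> parameters.
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                           random_state=0).fit(responses, params)

        # Identify parameters for a new "experimental" curve.
        observed = forward_model(np.array([[1.3, 2.2]]))
        print(net.predict(observed))  # should land roughly near [1.3, 2.2]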

  19. A multivariate and stochastic approach to identify key variables to rank dairy farms on profitability.

    PubMed

    Atzori, A S; Tedeschi, L O; Cannas, A

    2013-05-01

    The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21
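
    The two-step index construction lends itself to a compact sketch: PCA on the standardized input variables, followed by a regression of IOFC on the retained components. The data below are synthetic stand-ins for the farm database, and the toy IOFC relation is an assumption.

    import numpy as np

    rng = np.random.default_rng(3)
    n_farms, n_vars = 135, 8
    X = rng.normal(size=(n_farms, n_vars))              # technical/economic variables
    iofc = 0.1377 + 0.01 * (X @ rng.normal(size=n_vars)) + rng.normal(0, 0.005, n_farms)

    # Step 1: PCA (via SVD of the standardized data), excluding IOFC
    Z = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = (s**2) / (s**2).sum()
    k = np.searchsorted(np.cumsum(explained), 0.776) + 1  # keep ~77.6% of the variance
    scores = Z @ Vt[:k].T                                  # principal component values

    # Step 2: regress IOFC on the retained components; the fit is the ranking index
    design = np.column_stack([np.ones(n_farms), scores])
    beta, *_ = np.linalg.lstsq(design, iofc, rcond=None)
    dRI = design @ beta

    ranking = np.argsort(-dRI)                             # farms ranked by predicted IOFC
    print("top 5 farms:", ranking[:5])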

  20. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  1. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  2. Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach

    PubMed Central

    Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam

    2014-01-01

    The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non

  3. Cost and technical efficiency of physician practices: a stochastic frontier approach using panel data.

    PubMed

    Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas

    2014-06-01

    This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog functions, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that were not analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as patients' case-mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact on both cost and technical efficiency: participation in disease management programs led to an increase in both technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.

  4. Effects of extrinsic mortality on the evolution of aging: a stochastic modeling approach.

    PubMed

    Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam

    2014-01-01

    The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non

  5. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated based on its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators as computed with the estimated nonlinear model and with its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  6. SDU: A Semidefinite Programming-Based Underestimation Method for Stochastic Global Optimization in Protein Docking

    PubMed Central

    Paschalidis, Ioannis Ch.; Shen, Yang; Vakili, Pirooz; Vajda, Sandor

    2007-01-01

    This paper introduces a new stochastic global optimization method targeting protein-protein docking problems, an important class of problems in computational structural biology. The method is based on finding general convex quadratic underestimators to the binding energy function that is funnel-like. Finding the optimum underestimator requires solving a semidefinite programming problem, hence the name semidefinite programming-based underestimation (SDU). The underestimator is used to bias sampling in the search region. It is established that under appropriate conditions SDU locates the global energy minimum with probability approaching one as the sample size grows. A detailed comparison of SDU with a related method of convex global underestimator (CGU), and computational results for protein-protein docking problems are provided. PMID:19759849

  7. SDU: A Semidefinite Programming-Based Underestimation Method for Stochastic Global Optimization in Protein Docking.

    PubMed

    Paschalidis, Ioannis Ch; Shen, Yang; Vakili, Pirooz; Vajda, Sandor

    2007-04-01

    This paper introduces a new stochastic global optimization method targeting protein-protein docking problems, an important class of problems in computational structural biology. The method is based on finding general convex quadratic underestimators to the binding energy function that is funnel-like. Finding the optimum underestimator requires solving a semidefinite programming problem, hence the name semidefinite programming-based underestimation (SDU). The underestimator is used to bias sampling in the search region. It is established that under appropriate conditions SDU locates the global energy minimum with probability approaching one as the sample size grows. A detailed comparison of SDU with a related method of convex global underestimator (CGU), and computational results for protein-protein docking problems are provided.

  8. A copula-based stochastic generator for coupled precipitation and evaporation time series

    NASA Astrophysics Data System (ADS)

    Verhoest, Niko; Vernieuwe, Hilde; Pham, Minh Tu; Willems, Patrick; De Baets, Bernard

    2015-04-01

    In hydrologic design, one can make use of stochastic rainfall time series as input to hydrological models in order to assess extreme statistics of, e.g., discharge. However, precipitation is not the only important forcing variable; evaporation matters as well, requiring evaporation time series together with precipitation time series as input to these rainfall-runoff models. Given the fact that precipitation and evaporation are correlated, one should thus provide an evaporation time series that is not in conflict with the stochastic rainfall time series. In this presentation, a framework is developed that allows for generating coupled precipitation and evaporation time series based on vine copulas. This framework requires (1) the stochastic modelling of a precipitation time series, for which a Bartlett-Lewis model is used, (2) the stochastic modelling of a daily temperature series, for which a vine copula is built based on dependencies between daily temperature, the daily total precipitation (obtained from the Bartlett-Lewis modelled time series) and the temperature of the previous day, and (3) a stochastic evaporation model, based on a vine copula that makes use of precipitation statistics (from the Bartlett-Lewis modelled time series) and daily temperature (based on the stochastic temperature model). The models are calibrated on 10-minute precipitation, daily temperature and daily evaporation records from a 72-year period available at Uccle (Belgium). Based on ensemble statistics, the models are evaluated and uncertainty assessments are made.
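
    The coupling step can be illustrated with a single bivariate Gaussian copula standing in for the presentation's vine copulas and Bartlett-Lewis rainfall model; the marginals and the rank-correlation value below are assumptions chosen only to show the mechanics of generating consistent, correlated series.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    ndays, rho = 365, -0.5          # assumed dependence: wetter days evaporate less

    # Correlated uniforms from a Gaussian copula
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=ndays)
    u = stats.norm.cdf(z)

    # Assumed marginals: gamma wet-day rainfall depth (with dry days), scaled-beta evaporation
    wet = rng.uniform(size=ndays) < 0.4
    rain = np.where(wet, stats.gamma(a=0.8, scale=8.0).ppf(u[:, 0]), 0.0)   # mm/day
    evap = 5.0 * stats.beta(a=2.0, b=2.0).ppf(u[:, 1])                      # mm/day

    print(f"mean rain {rain.mean():.2f} mm/d, mean evap {evap.mean():.2f} mm/d, "
          f"Spearman rho on wet days {stats.spearmanr(rain[wet], evap[wet])[0]:.2f}")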

  9. Multidimensional characterization of stochastic dynamical systems based on multiple perturbations and measurements

    SciTech Connect

    Kryvohuz, Maksym; Mukamel, Shaul

    2015-06-07

    Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells.

  10. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained based on a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the process of manual selection of the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of USA. In the case of both rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  11. Acceleration of stochastic seismic inversion in OpenCL-based heterogeneous platforms

    NASA Astrophysics Data System (ADS)

    Ferreirinha, Tomás; Nunes, Rúben; Azevedo, Leonardo; Soares, Amílcar; Pratas, Frederico; Tomás, Pedro; Roma, Nuno

    2015-05-01

    Seismic inversion is an established approach to model the geophysical characteristics of oil and gas reservoirs, being one of the bases of the decision-making process in the oil and gas exploration industry. However, the required accuracy levels can only be attained by processing significant amounts of data, often leading to long execution times. To overcome this issue and to allow the development of larger and higher-resolution elastic models of the subsurface, a novel parallelization approach is herein proposed targeting the exploitation of GPU-based heterogeneous systems through a unified OpenCL programming framework, to accelerate a state-of-the-art Stochastic Seismic Amplitude versus Offset Inversion algorithm. To increase the parallelization opportunities while ensuring model fidelity, the proposed approach is based on a careful and selective relaxation of some spatial dependencies. Furthermore, to take into consideration the heterogeneity of modern computing systems, usually composed of several different accelerating devices, multi-device parallelization strategies are also proposed. When executed in a dual-GPU system, the proposed approach reduces the execution time by up to 30 times, without compromising the quality of the obtained models.

  12. Stochastic Extended LQR for Optimization-based Motion Planning Under Uncertainty

    PubMed Central

    Sun, Wen; van den Berg, Jur; Alterovitz, Ron

    2016-01-01

    We introduce a novel optimization-based motion planner, Stochastic Extended LQR (SELQR), which computes a trajectory and associated linear control policy with the objective of minimizing the expected value of a user-defined cost function. SELQR applies to robotic systems that have stochastic non-linear dynamics with motion uncertainty modeled by Gaussian distributions that can be state- and control-dependent. In each iteration, SELQR uses a combination of forward and backward value iteration to estimate the cost-to-come and the cost-to-go for each state along a trajectory. SELQR then locally optimizes each state along the trajectory at each iteration to minimize the expected total cost, which results in smoothed states that are used for dynamics linearization and cost function quadratization. SELQR progressively improves the approximation of the expected total cost, resulting in higher quality plans. For applications with imperfect sensing, we extend SELQR to plan in the robot's belief space. We show that our iterative approach achieves fast and reliable convergence to high-quality plans in multiple simulated scenarios involving a car-like robot, a quadrotor, and a medical steerable needle performing a liver biopsy procedure. PMID:28163662

  13. Stochastic Extended LQR for Optimization-based Motion Planning Under Uncertainty.

    PubMed

    Sun, Wen; van den Berg, Jur; Alterovitz, Ron

    2016-04-01

    We introduce a novel optimization-based motion planner, Stochastic Extended LQR (SELQR), which computes a trajectory and associated linear control policy with the objective of minimizing the expected value of a user-defined cost function. SELQR applies to robotic systems that have stochastic non-linear dynamics with motion uncertainty modeled by Gaussian distributions that can be state- and control-dependent. In each iteration, SELQR uses a combination of forward and backward value iteration to estimate the cost-to-come and the cost-to-go for each state along a trajectory. SELQR then locally optimizes each state along the trajectory at each iteration to minimize the expected total cost, which results in smoothed states that are used for dynamics linearization and cost function quadratization. SELQR progressively improves the approximation of the expected total cost, resulting in higher quality plans. For applications with imperfect sensing, we extend SELQR to plan in the robot's belief space. We show that our iterative approach achieves fast and reliable convergence to high-quality plans in multiple simulated scenarios involving a car-like robot, a quadrotor, and a medical steerable needle performing a liver biopsy procedure.

  14. Streamer inception from hydrometeors as a stochastic process with a particle-based model

    NASA Astrophysics Data System (ADS)

    Rutjes, Casper; Dubinova, Anna; Ebert, Ute; Teunissen, Jannis; Buitink, Stijn; Scholten, Olaf; Trihn, Gia

    2017-04-01

    In thunderstorms, streamers (as precursors of lightning leaders) can be initiated from hydrometeors (droplets, graupel, ice needles, etc.) which enhance the thundercloud electric field to values above electric breakdown; initial electrons may come from extensive air showers [1]. Typically, streamer inception from hydrometeors is studied theoretically with deterministic fluid simulations (i.e., drift-diffusion-reaction coupled with Poisson), see [1, 2, 3] and references therein. However, electrons will only multiply in the region above breakdown, which is of the order of a cubic millimeter for hydrometeors of sub-centimeter scale. Initial electron densities, even in extreme extensive air shower events, do not exceed 10 per cubic millimeter. Hence only individual electron avalanches - with their intrinsically random nature - enter the breakdown region sequentially. On these scales, a deterministic fluid description is thus not valid. Therefore, we developed a new stochastic particle-based model to study the behavior of the system described above and to calculate the probability of streamer inception for a given hydrometeor, electric field and initial electron density. Results show that the discharge starts with great jitter and usually off the symmetry axis, demanding a stochastic approach in full 3D for streamer inception in realistic thunderstorm conditions. The developed software will be made publicly available as an open source project. [1] Dubinova et al. 2015. Phys. Rev. Lett. 115(1), 015002. [2] Liu et al. 2012. Phys. Rev. Lett. 109(2), 025002. [3] Babich et al. 2016. J. Geophys. Res. Atmos. 121, 6393-6403.

  15. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    SciTech Connect

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1D, 14D and 40D random spaces.
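
    To make the sparse-recovery idea concrete, the sketch below substitutes LassoCV for the paper's Bayesian model-averaging machinery: with more Hermite basis terms than samples, a sparsity-promoting fit still isolates the dominating gPC bases. The 1D test function, polynomial order, and sample size are assumptions.

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(5)
    order, nsamp = 12, 40                      # more basis terms than a plain fit could bear

    def u_exact(xi):
        return np.sin(xi) + 0.1 * xi**2        # "expensive solver" output at one spatial point

    xi = rng.normal(size=nsamp)                # samples of the Gaussian random input
    # Design matrix of probabilists' Hermite polynomials He_0 .. He_order
    Phi = np.stack([hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)], axis=1)

    fit = LassoCV(cv=5).fit(Phi, u_exact(xi))  # sparsity-promoting coefficient recovery
    kept = np.flatnonzero(np.abs(fit.coef_) > 1e-8)
    print("dominating gPC bases:", kept, " intercept:", round(fit.intercept_, 3))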

  16. Consentaneous Agent-Based and Stochastic Model of the Financial Markets

    PubMed Central

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent-based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined analytically tractable agent systems. Herding, as one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of the agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes the instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short-term fluctuations. The resulting model of returns in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute returns observed in the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364
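
    The reduction from an agent model to a stochastic differential equation can be illustrated with the classical one-dimensional Kirman herding SDE, simulated by Euler-Maruyama below. The paper's actual model is three-state and coupled to a price equation, so this sketch only shows the general mechanism, with assumed parameters.

    import numpy as np

    rng = np.random.default_rng(6)
    eps1, eps2, h = 0.2, 0.2, 1.0      # idiosyncratic switching rates, herding strength
    dt, nsteps = 1e-3, 100_000

    x = 0.5                            # fraction of agents in one of two states
    path = np.empty(nsteps)
    for i in range(nsteps):
        drift = eps1 * (1.0 - x) - eps2 * x
        diff = np.sqrt(max(2.0 * h * x * (1.0 - x), 0.0))
        x += drift * dt + diff * np.sqrt(dt) * rng.normal()
        x = min(max(x, 1e-6), 1.0 - 1e-6)   # keep the state inside the unit interval
        path[i] = x

    # For eps1 = eps2 = eps the stationary density is Beta(eps/h, eps/h): U-shaped
    # when eps < h, i.e. herding dominates and the population polarizes.
    print("fraction of time with x > 0.8 or x < 0.2:", np.mean((path > 0.8) | (path < 0.2)))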

  17. Consentaneous agent-based and stochastic model of the financial markets.

    PubMed

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent-based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined analytically tractable agent systems. Herding, as one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of the agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes the instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short-term fluctuations. The resulting model of returns in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute returns observed in the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.

  18. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous-time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  19. The Influence of Ecohydrologic Dynamics on Landscape Evolution: a Stochastic Approach

    NASA Astrophysics Data System (ADS)

    Deal, E.; Favre Pugin, A. C.; Botter, G.; Braun, J.

    2015-12-01

    The stream power incision model (SPIM) has a long history of use in modeling landscape evolution. Despite simplifications made in its formulation, it has emerged over the last 30 years as a powerful tool to interpret the histories of tectonically active landscapes and to understand how they evolve over millions of years. However, intense interest in the relationship between climate and erosion has revealed that the standard SPIM has some significant shortcomings. First, it fails to account for the role of erosion thresholds, which have been shown to be important and require an approach that addresses the variable or stochastic nature of erosion processes and drivers. Second, the standard SPIM does not address the influence of catchment hydrology, which modulates the incoming precipitation to produce the discharge that in turn drives fluvial erosion. Hydrological processes alter in particular the frequency and magnitude of extreme events, which are highly relevant for landscape erosion. To address these weaknesses we introduce a new analytical stochastic-threshold formulation of the stream power incision model that is driven by probabilistic hydrology. The hydrological model incorporates a stochastic description of soil moisture which takes into account the random nature of the rainfall forcing and the dynamics of the soil layer. The soil layer dynamics include infiltration and evapotranspiration, which are both modelled as being dependent on the time-varying soil moisture level (state dependent). The stochastic approach allows us to integrate these effects over long periods of time to understand their influence on the long-term average erosion rate without the need to explicitly model processes on the short timescales where they are relevant. Our model can therefore represent the role of soil properties (thickness, porosity) and vegetation (through evapotranspiration rates) in the long-term catchment-wide water balance, and in turn the long-term erosion rate. We identify
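
    The stochastic-threshold construction reduces, schematically, to integrating excess stream power over a discharge distribution. The sketch below does this numerically for an assumed gamma discharge distribution and illustrative parameter values; it is not the authors' formulation.

    import numpy as np
    from scipy import stats, integrate

    K, m, S, n = 1e-4, 0.5, 0.05, 1.0      # erodibility, discharge exponent, slope, slope exponent
    theta = 5e-6                           # erosion threshold (same units as the stream power term)
    qdist = stats.gamma(a=0.6, scale=5.0)  # daily discharge variability (hydrology-dependent)

    def excess_power(q):
        return np.maximum(K * q**m * S**n - theta, 0.0)

    # Long-term mean erosion rate: E = integral of max(K q^m S^n - theta, 0) p(q) dq
    E, _ = integrate.quad(lambda q: excess_power(q) * qdist.pdf(q), 0.0, np.inf)
    E_nothresh, _ = integrate.quad(lambda q: K * q**m * S**n * qdist.pdf(q), 0.0, np.inf)
    print(f"with threshold: {E:.3e}   without: {E_nothresh:.3e}")
    # The threshold censors frequent small events, so discharge variability (not just
    # its mean) controls the long-term rate -- the effect the abstract emphasizes.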

  20. Comparison of damage localization in mechanical systems based on Stochastic Subspace Identification method.

    NASA Astrophysics Data System (ADS)

    Gautier, Guillaume; Delwar Hossain Bhuyan, Md; Döhler, Michael; Mevel, Laurent

    2017-04-01

    Damage identification in mechanical systems under vibration excitation relates to the monitoring of changes in the dynamical properties of the corresponding linear system, and thus reflects changes in modal parameters (frequencies, damping, mode shapes) and finally in the finite element model of the structure [1]. Damage localization can be performed using ambient vibration data collected from sensors in the reference and possibly damaged states and information from a finite element model (FEM). Two approaches are considered in this framework, the Stochastic Dynamic Damage Location Vector (SDDLV) approach [2, 3] and the Subspace Fitting (SF) approach [4, 5]. The SDDLV is based on a finite element (FE) model of the structure and modal parameters estimated from measurements in both the reference and damaged states. From the measurements, a load vector is computed in the kernel of the transfer matrix difference between both states and then applied to the FE model of the structure. This load vector leads to zero (or close to zero) stress over the damaged elements. A joint statistical evaluation has been proposed, where several stress estimates and their uncertainties are computed from multiple mode sets and different Laplace variables for robustness of the approach. The SF approach is a finite element model updating method. It makes use of subspace-based system identification, where an observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating a finite element model observability matrix with the estimated one. SF is applied to damage localization, where damage is assumed to be modeled in terms of mean variations of element stiffness matrices. The localization algorithm is improved by taking into account the estimation uncertainties of the underlying finite element model parameters. Both localization algorithms are presented and their performance is illustrated and compared on simulated and experimental vibration

  1. A stochastic wind turbine wake model based on new metrics for wake characterization

    SciTech Connect

    Doubrawa, Paula; Barthelmie, Rebecca J.; Wang, Hui; Churchfield, Matthew J.

    2016-08-04

    Understanding the detailed dynamics of wind turbine wakes is critical to predicting the performance and maximizing the efficiency of wind farms. This knowledge requires atmospheric data at a high spatial and temporal resolution, which are not easily obtained from direct measurements. Therefore, research is often based on numerical models, which vary in fidelity and computational cost. The simplest models produce axisymmetric wakes and are only valid beyond the near wake. Higher-fidelity results can be obtained by solving the filtered Navier-Stokes equations at a resolution that is sufficient to resolve the relevant turbulence scales. This work addresses the gap between these two extremes by proposing a stochastic model that produces an unsteady asymmetric wake. The model is developed based on a large-eddy simulation (LES) of an offshore wind farm. Because there are several ways of characterizing wakes, the first part of this work explores different approaches to defining global wake characteristics. From these, a model is developed that captures essential features of a LES-generated wake at a small fraction of the cost. The synthetic wake successfully reproduces the mean characteristics of the original LES wake, including its area and stretching patterns, and statistics of the mean azimuthal radius. The mean and standard deviation of the wake width and height are also reproduced. This preliminary study focuses on reproducing the wake shape, while future work will incorporate velocity deficit and meandering, as well as different stability scenarios.

  2. Characterization of heterogeneous near-surface materials by joint stochastic approach

    NASA Astrophysics Data System (ADS)

    Girard, J.-F.; Roulle, A.; Grandjean, G.; Bitri, A.; Lalande, J.-M.

    2009-04-01

    Using several geophysical methods to better constrain a diagnosis is a standard approach in many field studies. Generally the data of each method are individually inverted, and a global geological/hydrogeological interpretation is realized as a second step from the separate inversion results. Thereby, the contributions and limitations of each method to the final interpretation are integrated after the inversion processes. Consequently, the expertise of the person performing the interpretation is decisive, and its weight on the final result is difficult to quantify. In the end, the reliability of geophysical interpretation is mainly limited by the problem of non-uniqueness of the solution, on one hand because of the equivalency of some models and intrinsic method resolutions, and on the other hand because the links between geophysical parameters and rock physics properties in heterogeneous media are not so straightforward and do not allow one to clearly discriminate between two materials (or states of weathering, or water saturation, etc.). Following the work of many authors in the past twenty years, we propose to jointly invert several data types simultaneously in order to better constrain the inverse problem. We selected a set of geophysical methods widely used to investigate the subsurface: vertical electrical sounding (VES), time domain electromagnetics (TEM), magnetic resonance sounding (MRS) and multi-channel analysis of surface waves (MASW). We particularly insist on how to objectively introduce the a-priori knowledge as an input to the algorithm. For a given geological environment, one can define a few hydrogeological facies, each described by a statistical (normal, log-normal) distribution of classical geophysical parameters (electrical resistivity, water content, decay time, shear wave velocity). To explore the model space, we propose a stochastic approach based on the Metropolis algorithm in order to provide a statistical estimation of the result uncertainties (due to the data quality
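
    A minimal Metropolis sketch of the joint-inversion idea follows: one shared model parameter must fit two synthetic observables at once, with a uniform prior expressing the a-priori facies range. Both toy forward models and all noise levels are assumptions, far simpler than actual VES/TEM/MRS/MASW responses.

    import numpy as np

    rng = np.random.default_rng(14)
    truth = 0.25                                        # e.g., a water content
    d_mrs = 1.8 * truth + rng.normal(0, 0.02)           # toy MRS-like observable
    d_ves = np.exp(-4.0 * truth) + rng.normal(0, 0.02)  # toy VES-like observable

    def log_post(m):
        if not (0.0 < m < 0.5):                         # uniform prior (a-priori facies range)
            return -np.inf
        r1 = (d_mrs - 1.8 * m) / 0.02
        r2 = (d_ves - np.exp(-4.0 * m)) / 0.02
        return -0.5 * (r1**2 + r2**2)                   # joint misfit of both methods

    m, lp, chain = 0.1, log_post(0.1), []
    for _ in range(20_000):
        m_new = m + rng.normal(0, 0.01)                 # random-walk proposal
        lp_new = log_post(m_new)
        if np.log(rng.uniform()) < lp_new - lp:         # Metropolis accept/reject
            m, lp = m_new, lp_new
        chain.append(m)

    post = np.array(chain[5000:])                       # discard burn-in
    print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (truth {truth})")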

  3. A deterministic-stochastic approach to compute the Boltzmann collision integral in O(MN) operations

    NASA Astrophysics Data System (ADS)

    Alekseenko, Alexander; Nguyen, Truong; Wood, Aihua

    2016-11-01

    We developed and implemented a numerical algorithm for evaluating the Boltzmann collision operator with O(MN) operations, where N is the number of discrete velocity points and M < N. The approach is formulated using a bilinear convolution form of the Galerkin projection of the collision operator and discontinuous Galerkin (DG) discretizations of the collision operator. Key ingredients of the new approach are singular value decomposition (SVD) compression of the collision kernel and approximation of the solution by a sum of Maxwellian streams using a stochastic likelihood maximization algorithm. The developed method is significantly faster than the full deterministic DG velocity discretization of the collision integral. The accuracy of the method is established on solutions to the problem of spatially homogeneous relaxation.
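
    The compression ingredient can be illustrated in isolation: a rank-M truncated SVD, computed once, lets a dense operator be applied in O(MN) instead of O(N^2) operations. The smooth surrogate kernel below is an assumption, not the actual Boltzmann collision kernel.

    import numpy as np

    rng = np.random.default_rng(13)
    N, M = 2000, 20
    x = np.linspace(0.0, 1.0, N)
    A = np.exp(-5.0 * np.abs(x[:, None] - x[None, :]))   # smooth surrogate kernel

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :M], s[:M], Vt[:M]                 # rank-M compression, stored once

    f = rng.normal(size=N)
    full = A @ f                          # O(N^2) application
    fast = Uk @ (sk * (Vk @ f))           # O(MN) application
    print("relative error:", np.linalg.norm(full - fast) / np.linalg.norm(full))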

  4. Phase transitions of macromolecular microsphere composite hydrogels based on the stochastic Cahn–Hilliard equation

    SciTech Connect

    Li, Xiao; Ji, Guanghua; Zhang, Hui

    2015-02-15

    We use the stochastic Cahn–Hilliard equation to simulate the phase transitions of macromolecular microsphere composite (MMC) hydrogels under a random disturbance. Based on the Flory–Huggins lattice model and the Boltzmann entropy theorem, we develop a reticular free energy suited to the network structure of MMC hydrogels. Taking the random factor into account, with the time-dependent Ginzburg-Landau (TDGL) mesoscopic simulation method, we set up a stochastic Cahn–Hilliard equation, designated herein as the MMC-TDGL equation. The stochastic term in the equation is constructed appropriately to satisfy the fluctuation-dissipation theorem and is discretized on a spatial grid for the simulation. A semi-implicit difference scheme is adopted to numerically solve the MMC-TDGL equation. Some numerical experiments are performed with different parameters. The results are consistent with the physical phenomena, which verifies that the stochastic term is simulated appropriately.
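
    A schematic semi-implicit spectral solver for a stochastic Cahn–Hilliard equation is sketched below, with the standard quartic free energy standing in for the paper's MMC reticular free energy; grid size, interface coefficient and noise amplitude are assumptions, and the additive noise is a simplification of a strictly conservative, fluctuation-dissipation-consistent forcing.

    import numpy as np

    rng = np.random.default_rng(7)
    N, dt, gamma, sigma = 128, 0.01, 1.0, 0.05
    k = 2.0 * np.pi * np.fft.fftfreq(N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2

    u = 0.05 * rng.standard_normal((N, N))            # small random initial mixture
    for step in range(2000):
        fhat = np.fft.fft2(u**3 - u)                  # chemical-potential nonlinearity
        noise = np.fft.fft2(rng.standard_normal((N, N)))  # additive noise (simplified)
        # Stiff biharmonic term treated implicitly; nonlinearity and noise explicitly:
        # u_t = laplace(u^3 - u) - gamma * biharmonic(u) + sigma * noise
        uhat = (np.fft.fft2(u) - dt * k2 * fhat + np.sqrt(dt) * sigma * noise) \
               / (1.0 + dt * gamma * k2**2)
        u = np.real(np.fft.ifft2(uhat))

    print("phase fractions:", np.mean(u > 0), np.mean(u < 0))  # coarsened domains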

  5. Lagrangian filtered density function for LES-based stochastic modelling of turbulent particle-laden flows

    NASA Astrophysics Data System (ADS)

    Innocenti, Alessio; Marchioli, Cristian; Chibbaro, Sergio

    2016-11-01

    The Eulerian-Lagrangian approach based on Large-Eddy Simulation (LES) is one of the most promising and viable numerical tools to study particle-laden turbulent flows when the computational cost of Direct Numerical Simulation (DNS) becomes prohibitive. The applicability of this approach is however limited if the effects of the Sub-Grid Scales (SGSs) of the flow on particle dynamics are neglected. In this paper, we propose to take these effects into account by means of a Lagrangian stochastic SGS model for the equations of particle motion. The model extends to particle-laden flows the velocity-filtered density function method originally developed for reactive flows. The underlying filtered density function is simulated through a Lagrangian Monte Carlo procedure that solves a set of Stochastic Differential Equations (SDEs) along individual particle trajectories. The resulting model is tested for the reference case of turbulent channel flow, using a hybrid algorithm in which the fluid velocity field is provided by LES and then used to advance the SDEs in time. The model consistency is assessed in the limit of particles with zero inertia, when "duplicate fields" are available from both the Eulerian LES and the Lagrangian tracking. Tests with inertial particles were performed to examine the capability of the model to capture the particle preferential concentration and near-wall segregation. Upon comparison with DNS-based statistics, our results show improved accuracy and considerably reduced errors with respect to the case in which no SGS model is used in the equations of particle motion.

  6. Ground Movement Analysis Based on Stochastic Medium Theory

    PubMed Central

    Fei, Meng; Li-chun, Wu; Jia-sheng, Zhang; Guo-dong, Deng; Zhi-hui, Ni

    2014-01-01

    In order to calculate the ground movement induced by displacement piles driven into horizontal layered strata, an axisymmetric model was built and then the vertical and horizontal ground movement functions were deduced using stochastic medium theory. Results show that the vertical ground movement obeys normal distribution function, while the horizontal ground movement is an exponential function. Utilizing field measured data, parameters of these functions can be obtained by back analysis, and an example was employed to verify this model. Result shows that stochastic medium theory is suitable for calculating the ground movement in pile driving, and there is no need to consider the constitutive model of soil or contact between pile and soil. This method is applicable in practice. PMID:24701184

  7. A nonparametric stochastic optimizer for TDMA-based neuronal signaling.

    PubMed

    Suzuki, Junichi; Phan, Dũng H; Budiman, Harry

    2014-09-01

    This paper considers neurons as a physical communication medium for intrabody networks of nano/micro-scale machines and formulates a noisy multiobjective optimization problem for a Time Division Multiple Access (TDMA) communication protocol atop the physical layer. The problem is to find the Pareto-optimal TDMA configurations that maximize communication performance (e.g., latency) by multiplexing a given neuronal network to parallelize signal transmissions while maximizing communication robustness (i.e., unlikeliness of signal interference) against noise in neuronal signaling. Using a nonparametric significance test, the proposed stochastic optimizer is designed to statistically determine the superior-inferior relationship between given two solution candidates and seek the optimal trade-offs among communication performance and robustness objectives. Simulation results show that the proposed optimizer efficiently obtains quality TDMA configurations in noisy environments and outperforms existing noise-aware stochastic optimizers.
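
    The statistical kernel of the optimizer, deciding the superior-inferior relationship between two noisy solution candidates, can be sketched with a Mann-Whitney U test; the latency evaluator below is a hypothetical stand-in for the neuronal TDMA simulator, and the slot counts and noise model are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    def simulate_latency(n_slots, n=30):
        """Hypothetical noisy evaluator: fewer TDMA slots -> lower latency, noisier signaling."""
        return n_slots * 1.0 + rng.normal(0.0, 0.5 * np.sqrt(n_slots), n)

    a = simulate_latency(n_slots=4)   # candidate configuration A
    b = simulate_latency(n_slots=6)   # candidate configuration B

    u, p = stats.mannwhitneyu(a, b, alternative="less")
    if p < 0.05:
        print(f"A statistically dominates B on latency (p = {p:.3g})")
    else:
        print(f"no significant difference (p = {p:.3g}); keep both candidates")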

  8. Dynamic similarity approach for more robust structural health monitoring in nonlinear, nonstationary and stochastic systems

    NASA Astrophysics Data System (ADS)

    Nataraju, Madhura; Johnson, Timothy J.; Adams, Douglas E.

    2003-07-01

    Environmental and operational variability due to changes in the excitation or any other variable can mimic or altogether obscure evidence of structural defects in measured data, leading to false positive/negative diagnoses of damage and conservative/tolerant predictions of remaining useful life in structural health monitoring systems. Diagnostic and prognostic errors like these must be eliminated in many types of commercial and defense-related applications if health monitoring is to be widely implemented. A theoretical framework of "dynamic similarity," in which two sets of mathematical operators are utilized in one system/data model to distinguish damage from nonlinear, time-varying and stochastic events in the measured data, is discussed in this paper. Because structural damage initiation, evolution and accumulation are nonlinear processes, the challenge here is to distinguish abnormal from normal nonlinear dynamics, which are accentuated by physically or statistically non-stationary events in the operating environment. After discussing several examples of structural diagnosis and prognosis involving dynamic similarity, a simplified numerical finite element model of a helicopter blade with time-varying flexural stiffness on a nonlinear aerodynamic elastic foundation, subjected to a stochastic base excitation, is utilized to introduce and examine the effects of dynamic similarity on health monitoring systems. It is shown that environmental variability can be distinguished from structural damage using a physics-based model in conjunction with the dynamic similarity operators to develop more robust damage detection algorithms, which may prove to be more accurate and precise when operating conditions fluctuate.

  9. Fixation of Cs to marine sediments estimated by a stochastic modelling approach.

    PubMed

    Børretzen, Peer; Salbu, Brit

    2002-01-01

    irreversible sediment phase, while about 12.5 years are needed before 99.7% of the Cs ions are fixed. Thus, according to the model estimates, the contact time between 137Cs ions leached from dumped waste and the Stepovogo Fjord sediment should be about 3 years before the sediment will act as an efficient permanent sink. Until then, a significant fraction of 137Cs should be considered mobile. The stochastic modelling approach provides useful tools when assessing sediment-seawater interactions over time, and should be easily applicable to all sediment-seawater systems including a sink term.

  10. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies in the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation.
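
    For the estimation step, the sketch below fits a cross-sectional Cobb-Douglas stochastic production frontier with half-normal inefficiency by maximum likelihood (the Aigner-Lovell-Schmidt form); the panel and time-varying structure of the study is omitted for brevity, and the data are synthetic.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(9)
    n, b0, b1, sv, su = 500, 1.0, 0.6, 0.15, 0.30
    lx = rng.normal(1.0, 0.5, n)                            # log input
    ly = b0 + b1 * lx + rng.normal(0, sv, n) - np.abs(rng.normal(0, su, n))

    def negloglik(p):
        a0, a1, lsv, lsu = p
        sv_, su_ = np.exp(lsv), np.exp(lsu)                 # log-parametrization keeps sigmas > 0
        sig = np.hypot(sv_, su_); lam = su_ / sv_
        eps = ly - a0 - a1 * lx                             # composed error v - u
        return -np.sum(np.log(2.0 / sig) + stats.norm.logpdf(eps / sig)
                       + stats.norm.logcdf(-eps * lam / sig))

    res = optimize.minimize(negloglik, x0=[0.5, 0.5, np.log(0.2), np.log(0.2)],
                            method="Nelder-Mead")
    a0, a1, lsv, lsu = res.x
    print(f"b0={a0:.2f} b1={a1:.2f} sigma_v={np.exp(lsv):.2f} sigma_u={np.exp(lsu):.2f}")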

  11. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies in the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation. PMID:22629352

  12. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    NASA Astrophysics Data System (ADS)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. Domestic rainwater harvesting is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear that DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
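
    The selected model class, a first-order Markov chain for wet/dry occurrence with a mixed-exponential distribution for wet-day depths, can be sketched in a few lines; the transition probabilities and mixture parameters below are illustrative, not fitted West African values.

    import numpy as np

    rng = np.random.default_rng(10)
    p_wd, p_ww = 0.25, 0.60            # P(wet | yesterday dry), P(wet | yesterday wet)
    w, mu1, mu2 = 0.7, 4.0, 20.0       # mixture weight and component mean depths [mm]

    def generate(ndays):
        wet, rain = False, np.zeros(ndays)
        for d in range(ndays):
            wet = rng.uniform() < (p_ww if wet else p_wd)      # Markov occurrence step
            if wet:
                mu = mu1 if rng.uniform() < w else mu2         # mixed-exponential amount
                rain[d] = rng.exponential(mu)
        return rain

    rain = generate(365 * 30)          # a 30-year synthetic series for DRWH assessment
    print(f"wet fraction {np.mean(rain > 0):.2f}, mean wet-day depth {rain[rain > 0].mean():.1f} mm")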

  13. Stochastic Modeling Approach for the Evaluation of Backbreak due to Blasting Operations in Open Pit Mines

    NASA Astrophysics Data System (ADS)

    Sari, Mehmet; Ghasemi, Ebrahim; Ataei, Mohammad

    2014-03-01

    Backbreak is an undesirable side effect of bench blasting operations in open pit mines. A large number of parameters affect backbreak, including controllable parameters (such as blast design parameters and explosive characteristics) and uncontrollable parameters (such as rock and discontinuities properties). The complexity of the backbreak phenomenon and the uncertainty in terms of the impact of various parameters makes its prediction very difficult. The aim of this paper is to determine the suitability of the stochastic modeling approach for the prediction of backbreak and to assess the influence of controllable parameters on the phenomenon. To achieve this, a database containing actual measured backbreak occurrences and the major effective controllable parameters on backbreak (i.e., burden, spacing, stemming length, powder factor, and geometric stiffness ratio) was created from 175 blasting events in the Sungun copper mine, Iran. From this database, first, a new site-specific empirical equation for predicting backbreak was developed using multiple regression analysis. Then, the backbreak phenomenon was simulated by the Monte Carlo (MC) method. The results reveal that stochastic modeling is a good means of modeling and evaluating the effects of the variability of blasting parameters on backbreak. Thus, the developed model is suitable for practical use in the Sungun copper mine. Finally, a sensitivity analysis showed that stemming length is the most important parameter in controlling backbreak.
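
    The Monte Carlo evaluation step can be sketched as follows: blast-design inputs are sampled from assumed distributions and propagated through a placeholder regression; the coefficients below are hypothetical, not the Sungun site-specific model.

    import numpy as np

    rng = np.random.default_rng(11)
    nsim = 100_000
    burden   = rng.normal(4.0, 0.4, nsim)    # m
    spacing  = rng.normal(5.0, 0.5, nsim)    # m
    stemming = rng.normal(3.5, 0.6, nsim)    # m
    powder   = rng.normal(0.9, 0.1, nsim)    # kg/m^3

    # Hypothetical linear backbreak model, for illustration only
    backbreak = -2.0 + 0.5 * burden + 0.3 * spacing + 0.9 * stemming - 1.2 * powder

    print(f"mean {backbreak.mean():.2f} m, P90 {np.percentile(backbreak, 90):.2f} m")
    # In this linear setup each input's contribution scales as |coefficient| * spread;
    # with the values above, stemming dominates, mirroring the paper's qualitative finding.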

  14. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes River in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of the 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron model from ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic
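
    The identification-by-AIC step translates directly into a small order grid search; below is a sketch on a synthetic series, assuming the statsmodels package is available (the boron data themselves are not reproduced here):

```python
import warnings

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

warnings.filterwarnings("ignore")   # silence convergence chatter for the demo
rng = np.random.default_rng(2)

# Synthetic stand-in for a monthly boron concentration series (mg/L).
y = pd.Series(0.5 + np.cumsum(rng.normal(0, 0.02, 108)))

best = None
for p in range(3):
    for d in range(2):
        for q in range(3):
            try:
                res = ARIMA(y, order=(p, d, q)).fit()
            except Exception:
                continue
            if best is None or res.aic < best[1]:
                best = ((p, d, q), res.aic)

print(f"best order {best[0]} with AIC = {best[1]:.1f}")
```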

  15. Impact of Geological Characterization Uncertainties on Subsurface Flow & Transport Using a Stochastic Discrete Fracture Network Approach

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.

    2009-12-01

    Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools tends to decrease as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very sparse, limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherent in the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying the uncertainty in the reduced probability distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on the flow and transport using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and

  16. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. This presentation describes a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14
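
    The core of SDDP is the approximation of the (concave, for maximization) value-to-go function by supporting hyperplanes, or cuts, extracted from the one-stage linear programs. A toy illustration of that cut machinery with a known concave value function, not the authors' multireservoir model:

```python
import numpy as np

# Toy concave future-value function of reservoir storage s. In SDDP it is
# unknown; here we can evaluate it and its slope (the "water value") exactly.
V  = lambda s: 10 * np.sqrt(s)
dV = lambda s: 5 / np.sqrt(s)

# Supporting hyperplane cuts V(s0) + dV(s0)*(s - s0) at sampled storages,
# analogous to the cuts SDDP builds from one-stage LP dual information.
cuts = [(V(s0) - dV(s0) * s0, dV(s0)) for s0 in (10.0, 40.0, 90.0)]

def V_hat(s):
    """Outer approximation: pointwise minimum over cuts (for maximization)."""
    return min(a + b * s for a, b in cuts)

for s in (5.0, 25.0, 60.0, 100.0):
    print(f"s={s:6.1f}  true V={V(s):7.2f}  cut approx={V_hat(s):7.2f}")
```

    Adding cuts where forward simulations visit the state space tightens the approximation exactly where the policy needs it, which is what lets SDDP escape the curse of dimensionality.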

  17. Stochastic dynamic optimization approach for revegetation of reclaimed mine soils under uncertain weather regime

    SciTech Connect

    Mustafa, G.

    1989-01-01

    This study presents a comprehensive, physically based stochastic dynamic optimization model to assist planners in making decisions concerning the mine soil depths and soil mixture ratios required to achieve successful revegetation of mine lands at different probability levels of success, subject to an uncertain weather regime. A perennial grass growth model was modified and validated for predicting vegetation growth in reclaimed mine soils. The plant growth model is based on continuous relationships between plant growth, air temperature, dry length, leaf area, photoperiod and plant-soil-moisture stresses. A plant-available soil moisture model was adopted to estimate daily soil moisture for mine soils. A general probability model was developed to estimate the probability of successful revegetation in a 5-year bond release period. The probability model considers five possible bond release criteria in mine soil reclamation planning. A stochastic dynamic optimization model (SDOM) was developed to find the optimum combination of soil depth and soil mixture ratios that meets the successful vegetation standard under non-irrigated conditions, with weather as the only random element of the system. The SDOM was applied to Wise County, Virginia, and the model found that a 2:1 sandstone/siltstone soil mixture required the minimum soil depth to achieve successful revegetation. These results were also supported by field data. The developed model allows planners to better manage lands drastically disturbed by surface mining.

  18. A stochastic context free grammar based framework for analysis of protein sequences

    PubMed Central

    Dyrka, Witold; Nebel, Jean-Christophe

    2009-01-01

    Background In the last decade, there have been many applications of formal language theory in bioinformatics, such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have mainly limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. These grammars, like other state-of-the-art methods, cannot cover higher-order dependencies such as the nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences in which grammars are induced using a genetic algorithm. Results This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. Results show that the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free Grammar based framework has been

  19. Stochastic investigation of rock anisotropy based on the climacogram

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tzouka, Katerina; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Anisotropy plays an important role in rock properties and carries valuable information for many fields of applied geology and engineering. Many methods have been developed to detect transitions from isotropy to anisotropy, but as a scale-dependent effect, anisotropy also needs to be determined across multiple scales. We investigate the application of a stochastic tool, the climacogram (i.e., the variance of the averaged process vs. scale), to characterize anisotropy in rocks at different length scales through image processing. The data are laboratory images, specifically thin sections, together with photographs of rock samples and rock formations in the field, used to examine anisotropy at the nano-, micro- and macroscale.
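
    The climacogram itself is simple to compute; a minimal one-dimensional version is sketched below (for images, the averaging windows become two-dimensional pixel blocks):

```python
import numpy as np

def climacogram(x, scales):
    """Variance of the locally averaged process as a function of scale k."""
    x = np.asarray(x, dtype=float)
    gamma = []
    for k in scales:
        m = len(x) // k
        # Average over non-overlapping windows of length k, then take variance.
        means = x[:m * k].reshape(m, k).mean(axis=1)
        gamma.append(means.var(ddof=1))
    return np.array(gamma)

rng = np.random.default_rng(3)
x = rng.normal(size=4096)               # white noise: gamma(k) ~ 1/k
scales = np.array([1, 2, 4, 8, 16, 32, 64])
g = climacogram(x, scales)
print(np.round(g * scales, 2))          # roughly constant for white noise
```

    A power-type (rather than exponential) decay of gamma(k) at large scales is the Hurst-Kolmogorov signature mentioned in related entries.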

  20. Strategy based on information entropy for optimizing stochastic functions.

    PubMed

    Schmidt, Tobias Christian; Ries, Harald; Spirkl, Wolfgang

    2007-02-01

    We propose a method for the global optimization of stochastic functions. During the course of the optimization, a probability distribution is built up for the location and the value of the global optimum. The concept of information entropy is used to make the optimization as efficient as possible. The entropy measures the information content of a probability distribution, and thus gives a criterion for decisions: From several possibilities we choose the one which yields the most information concerning location and value of the global maximum sought.

  1. The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.

    SciTech Connect

    Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.

    2013-09-01

    The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs in China and the 'implicit beyond the border' costs of China's trading partners. Results indicate that the significance levels of the independent variables change dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.

  2. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations.

    PubMed

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of

  3. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J. Andrew

    2016-08-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of

  4. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes

    PubMed Central

    Hahl, Sayuri K.; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still

  5. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
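
    The mode-versus-fixed-point comparison can be reproduced in miniature for a one-species birth-death process with a Hill-type (positive autoregulation) production rate; the parameters below are illustrative and chosen to give deterministic bistability:

```python
import numpy as np

# Positive autoregulation: birth rate f(n), linear degradation g*n.
a, b, K, h, g = 0.4, 7.0, 25.0, 4, 0.1
f = lambda n: a + b * n**h / (K**h + n**h)

# --- CME: stationary distribution of the 1D birth-death chain ---
# Detailed balance gives p(n+1)/p(n) = f(n) / (g*(n+1)).
N = 200
logp = np.zeros(N + 1)
for n in range(N):
    logp[n + 1] = logp[n] + np.log(f(n)) - np.log(g * (n + 1))
p = np.exp(logp - logp.max())
p /= p.sum()
modes = [n for n in range(1, N) if p[n] > p[n - 1] and p[n] >= p[n + 1]]

# --- ODE: fixed points of dx/dt = f(x) - g*x, located via sign changes ---
x = np.linspace(0, N, 20001)
rhs = f(x) - g * x
fixed = x[:-1][np.sign(rhs[:-1]) != np.sign(rhs[1:])]

print("CME modes       :", modes)
print("ODE fixed points:", np.round(fixed, 1))
```

    With these settings the deterministic system has two stable fixed points, and the stationary CME distribution is bimodal with modes close to them; shrinking the system pushes the two descriptions apart, which is the discrepancy the paper characterizes.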

  6. Non-Markovian stochastic Liouville equation and its Markovian representation: Extensions of the continuous-time random-walk approach.

    PubMed

    Shushin, A I

    2008-03-01

    Some specific features and extensions of the continuous-time random-walk (CTRW) approach are analyzed in detail within the Markovian representation (MR) and the CTRW-based non-Markovian stochastic Liouville equation (SLE). In the MR, CTRW processes are represented by multidimensional Markovian ones. In this representation the probability density function (PDF) W(t) of fluctuation renewals is associated with that of reoccurrences in a certain jump state of some Markovian controlling process. Within the MR the non-Markovian SLE, which describes the effect of CTRW-like noise on the relaxation of dynamic and stochastic systems, is generalized to take into account the influence of relaxing systems on the statistical properties of noise. Some applications of the generalized non-Markovian SLE are discussed. In particular, it is applied to study two modifications of the CTRW approach. One of them considers cascaded CTRWs in which the controlling process is actually a CTRW-like one controlled by another CTRW process, controlled in turn by a third one, etc. Within the MR a simple expression for the PDF W(t) of the total controlling process is obtained in terms of Markovian variants of controlling PDFs in the cascade. The expression is shown to be especially simple and instructive in the case of anomalous processes determined by the long-time-tailed W(t). The cascaded CTRWs can model the effect of the complexity of a system on the relaxation kinetics (in glasses, fractals, branching media, ultrametric structures, etc.). Another CTRW modification describes the kinetics of processes governed by fluctuating W(t). Within the MR the problem is analyzed in a general form without restrictive assumptions on the correlations of PDFs of consecutive renewals. The analysis shows that fluctuations of W(t) can strongly affect the kinetics of the process. Possible manifestations of this effect are discussed.
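
    The anomalous kinetics generated by a long-time-tailed W(t) can be seen in a minimal CTRW simulation with Pareto waiting times (the tail index alpha is illustrative; alpha < 1 yields subdiffusion):

```python
import numpy as np

rng = np.random.default_rng(4)

def ctrw_positions(t_max, alpha=0.6, n_walkers=2000):
    """CTRW with +/-1 jumps and Pareto waiting times ~ t^(-1-alpha), 0<alpha<1."""
    pos = np.zeros(n_walkers)
    for i in range(n_walkers):
        t, x = 0.0, 0
        while True:
            t += rng.pareto(alpha) + 1.0   # waiting time with minimum 1
            if t > t_max:
                break
            x += rng.choice((-1, 1))
        pos[i] = x
    return pos

for t_max in (100, 1000, 10000):
    msd = np.mean(ctrw_positions(t_max) ** 2)
    print(f"t={t_max:6d}  MSD={msd:8.2f}")   # grows ~ t^alpha (subdiffusive)
```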

  7. Stochastic modeling of rainfall

    SciTech Connect

    Guttorp, P.

    1996-12-31

    We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.

  8. Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach

    DTIC Science & Technology

    2012-09-30

    As the Madden-Julian oscillation (MJO) moves eastward from the Indian to the Pacific Ocean, it typically accelerates, becomes

  9. Stochastic generation of daily rainfall events based on rainfall pattern classification and Copula-based rainfall characteristics simulation

    NASA Astrophysics Data System (ADS)

    Xu, Y. P.; Gao, C.

    2016-12-01

    To deal with the problem of having no or an insufficiently long rainfall record, developing a stochastic rainfall model is essential. This study first proposes a stochastic model of daily rainfall events based on the classification and simulation of different rainfall patterns and copula-based joint simulation of rainfall characteristics. Compared with current stochastic rainfall models, this new model not only preserves the dependence structure of rainfall characteristics by using copula functions, but also takes into consideration various rainfall patterns that may cause different hydrological responses in a watershed. In order to determine the appropriate number of representative rainfall patterns in an objective way, we also introduced clustering validation measures into the stochastic model. Afterwards, the developed stochastic rainfall model is applied to 39 gauged meteorological stations in Zhejiang province, East China, and is then extended to ungauged stations for validation by applying the self-organizing map (SOM) method. The final results show that the 39 stations can be classified into seven regions that further fall into three categories based on rainfall generation mechanisms, i.e., plum-rain control region, typhoon-rain control region and typhoon-plum-rain compatible region. Rainfall patterns at each station can be classified into five or six types based on clustering validation measures. This study shows that the stochastic rainfall model is robust and can be applied to both gauged and ungauged stations for generating long rainfall records.
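
    The copula step can be sketched as follows, assuming a Gaussian copula and hypothetical gamma marginals in place of the fitted Zhejiang distributions, with scipy handling the probability transforms:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 10_000

# Gaussian copula with correlation rho between event duration and depth.
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)                  # uniforms carrying the dependence

# Hypothetical marginals (placeholders, not the fitted distributions):
duration = stats.gamma(a=2.0, scale=4.0).ppf(u[:, 0])   # hours
depth    = stats.gamma(a=1.5, scale=8.0).ppf(u[:, 1])   # mm

r, _ = stats.spearmanr(duration, depth)
print(f"Spearman correlation of simulated characteristics: {r:.2f}")
```

    Because the dependence lives in the copula and the margins are transformed separately, the same machinery works for whatever marginal families each rainfall pattern requires.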

  10. Stochastic volatility of the futures prices of emission allowances: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin

    2017-01-01

    Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
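
    A minimal discrete-time SV simulation, without jumps and without the MCMC estimation step, shows how correlated return and volatility shocks encode the leverage effect; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 1000

# SV model: r_t = exp(h_t/2) * eps_t, with h_t an AR(1) log-variance.
mu, phi, sigma_eta, rho = -1.0, 0.97, 0.15, -0.5   # rho < 0: leverage effect

h = np.empty(T)
r = np.empty(T)
h[0] = mu
for t in range(T):
    # Correlated return/volatility shocks produce the leverage effect.
    eps, z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
    r[t] = np.exp(h[t] / 2) * eps
    if t + 1 < T:
        h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * z

kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2
print(f"return kurtosis: {kurt:.2f} (>3 indicates fat tails)")
```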

  11. Stochastic Modeling of Usage Patterns in a Web-Based Information System.

    ERIC Educational Resources Information Center

    Chen, Hui-Min; Cooper, Michael D.

    2002-01-01

    Uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns, both in rates and in probabilities, in a Web-based information system. Describes search sessions from transaction logs of the University of California's MELVYL library catalog system and discusses sequential dependency. (Author/LRW)
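
    A toy semi-Markov session simulator in the spirit of such models, with hypothetical states, embedded-chain transition probabilities, and lognormal holding times (not the MELVYL estimates):

```python
import numpy as np

rng = np.random.default_rng(7)

states = ["index_search", "display", "refine", "logoff"]
# Hypothetical embedded-chain transition probabilities (rows sum to 1).
P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.3, 0.0, 0.4, 0.3],
              [0.6, 0.3, 0.0, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
median_hold = [20.0, 45.0, 15.0, 1.0]   # seconds; illustrative only

def simulate_session():
    """Semi-Markov chain: Markov jumps plus state-dependent holding times."""
    s, t, path = 0, 0.0, []
    while states[s] != "logoff":
        path.append((states[s], round(t, 1)))
        t += rng.lognormal(np.log(median_hold[s]), 0.5)  # holding time
        s = rng.choice(4, p=P[s])
    return path

print(simulate_session())
```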

  12. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models.

    PubMed

    Lemmens, D; Wouters, M; Tempere, J; Foulon, S

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.

  13. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models

    NASA Astrophysics Data System (ADS)

    Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
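
    The Monte Carlo side of such tests can be sketched for a vanilla call under a Heston-type stochastic volatility model (full-truncation Euler scheme); the closed-form path-integral formulas themselves are not reproduced, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Heston dynamics: dv = kappa*(theta - v) dt + xi*sqrt(v) dW_v
S0, K, T, r = 100.0, 100.0, 1.0, 0.02
v0, kappa, theta, xi, rho = 0.04, 2.0, 0.04, 0.3, -0.7

n_paths, n_steps = 100_000, 200
dt = T / n_steps
S = np.full(n_paths, S0)
v = np.full(n_paths, v0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)                       # full-truncation Euler
    S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
print(f"Heston MC call price: {price:.3f}")
```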

  14. A New Approach to Predict Microbial Community Assembly and Function Using a Stochastic, Genome-Enabled Modeling Framework

    NASA Astrophysics Data System (ADS)

    King, E.; Brodie, E.; Anantharaman, K.; Karaoz, U.; Bouskill, N.; Banfield, J. F.; Steefel, C. I.; Molins, S.

    2016-12-01

    Characterizing and predicting the microbial and chemical compositions of subsurface aquatic systems necessitates an understanding of the metabolism and physiology of organisms that are often uncultured or studied under conditions not relevant for one's environment of interest. Cultivation-independent approaches are therefore important and have greatly enhanced our ability to characterize functional microbial diversity. The capability to reconstruct genomes representing thousands of populations from microbial communities using metagenomic techniques provides a foundation for development of predictive models for community structure and function. Here, we discuss a genome-informed stochastic trait-based model incorporated into a reactive transport framework to represent the activities of coupled guilds of hypothetical microorganisms. Metabolic pathways for each microbe within a functional guild are parameterized from metagenomic data with a unique combination of traits governing organism fitness under dynamic environmental conditions. We simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for cellular maintenance, respiration, biomass development, and enzyme production. While 'omics analyses can now characterize the metabolic potential of microbial communities, it is functionally redundant as well as computationally prohibitive to explicitly include the thousands of recovered organisms in biogeochemical models. However, one can derive potential metabolic pathways from genomes along with trait-linkages to build probability distributions of traits. These distributions are used to assemble groups of microbes that couple one or more of these pathways. From the initial ensemble of microbes, only a subset will persist based on the interaction of their physiological and metabolic traits with environmental conditions, competing organisms, etc. Here, we analyze the predicted niches of these hypothetical microbes and

  15. Stochastic population forecasting based on combinations of expert evaluations within the Bayesian paradigm.

    PubMed

    Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio

    2014-10-01

    This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed.
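
    The mixture construction can be illustrated in miniature for a single indicator: hypothetical expert scenarios are combined into a mixture forecasting distribution, omitting the dependence among demographic components and among experts that the full Supra-Bayesian treatment captures:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical expert evaluations of a summary indicator (e.g., total
# fertility rate in a target year), each summarized as a normal scenario.
expert_means = np.array([1.35, 1.50, 1.42, 1.60])
expert_sds   = np.array([0.10, 0.15, 0.08, 0.20])
weights      = np.full(4, 0.25)        # equal credibility across experts

# Forecasting distribution as a mixture over experts: draw an expert,
# then draw a value from that expert's scenario distribution.
n = 100_000
idx = rng.choice(4, size=n, p=weights)
tfr = rng.normal(expert_means[idx], expert_sds[idx])

print(f"median {np.median(tfr):.2f}, 80% interval "
      f"[{np.percentile(tfr, 10):.2f}, {np.percentile(tfr, 90):.2f}]")
```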

  16. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  17. A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach for contaminated sites management.

    PubMed

    Hu, Yan; Wen, Jing-Ya; Li, Xiao-Li; Wang, Da-Zhou; Li, Yu

    2013-10-15

    A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. Then, the probability of violating environmental guideline (Pv) can be determined by comparison between the modeled concentrations and the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, simulated from 1985 to 2020. Four scenarios were analyzed, including "residential land" and "industrial land" environmental guidelines under "strict" and "loose" strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission in soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in the dynamic multimedia integrated environmental risk assessment and the contaminated site management.

  18. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates multi-precision data from different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as rational probability density curves by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, through numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and Bayesian methods. Differences between normally distributed single CPT samplings and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi

  19. Stochastic margin-based structure learning of Bayesian network classifiers.

    PubMed

    Pernkopf, Franz; Wohlmayr, Michael

    2013-02-01

    The margin criterion for parameter learning in graphical models has gained significant impact over the last years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires fewer score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve classification performance similar to that of support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  20. Stochastic margin-based structure learning of Bayesian network classifiers

    PubMed Central

    Pernkopf, Franz; Wohlmayr, Michael

    2013-01-01

    The margin criterion for parameter learning in graphical models has gained significant impact over the last years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires fewer score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve classification performance similar to that of support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first. PMID:24511159
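
    A generic simulated annealing skeleton for structure search is sketched below; the toy score merely measures agreement with a hidden edge set and stands in for the maximum margin score used in the paper:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 6   # number of nodes

# Toy structure score: agreement with a hidden "good" edge set. A real
# application would plug in the maximum margin score of the classifier.
target = rng.random((n, n)) < 0.3
def score(A):
    return -int(np.sum(A != target))

A = np.zeros((n, n), dtype=bool)
cur = score(A)
temp = 2.0
for _ in range(5000):
    i, j = rng.integers(n, size=2)
    A[i, j] ^= True                     # propose flipping one edge
    new = score(A)
    if new >= cur or rng.random() < np.exp((new - cur) / temp):
        cur = new                       # accept the move
    else:
        A[i, j] ^= True                 # reject: undo the flip
    temp *= 0.999                       # cool down

print(f"final score {cur} (0 = perfect agreement)")
```

    The acceptance rule occasionally takes score-decreasing moves at high temperature, which is what lets annealing escape the local optima that trap greedy hill-climbing.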

  1. Stochastic approach to correlations beyond the mean field with the Skyrme interaction

    SciTech Connect

    Fukuoka, Y.; Nakatsukasa, T.; Funaki, Y.; Yabana, K.

    2012-10-20

    Large-scale calculation based on the multi-configuration Skyrme density functional theory is performed for the light N=Z even-even nucleus, {sup 12}C. Stochastic procedures and the imaginary-time evolution are utilized to prepare many Slater determinants. Each state is projected on eigenstates of parity and angular momentum. Then, performing the configuration mixing calculation with the Skyrme Hamiltonian, we obtain low-lying energy-eigenstates and their explicit wave functions. The generated wave functions are completely free from any assumption and symmetry restriction. Excitation spectra and transition probabilities are well reproduced, not only for the ground-state band, but for negative-parity excited states and the Hoyle state.

  2. Stochastic investigation of two-dimensional cross sections of rocks based on the climacogram

    NASA Astrophysics Data System (ADS)

    Kalamioti, Anna; Dimitriadis, Panayiotis; Tzouka, Katerina; Lerias, Eleutherios; Koutsoyiannis, Demetris

    2016-04-01

    The statistical properties of soil and rock formations are essential for the characterization of the porous medium geological structure as well as for the prediction of its transport properties in groundwater modelling. We investigate two-dimensional cross sections of rocks in terms of stochastic structure of its morphology quantified by the climacogram (i.e., variance of the averaged process vs. scale). The analysis is based both in microscale and macroscale data, specifically from Scanning Electron Microscope (SEM) pictures and from field photos, respectively. We identify and quantify the stochastic properties with emphasis on the large scale type of decay (exponentially or power type, else known as Hurst-Kolmogorov behaviour). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  3. Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-02-01

    This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.

  4. Sparse Regression Based Structure Learning of Stochastic Reaction Networks from Single Cell Snapshot Time Series

    PubMed Central

    Ganscha, Stefan; Claassen, Manfred

    2016-01-01

    Stochastic chemical reaction networks constitute a model class to quantitatively describe dynamics and cell-to-cell variability in biological systems. The topology of these networks typically is only partially characterized due to experimental limitations. Current approaches for refining network topology are based on the explicit enumeration of alternative topologies and are therefore restricted to small problem instances with almost complete knowledge. We propose the reactionet lasso, a computational procedure that derives a stepwise sparse regression approach on the basis of the Chemical Master Equation, enabling large-scale structure learning for reaction networks by implicitly accounting for billions of topology variants. We have assessed the structure learning capabilities of the reactionet lasso on synthetic data for the complete TRAIL induced apoptosis signaling cascade comprising 70 reactions. We find that the reactionet lasso is able to efficiently recover the structure of these reaction systems, ab initio, with high sensitivity and specificity. With only < 1% false discoveries, the reactionet lasso is able to recover 45% of all true reactions ab initio among > 6000 possible reactions and over 10^2000 network topologies. In conjunction with information rich single cell technologies such as single cell RNA sequencing or mass cytometry, the reactionet lasso will enable large-scale structure learning, particularly in areas with partial network structure knowledge, such as cancer biology, and thereby enable the detection of pathological alterations of reaction networks. We provide software to allow for wide applicability of the reactionet lasso. PMID:27923064
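
    The sparse-regression principle behind such structure learning can be sketched with a synthetic stand-in: a lasso fit recovering a small active set among many candidate coefficients (scikit-learn assumed; this is not the published reactionet lasso implementation):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(11)
n_obs, n_candidates, n_true = 200, 500, 8

# Synthetic stand-in for the CME-derived regression: columns are candidate
# reaction propensity features, y is the observed rate of change.
X = rng.normal(size=(n_obs, n_candidates))
beta = np.zeros(n_candidates)
true_idx = rng.choice(n_candidates, n_true, replace=False)
beta[true_idx] = rng.uniform(0.5, 2.0, n_true)
y = X @ beta + rng.normal(0, 0.1, n_obs)

fit = Lasso(alpha=0.05).fit(X, y)
recovered = np.flatnonzero(np.abs(fit.coef_) > 1e-3)
print(f"true reactions     : {sorted(true_idx)}")
print(f"recovered reactions: {list(recovered)}")
```

    The L1 penalty drives most coefficients exactly to zero, so candidate topologies are pruned implicitly rather than enumerated one by one.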

  5. A new stochastic control approach to multireservoir operation problems with uncertain forecasts

    NASA Astrophysics Data System (ADS)

    Wang, Jinwen

    2010-02-01

    This paper presents a new stochastic control approach (NSCA) for determining the optimal weekly operation policy of multiple hydroplants. The approach involves solving an optimization problem at the beginning of each week to derive the optimal storage trajectory that maximizes the energy production during a study horizon plus the value of water stored at the end of the study horizon. The derived optimal storage at the end of the upcoming week is then used as the target to operate the reservoir. This paper describes the inflow as a forecast-dependent white noise and demonstrates that the optimal target storage at the end of the upcoming week can be equivalently determined by solving a real-time model. The real-time model derives the optimal storage trajectory that converges to the optimal annually cycling storage trajectory (OACST) at the end of a real-time horizon, with the OACST determined by solving an annually cycling model. Numerical examples with one, two, three, and seven reservoirs are studied in detail. For systems of no more than three reservoirs, the NSCA obtains results similar to those obtained with SDP, even using a simple inflow forecasting model such as AR(1). A hypothetical numerical example with 21 reservoirs is also tested. The NSCA is conceptually superior to the other approaches for problems that are computationally intractable due to the number of reservoirs in the system.

  6. Photosynthetic electron transfer controlled by protein relaxation: analysis by Langevin stochastic approach.

    PubMed Central

    Cherepanov, D A; Krishtalik, L I; Mulkidjanian, A Y

    2001-01-01

    Relaxation processes in proteins range in time from picoseconds to seconds. Correspondingly, biological electron transfer (ET) could be controlled by slow protein relaxation. We used the Langevin stochastic approach to describe this type of ET dynamics. Two different types of kinetic behavior were revealed, namely: oscillating ET (that could occur at picoseconds) and monotonically relaxing ET. On a longer time scale, the ET dynamics can include two different kinetic components. The faster one reflects the initial, nonadiabatic ET, whereas the slower one is governed by the medium relaxation. We derived a simple relation between the relative extents of these components, the change in the free energy (DeltaG), and the energy of the slow reorganization Lambda. The rate of ET was found to be determined by slow relaxation at -DeltaG < or = Lambda. The application of the developed approach to experimental data on ET in the bacterial photosynthetic reaction centers allowed a quantitative description of the oscillating features in the primary charge separation and yielded values of Lambda for the slower low-exothermic ET reactions. In all cases but one, the obtained estimates of Lambda varied in the range of 70-100 meV. Because the vast majority of the biological ET reactions are only slightly exothermic (DeltaG > or = -100 meV), the relaxationally controlled ET is likely to prevail in proteins. PMID:11222272
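
    Analyses of this type rest on numerical integration of a Langevin equation; below is a minimal overdamped Euler-Maruyama sketch on a harmonic energy surface, with an equilibrium variance check (units and parameters are illustrative, not the reaction-center model):

```python
import numpy as np

rng = np.random.default_rng(12)

# Overdamped Langevin dynamics on a harmonic energy surface:
#   dx = -(k/gamma) * x * dt + sqrt(2*kT/gamma) * dW
k, gamma, kT = 1.0, 1.0, 0.5
dt, n_steps, n_traj = 1e-3, 20_000, 500

x = np.full(n_traj, 2.0)               # start displaced from equilibrium
for _ in range(n_steps):
    noise = rng.standard_normal(n_traj)
    x += -(k / gamma) * x * dt + np.sqrt(2 * kT / gamma * dt) * noise

# Equilibrium check: Var(x) should approach kT/k.
print(f"sampled Var(x) = {x.var():.3f}  (theory kT/k = {kT / k:.3f})")
```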

  7. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    PubMed

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  8. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    PubMed

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.

  9. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    SciTech Connect

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  10. Passivity-based sliding mode control for a polytopic stochastic differential inclusion system.

    PubMed

    Liu, Leipo; Fu, Zhumu; Song, Xiaona

    2013-11-01

    Passivity-based sliding mode control for a polytopic stochastic differential inclusion (PSDI) system is considered. A control law is designed such that the reachability of sliding motion is guaranteed. Moreover, sufficient conditions for mean square asymptotic stability and passivity of sliding mode dynamics are obtained by linear matrix inequalities (LMIs). Finally, two examples are given to illustrate the effectiveness of the proposed method.

  11. Stochastic quantum Zeno-based detection of noise correlations

    NASA Astrophysics Data System (ADS)

    Müller, Matthias M.; Gherardini, Stefano; Caruso, Filippo

    2016-12-01

    A system under constant observation is practically frozen in the measurement subspace. If the system driving is a random classical field, the survival probability of the system in the subspace becomes a random variable described by the Stochastic Quantum Zeno Dynamics (SQZD) formalism. Here, we study the time and ensemble averages of this random survival probability and demonstrate how time correlations in the noisy environment determine whether the two averages coincide or not. These environment time correlations can potentially generate non-Markovian dynamics of the quantum system, depending on the structure and energy scale of the system Hamiltonian. We thus propose a way to detect time correlations of the environment by coupling a quantum probe system to it and observing the survival probability of the quantum probe in a measurement subspace. This will further contribute to the development of new schemes for quantum sensing technologies, where nanodevices may be exploited to image external structures or biological molecules via the surface field they generate.

  12. Stochastic quantum Zeno-based detection of noise correlations

    PubMed Central

    Müller, Matthias M.; Gherardini, Stefano; Caruso, Filippo

    2016-01-01

A system under constant observation is practically frozen to the measurement subspace. If the system driving is a random classical field, the survival probability of the system in the subspace becomes a random variable described by the Stochastic Quantum Zeno Dynamics (SQZD) formalism. Here, we study the time and ensemble average of this random survival probability and demonstrate how time correlations in the noisy environment determine whether the two averages coincide. These environment time correlations can potentially generate non-Markovian dynamics of the quantum system depending on the structure and energy scale of the system Hamiltonian. We thus propose a way to detect time correlations of the environment by coupling a quantum probe system to it and observing the survival probability of the quantum probe in a measurement subspace. This will further contribute to the development of new schemes for quantum sensing technologies, where nanodevices may be exploited to image external structures or biological molecules via the surface field they generate. PMID:27941889

  13. Beam Based Measurements for Stochastic Cooling Systems at Fermilab

    SciTech Connect

    Lebedev, V.A.; Pasquinelli, R.J.; Werkema, S.J.; /Fermilab

    2007-09-13

Improvement of antiproton stacking rates has been pursued for the last twenty years at Fermilab. The last twelve months have been dedicated to improving the computer model of the Stacktail system. The production of antiprotons encompasses the use of the entire accelerator chain with the exception of the Tevatron. In the Antiproton Source, two storage rings, the Debuncher and the Accumulator, are responsible for the accumulation of antiprotons in quantities that can exceed 2 × 10^12, but more routinely, stacks of 5 × 10^11 antiprotons are accumulated before being transferred to the Recycler ring. Since the beginning of this recent enterprise, peak accumulation rates have increased from 2 × 10^11 to greater than 2.3 × 10^11 antiprotons per hour. A goal of 3 × 10^11 per hour has been established. Improvements to the stochastic cooling systems are but a part of this current effort. This paper will discuss Stacktail system measurements and experienced system limitations.

  14. Stochastic simulation of soil particle-size curves in heterogeneous aquifer systems through a Bayes space approach

    NASA Astrophysics Data System (ADS)

    Menafoglio, A.; Guadagnini, A.; Secchi, P.

    2016-08-01

We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curves and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) deal effectively with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
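
    A toy version of the Bayes space idea (synthetic densities, and a centered log-ratio transform standing in for the paper's basis expansion): densities are mapped to an unconstrained space, simulated there with ordinary multivariate statistics, and mapped back to valid densities:

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    grid = np.linspace(0.0, 1.0, 50)            # normalized log grain-size support
    data = np.array([np.exp(-0.5 * ((grid - m) / s) ** 2)
                     for m, s in zip(rng.uniform(0.3, 0.7, 30),
                                     rng.uniform(0.08, 0.2, 30))])
    data /= data.sum(axis=1, keepdims=True)     # discretized densities (compositions)

    clr = np.log(data) - np.log(data).mean(axis=1, keepdims=True)   # clr transform

    # Gaussian simulation in clr space from the empirical mean and covariance ...
    mean, cov = clr.mean(axis=0), np.cov(clr.T)
    sim_clr = rng.multivariate_normal(mean, cov, size=5, check_valid='ignore')

    # ... and back-transformation to nonnegative, unit-sum densities.
    sim = np.exp(sim_clr)
    sim /= sim.sum(axis=1, keepdims=True)
    print(sim.shape, sim.sum(axis=1))           # 5 simulated PSC densities
    ```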

  15. A stochastic vision-based model inspired by zebrafish collective behaviour in heterogeneous environments

    PubMed Central

    Collignon, Bertrand; Séguret, Axel; Halloy, José

    2016-01-01

Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
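
    A minimal sketch of the key modelling choice, sampling a heading from a perception-based probability distribution rather than summing influence vectors (the von Mises weighting and parameters here are hypothetical, not the paper's calibrated model):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def next_heading(neighbor_bearings, kappa=4.0, n_grid=360):
        """Sample a new swimming direction from a PDF built on perceived neighbours."""
        theta = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
        pdf = np.ones_like(theta)                      # uniform prior: keep exploring
        for b in neighbor_bearings:                    # each perceived fish attracts
            pdf += np.exp(kappa * np.cos(theta - b))   # von Mises bump toward it
        pdf /= pdf.sum()
        return rng.choice(theta, p=pdf)

    # fish perceived at bearings 30 and 45 degrees: sampled heading is usually near them
    print(np.degrees(next_heading(np.radians([30, 45]))))
    ```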

  16. Random-walk-based stochastic modeling of three-dimensional fiber systems.

    PubMed

    Altendorf, Hellen; Jeulin, Dominique

    2011-04-01

    For the simulation of fiber systems, there exist several stochastic models: systems of straight nonoverlapping fibers, systems of overlapping bending fibers, or fiber systems created by sedimentation. However, there is a lack of models providing dense, nonoverlapping fiber systems with a given random orientation distribution and a controllable level of bending. We introduce a new stochastic model in this paper that generalizes the force-biased packing approach to fibers represented as chains of balls. The starting configuration is modeled using random walks, where two parameters in the multivariate von Mises-Fisher orientation distribution control the bending. The points of the random walk are associated with a radius and the current orientation. The resulting chains of balls are interpreted as fibers. The final fiber configuration is obtained as an equilibrium between repulsion forces avoiding crossing fibers and recover forces ensuring the fiber structure. This approach provides high volume fractions up to 72.0075%.
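
    A sketch of the random-walk starting configuration only (the force-biased packing/equilibration step is omitted and all parameters are hypothetical), using the closed-form sampler for the 3-D von Mises-Fisher distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_vmf3(mu, kappa):
        """Draw one unit vector from the von Mises-Fisher distribution on the sphere."""
        # closed-form inverse CDF for the cosine w of the angle to mu (3-D case)
        u = rng.random()
        w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
        phi = rng.uniform(0.0, 2.0 * np.pi)
        v = np.array([np.sqrt(1 - w**2) * np.cos(phi),
                      np.sqrt(1 - w**2) * np.sin(phi),
                      w])
        z = np.array([0.0, 0.0, 1.0])          # rotate the z-axis onto mu (Rodrigues)
        if np.allclose(mu, z):
            return v
        if np.allclose(mu, -z):
            return -v
        axis = np.cross(z, mu); axis /= np.linalg.norm(axis)
        ang = np.arccos(np.clip(mu @ z, -1.0, 1.0))
        return (v * np.cos(ang) + np.cross(axis, v) * np.sin(ang)
                + axis * (axis @ v) * (1 - np.cos(ang)))

    def fiber(n_balls=50, step=1.0, radius=0.5, kappa=50.0):
        """Chain of balls: each orientation is vMF-distributed about the previous
        one; larger kappa gives straighter (less bent) fibers."""
        centers = [np.zeros(3)]
        direction = np.array([0.0, 0.0, 1.0])
        for _ in range(n_balls - 1):
            direction = sample_vmf3(direction, kappa)
            centers.append(centers[-1] + step * direction)
        return np.array(centers), radius

    pts, r = fiber()
    print(pts[:3])
    ```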

  17. Stochastic goal programming based groundwater remediation management under human-health-risk uncertainty.

    PubMed

    Li, Jing; He, Li; Lu, Hongwei; Fan, Xing

    2014-08-30

An optimal design approach for groundwater remediation is developed through incorporating numerical simulation, health risk assessment, uncertainty analysis and nonlinear optimization within a general framework. Stochastic analysis and goal programming are introduced into the framework to handle uncertainties in real-world groundwater remediation systems. Carcinogenic risks associated with remediation actions are further evaluated at four confidence levels. The differences between ideal and predicted constraints are minimized by goal programming. The approach is then applied to a contaminated site in western Canada to create a set of optimal remediation strategies. Results from the case study indicate that factors including environmental standards, health risks and technical requirements mutually affect and restrict one another. Stochastic uncertainty exists throughout the remediation optimization process and should be taken into consideration in groundwater remediation design.

  18. Stochastic master equation approach for analysis of remote entanglement with Josephson parametric converter amplifier

    NASA Astrophysics Data System (ADS)

    Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.

    2015-03-01

In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of two traveling waves provided by the JPC can work as a which-path information eraser. Using a stochastic master equation approach, we demonstrate the probabilistic production of heralded entangled states and show that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of the driving fields. Additionally, we derive measurement rates and measurement optimization strategies, and discuss the effects of finite amplification gain, cavity losses, and qubit relaxation and dephasing. Work supported by IARPA, ARO and NSF.

  19. SLFP: A stochastic linear fractional programming approach for sustainable waste management

    SciTech Connect

    Zhu, H.; Huang, G.H.

    2011-12-15

    Highlights: > A new fractional programming (SLFP) method is developed for waste management. > SLFP can solve ratio optimization problems associated with random inputs. > A case study of waste flow allocation demonstrates its applicability. > SLFP helps compare objectives of two aspects and reflect system efficiency. > This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
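
    A minimal deterministic-equivalent sketch of the two ingredients (hypothetical numbers): a chance constraint with a normally distributed right-hand side is tightened to its quantile, and the fractional objective is converted to a linear program via the Charnes-Cooper transformation:

    ```python
    import numpy as np
    from scipy.optimize import linprog
    from scipy.stats import norm

    # maximize (c'x + c0) / (d'x + d0) s.t. A x <= b, x >= 0, with d'x + d0 > 0.
    c, c0 = np.array([3.0, 1.0]), 0.0      # "benefit" numerator
    d, d0 = np.array([1.0, 2.0]), 1.0      # "cost" denominator
    A = np.array([[1.0, 1.0]])
    alpha = 0.95                           # constraint-violation risk level
    mu, sd = 10.0, 1.0                     # B ~ Normal(mu, sd) in Pr(a'x <= B) >= alpha
    b = np.array([mu - norm.ppf(alpha) * sd])   # deterministic-equivalent RHS

    # Charnes-Cooper: y = t*x, t > 0; maximize c'y + c0*t
    # s.t. A y - b t <= 0, d'y + d0*t = 1, y >= 0, t >= 0.
    n = len(c)
    obj = -np.concatenate([c, [c0]])                      # linprog minimizes
    A_ub = np.hstack([A, -b.reshape(-1, 1)])
    A_eq = np.concatenate([d, [d0]]).reshape(1, -1)
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(A.shape[0]),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    y, t = res.x[:n], res.x[n]
    print("optimal ratio:", -res.fun, "  x* =", y / t)
    ```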

  20. A Stochastic Cellular Automata Approach to Population Dynamics of Cells in a HIV Immune Response Model

    NASA Astrophysics Data System (ADS)

    Pandey, Ras B.

    1998-03-01

    A stochastic cellular automata (SCA) approach is introduced to study the growth and decay of cellular population in an immune response model relevant to HIV. Four cell types are considered: macrophages (M), helper cells (H), cytotoxic cells (C), and viral infected cells (V). Mobility of the cells is introduced and viral mutation is considered probabilistically. In absence of mutation, the population of the host cells, helper (N_H) and cytotxic (N_C) cells in particular, dominates over the viral population (N_V), i.e., N_H, NC > N_V, the immune system wins over the viral infection. Variation of cellular population with time exhibits oscillations. The amplitude of oscillations in variation of N_H, NC and NV with time decreases at high mobility even at low viral mutation; the rate of viral growth is nonmonotonic with NV > N_H, NC in the long time regime. The viral population is much higher than that of the host cells at higher mutation rate, a possible cause of AIDS.

  1. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
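
    A sketch of the two reliability estimates, given the first two conditional moments of the degradation process (prescribed here in closed form for illustration; in the paper they come from the moment ODEs):

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical conditional moments of a monotone degradation process X(t).
    t = np.linspace(0.1, 10.0, 50)
    mean = 0.8 * t                 # E[X(t)]
    var = 0.05 * t                 # Var[X(t)]
    L = 10.0                       # failure threshold: failure when X(t) >= L

    # FOSM estimate: treat X(t) as Gaussian with the computed first two moments.
    beta = (L - mean) / np.sqrt(var)          # reliability index
    R_fosm = norm.cdf(beta)                   # P(X(t) < L)

    # Markov inequality gives a distribution-free lower bound for X >= 0:
    # P(X >= L) <= E[X]/L  =>  R >= 1 - E[X]/L.
    R_markov = 1.0 - mean / L

    print(R_fosm[-1], R_markov[-1])           # estimate and its lower bound at t = 10
    ```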

  2. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313

  3. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and transductive support vector machine (semi-supervised), are invaluable tools for content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) are available, and how to utilize them efficiently is a challenge. The traditional scheme of concatenating features of different views into a long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL, obtaining the view combination coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.

  4. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
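
    For reference, the MC-based branch of the comparison can be condensed to the classical perturbed-observation EnKF analysis step (toy dimensions and numbers; the moment-equation alternative replaces the sampled covariances with solutions of the stochastic flow equations):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def enkf_update(X, y, H, R):
        """One perturbed-observation EnKF analysis step.
        X: (n_state, n_ens) forecast ensemble; y: (n_obs,) data;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error cov."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
        Pf_Ht = A @ (H @ A).T / (n_ens - 1)         # P_f H'
        S = H @ Pf_Ht + R                           # innovation covariance
        K = Pf_Ht @ np.linalg.inv(S)                # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)

    # toy example: 3 heads observed out of a 10-node state, 50 ensemble members
    X = rng.normal(5.0, 1.0, size=(10, 50))
    H = np.zeros((3, 10)); H[0, 2] = H[1, 5] = H[2, 8] = 1.0
    R = 0.1 * np.eye(3)
    y = np.array([4.2, 5.9, 5.1])
    Xa = enkf_update(X, y, H, R)
    print(Xa.mean(axis=1))                          # updated ensemble mean
    ```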

  5. Comparison of stochastic parametrization approaches in a single-column model.

    PubMed

    Ball, Michael A; Plant, Robert S

    2008-07-28

    We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared with deterministic ensembles describing initial condition uncertainty and also with combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.

  6. Stochastic segmentation models for array-based comparative genomic hybridization data analysis.

    PubMed

    Lai, Tze Leung; Xing, Haipeng; Zhang, Nancy

    2008-04-01

    Array-based comparative genomic hybridization (array-CGH) is a high throughput, high resolution technique for studying the genetics of cancer. Analysis of array-CGH data typically involves estimation of the underlying chromosome copy numbers from the log fluorescence ratios and segmenting the chromosome into regions with the same copy number at each location. We propose for the analysis of array-CGH data, a new stochastic segmentation model and an associated estimation procedure that has attractive statistical and computational properties. An important benefit of this Bayesian segmentation model is that it yields explicit formulas for posterior means, which can be used to estimate the signal directly without performing segmentation. Other quantities relating to the posterior distribution that are useful for providing confidence assessments of any given segmentation can also be estimated by using our method. We propose an approximation method whose computation time is linear in sequence length which makes our method practically applicable to the new higher density arrays. Simulation studies and applications to real array-CGH data illustrate the advantages of the proposed approach.

  7. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    PubMed

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-03-21

Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a non-linear mixed effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic data generated for an IVIVC study of extended release (ER) formulations of a BCS class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis.

  8. Entity-based Stochastic Analysis of Search Results for Query Expansion and Results Re-Ranking

    DTIC Science & Technology

    2015-11-20

based on named-entity recognition applied to a set of search results, and on a graph of documents and identified entities that is constructed...dynamically a graph of documents and entities, and then to analyze it stochastically using a Random Walk-based method. Specifically, we model the search...process as a random walker on the graph defined by the top documents returned by a search system and the entities identified in these documents. For

  9. Random Walk-Based Solution to Triple Level Stochastic Point Location Problem.

    PubMed

    Jiang, Wen; Huang, De-Shuang; Li, Shenghong

    2016-06-01

This paper considers the stochastic point location (SPL) problem as a learning mechanism trying to locate a point on a real line via interacting with a random environment. Whereas the stochastic environments in the literature confine the learning mechanism to moving in two directions, i.e., left or right, this paper introduces a general triple level stochastic environment which not only tells the learning mechanism to go left or right, but can also inform it to stay unmoved. As we prove in this paper, the environment reported in the previous literature is just a special case of the triple level environment. A new learning algorithm, the random walk-based triple level learning algorithm, is proposed to locate an unknown point under this new type of environment. In order to examine the performance of this algorithm, we divide the triple level SPL problems into four distinct scenarios according to the properties of the unknown point and the stochastic environment, and prove that, even under a triple level nonstationary environment with the convergence condition unsatisfied for some time (situations rarely considered in existing SPL work), the proposed learning algorithm still works properly whether the unknown point is static or evolving with time. Extensive experiments validate our theoretical analyses and demonstrate that the proposed learning algorithms are quite effective and efficient.
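
    A minimal sketch of a random-walk learner in a triple level environment (a simplified step rule with hypothetical parameters, not the paper's exact algorithm): the environment answers left (-1), stay (0) or right (+1), is truthful with probability p, and the learner walks on a grid of step 1/N:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    target, p, N = 0.73, 0.8, 500
    x = 0.5

    for _ in range(5000):
        if abs(x - target) < 1.0 / N:
            truth = 0                                  # "stay": point located
        else:
            truth = 1 if target > x else -1
        if rng.random() < p:                           # truthful report
            report = truth
        else:                                          # one of the two wrong answers
            report = rng.choice([d for d in (-1, 0, 1) if d != truth])
        x = float(np.clip(x + report / N, 0.0, 1.0))   # one grid step per query

    print("estimate:", round(x, 3), " target:", target)
    ```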

  10. Dynamic load identification for stochastic structures based on Gegenbauer polynomial approximation and regularization method

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie

    2015-05-01

Based on Gegenbauer polynomial expansion theory and the regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and random parameters in the time domain, and the forward model of dynamic load identification is established through the discretized convolution integral of the loads with the corresponding unit-pulse response functions of the system. Random parameters are approximated by random variables with λ-probability density functions (λ-PDFs) or their derivative PDFs. For this kind of random variable, the Gegenbauer polynomial expansion is the unique correct choice for transforming the load identification problem for a stochastic structure into an equivalent deterministic system, through which the problem can be solved by any available deterministic method. With measured responses containing noise, an improved regularization operator is adopted to overcome the ill-posedness of the load reconstruction, to obtain stable approximate solutions of the inverse problem, and to provide valid assessments of the statistics of the identified loads. Numerical simulations demonstrate that the identification and assessment of dynamic loads on stochastic structures are achieved steadily and effectively by the presented method.
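
    A sketch of the deterministic-equivalent core, i.e. the discretized convolution plus Tikhonov regularization, with a hypothetical unit-pulse response and load (the Gegenbauer expansion over the random parameters is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Forward model: discretized convolution y = H f, where column j of H holds
    # the unit-pulse response h shifted by j samples.
    nt, dt = 200, 0.01
    t = np.arange(nt) * dt
    h = np.exp(-2.0 * t) * np.sin(20.0 * t)              # unit-pulse response
    H = np.array([[h[i - j] * dt if i >= j else 0.0
                   for j in range(nt)] for i in range(nt)])

    f_true = 10.0 * np.sin(2 * np.pi * 1.5 * t) ** 2      # dynamic load to recover
    y = H @ f_true + 0.005 * rng.normal(size=nt)          # noisy measured response

    # Tikhonov regularization: f = argmin ||H f - y||^2 + lam ||f||^2;
    # lam is picked by hand here (in practice: L-curve or cross-validation).
    lam = 1e-6
    f_hat = np.linalg.solve(H.T @ H + lam * np.eye(nt), H.T @ y)
    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```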

  11. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
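
    The quantile-to-scenario step can be sketched as inverse-transform sampling from an ECDF interpolated through the PI-implied quantiles (hypothetical numbers; the stochastic SCUC itself is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Quantiles of hour-ahead wind power implied by a nest of prediction
    # intervals, e.g. the 90% PI gives the 5% and 95% quantiles (MW, made up).
    probs = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])
    quants = np.array([18.0, 22.0, 30.0, 38.0, 45.0, 52.0, 57.0])

    def sample_scenarios(n):
        """Inverse-transform sampling with linear interpolation between quantiles."""
        u = rng.uniform(probs[0], probs[-1], size=n)   # stay inside the known range
        return np.interp(u, probs, quants)

    scenarios = sample_scenarios(1000)
    print(scenarios.mean(), np.percentile(scenarios, [10, 50, 90]))
    ```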

  12. Modifying stochastic slip distributions based on dynamic simulations for use in probabilistic tsunami hazard evaluation.

    NASA Astrophysics Data System (ADS)

    Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene

    2016-04-01

    Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile on the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution are accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are approximated as moment magnitudes on the fault plane based on empirical scaling laws with the ensemble producing a magnitude range of 7.8 - 9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth and magnitude dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant

  13. Transaction based approach

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2017-07-01

The transaction-based approach is utilized in some business process modeling methodologies. Human beings are essential parts of these transactions, and the notion of an agent or actor role is usually used for them. Using a particular example, the paper describes the possibilities of the Design Engineering Methodology for Organizations (DEMO) and the Resource-Event-Agent (REA) methodology. Whereas the DEMO methodology can be regarded as a generic methodology founded on the theory of Enterprise Ontology, the REA methodology is a domain-specific methodology with its origin in accountancy systems. The result of these approaches is that the DEMO methodology captures everything that happens in reality, with good empirical evidence, whereas the REA methodology captures only changes connected with economic events. Economic events represent either a change of property rights to an economic resource or the consumption or production of economic resources. This follows from the essence of economic events and their connection to economic resources.

  14. Desynchronization of stochastically synchronized chemical oscillators

    SciTech Connect

    Snari, Razan; Tinsley, Mark R. E-mail: kshowalt@wvu.edu; Faramarzi, Sadegh; Showalter, Kenneth E-mail: kshowalt@wvu.edu; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan

    2015-12-15

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  15. Quantum stochastic approach for molecule/surface scattering. I. Atom-phonon interactions

    NASA Astrophysics Data System (ADS)

    Bittner, Eric R.; Light, John C.

    1993-11-01

    We present a general, fully quantum mechanical theory for molecule surface scattering at finite temperature within the time dependent Hartree (TDH) factorization. We show the formal manipulations which reduce the total molecule-surface-bath Schrödinger equation into a form which is computationally convenient to use. Under the TDH factorization, the molecular portion of the wavefunction evolves according to a mean-field Hamiltonian which is dependent upon both time and temperature. The temporal and thermal dependence is due to stochastic and dissipative terms that appear in the Heisenberg equations of motion for the phonon operators upon averaging over the bath states. The resulting equations of motion are solved in one dimension self consistently using quantum wavepackets and the discrete variable representation. We compute energy transfer to the phonons as a function of surface temperature and initial energy and compare our results to results obtained using other mean-field models, namely an averaged mean-field model and a fully quantum model based upon a dissipative form of the quantum Liouville equation. It appears that the model presented here provides a better estimation of energy transfer between the molecule and the surface.

  16. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  17. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    PubMed Central

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. PMID:26478601

  18. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    PubMed

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-10-15

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  19. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  20. Stochastic cooling in RHIC

    SciTech Connect

    Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  1. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    SciTech Connect

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules of thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
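
    A minimal Monte Carlo sketch of such a stochastic pitting model (Poisson pit initiation and power-law growth; all rates and exponents are hypothetical placeholders, not fitted values):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    lam = 2.0            # pit initiations per year per container
    k, n_exp = 0.8, 0.4  # depth(t) = k * t^n_exp  (mm, years)
    wall = 5.0           # wall thickness, mm: failure when the deepest pit crosses it
    T = 100.0            # assessment horizon, years
    M = 20000            # Monte Carlo containers

    fails = 0
    for _ in range(M):
        n_pits = rng.poisson(lam * T)
        if n_pits == 0:
            continue
        birth = rng.uniform(0.0, T, size=n_pits)      # initiation times
        depth = k * (T - birth) ** n_exp              # pit depths at the horizon
        fails += depth.max() >= wall
    print("P(failure within %.0f years) ~ %.4f" % (T, fails / M))
    ```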

  2. Stochastic asymptotical synchronization of chaotic Markovian jumping fuzzy cellular neural networks with mixed delays and the Wiener process based on sampled-data control

    NASA Astrophysics Data System (ADS)

    Kalpana, M.; Balasubramaniam, P.

    2013-07-01

We investigate the stochastic asymptotical synchronization of chaotic Markovian jumping fuzzy cellular neural networks (MJFCNNs) with discrete and unbounded distributed delays and the Wiener process, based on sampled-data control using the linear matrix inequality (LMI) approach. The Lyapunov-Krasovskii functional combined with the input delay approach as well as the free-weighting matrix approach is employed to derive several sufficient criteria in terms of LMIs ensuring that the delayed MJFCNNs with the Wiener process are stochastically asymptotically synchronous. Restrictions (e.g., that the time derivative is smaller than one) are removed in obtaining the proposed sampled-data controller. Finally, a numerical example is provided to demonstrate the reliability of the derived results.

  3. Development of the Microstructure Based Stochastic Life Prediction Model

    DTIC Science & Technology

    1993-08-01

the formulation of preliminary life prediction models [1]. In the microstructural characterization part of the program we have concentrated on the...microstructural models may be needed to describe behavior during different stages of fatigue life, and we intend to integrate them using a Markov chain approach. ...precipitate phases present in the studied alloy; the obtained diffraction patterns were compared with those found in the literature on 7075 and 7050 alloys. The

  4. Ultra-fast data-mining hardware architecture based on stochastic computing.

    PubMed

    Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L

    2015-01-01

Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists of the hardware implementation of a parallel architecture that performs a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area.
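
    The underlying stochastic-logic primitive (not the paper's full similarity-search architecture) can be shown in a few lines: a probability is encoded as a random bitstream whose fraction of 1s equals that probability, and a single AND gate then multiplies two such values:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def encode(p, n_bits=2**16):
        """Random bitstream whose fraction of 1s is (approximately) p."""
        return rng.random(n_bits) < p

    a, b = 0.8, 0.6
    stream = encode(a) & encode(b)        # AND of two independent streams
    print("0.8 * 0.6 ~", stream.mean())   # ~0.48; accuracy grows with stream length
    ```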

  5. Ultra-Fast Data-Mining Hardware Architecture Based on Stochastic Computing

    PubMed Central

    Oliver, Antoni; Alomar, Miquel L.

    2015-01-01

Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists of the hardware implementation of a parallel architecture that performs a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274

  6. Sampling-Based RBDO Using Stochastic Sensitivity and Dynamic Kriging for Broader Army Applications

    DTIC Science & Technology

    2011-08-09

K.K. Choi, Ikjin Lee, Liang Zhao, and Yoojeong Noh, Department of Mechanical and Industrial...Thus, for broader Army applications, a sampling-based RBDO method using a surrogate model has been developed recently. The Dynamic Kriging (DKG) method...

  7. Research on user behavior authentication model based on stochastic Petri nets

    NASA Astrophysics Data System (ADS)

    Zhang, Chengyuan; Xu, Haishui

    2017-08-01

A behavioural authentication model based on stochastic Petri nets is proposed to capture the randomness, uncertainty and concurrency of user behaviour. Places, transitions, arcs and tokens of the stochastic Petri net are used to describe the various authentication and game relationships, so as to implement a graphical analysis method for the user behaviour authentication model; the corresponding proofs verify the validity of the model.
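
    A minimal stochastic Petri net simulator for context (a hypothetical login/authentication toy net with exponentially timed transitions, not the paper's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(15)

    # A transition is enabled when its input places hold enough tokens; enabled
    # transitions race with exponential delays (continuous-time Markov chain).
    places = {"logged_out": 1, "challenged": 0, "logged_in": 0}
    transitions = [
        # (name, inputs, outputs, rate) -- all rates hypothetical
        ("request", {"logged_out": 1}, {"challenged": 1}, 2.0),
        ("pass",    {"challenged": 1}, {"logged_in": 1},  1.5),
        ("fail",    {"challenged": 1}, {"logged_out": 1}, 0.5),
        ("logout",  {"logged_in": 1},  {"logged_out": 1}, 0.2),
    ]

    t, t_end = 0.0, 1000.0
    occupancy = {p: 0.0 for p in places}
    while t < t_end:
        enabled = [(name, ins, outs, r) for name, ins, outs, r in transitions
                   if all(places[p] >= k for p, k in ins.items())]
        rates = np.array([r for *_, r in enabled])
        dt = rng.exponential(1.0 / rates.sum())        # time to the next firing
        for p in places:                               # time-weighted occupancy
            occupancy[p] += places[p] * min(dt, t_end - t)
        t += dt
        name, ins, outs, _ = enabled[rng.choice(len(enabled), p=rates / rates.sum())]
        for p, k in ins.items(): places[p] -= k        # consume input tokens
        for p, k in outs.items(): places[p] += k       # produce output tokens

    print({p: v / t_end for p, v in occupancy.items()})   # long-run state probabilities
    ```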

  8. A joint stochastic-deterministic approach for long-term and short-term modelling of monthly flow rates

    NASA Astrophysics Data System (ADS)

    Stojković, Milan; Kostić, Srđan; Plavšić, Jasna; Prohaska, Stevan

    2017-01-01

The authors present a detailed procedure for modelling of mean monthly flow time-series using records of the Great Morava River (Serbia). The proposed procedure overcomes a major challenge of other available methods by disaggregating the time series in order to capture the main properties of the hydrologic process in both the long run and the short run. The main assumption of the conducted research is that a time series of monthly flow rates represents a stochastic process composed of deterministic, stochastic and random components, the first of which can be further decomposed into a composite trend and two periodic components (short-term or seasonal periodicity and long-term or multi-annual periodicity). In the present paper, the deterministic component of a monthly flow time-series is assessed by spectral analysis, whereas its stochastic component is modelled using cross-correlation transfer functions, artificial neural networks and polynomial regression. The results suggest that the deterministic component can be expressed solely as a function of time, whereas the stochastic component changes as a nonlinear function of climatic factors (rainfall and temperature). For the calibration period, the results of the analysis indicate a lower value of Kling-Gupta Efficiency in the case of transfer functions (0.736), whereas artificial neural networks and polynomial regression suggest a significantly better match between the observed and simulated values, 0.841 and 0.891, respectively. It seems that transfer functions fail to capture high monthly flow rates, whereas the model based on polynomial regression reproduces high monthly flows much better because it is able to successfully capture a highly nonlinear relationship between the inputs and the output. The proposed methodology that uses a combination of artificial neural networks, spectral analysis and polynomial regression for deterministic and stochastic components can be applied to forecast monthly or seasonal flow rates.
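
    A compact sketch of the decomposition on synthetic data (harmonic regression standing in for the spectral analysis of the deterministic component, and polynomial regression on a climate covariate for the stochastic component; all numbers are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Synthetic monthly flow: trend + annual harmonic + climate-driven residual.
    months = np.arange(360)                     # 30 years of monthly data
    rain = 50 + 20 * rng.random(360)            # hypothetical climate covariate
    flow = (100 + 0.05 * months                 # slow composite trend
            + 30 * np.sin(2 * np.pi * months / 12 - 1.0)      # seasonal periodicity
            + 0.02 * (rain - 60) ** 2 + rng.normal(0, 5, 360))  # nonlinear + noise

    # Deterministic component: least-squares fit of trend + annual harmonics.
    Xdet = np.column_stack([np.ones(360), months,
                            np.sin(2 * np.pi * months / 12),
                            np.cos(2 * np.pi * months / 12)])
    coef, *_ = np.linalg.lstsq(Xdet, flow, rcond=None)
    deterministic = Xdet @ coef

    # Stochastic component: polynomial regression of the residual on the covariate.
    resid = flow - deterministic
    pcoef = np.polyfit(rain, resid, deg=2)
    stochastic = np.polyval(pcoef, rain)

    pred = deterministic + stochastic
    print("residual std before/after:", resid.std(), (flow - pred).std())
    ```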

  9. An Observationally-Based Method for Simulating Stochasticity in NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Bao, Jian-Wen; Penland, Cecile; Tulich, Stefan; Pegion, Philip Phil; Whitaker, Jeffrey S.; Michelson, Sara A.; Grell, Evelyn D.

    2017-04-01

We have developed a more general method, based on observations and datasets from large-eddy simulations, for accounting for model physics uncertainty in ensemble modeling systems. The essence of the method is a physically based stochastic differential equation that can efficiently generate the stochastically generated skew (SGS) distribution commonly seen in the statistics of atmospheric variables. A critical objective of this development is to upgrade the current operational algorithms for generating the model-error component of ensemble spread with improved ones that are more process-based and physically sound. The ongoing development involves (i) analyses of observations and of output from large-eddy simulations to specify the parameters required for generating the SGS distribution, and (ii) implementing and testing the newly developed method in NOAA's GEFS. We will use the stochastic parameterization of convection-induced momentum transport at the subgrid scale to demonstrate the advantage of the newly developed method.
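
    One physically based way to obtain SGS-like skewed statistics from an SDE is correlated additive and multiplicative (CAM) noise; the Euler-Maruyama sketch below (hypothetical parameters) is an illustration of that mechanism, not the operational scheme:

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(11)

    lam, E, g, b = 1.0, 0.6, 1.0, 0.5   # damping, CAM amplitudes, additive noise
    dt, n = 1e-3, 200_000

    x = np.zeros(n)
    for i in range(1, n):
        dW1 = np.sqrt(dt) * rng.normal()
        dW2 = np.sqrt(dt) * rng.normal()
        x[i] = (x[i - 1]
                - lam * x[i - 1] * dt
                + (E * x[i - 1] + g) * dW1     # correlated additive-multiplicative noise
                + b * dW2)                     # independent additive noise

    print("sample skewness:", skew(x[n // 10:]))   # clearly nonzero, unlike a pure OU process
    ```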

  10. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    NASA Astrophysics Data System (ADS)

    Maerker, Michael; Bolus, Michael

    2014-05-01

We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between site locations and environmental indices. In order to assess the relations between the Neanderthal sites and the environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify a model that best fits the relationship between the attribute set (the environmental predictor variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the model for geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristic (ROC) curves for each Neanderthal class, for both training and test data. In a ROC curve the sensitivity is plotted over the false positive rate (1-specificity) for all possible cut-off points, and the quality of the curve is quantified by the area under it (AUC). The target variable in this study is the location of Neanderthal sites, described by latitude and longitude; the site location information was collected from the literature and our own research, and all sites were checked for accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show a wide correspondence in the selection of the most important predictor variables, generally showing less
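
    A sketch of the boosted-trees-plus-ROC workflow on synthetic presence/absence data (scikit-learn's GradientBoostingClassifier standing in for TREENET; features and labels are made up):

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(12)

    # Rows are locations, columns are environmental indices (e.g. elevation,
    # slope, distance to water); the label marks presence/absence of a site class.
    n = 1000
    Xenv = rng.normal(size=(n, 4))
    logits = 1.5 * Xenv[:, 0] - 1.0 * Xenv[:, 1] ** 2 + 0.5 * Xenv[:, 2]
    y = (logits + rng.normal(0, 1, n) > 0).astype(int)

    Xtr, Xte, ytr, yte = train_test_split(Xenv, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(Xtr, ytr)    # boosted regression trees
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print("test AUC (area under the ROC curve):", round(auc, 3))
    print("relative importances:", model.feature_importances_.round(2))
    ```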

  11. Comparison of Two Statistical Approaches to a Solution of the Stochastic Radiative Transfer Equation

    NASA Astrophysics Data System (ADS)

    Kirnos, I. V.; Tarasenkov, M. V.; Belov, V. V.

    2016-04-01

The method of direct simulation of photon trajectories in a stochastic medium is compared with the method of closed equations suggested by G. A. Titov. A comparison is performed for a model of the stochastic medium in the form of a cloud field of constant thickness comprising rectangular clouds whose boundaries are determined by a stationary Poisson flow of points. It is demonstrated that the difference between the calculated results can reach 20-30%; however, in some cases (for some sets of initial data) the difference is limited to 5% irrespective of the cloud cover index.

  12. Stochastic Subspace-Based Structural Identification and Damage Detection —APPLICATION to the Steel-Quake Benchmark

    NASA Astrophysics Data System (ADS)

    Mevel, L.; Basseville, M.; Goursat, M.

    2003-01-01

Numerical results from the application of new stochastic subspace-based structural identification and damage detection methods to the steel-quake structure are discussed. Particular emphasis is put on structural model identification, for which we display some mode shapes.

  13. A stochastic model of the processes in PCR based amplification of STR DNA in forensic applications.

    PubMed

    Weusten, Jos; Herbergs, Jos

    2012-01-01

In forensic DNA profiling use is made of the well-known technique of PCR. When the amount of DNA is high, generally unambiguous profiles can be obtained, but for low copy number DNA stochastic effects can play a major role. In order to shed light on these stochastic effects, we present a simple model for the amplification process. According to the model, three possible things can happen to an individual single DNA strand in each complete cycle: successful amplification, no amplification, or amplification with the introduction of a stutter. The model is developed in mathematical terms using a recursive approach: given the numbers of chains at a given cycle, the numbers in the next can be described using a multinomial probability distribution. A full set of recursive relations is derived for the expectations and (co)variances of the numbers of amplicon chains with no, 1 or 2 stutters. The exact mathematical solutions of this set are given, revealing the development of the expectations and (co)variances as a function of the cycle number. The equations reveal that the expected number of amplicon chains without stutter grows exponentially with the cycle number, but for the chains with stutter the relation is more complex. The relative standard deviation of the numbers of chains (coefficient of variation) is inversely proportional to the square root of the expected number of DNA strands entering the amplification. As such, for high copy number DNA the stochastic effects can be ignored, but they play an important role at low concentrations. For the allelic peak, the coefficient of variation stabilizes rapidly after a few cycles, but for the chains with stutter the decrease is slower. Further, the ratio of the expected intensity of the stutter peak over that of the allelic peak increases linearly with the number of cycles. Stochastic models, like the one developed in the current paper, can be important in further developing interpretation rules in a Bayesian context.
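
    The recursive multinomial model lends itself to direct simulation; the sketch below (hypothetical efficiencies, stutter counts truncated at one) reproduces the key effect that the relative variability of the stutter/allele ratio grows at low copy number:

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Per cycle, each strand either amplifies faithfully (p_amp), amplifies with
    # a new stutter (p_stut), or does not amplify (remainder).
    p_amp, p_stut = 0.85, 0.005
    p_none = 1.0 - p_amp - p_stut

    def pcr(n0, cycles=28):
        n = {0: n0, 1: 0}                  # strand counts with 0 or 1 stutter
        for _ in range(cycles):
            new = dict(n)
            for s, count in n.items():
                amp, stut, _ = rng.multinomial(count, [p_amp, p_stut, p_none])
                new[s] += amp              # faithful copies keep s stutters
                if s + 1 in new:
                    new[s + 1] += stut     # stutter copies carry one more stutter
            n = new
        return n

    for n0 in (5, 500):                    # low vs high copy number input
        runs = [pcr(n0) for _ in range(200)]
        ratio = np.array([r[1] / r[0] for r in runs])
        print(f"n0={n0}: stutter/allele ratio mean={ratio.mean():.4f} "
              f"CV={ratio.std() / ratio.mean():.2f}")
    ```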

  14. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  16. The design and testing of a first-order logic-based stochastic modeling language.

    SciTech Connect

    Pless, Daniel J.; Rammohan, Roshan; Chakrabarti, Chayan; Luger, George F.

    2005-06-01

    We have created a logic-based, Turing-complete language for stochastic modeling. Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic. Traditional Bayesian networks have limited expressive power, basically constrained to finite domains as in the propositional calculus. Our language contains variables that can capture general classes of situations, events and relationships. A first-order language is also able to reason about potentially infinite classes and situations using constructs such as hidden Markov models (HMMs). Our language uses Expectation-Maximization (EM)-type learning of parameters. This is a natural fit with the loopy belief propagation used for inference, since both can be viewed as iterative message-passing algorithms. We present the syntax and theoretical foundations of our Loopy Logic language. We then demonstrate three examples of stochastic modeling and diagnosis that explore the representational power of the language. A mechanical fault detection example displays how Loopy Logic can model time-series processes using an HMM variant. A digital circuit example exhibits the probabilistic modeling capabilities, and finally, a parameter fitting example demonstrates the power for learning unknown stochastic values.

  17. Stochastic assessment of climate impacts on hydrology and geomorphology of semiarid headwater basins using a physically based model

    NASA Astrophysics Data System (ADS)

    Francipane, A.; Fatichi, S.; Ivanov, V. Y.; Noto, L. V.

    2015-03-01

    Hydrologic and geomorphic responses of watersheds to changes in climate are difficult to assess due to projection uncertainties and nonlinearity of the processes that are involved. Yet such assessments are increasingly needed and call for mechanistic approaches within a probabilistic framework. This study employs an integrated hydrology-geomorphology model, the Triangulated Irregular Network-based Real-time Integrated Basin Simulator (tRIBS)-Erosion, to analyze runoff and erosion sensitivity of seven semiarid headwater basins to projected climate conditions. The Advanced Weather Generator is used to produce two climate ensembles representative of the historic and future climate conditions for the Walnut Gulch Experimental Watershed located in the southwest U.S. The former ensemble incorporates the stochastic variability of the observed climate, while the latter includes the stochastic variability and the uncertainty of multimodel climate change projections. The ensembles are used as forcing for tRIBS-Erosion that simulates runoff and sediment basin responses leading to probabilistic inferences of future changes. The results show that annual precipitation for the area is generally expected to decrease in the future, with lower hourly intensities and similar daily rates. The smaller hourly rainfall generally results in lower mean annual runoff. However, a non-negligible probability of runoff increase in the future is identified, resulting from stochastic combinations of years with low and high runoff. On average, the magnitudes of mean and extreme events of sediment yield are expected to decrease with a very high probability. Importantly, the projected variability of annual sediment transport for the future conditions is comparable to that for the historic conditions, despite the fact that the former account for a much wider range of possible climate "alternatives." This result demonstrates that the historic natural climate variability of sediment yield is already so

  18. Poisson-Vlasov in a strong magnetic field: A stochastic solution approach

    SciTech Connect

    Vilela Mendes, R.

    2010-04-15

    Stochastic solutions are obtained for the Maxwell-Vlasov equation in the approximation where magnetic field fluctuations are neglected and the electrostatic potential is used to compute the electric field. This is a reasonable approximation for plasmas in a strong external magnetic field. Both Fourier and configuration space solutions are constructed.

  19. Fast Nonparametric Density-Based Clustering of Large Data Sets Using a Stochastic Approximation Mean-Shift Algorithm

    PubMed Central

    Baran, Andrea

    2016-01-01

    Mean-shift is an iterative procedure often used as a nonparametric clustering algorithm that defines clusters based on the modal regions of a density function. The algorithm is conceptually appealing and makes assumptions neither about the shape of the clusters nor about their number. However, with a complexity of O(n^2) per iteration, it does not scale well to large data sets. We propose a novel algorithm which performs density-based clustering much more quickly than mean-shift, yet delivers virtually identical results. This algorithm combines subsampling and a stochastic approximation procedure to achieve a potential complexity of O(n) at each step. Its convergence is established. Its performance is evaluated using simulations and applications to image segmentation, where the algorithm was tens or hundreds of times faster than mean-shift while causing negligible amounts of clustering error. The algorithm can be combined with existing approaches to further accelerate clustering. PMID:28479847
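
    The subsampling idea can be conveyed in a few lines. The toy sketch below is not the authors' algorithm; it simply runs a plain Gaussian-kernel mean-shift update in which the density is estimated from a random subsample of the data at each step.

        # Toy mean-shift with subsampled density estimation.
        import numpy as np

        def mean_shift_step(points, data, h, rng, m=200):
            sub = data[rng.choice(len(data), size=min(m, len(data)), replace=False)]
            out = np.empty_like(points)
            for i, x in enumerate(points):
                w = np.exp(-np.sum((sub - x) ** 2, axis=1) / (2 * h ** 2))
                out[i] = w @ sub / w.sum()     # kernel-weighted mean of the subsample
            return out

        rng = np.random.default_rng(2)
        data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (500, 2))])
        x = data.copy()
        for _ in range(30):
            x = mean_shift_step(x, data, h=1.0, rng=rng)
        print("distinct modes (rounded):", np.unique(np.round(x), axis=0))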

  20. Suboptimal stochastic controller for an n-body spacecraft

    NASA Technical Reports Server (NTRS)

    Larson, V.

    1973-01-01

    The problem of determining a stochastic optimal controller for an n-body spacecraft is studied. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained for a complicated spacecraft model uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector, can be easily implemented, and enables system performance to be significantly improved.

  1. Development of the microstructure based stochastic life prediction models

    NASA Astrophysics Data System (ADS)

    Przystupa, M. A.; Vasudevan, A. K.

    This study explores methods of incorporating material microstructural characteristics into fatigue life prediction models, based on the results of microstructural characterization and fatigue testing of aluminum 7050-T7451 plate alloys. The emphases in the microstructural characterization part of the program are on the identification of the fatigue-relevant microstructural features and on the characterization of microstructural gradients. The characterizations are carried out using both standard and novel techniques such as tessellation, fractal and modified linear intercept methods. The key measurement is the determination of the size distributions of the fatigue-crack-initiating flaws; these are assumed equal to the extreme value distributions of the micropore and/or constituent particle size distributions measured on metallographic sections.

  2. A Stochastic Simulation Framework for the Prediction of Strategic Noise Mapping and Occupational Noise Exposure Using the Random Walk Approach

    PubMed Central

    Haron, Zaiton; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. It also assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of the predictions, with most absolute differences below 2 dBA; the predicted noise doses were also mostly within the measured range. Therefore, the random walk approach was effective in dealing with environmental noise. It can predict strategic noise mapping to facilitate noise monitoring and noise control in workplaces. PMID:25875019

  3. A stochastic simulation framework for the prediction of strategic noise mapping and occupational noise exposure using the random walk approach.

    PubMed

    Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. It also assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of the predictions, with most absolute differences below 2 dBA; the predicted noise doses were also mostly within the measured range. Therefore, the random walk approach was effective in dealing with environmental noise. It can predict strategic noise mapping to facilitate noise monitoring and noise control in workplaces.

  4. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    PubMed Central

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed up simulation using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
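
    For orientation, the simplest DA scheme for a single two-state gating variable can be written in a few lines. The Euler-Maruyama sketch below uses crude clipping to keep the open fraction in [0, 1], which is only one of the bounding strategies the review compares, and the rate constants are illustrative.

        # Diffusion approximation for the open fraction x of N two-state channels:
        # dx = (a(1-x) - b x) dt + sqrt((a(1-x) + b x)/N) dW
        import numpy as np

        rng = np.random.default_rng(3)
        N, a, b = 1000, 0.5, 0.4               # channels; open/close rates (1/ms)
        dt, steps = 0.01, 50_000
        x = a / (a + b)                        # start at the deterministic steady state

        trace = np.empty(steps)
        for t in range(steps):
            drift = a * (1 - x) - b * x
            sigma = np.sqrt((a * (1 - x) + b * x) / N)
            x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal()
            x = min(max(x, 0.0), 1.0)          # crude bounding to [0, 1]
            trace[t] = x

        print("mean open fraction:", trace.mean().round(4), "std:", trace.std().round(4))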

  5. MRF-based Stochastic Joint Inversion of Hydrological and Geophysical Datasets to Evaluate Aquifer Heterogeneities.

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2016-12-01

    Hydrogeophysical assessment of aquifer parameters typically involves sparse, noisy measurements coupled with incomplete understanding of the underlying physical process. Thus, recovering a single deterministic solution in light of the largely uncertain inputs is unrealistic. Stochastic imaging (SI) allows the retrieval of multiple equiprobable outcomes that facilitate probabilistic assessment of aquifer properties in a realistic fashion. Representation of prior models is a key aspect of the formulation of SI frameworks. However, higher-order (HO) statistics for representing complex priors in SI are usually borrowed from training images (TIs), which may bias outcomes if the prior hypotheses are inaccurate. A data-driven HO simulation alternative based on Markov random field (MRF) modeling is presented. Here, the modeling of spatial features is guided by potential (Gibbs) energy (PE) minimization. The estimation of the PE encompasses local neighborhood configuration (LNC) and prior statistical constraints. The lower the estimated PE, the higher the likelihood of that particular local structure, and vice versa. Hence, the LNC component of the PE estimation is designed to promote the recovery of some desired structures while penalizing the retrieval of patterns that are inconsistent with prior expectation. The statistical structure is adaptively inferred from the joint conditional datasets. The reconstruction proceeds in two steps, with the estimation of the lithological structure of the aquifer followed by the simulation of attributes within the identified lithologies. This two-step approach permits the delineation of physically realistic, crisp lithological boundaries. The algorithm is demonstrated with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.

  6. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    SciTech Connect

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-04

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for the human population; susceptible-infective for the vector population), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimates. The approach is demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same female vector mosquitoes, Aedes aegypti and Aedes albopictus. Finally, the findings of the relative risk estimation for both Dengue and Chikungunya are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas for Dengue and Chikungunya occurrence and can be used as a tool for prevention and control strategies for both diseases.
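
    A discrete-time chain-binomial sketch conveys the stochastic SIR-SI structure (susceptible-infective-recovered humans coupled to susceptible-infective vectors). The population sizes and transmission rates below are illustrative assumptions, not the paper's estimates.

        # Chain-binomial stochastic SIR-SI simulation (daily steps).
        import numpy as np

        rng = np.random.default_rng(4)
        Nh, Nv = 10_000, 50_000                       # humans, vectors
        Sh, Ih, Rh = Nh - 10, 10, 0
        Sv, Iv = Nv - 50, 50
        b_hv, b_vh, gamma, mu = 0.3, 0.2, 0.14, 0.1   # per-day rates

        for day in range(180):
            new_h = rng.binomial(Sh, 1 - np.exp(-b_hv * Iv / Nv))  # vector -> human
            new_v = rng.binomial(Sv, 1 - np.exp(-b_vh * Ih / Nh))  # human -> vector
            rec = rng.binomial(Ih, 1 - np.exp(-gamma))             # human recovery
            die = rng.binomial(Iv, 1 - np.exp(-mu))                # vector turnover
            Sh, Ih, Rh = Sh - new_h, Ih + new_h - rec, Rh + rec
            Sv, Iv = Sv - new_v + die, Iv + new_v - die            # births replace deaths

        print("human attack fraction:", round(Rh / Nh, 3))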

  7. Liver segmentation in MRI: A fully automatic method based on stochastic partitions.

    PubMed

    López-Mir, F; Naranjo, V; Angulo, J; Alcañiz, M; Luna, L

    2014-04-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI), despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, we have developed a new method for liver segmentation in MRI based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marker-controlled algorithm. To improve the accuracy of the selected contours, the gradient of the original image is enhanced by applying a new variant of the stochastic watershed. Moreover, a final classification step is performed in order to obtain the final liver mask. The optimal parameters of the method are tuned using a training dataset and then applied to the remaining studies (17 datasets). The results obtained (a Jaccard coefficient of 0.91 ± 0.02), in comparison to other methods, demonstrate that the new variant of the stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.

  8. Synchronization and stochastic resonance of the small-world neural network based on the CPG.

    PubMed

    Lu, Qiang; Tian, Juan

    2014-06-01

    According to biological knowledge, the central nervous system controls the central pattern generator (CPG) to drive locomotion. The brain is a complex system consisting of different functions and different interconnections, and its topological properties display the features of a small-world network. Synchronization and stochastic resonance have important roles in neural information transmission and processing. In order to study the synchronization and stochastic resonance of the brain based on the CPG, we establish a model which captures the relationship between the small-world neural network (SWNN) and the CPG. We analyze the synchronization of the SWNN when the amplitude and frequency of the CPG are changed, and the effects on the CPG when the SWNN's parameters are changed. We also study stochastic resonance in the SWNN. The main findings are: (1) when the CPG is added to the SWNN, there exists a parameter space of the CPG and the SWNN which makes the synchronization of the SWNN optimal; (2) there exists an optimal noise level at which the resonance factor Q reaches its peak value, and the correlation between the pacemaker frequency and the dynamical response of the network depends resonantly on the noise intensity. The results could have important implications for biological processes involving interaction between a neural network and the CPG.

  10. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the karst cave length fits a negative exponential distribution model, while the carbonatite length does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope containing the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
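
    Both sampling methods named above are short to implement. The sketch below draws exponentially distributed cave lengths by inverse-transform sampling and uses acceptance-rejection for a carbonatite-length density; the mean length and the density shape are hypothetical placeholders.

        # Inverse transform for exponential cave lengths; acceptance-rejection
        # for a nonstandard length density f (made up here for illustration).
        import numpy as np

        rng = np.random.default_rng(5)

        def cave_lengths(n, mean_len=2.5):
            u = rng.uniform(size=n)
            return -mean_len * np.log(1.0 - u)    # inverse CDF of Exp(1/mean_len)

        def reject_sample(n, f, x_max=10.0, f_max=0.5):
            out = []
            while len(out) < n:
                x, u = rng.uniform(0.0, x_max), rng.uniform(0.0, f_max)
                if u <= f(x):                     # accept with probability f(x)/f_max
                    out.append(x)
            return np.array(out)

        f = lambda x: 0.4 * np.exp(-0.5 * (x - 3.0) ** 2)  # hypothetical density shape
        print(cave_lengths(3).round(2), reject_sample(3, f).round(2))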

  11. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    PubMed

    Ding, Bo; Fang, Huajing

    2017-03-31

    This paper is concerned with fault prediction for nonlinear stochastic systems with incipient faults. Based on the particle filter and a reasonable assumption about the incipient faults, a modified fault estimation algorithm is proposed in which the system state is estimated simultaneously. Building on the modified fault estimate, an intuitive fault detection strategy is introduced. Once an incipient fault is detected, its parameters are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by simulations of a three-tank system.
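
    The estimation core of such a scheme is a bootstrap particle filter: propagate particles through the state model, weight them by the measurement likelihood, and resample. The sketch below runs the filter on a generic scalar nonlinear system; the transition and measurement functions are illustrative, not the paper's three-tank dynamics.

        # Bootstrap particle filter: predict, weight, resample.
        import numpy as np

        rng = np.random.default_rng(6)
        Np, T = 500, 50
        f = lambda x: 0.9 * x + 0.2 * np.sin(x)   # illustrative state transition
        h = lambda x: x ** 2 / 20.0               # illustrative measurement map
        q, r = 0.1, 0.3                           # process / measurement noise std

        x_true, particles = 0.5, rng.normal(0.0, 1.0, Np)
        for t in range(T):
            x_true = f(x_true) + q * rng.normal()
            y = h(x_true) + r * rng.normal()
            particles = f(particles) + q * rng.normal(size=Np)   # predict
            w = np.exp(-0.5 * ((y - h(particles)) / r) ** 2)     # likelihood weights
            w /= w.sum()
            particles = particles[rng.choice(Np, size=Np, p=w)]  # resample
        print("true:", round(x_true, 3), "estimate:", round(particles.mean(), 3))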

  12. A Path Integral Approach to Option Pricing with Stochastic Volatility: Some Exact Results

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    1997-12-01

    The Black-Scholes formula for pricing options on stocks and other securities has been generalized by Merton and Garman to the case when stock volatility is stochastic. The derivation of the price of a security derivative with stochastic volatility is reviewed starting from the first principles of finance. The equation of Merton and Garman is then recast using the path integration technique of theoretical physics. The price of the stock option is shown to be the analogue of the Schrödinger wavefunction of quantum mechanics, and the exact Hamiltonian and Lagrangian of the system are obtained. The results of Hull and White are generalized to the case when stock price and volatility have non-zero correlation. Some exact results for pricing stock options in the general correlated case are derived.
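
    The pricing problem itself is easy to approximate numerically. The sketch below is not the paper's path-integral method; it is a plain Euler Monte Carlo for a European call under a Heston-like stochastic-volatility model with correlated Brownian motions, with all parameter values chosen purely for illustration.

        # Euler Monte Carlo for a European call under stochastic volatility.
        import numpy as np

        rng = np.random.default_rng(7)
        S0, K, T, r = 100.0, 100.0, 1.0, 0.03
        v0, kappa, theta, xi, rho = 0.04, 2.0, 0.04, 0.3, -0.6
        n_paths, n_steps = 50_000, 200
        dt = T / n_steps

        S, v = np.full(n_paths, S0), np.full(n_paths, v0)
        for _ in range(n_steps):
            z1 = rng.normal(size=n_paths)
            z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n_paths)
            S *= np.exp((r - 0.5 * v) * dt + np.sqrt(v * dt) * z1)
            v = np.abs(v + kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2)

        print("MC call price:", round(np.exp(-r * T) * np.maximum(S - K, 0).mean(), 3))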

  13. Quantum-trajectory approach to the stochastic thermodynamics of a forced harmonic oscillator.

    PubMed

    Horowitz, Jordan M

    2012-03-01

    I formulate a quantum stochastic thermodynamics for the quantum trajectories of a continuously monitored forced harmonic oscillator coupled to a thermal reservoir. Consistent trajectory-dependent definitions are introduced for work, heat, and entropy, through engineering the thermal reservoir from a sequence of two-level systems. Within this formalism the connection between irreversibility and entropy production is analyzed and confirmed by proving a detailed fluctuation theorem for quantum trajectories. Finally, possible experimental verifications are discussed.

  14. Stochastic differential games with inside information

    NASA Astrophysics Data System (ADS)

    Draouil, Olfa; Øksendal, Bernt

    2016-08-01

    We study stochastic differential games of jump diffusions, where the players have access to inside information. Our approach is based on anticipative stochastic calculus, white noise, Hida-Malliavin calculus, forward integrals and the Donsker delta functional. We obtain a characterization of Nash equilibria of such games in terms of the corresponding Hamiltonians. This is used to study applications to insider games in finance, specifically optimal insider consumption and optimal insider portfolio under model uncertainty.

  15. Receptance-based structural health monitoring approach for bridge structures

    NASA Astrophysics Data System (ADS)

    Jang, S. A.; Spencer, B. F., Jr.

    2009-03-01

    A number of structural health monitoring strategies have been proposed recently that can be implemented in smart sensor networks. Many are based on changes in the experimentally determined flexibility matrix for the structure under consideration. However, the flexibility matrix contains only static information; much richer information is potentially available by considering the dynamic flexibility, or receptance, of the structure. Recently, the stochastic dynamic DLV method was proposed based on changes in the dynamic flexibility matrix, employing centrally collected output-only measurements. This paper extends the stochastic dynamic DLV method so that it can be implemented on a decentralized network of smart sensors. New damage indices are derived that provide robust estimates of damage location. The smart sensor network is emulated with wired sensors to demonstrate the potential of the proposed method. The efficacy of the proposed approach is demonstrated experimentally using a model truss structure.

  16. Development of a censored modelling approach for stochastic estimation of rainfall extremes at fine temporal scales

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Bernardara, Pietro

    2016-04-01

    With the COP21 drawing to a close in December 2015, storms Desmond, Eva and Frank, which swept across the UK and Ireland causing widespread flooding and devastation, acted as a timely reminder of the need for reliable estimation of rainfall extremes in a changing climate. The frequency and intensity of rainfall extremes are predicted to increase in the UK under anthropogenic climate change, and it is notable that the UK's 24-hour rainfall record of 316 mm, set at Seathwaite, Cumbria in 2009, was broken on 5 December 2015 with 341 mm from storm Desmond at Honister Pass, also in Cumbria. Immediate analysis by the Centre for Ecology and Hydrology (UK) on 8 December 2015 estimated that this is approximately equivalent to a 1300-year return period event (Centre for Ecology & Hydrology, 2015). Rainfall extremes are typically estimated using extreme value analysis and intensity-duration-frequency curves. This study investigates the potential for using stochastic rainfall simulation with mechanistic rectangular pulse models for the estimation of extreme rainfall. These models have been used since the late 1980s to generate synthetic rainfall time series at point locations for scenario analysis in hydrological studies and climate impact assessment at the catchment scale. Routinely they are calibrated to the full historical hyetograph and used for continuous simulation. However, their extremal performance is variable, with a tendency to underestimate short-duration (hourly and sub-hourly) rainfall extremes, which are often associated with heavy convective rainfall in temperate climates such as the UK's. Focussing on hourly and sub-hourly rainfall, a censored modelling approach is proposed in which rainfall below a low threshold is set to zero prior to model calibration. It is hypothesised that synthetic rainfall time series are poor at estimating extremes because the majority of the training data are not representative of the climatic conditions which give rise to
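
    The censoring step itself is a one-liner applied before calibration, as the hedged sketch below shows on a synthetic hourly series; the threshold value is a hypothetical choice.

        # Censor an hourly rainfall series below a low threshold before
        # calibrating a rectangular-pulse model to the remaining record.
        import numpy as np

        rng = np.random.default_rng(8)
        rain = rng.gamma(shape=0.08, scale=2.0, size=24 * 365)  # synthetic hourly mm

        threshold = 0.2                                         # mm/h, hypothetical
        censored = np.where(rain < threshold, 0.0, rain)

        print("wet fraction before:", round(float((rain > 0).mean()), 3),
              "after:", round(float((censored > 0).mean()), 3))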

  17. [Stewart's acid-base approach].

    PubMed

    Funk, Georg-Christian

    2007-01-01

    In addition to PaCO2, Stewart's acid-base model takes into account the influence of albumin, inorganic phosphate, electrolytes and lactate on acid-base equilibrium. It allows a comprehensive and quantitative analysis of acid-base disorders. In particular, simultaneous and mixed metabolic acid-base disorders, which are common in critically ill patients, can be assessed. Stewart's approach is therefore a valuable tool in addition to the customary acid-base approach based on bicarbonate or base excess. However, some chemical aspects of Stewart's approach remain controversial.
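
    For orientation, a central quantity in Stewart's framework is the apparent strong ion difference; the textbook form below is included for illustration and is not taken from this paper:

        SID_a = [Na^+] + [K^+] + [Ca^{2+}] + [Mg^{2+}] - [Cl^-] - [Lactate^-]

    Together with PaCO2 and the total weak acids (chiefly albumin and inorganic phosphate), these independent variables determine pH in the Stewart model.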

  18. FEAMAC-CARES Software Coupling Development Effort for CMC Stochastic-Strength-Based Damage Simulation

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen

    2015-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and present an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  19. Definition of scarcity-based water pricing policies through hydro-economic stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2014-05-01

    One of the greatest current issues in integrated water resources management is to find and apply efficient and flexible management policies. Efficient management is needed to deal with increased water scarcity and river basin closure. Flexible policies are required to handle the stochastic nature of the water cycle. Scarcity-based pricing policies are one of the most promising alternatives, since they deal not only with the supply costs but also with the opportunity costs associated with the allocation of water. The opportunity cost of water, which varies dynamically in space and time according to the imbalances between supply and demand, can be assessed using hydro-economic models. This contribution presents a procedure to design a pricing policy based on hydro-economic modelling and on the assessment of the Marginal Resource Opportunity Cost (MROC). Firstly, MROC time series associated with the optimal operation of the system are derived from a stochastic hydro-economic model. Secondly, these MROC time series are post-processed in order to combine the different space-and-time MROC values into a single generalized indicator of the marginal opportunity cost of water. Finally, stepped scarcity-based pricing policies are determined after establishing a relationship between the MROC and the corresponding state of the system at the beginning of the time period (month). The case study of the Mijares river basin (Spain) is used to illustrate the method. The system consists of two reservoirs in series and four agricultural demand sites currently managed using historical (14th-century) rights. A hydro-economic model of the system has been built using stochastic dynamic programming. A reoptimization procedure is then implemented using SDP-derived benefit-to-go functions and historical flows to produce the time series of MROC values. MROC values are then aggregated and a statistical analysis is carried out to define (i) pricing policies and (ii) the relationship between MROC and

  20. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu

    2015-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and present an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  1. FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna

    2016-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and present an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  2. A coupled stochastic inverse/sharp interface seawater intrusion approach for coastal aquifers under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Merigó, José M.; Xu, Yejun

    2016-09-01

    This paper presents an alternative approach for dealing with seawater intrusion problems that overcomes some of the limitations of previous works by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named the GC method. On the one hand, SWI2 allows vertically integrated, variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems to be modelled, reduces the number of required model cells, and eliminates the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method deals with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing reliability in meeting environmental standards. The methodology has been successfully applied to the transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal system, for which an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also partially compensates for the neglected diffusion and dispersion processes, since the conditioning process reduces uncertainty and brings results closer to the available data.

  3. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming; hydroelectric scheduling; large-scale systems.

  4. A multiplier-based method of generating stochastic areal rainfall from point rainfalls

    NASA Astrophysics Data System (ADS)

    Ndiritu, J. G.

    Catchment modelling for water resources assessment is still mainly based on rain gauge measurements, as these are more easily available and cover longer periods than radar and satellite-based measurements. Rain gauges, however, measure the rain falling on an extremely small proportion of the catchment, and the areal rainfall obtained from these point measurements is consequently substantially uncertain. These uncertainties in areal rainfall estimation are generally ignored, and the need to assess their impact on catchment modelling and water resources assessment is therefore imperative. A method that stochastically generates daily areal rainfall from point rainfall using multiplicative perturbations as a means of dealing with these uncertainties is developed and tested on the Berg catchment in the Western Cape of South Africa. The differences in areal rainfall obtained by alternately omitting some of the rain gauges are used to obtain a population of plausible multiplicative perturbations. Upper bounds on the applicable perturbations are set to prevent the generation of unrealistically large rainfall and to obtain unbiased stochastic rainfall. The perturbations within the set bounds are then fitted to probability density functions in order to stochastically generate the perturbations to impose on the areal rainfall. Using 100 randomly-initialized calibrations of the AWBM catchment model and Sequent Peak Analysis, the effects of incorporating areal rainfall uncertainties on storage-yield-reliability analysis are assessed. Incorporating rainfall uncertainty is found to reduce the required storage by up to 20%. Rainfall uncertainty also increases flow-duration variability considerably and reduces the median flow-duration values by an average of about 20%.
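
    A minimal sketch of the multiplier idea follows: bounded multiplicative perturbations, drawn from a distribution that would in practice be fitted to the gauge-omission differences, are imposed on the daily areal series. The lognormal choice and the bounds here are illustrative assumptions.

        # Impose bounded, unbiased multiplicative perturbations on areal rainfall.
        import numpy as np

        rng = np.random.default_rng(9)
        areal = rng.gamma(0.6, 8.0, size=365)   # synthetic daily areal rainfall (mm)

        mult = rng.lognormal(mean=0.0, sigma=0.15, size=areal.size)
        mult = np.clip(mult, 0.6, 1.6)          # bounds on admissible perturbations
        mult /= mult.mean()                     # keep the perturbed series unbiased

        stochastic_areal = areal * mult
        print("annual totals:", round(areal.sum(), 1), round(stochastic_areal.sum(), 1))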

  5. Stochastic Analysis of Waterhammer and Applications in Reliability-Based Structural Design for Hydro Turbine Penstocks

    SciTech Connect

    Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew

    2011-01-01

    Abstract: The randomness of transient events, and the variability in factors which influence the magnitudes of the resulting pressure fluctuations, ensure that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in the boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; the statistical characteristics and probability distributions of annual maximum waterhammer pressures are then simulated using the Monte Carlo method and verified against an analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, sustained period and probability distribution) of hydraulic loads for both steady and transient states are discussed. Using an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. In particular, the normative load (usually the unfavorable value at the 95th percentile) for the steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for the waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.

  6. Neural network-based finite horizon stochastic optimal control design for nonlinear networked control systems.

    PubMed

    Xu, Hao; Jagannathan, Sarangapani

    2015-03-01

    The stochastic optimal control of nonlinear networked control systems (NNCSs) using neuro-dynamic programming (NDP) over a finite time horizon is a challenging problem due to terminal constraints, system uncertainties, and unknown network imperfections, such as network-induced delays and packet losses. Since traditional iteration- or time-based infinite horizon NDP schemes are unsuitable for NNCS with terminal constraints, a novel time-based NDP scheme is developed to solve the finite horizon optimal control of NNCS by mitigating the above-mentioned challenges. First, an online neural network (NN) identifier is introduced to approximate the control coefficient matrix, which is subsequently utilized in conjunction with the critic and actor NNs to determine a time-based stochastic optimal control input over a finite horizon in a forward-in-time and online manner. Lyapunov theory is then used to show that all closed-loop signals and NN weights are uniformly ultimately bounded, with ultimate bounds being a function of the initial conditions and final time. Moreover, the approximated control input converges close to the optimal value within finite time. Simulation results are included to show the effectiveness of the proposed scheme.

  7. Cost inefficiency in Washington hospitals: a stochastic frontier approach using panel data.

    PubMed

    Li, T; Rosenman, R

    2001-06-01

    We analyze a sample of Washington State hospitals with a stochastic frontier panel data model, specifying the cost function as a generalized Leontief function which, according to a Hausman test, performs better in this case than the translog form. A one-stage FGLS estimation procedure which directly models the inefficiency effects improves the efficiency of our estimates. We find that hospitals with higher casemix indices or more beds are less efficient, while for-profit hospitals and those with a higher proportion of Medicare patient days are more efficient. Relative to the most efficient hospital, the average hospital is only about 67% efficient.
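
    In generic notation, the panel stochastic cost frontier behind this kind of analysis takes the following form (a standard textbook specification, not necessarily the paper's exact model):

        ln C_it = ln C(y_it, w_it; beta) + v_it + u_it,   v_it ~ N(0, sigma_v^2),   u_it >= 0,

    where v_it is symmetric noise, u_it is the one-sided inefficiency term, and cost efficiency is exp(-u_it); for instance, a hospital with u_it = 0.4 is about exp(-0.4) ≈ 67% efficient.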

  8. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    Keywords: computational fluid dynamics; radiation; clouds; turbulence; convection; gravity waves; surface interaction; radiation interaction; cloud and aerosol microphysics; complexity (vegetation, biogeochemistry); radiation versus turbulence/convection stochastic approach; non-linearities; Monte Carlo; high resolutions; large-eddy simulations; cloud structure; plumes; saturation in the tropics; forecasting; parameterizations; stochastic radiation-cloud interaction; hurricane forecasts.

  9. A stochastic frontier approach to study the relationship between gastrointestinal nematode infections and technical efficiency of dairy farms.

    PubMed

    van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes

    2014-01-01

    The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators, but such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi with the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data from a longitudinal parasite monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure antibodies to O. ostertagi, and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (translog) functional forms. To assess the efficiency scores, milk production was considered the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable for inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor, but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection across the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9 L/cow per year for Farm Accountancy Data Network farms and 63, 49, and

  10. ANALYSIS OF VARIANCE-BASED MIXED MULTISCALE FINITE ELEMENT METHOD AND APPLICATIONS IN STOCHASTIC TWO-PHASE FLOWS

    SciTech Connect

    Wei, Jia; Lin, Guang; Jiang, Lijian; Efendiev, Yalchin

    2014-01-01

    Stochastic partial differential systems have been widely used to model physical processes where the inputs involve large uncertainties. Flow in random and heterogeneous porous media is one case in which the random inputs (e.g., permeability) are often modeled as a stochastic field with high-dimensional random parameters. To treat the high dimensionality and heterogeneity efficiently, model reduction is employed in both the stochastic space and the physical space. An analysis of variance (ANOVA)-based mixed multiscale finite element method (MsFEM) is developed to decompose the high-dimensional stochastic problem into a set of lower-dimensional stochastic subproblems, which require much less computational complexity and significantly reduce the computational cost in the stochastic space, while the mixed MsFEM can capture the heterogeneities on a coarse grid to greatly reduce the computational cost in the spatial domain. In addition, to enhance the efficiency of the traditional ANOVA method, an adaptive ANOVA method based on a new adaptive criterion is developed, in which the most active dimensions can be selected to greatly reduce the computational cost before conducting the ANOVA decomposition. This novel adaptive criterion is based on a variance-decomposition method coupled with a sparse-grid probabilistic collocation method or a multilevel Monte Carlo method. The advantage of this adaptive criterion lies in its much lower computational overhead for identifying the active dimensions and interactions. A number of numerical examples in two-phase stochastic flows are presented and demonstrate the accuracy and performance of the adaptive ANOVA-based mixed MsFEM.

  11. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  13. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for the modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_0 and T_1, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
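
    The exponential-dwell-time special case is the random telegraph model, which is easy to simulate exactly. The Gillespie-style sketch below uses illustrative rate constants; note that sampling at event times is a simplification (a time-weighted average would be more exact).

        # Gillespie simulation of the telegraph model of gene expression.
        import numpy as np

        rng = np.random.default_rng(10)
        k_on, k_off, k_tx, k_deg = 0.05, 0.15, 2.0, 0.1   # illustrative rates

        t, gene_on, m, samples = 0.0, 0, 0, []
        while t < 10_000.0:
            rates = np.array([k_on * (1 - gene_on), k_off * gene_on,
                              k_tx * gene_on, k_deg * m])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=rates / total)
            if event == 0:
                gene_on = 1          # gene switches on
            elif event == 1:
                gene_on = 0          # gene switches off
            elif event == 2:
                m += 1               # transcription event
            else:
                m -= 1               # mRNA decay
            samples.append(m)

        print("mean mRNA:", round(np.mean(samples), 2),
              "Fano factor:", round(np.var(samples) / np.mean(samples), 2))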

  14. A two-stage approach for a multi-objective component assignment problem for a stochastic-flow network

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-03-01

    Many real-life systems, such as computer systems, manufacturing systems and logistics systems, are modelled as stochastic-flow networks (SFNs) in order to evaluate network reliability. Here, network reliability, defined as the probability that the network successfully transmits d units of data/commodity from an origin to a destination, is a performance indicator of such systems. Network reliability maximization is a common objective, but is costly for many system supervisors. This article solves the multi-objective problem of reliability maximization and cost minimization by finding the optimal component assignment for an SFN, in which a set of multi-state components is ready to be assigned to the network. A two-stage approach integrating the Non-dominated Sorting Genetic Algorithm II and simple additive weighting is proposed to solve this problem, where network reliability is evaluated in terms of minimal paths and the recursive sum of disjoint products. Several practical examples related to computer networks are used to demonstrate the proposed approach.

  15. A Stochastic Hill Climbing Approach for Simultaneous 2D Alignment and Clustering of Cryogenic Electron Microscopy Images.

    PubMed

    Reboul, Cyril F; Bonnet, Frederic; Elmlund, Dominika; Elmlund, Hans

    2016-06-07

    A critical step in the analysis of novel cryogenic electron microscopy (cryo-EM) single-particle datasets is the identification of homogeneous subsets of images. Methods for solving this problem are important for data quality assessment, ab initio 3D reconstruction, and analysis of population diversity due to the heterogeneous nature of macromolecules. Here we formulate a stochastic algorithm for identification of homogeneous subsets of images. The purpose of the method is to generate improved 2D class averages that can be used to produce a reliable 3D starting model in a rapid and unbiased fashion. We show that our method overcomes inherent limitations of widely used clustering approaches and proceed to test the approach on six publicly available experimental cryo-EM datasets. We conclude that, in each instance, ab initio 3D reconstructions of quality suitable for initialization of high-resolution refinement are produced from the cluster centers.
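
    The search strategy can be illustrated apart from the cryo-EM specifics. The toy sketch below applies stochastic hill climbing to a generic clustering objective: propose moving one item to a random cluster and keep the move only if the score does not get worse. The objective here is plain within-cluster variance, not the authors' image-alignment score.

        # Stochastic hill climbing on a toy cluster-assignment objective.
        import numpy as np

        rng = np.random.default_rng(11)
        X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
        K, n = 3, len(X)
        labels = rng.integers(0, K, size=n)

        def score(lab):
            # negative within-cluster sum of squares (higher is better)
            return -sum(((X[lab == k] - X[lab == k].mean(0)) ** 2).sum()
                        for k in range(K) if (lab == k).any())

        best = score(labels)
        for _ in range(20_000):
            i, k_new = rng.integers(n), rng.integers(K)
            k_old = labels[i]
            if k_new == k_old:
                continue
            labels[i] = k_new
            s = score(labels)
            if s >= best:
                best = s                      # accept improving (or equal) moves
            else:
                labels[i] = k_old             # reject worsening moves

        print("final objective:", round(best, 1))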

  16. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare them with observed motions. The methods include a physics-based model by Hartzell et al. (1999, 2005), a stochastic source-based model by Boore (2009), and a stochastic site-based model by Rezaeian and Der Kiureghian (2010, 2012). The ground-motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground-motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics-based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site-based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source-based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall-off with distance for all three models, comparable PGA and PSA amplitudes for the physics-based and stochastic site-based models, and systematically lower amplitudes for the stochastic source-based model at lower frequencies (<0.5 Hz).

  17. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Ren, De-Qing; Zhang, Xi

    2011-08-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated with an experimental system featuring a 140-actuator deformable mirror and a Shack-Hartmann wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast shows improvement from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
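
    The SPGD update itself is compact. The following minimal sketch (illustrative only: a quadratic bowl stands in for an image-plane contrast metric, and the gain, perturbation size, and actuator count are assumptions, not the experiment's values) shows the two-sided perturbation form of the algorithm:

```python
# Minimal two-sided SPGD sketch (illustrative: a quadratic bowl stands in for
# an image-plane contrast metric; gain, perturbation size, and actuator count
# are assumptions, not the experiment's values).
import random

def spgd(J, u, gain=1.0, delta=0.05, iters=2000, seed=1):
    rng = random.Random(seed)
    for _ in range(iters):
        # Random +/- perturbation applied to all actuators in parallel.
        d = [delta if rng.random() < 0.5 else -delta for _ in u]
        dJ = J([ui + di for ui, di in zip(u, d)]) - \
             J([ui - di for ui, di in zip(u, d)])
        # Stochastic gradient estimate dJ * d; step downhill to minimize J.
        u = [ui - gain * dJ * di for ui, di in zip(u, d)]
    return u

J = lambda u: sum((ui - 0.7) ** 2 for ui in u)   # toy residual-speckle metric
print(spgd(J, [0.0] * 8))
```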

  18. Detailed numerical investigation of the dissipative stochastic mechanics based neuron model.

    PubMed

    Güler, Marifi

    2008-10-01

    Recently, a physical approach for the description of neuronal dynamics under the influence of ion channel noise was proposed in the realm of dissipative stochastic mechanics (Güler, Phys Rev E 76:041918, 2007). Motivated by the presence of multiple gates in an ion channel, the approach establishes the viewpoint that ion channels are exposed to two kinds of noise: the intrinsic noise, associated with the stochasticity in the movement of gating particles between the inner and the outer faces of the membrane, and the topological noise, associated with the uncertainty in accessing the permissible topological states of open gates. Renormalizations of the membrane capacitance and of a membrane-voltage-dependent potential function were found to arise from the mutual interaction of the two noisy systems. The formalism was scrutinized using a special membrane with tailored properties that yields the Rose-Hindmarsh dynamics in the deterministic limit. In this paper, the resultant computational neuron model of the above approach is investigated in detail numerically for its dynamics using time-independent input currents. The major findings are as follows. The intrinsic noise gives rise to two significant coexisting effects: it initiates spiking activity even in some range of input currents for which the corresponding deterministic model is quiet, and it causes bursting in some other range of input currents for which the deterministic model fires tonically. The renormalization corrections are found to augment the above behavioral transitions from quiescence to spiking and from tonic firing to bursting; therefore, the bursting activity takes place in a wider range of input currents for larger values of the correction coefficients. Some findings concerning diffusive behavior in the voltage space are also reported.

  19. A many-body field theory approach to stochastic models in population biology.

    PubMed

    Dodd, Peter J; Ferguson, Neil M

    2009-09-01

    Many models used in theoretical ecology or mathematical epidemiology are stochastic, and may also be spatially explicit. Techniques from quantum field theory have been used before in reaction-diffusion systems, principally to investigate their critical behavior. Here we argue that they make many calculations easier and are a possible starting point for new approximations. We review the many-body field formalism for Markov processes and illustrate how to apply it to a 'Brownian bug' population model and to an epidemic model. We show how the master equation and the moment hierarchy can both be written in particularly compact forms. The introduction of functional methods allows the systematic computation of the effective action, which gives the dynamics of mean quantities. We obtain the 1-loop approximation to the effective action for general (space-)translation-invariant systems, and thus approximations to the non-equilibrium dynamics of the mean fields. The master equations for spatial stochastic systems normally take a neater form in the many-body field formalism. One can write down the dynamics of the generating functional of physically relevant moments, equivalent to the whole moment hierarchy. The 1-loop dynamics of the mean fields are the same as those of a particular moment closure.

  20. Reduction of stochastic conductance-based neuron models with time-scales separation.

    PubMed

    Wainrib, Gilles; Thieullen, Michèle; Pakdaman, Khashayar

    2012-04-01

    We introduce a method for systematically reducing the dimension of biophysically realistic neuron models with stochastic ion channels by exploiting time-scale separation. Based on a combination of singular perturbation methods for kinetic Markov schemes with recent mathematical developments of the averaging method, the techniques are general and applicable to a large class of models. As an example, we derive and analyze reductions of different stochastic versions of the Hodgkin-Huxley (HH) model, leading to distinct reduced models. The bifurcation analysis of one of the reduced models, with the number of channels as a parameter, provides new insights into some features of noisy discharge patterns, such as the bimodality of the interspike interval distribution. Our analysis of the stochastic HH model shows that, besides being a method to reduce the number of variables of neuronal models, our reduction scheme is a powerful method for gaining understanding of the impact of fluctuations due to finite-size effects on the dynamics of slow-fast systems. Our analysis of the reduced model reveals that decreasing the number of sodium channels in the HH model leads to a transition in the dynamics reminiscent of the Hopf bifurcation, and that this transition accounts for changes in characteristics of the spike train generated by the model. Finally, we examine the impact of these results on neuronal coding, notably the reliability of discharge times and spike latency, showing that reducing the number of channels can enhance discharge time reliability in response to weak inputs, and that this phenomenon can be accounted for through the analysis of the reduced model.

  1. A Stochastic Approach to Diffeomorphic Point Set Registration with Landmark Constraints.

    PubMed

    Kolesov, Ivan; Lee, Jehoon; Sharp, Gregory; Vela, Patricio; Tannenbaum, Allen

    2016-02-01

    This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm's robustness to noise and missing information.
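
    The simulated annealing core of such a global optimizer fits in a short skeleton. The toy below (assumed: 2D rigid-shift registration with a Gaussian proposal; the paper's generator is particle-filter based and its deformations are radial-basis-function fields) shows the Metropolis acceptance rule and cooling schedule:

```python
# Simulated annealing skeleton (assumed toy: 2D rigid-shift registration with
# a Gaussian proposal; the paper's generator is particle-filter based and its
# deformations are radial-basis-function fields).
import math, random

def anneal(cost, x0, t0=1.0, cooling=0.995, iters=3000, seed=2):
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]    # proposal move
        fc = cost(cand)
        # Metropolis rule: accept improvements always, worse moves sometimes.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling                                      # cool the temperature
    return x, fx

true_shift = (1.5, -0.5)
cost = lambda p: (p[0] - true_shift[0]) ** 2 + (p[1] - true_shift[1]) ** 2
print(anneal(cost, [0.0, 0.0]))
```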

  2. Partial derivative approach for option pricing in a simple stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Montero, M.

    2004-11-01

    We study a market model in which the volatility of the stock may jump at a random time from a fixed value to another fixed value. This model has already been introduced in the literature. We present a new approach to the problem, based on partial differential equations, which gives a different perspective to the issue. Within our framework we can easily consider several forms for the market price of volatility risk, and interpret their financial meaning. We thus recover solutions previously mentioned in the literature as well as obtaining new ones.

  3. A Stochastic Approach to Diffeomorphic Point Set Registration With Landmark Constraints

    PubMed Central

    Kolesov, Ivan; Lee, Jehoon; Sharp, Gregory; Vela, Patricio; Tannenbaum, Allen

    2016-01-01

    This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm’s robustness to noise and missing information. PMID:26761731

  4. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach ≤ probability obtained with the gradient stochastic approach ≤ probability predicted by Davis and Stoll < probability predicted by Martin et al. The differences are explained by the positive bias of the Martin equation and the lower average resolution observed for the isocratic simulations compared to the gradient simulations with the same peak capacity. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., P_HPLC = 31.2% versus P_SE-LC = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported (<1.5% error for saturation factors <0.20). Additional applications for the stochastic approach include isothermal and programmed-temperature gas chromatography.
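
    The flavor of the stochastic estimate can be reproduced in a few lines. The snippet below (illustrative, not the authors' simulation) estimates the probability that all peaks are mutually resolved under constant-peak-width ("gradient") conditions:

```python
# Illustrative Monte Carlo estimate (not the authors' simulation) of the
# probability that all m peaks are mutually resolved under constant-peak-width
# ("gradient") conditions on a unit separation window.
import random

def p_success(m=12, peak_capacity=100, trials=20000):
    width = 1.0 / peak_capacity          # constant peak width
    ok = 0
    for _ in range(trials):
        centers = sorted(random.random() for _ in range(m))
        if all(b - a >= width for a, b in zip(centers, centers[1:])):
            ok += 1
    return ok / trials

random.seed(3)
print(f"P(success) ~ {p_success():.3f}")
```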

  5. Skull base approaches in neurosurgery

    PubMed Central

    2010-01-01

    Skull base surgery is one of the most demanding types of surgery: many structures can easily be injured when operating at the skull base. It is very important for the neurosurgeon to choose the right approach in order to reach the lesion without harming intact structures. Owing to the pioneering work of Cushing, Hirsch, Yasargil, Krause, Dandy and other dedicated neurosurgeons, it is possible to address tumors and other lesions in the anterior, midline and posterior cranial base. With the transsphenoidal, frontolateral, pterional and lateral suboccipital approaches, nearly every region of the skull base can be exposed. Many different skull base approaches have been described for various neurosurgical diseases over the last 20 years. The selection of an approach may differ from country to country; e.g., in the United States an orbitozygomaticotomy for particular lesions of the anterior skull base, or a petrosectomy for clivus meningiomas, is found more frequently than in Europe. The reason for writing this review was the question: are there keyhole approaches with which one can deal with a vast variety of lesions in the neurosurgical field? In my opinion the surgical approaches mentioned above cover almost 95% of all skull base tumors and lesions. These approaches are: 1) the pterional approach, 2) the frontolateral approach, 3) the transsphenoidal approach, and 4) the lateral suboccipital approach. They can be extended and combined with each other, and in the following this philosophy is elaborated. PMID:20602753

  6. Lifetimes and on off distributions for single-molecule kinetics. Stochastic approach and extraction of information from experimental data

    NASA Astrophysics Data System (ADS)

    Vlad, Marcel O.; Moran, Federico; Ross, John

    2003-02-01

    We introduce a stochastic approach for the computation of lifetime distributions of various chemical states in single-molecule kinetics, where the rate coefficients of the process are random functions of time. We consider a given realization of the rate coefficients, derive a partial differential equation for the instantaneous, fluctuating values of the lifetime distributions, and solve it using the method of characteristics. The overall lifetime distributions are dynamic averages of the fluctuating lifetime distributions over all possible values of the rate coefficients. We develop methods for evaluating these dynamical averages for various types of stochastic processes that describe the fluctuations of the rate coefficients. For a single molecule with two chemical states, of which one is fluorescent and the other not, the lifetime distributions are the same as the on and off time distributions, which are experimental observables. In this case we discuss in detail the relationships between intramolecular dynamics and the on and off time distributions. We develop methods for extracting quantitative and qualitative information about intramolecular fluctuations from measured values of the on and off time distributions, for determining whether the intramolecular fluctuations are long range or short range, and for evaluating the statistical properties of the fluctuating rate coefficients.
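
    The setting can be illustrated with a short simulation (a sketch, not the authors' PDE formalism): a state whose escape rate itself hops between a slow and a fast value, with dwell times collected via competing exponential clocks:

```python
# Illustrative sketch (not the authors' PDE formalism): dwell times of a state
# whose escape rate itself hops between a slow and a fast value, simulated
# with competing exponential clocks. All rates are assumptions.
import random

def dwell_times(n=5000, k_slow=0.5, k_fast=5.0, k_hop=0.2, seed=4):
    rng = random.Random(seed)
    k, times = k_slow, []
    for _ in range(n):
        t = 0.0
        while True:
            t_leave = rng.expovariate(k)      # clock for leaving the state
            t_hop = rng.expovariate(k_hop)    # clock for a rate fluctuation
            if t_leave < t_hop:
                times.append(t + t_leave)
                break
            t += t_hop
            k = k_fast if k == k_slow else k_slow
    return times

ts = dwell_times()
print("mean dwell time:", sum(ts) / len(ts))
```

    The resulting dwell-time histogram is a mixture of exponentials rather than a single exponential, the signature of fluctuating rate coefficients discussed in the abstract.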

  7. Phaseless quantum Monte-Carlo approach to strongly correlated superconductors with stochastic Hartree-Fock-Bogoliubov wavefunctions

    NASA Astrophysics Data System (ADS)

    Juillet, Olivier; Leprévost, Alexandre; Bonnard, Jérémy; Frésard, Raymond

    2017-04-01

    The so-called phaseless quantum Monte-Carlo method currently offers one of the best performing theoretical frameworks to investigate interacting Fermi systems. It allows one to extract an approximate ground-state wavefunction by averaging independent-particle states undergoing a Brownian motion in imaginary time. Here, we extend the approach to a random walk in the space of Hartree-Fock-Bogoliubov (HFB) vacua, which are better suited for superconducting or superfluid systems. Well-controlled statistical errors are ensured by constraining stochastic paths with the help of a trial wavefunction, which also guides the dynamics and takes the form of a linear combination of HFB ansätze. Estimates for the observables are reconstructed through an extension of Wick's theorem to matrix elements between HFB product states. The usual combinatorial complexity associated with the application of this theorem to four- and more-body operators is bypassed with a compact expression in terms of Pfaffians. The limiting case of a stochastic motion within Slater determinants, but guided with HFB trial wavefunctions, is also considered. Finally, exploratory results for the spin-polarized Hubbard model in the attractive regime are presented.

  8. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    PubMed

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters with a maximum likelihood estimator (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M-estimator to estimate the unknown parameters, with a genetic algorithm designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and the DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and the results based on the DMF are found to be more accurate than those given by the EKF with respect to mean absolute error.
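
    A bootstrap particle filter gives the flavor of density-based Monte Carlo filtering. The sketch below is an illustrative stand-in (the linear-Gaussian toy model and all noise levels are assumptions, not a PK/PD model): particles are propagated, weighted by the observation likelihood, and resampled:

```python
# Bootstrap particle filter sketch (an illustrative stand-in for the DMF; the
# linear-Gaussian toy model and all noise levels below are assumptions).
#   state:       x_t = 0.8 * x_{t-1} + w,  w ~ N(0, 0.3^2)
#   observation: y_t = x_t + v,            v ~ N(0, 0.2^2)
import math, random

def particle_filter(ys, n=500, seed=5):
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        parts = [0.8 * p + rng.gauss(0.0, 0.3) for p in parts]       # propagate
        ws = [math.exp(-0.5 * ((y - p) / 0.2) ** 2) for p in parts]  # weight
        total = sum(ws)
        ws = [w / total for w in ws]
        estimates.append(sum(w * p for w, p in zip(ws, parts)))      # posterior mean
        parts = rng.choices(parts, weights=ws, k=n)                  # resample
    return estimates

print(particle_filter([0.5, 0.7, 0.4, 0.9, 1.1]))
```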

  9. MOSES: A Matlab-based open-source stochastic epidemic simulator.

    PubMed

    Varol, Huseyin Atakan

    2016-08-01

    This paper presents an open-source stochastic epidemic simulator. The discrete-time Markov chain based simulator is implemented in MATLAB. The simulator, which implements the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero, and can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show that it reproduces different epidemic model behaviors successfully in a computationally efficient manner.
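
    As a flavor of the discrete-time Markov chain mechanics, the sketch below (Python rather than MATLAB, and a reduced SIR special case of the SEQIJR family, with made-up probabilities) advances such a chain one day at a time:

```python
# One-day update of a discrete-time Markov chain epidemic (a reduced SIR
# special case of the SEQIJR family; Python rather than MATLAB, probabilities
# made up for illustration).
import random

def dtmc_sir_step(s, i, r, p_inf=0.03, p_rec=0.1):
    """Each S is infected w.p. p_inf * i / N; each I recovers w.p. p_rec."""
    n = s + i + r
    new_inf = sum(random.random() < p_inf * i / n for _ in range(s))
    new_rec = sum(random.random() < p_rec for _ in range(i))
    return s - new_inf, i + new_inf - new_rec, r + new_rec

random.seed(6)
state = (990, 10, 0)
for day in range(5):
    state = dtmc_sir_step(*state)
    print("day", day + 1, "->", state)
```

    Setting a transition probability to zero removes the corresponding compartment flow, which is the reduction mechanism the abstract describes.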

  10. Information-based measures for logical stochastic resonance in a synthetic gene network under Lévy flight superdiffusion

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Xu, Yong; Wang, Haiyan; Kurths, Jürgen

    2017-06-01

    We investigate the logical information transmission of a synthetic gene network under Lévy flight superdiffusion by an information-based methodology. We first present the stochastic synthetic gene network model driven by a square wave signal under Lévy noise caused by Lévy flight superdiffusion. Then, to quantify the potential of logical information transmission and logical stochastic resonance, we derive an information-based methodology comprising the symbol error rate, the noise entropy, and the mutual information of the logical information transmission. Consequently, based on the complementary "on" and "off" states shown in the logical information transmission for the repressive proteins, we numerically calculate the symbol error rate for logic gates, which demonstrates that the synthetic gene network under Lévy noise can realize certain logic gates as well as logical stochastic resonance. Furthermore, we calculate the noise entropy and the mutual information between the square wave signal and the logical information transmission, which reveal and quantify the potential of logical information transmission and logical stochastic resonance. In addition, we analyze the synchronization degree of the mutual information for the accomplished logical stochastic resonance of two repressive proteins of the synthetic gene network via synchronization variances, showing that the mutual information changes almost synchronously.

  11. Solution of stochastic media transport problems using a numerical quadrature-based method

    SciTech Connect

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.; Olson, A. J.

    2013-07-01

    We present a new conceptual framework for analyzing transport problems in random media. We decompose such problems into stratified subproblems according to the number of material pseudo-interfaces within realizations. For a given subproblem we assign pseudo-interface locations in each realization according to product quadrature rules, which allows us to deterministically generate a fixed number of realizations. Quadrature integration of the solutions of these realizations thus approximately solves each subproblem; the weighted superposition of solutions of the subproblems approximately solves the general stochastic media transport problem. We revisit some benchmark problems to determine the accuracy and efficiency of this approach in comparison to randomly generated realizations. We find that this method is very accurate and fast when the number of pseudo-interfaces in a problem is generally low, but that these advantages quickly degrade as the number of pseudo-interfaces increases.

  12. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE PAGES

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  13. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix and Polymer Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.

    2016-01-01

    Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.

  14. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    SciTech Connect

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  15. Stochastic approach to the generalized Schrödinger equation: A method of eigenfunction expansion.

    PubMed

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2015-05-01

    Using a method of eigenfunction expansion, a stochastic equation is developed for the generalized Schrödinger equation with random fluctuations. The wave field $\psi$ is expanded in terms of eigenfunctions, $\psi = \sum_n a_n(t)\,\phi_n(x)$, with $\phi_n$ being the eigenfunction that satisfies the eigenvalue equation $H_0\phi_n = \lambda_n\phi_n$, where $H_0$ is the reference "Hamiltonian" conventionally called the "unperturbed" Hamiltonian. The Langevin equation is derived for the expansion coefficient $a_n(t)$, and it is converted to the Fokker-Planck (FP) equation for the set $\{a_n\}$ under the assumption of Gaussian white noise for the fluctuation. This procedure is carried out by a functional integral, in which the functional Jacobian plays a crucial role in determining the form of the FP equation. The analyses are given for the FP equation by adopting several approximate schemes.

  16. Computational chemistry approach to protein kinase recognition using 3D stochastic van der Waals spectral moments.

    PubMed

    González-Díaz, Humberto; Saíz-Urra, Liane; Molina, Reinaldo; González-Díaz, Yenny; Sánchez-González, Angeles

    2007-04-30

    Three-dimensional (3D) protein structures now frequently lack functional annotations because of the increase in the rate at which chemical structures are solved with respect to experimental knowledge of biological activity. As a result, predicting structure-function relationships for proteins is an active research field in computational chemistry and has implications in medicinal chemistry, biochemistry and proteomics. In previous studies stochastic spectral moments were used to predict protein stability or function (González-Díaz, H. et al. Bioorg Med Chem 2005, 13, 323; Biopolymers 2005, 77, 296). Nevertheless, these moments take into consideration only electrostatic interactions and ignore other important factors such as van der Waals interactions. The present study introduces a new class of 3D structure molecular descriptors for folded proteins named the stochastic van der Waals spectral moments (${}^{o}\beta_k$). Among many possible applications, recognition of kinases was selected due to the fact that previous computational chemistry studies in this area have not been reported, despite the widespread distribution of kinases. The best linear model found was $K_{act} = -9.44\,{}^{o}\beta_0^{c} + 10.94\,{}^{o}\beta_5^{c} - 2.40\,{}^{o}\beta_0^{i} + 2.45\,{}^{o}\beta_5^{m} + 0.73$, where core (c), inner (i) and middle (m) refer to specific spatial protein regions. The model, with a high Matthews correlation coefficient (0.79), correctly classified 206 out of 230 proteins (89.6%), including both training and prediction series. An area under the ROC curve of 0.94 differentiates our model from a random classifier. A subsequent principal components analysis of 152 heterogeneous proteins demonstrated that ${}^{o}\beta_k$ codifies information different from other descriptors used in protein computational chemistry studies. Finally, the model recognizes 110 out of 125 kinases (88.0%) in a virtual screening experiment, which can be considered as an additional validation study (these proteins

  17. Unifying Vertical and Nonvertical Evolution: A Stochastic ARG-based Framework

    PubMed Central

    Bloomquist, Erik W.; Suchard, Marc A.

    2010-01-01

    Evolutionary biologists have introduced numerous statistical approaches to explore nonvertical evolution, such as horizontal gene transfer, recombination, and genomic reassortment, through collections of Markov-dependent gene trees. These tree collections allow for inference of nonvertical evolution, but only indirectly, making findings difficult to interpret and models difficult to generalize. An alternative approach to explore nonvertical evolution relies on phylogenetic networks. These networks provide a framework to model nonvertical evolution but leave unanswered questions such as the statistical significance of specific nonvertical events. In this paper, we begin to correct the shortcomings of both approaches by introducing the “stochastic model for reassortment and transfer events” (SMARTIE) drawing upon ancestral recombination graphs (ARGs). ARGs are directed graphs that allow for formal probabilistic inference on vertical speciation events and nonvertical evolutionary events. We apply SMARTIE to phylogenetic data. Because of this, we can typically infer a single most probable ARG, avoiding coarse population dynamic summary statistics. In addition, a focus on phylogenetic data suggests novel probability distributions on ARGs. To make inference with our model, we develop a reversible jump Markov chain Monte Carlo sampler to approximate the posterior distribution of SMARTIE. Using the BEAST phylogenetic software as a foundation, the sampler employs a parallel computing approach that allows for inference on large-scale data sets. To demonstrate SMARTIE, we explore 2 separate phylogenetic applications, one involving pathogenic Leptospirochete and the other Saccharomyces. PMID:20525618

  18. Localization of nonlinear damage using state-space-based predictions under stochastic excitation

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Mao, Zhu; Todd, Michael; Huang, Zongming

    2014-02-01

    This paper presents a study on localizing damage under stochastic excitation by state-space-based methods, where the damaged response contains some nonlinearity. Two state-space-based modeling algorithms, namely auto- and cross-prediction, are employed; for localization, the greatest prediction error is expected at the sensor pair closest to the actual damage. To quantify the distinction between prediction-error distributions obtained at different sensor locations, the Bhattacharyya distance is adopted as the quantification metric. Two lab-scale test-beds are adopted as validation platforms: a two-story plane steel frame with bolt-loosening damage and a three-story benchmark aluminum frame with a simulated tunable crack. Band-limited Gaussian noise is applied to the systems through an electrodynamic shaker. Testing results indicate that the damage detection capability of the state-space-based method depends on the nonlinearity-induced high-frequency responses. Since those high-frequency components attenuate quickly in time and space, the results show great capability for damage localization, i.e., the highest deviation of the Bhattacharyya distance coincides with the sensors close to the physical damage location. This work extends the state-space-based damage detection method for localizing damage to a stochastically excited scenario, which provides the advantage of compatibility with ambient excitations. Moreover, results from both experiments indicate that the state-space-based method is only sensitive to nonlinearity-induced damage; thus it can be utilized in parallel with linear classifiers or normalization strategies to insulate the operational and environmental variability, which often affects the system response in a linear fashion.
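
    For univariate Gaussian error distributions the Bhattacharyya distance has a closed form, sketched below (toy statistics; the paper applies the metric to empirically estimated prediction-error distributions at each sensor pair):

```python
# Closed-form Bhattacharyya distance between two univariate Gaussian
# prediction-error distributions (toy statistics; the paper estimates the
# distributions empirically at each sensor pair).
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    mean_term = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    var_term = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return mean_term + var_term

print(bhattacharyya_gauss(0.0, 1.0, 0.0, 1.0))   # identical -> 0.0
print(bhattacharyya_gauss(0.5, 2.5, 0.0, 1.0))   # sensor pair near damage
```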

  19. Further results for global exponential stability of stochastic memristor-based neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Zhu, Song; Yang, Qiqi

    2016-11-01

    In recent years, the stability problems of memristor-based neural networks have been studied extensively. This paper not only takes the unavoidable noise into consideration but also investigates the global exponential stability of stochastic memristor-based neural networks with time-varying delays. The obtained criteria are essentially new and complement previously known ones, and can be easily validated with the parameters of the system itself. In addition, the study of the nonlinear dynamics of the addressed neural networks may be helpful in the qualitative analysis of general stochastic systems. Finally, two numerical examples are provided to substantiate our results.

  20. Dissipativity analysis of stochastic memristor-based recurrent neural networks with discrete and distributed time-varying delays.

    PubMed

    Radhika, Thirunavukkarasu; Nagamani, Gnaneswaran

    2016-01-01

    In this paper, based on the theory of memristor-based recurrent neural networks (MRNNs), a model of stochastic MRNNs with discrete and distributed delays is established. In real nervous systems and in the implementation of very large-scale integration (VLSI) circuits, noise is unavoidable, which leads to the stochastic model of the MRNNs. In this model, the delay interval is decomposed into two subintervals using a tuning parameter α such that 0 < α < 1. By constructing a proper Lyapunov-Krasovskii functional and employing a direct delay decomposition technique, several sufficient conditions are given to guarantee the dissipativity and passivity of the stochastic MRNNs with discrete and distributed delays in the sense of Filippov solutions. Using stochastic analysis theory and Itô's formula for stochastic differential equations, we establish sufficient conditions for the dissipativity criterion. The dissipativity and passivity conditions are presented in terms of linear matrix inequalities, which can be easily solved using MATLAB tools. Finally, three numerical examples with simulations are presented to demonstrate the effectiveness of the theoretical results.

  1. A new program based on stochastic Liouville equation for the analysis of superhyperfine interaction in CW-ESR spectroscopy

    NASA Astrophysics Data System (ADS)

    Della Lunga, Giovanni; Pezzato, Michela; Baratto, Maria Camilla; Pogni, Rebecca; Basosi, Riccardo

    2003-09-01

    In the slow-motion region, ESR spectra cannot be expressed as a sum of simple Lorentzian lines. Studies by Freed and co-workers on nitroxides in liquids gained information on microscopic models of rotational dynamics, relying heavily on computer programs for the simulation of ESR spectra based on the stochastic Liouville equation (SLE). However, application of Freed's method to copper systems of biological interest has long been precluded by the lack of a full program able to simulate ESR spectra containing more than one hyperfine interaction. Direct extension of Freed's approach to include the superhyperfine interaction is not difficult from a theoretical point of view, but the resulting algorithm is problematic because it leads to a substantial increase in the dimensions of the matrix related to the spin-Hamiltonian operator. In this paper, preliminary results of a new program, written in C, which includes the superhyperfine interactions, are presented. This preliminary version of the program does not take into account a restoring potential, so it can be used only under isotropic diffusion conditions. A comparison with an approximate method previously developed in our laboratory, based on a post-convolution approach, is discussed.

  2. A new program based on stochastic Liouville equation for the analysis of superhyperfine interaction in CW-ESR spectroscopy.

    PubMed

    Della Lunga, Giovanni; Pezzato, Michela; Baratto, Maria Camilla; Pogni, Rebecca; Basosi, Riccardo

    2003-09-01

    In the slow-motion region, ESR spectra cannot be expressed as a sum of simple Lorentzian lines. Studies by Freed and co-workers on nitroxides in liquids gained information on microscopic models of rotational dynamics, relying heavily on computer programs for the simulation of ESR spectra based on the stochastic Liouville equation (SLE). However, application of Freed's method to copper systems of biological interest has long been precluded by the lack of a full program able to simulate ESR spectra containing more than one hyperfine interaction. Direct extension of Freed's approach to include the superhyperfine interaction is not difficult from a theoretical point of view, but the resulting algorithm is problematic because it leads to a substantial increase in the dimensions of the matrix related to the spin-Hamiltonian operator. In this paper, preliminary results of a new program, written in C, which includes the superhyperfine interactions, are presented. This preliminary version of the program does not take into account a restoring potential, so it can be used only under isotropic diffusion conditions. A comparison with an approximate method previously developed in our laboratory, based on a post-convolution approach, is discussed.

  3. Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael; Solomon, Sorin

    2012-08-01

    We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
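
    The growth mechanism can be reproduced with a toy simulation. The sketch below (illustrative; it uses the measured exponent range but omits the memory of recent citation rates that the full model includes) grows a citation network under superlinear preferential attachment:

```python
# Toy citation-network growth under superlinear preferential attachment with
# alpha ~ 1.28 (illustrative; the paper's full model also includes memory of
# recent citation rates, omitted here).
import random

def grow(n_papers=1000, cites_per_paper=5, alpha=1.28, seed=7):
    rng = random.Random(seed)
    cites = [1] * 10                            # seed papers
    for _ in range(n_papers):
        weights = [c ** alpha for c in cites]   # refreshed once per new paper
        total = sum(weights)
        for _ in range(cites_per_paper):
            r, acc = rng.random() * total, 0.0
            for i, w in enumerate(weights):     # pick w.p. proportional to k^alpha
                acc += w
                if acc >= r:
                    cites[i] += 1
                    break
        cites.append(1)
    return cites

c = grow()
print("papers:", len(c), "max citations:", max(c))
```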

  4. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    PubMed Central

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-01-01

    In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for the quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220
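
    The bistable SR stage driving such detectors can be sketched in a few lines (assumed double-well parameters, Euler-Maruyama integration; the moving-least-squares reconstruction step is omitted):

```python
# Bistable stochastic resonance stage, dx/dt = a*x - b*x^3 + s(t) + noise,
# integrated with Euler-Maruyama (assumed parameters; the paper's moving
# least squares reconstruction step is omitted).
import math, random

def sr_stage(signal, dt=1e-3, a=1.0, b=1.0, noise=0.5, seed=8):
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        dw = rng.gauss(0.0, math.sqrt(dt))            # Brownian increment
        x += (a * x - b * x ** 3 + s) * dt + noise * dw
        out.append(x)
    return out

sig = [0.3 * math.sin(2 * math.pi * 5 * k * 1e-3) for k in range(2000)]
y = sr_stage(sig)
print(min(y), max(y))
```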

  5. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  6. A Bayesian Approach for Evaluation of Determinants of Health System Efficiency Using Stochastic Frontier Analysis and Beta Regression

    PubMed Central

    2016-01-01

    Public expenditure on health is one of the most important issues for governments, and these increasing expenditures are putting pressure on public budgets. Health policy makers have therefore focused on the performance of their health systems, and many countries have introduced reforms to improve it. This study investigates the most important determinants of healthcare efficiency for OECD countries using a two-stage approach based on Bayesian stochastic frontier analysis (BSFA). First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. In the second stage, we examine the relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression. PMID:27118987

  7. Developing Itô stochastic differential equation models for neuronal signal transduction pathways.

    PubMed

    Manninen, Tiina; Linne, Marja-Leena; Ruohonen, Keijo

    2006-08-01

    Mathematical modeling and simulation of dynamic biochemical systems are receiving considerable attention due to the increasing availability of experimental knowledge of complex intracellular functions. In addition to deterministic approaches, several stochastic approaches have been developed for simulating the time-series behavior of biochemical systems. The problem with stochastic approaches, however, is the larger computational time compared to deterministic approaches. It is therefore necessary to study alternative ways to incorporate stochasticity and to seek approaches that reduce the computational time needed for simulations, yet preserve the characteristic behavior of the system in question. In this work, we develop a computational framework based on Itô stochastic differential equations for neuronal signal transduction networks. There are several different ways to incorporate stochasticity into deterministic differential equation models and to obtain Itô stochastic differential equations. Two of the developed models are found most suitable for the stochastic modeling of neuronal signal transduction. The best models give stable responses, meaning that the variances of the responses do not increase with time and negative concentrations are avoided. We also make a comparative analysis of different kinds of stochastic approaches: the Itô stochastic differential equations, the chemical Langevin equation, and the Gillespie stochastic simulation algorithm. Different kinds of stochastic approaches can be used to produce similar responses for the neuronal protein kinase C signal transduction pathway. The fine details of the responses vary slightly, depending on the approach and the parameter values. However, when simulating great numbers of chemical species, the Gillespie algorithm is computationally several orders of magnitude slower than the Itô stochastic differential equations and the chemical Langevin equation. Furthermore, the chemical
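
    An Euler-Maruyama discretization is the standard workhorse for integrating such Itô SDE models. The sketch below (an illustrative one-species birth-death Langevin model with assumed rates, not the protein kinase C pathway) shows the scheme, including a crude guard against the negative concentrations the abstract warns about:

```python
# Euler-Maruyama scheme for an Ito SDE (an assumed one-species birth-death
# Langevin model, not the protein kinase C pathway):
#   dX = (k_prod - k_deg * X) dt + sqrt(k_prod + k_deg * X) dW
import math, random

def euler_maruyama(x0=10.0, k_prod=5.0, k_deg=0.1, dt=0.01, steps=5000, seed=9):
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        drift = k_prod - k_deg * x
        diffusion = math.sqrt(max(k_prod + k_deg * x, 0.0))
        x += drift * dt + diffusion * rng.gauss(0.0, math.sqrt(dt))
        x = max(x, 0.0)        # crude guard against negative concentrations
        path.append(x)
    return path

p = euler_maruyama()
print("steady-state mean (tail):", sum(p[-500:]) / 500)
```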

  8. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
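
    The role of sampling-based bounds can be illustrated with a tiny sample-average estimate and confidence interval, the kind of statistical statement such stopping rules rely on (the recourse function below is a made-up stand-in, not a hydro-scheduling model):

```python
# Sample-average estimate with a ~95% confidence interval, the kind of
# statistical bound used by sampling-based stopping rules (the recourse
# function is a made-up stand-in, not a hydro-scheduling model).
import random, statistics

def recourse_cost(x, demand):
    """Toy second stage: buy shortfall at a premium, salvage surplus."""
    return 3.0 * max(demand - x, 0.0) - 0.5 * max(x - demand, 0.0)

def estimate(x, n=2000, seed=10):
    rng = random.Random(seed)
    samples = [recourse_cost(x, rng.gauss(100.0, 15.0)) for _ in range(n)]
    mean = statistics.fmean(samples)
    half = 1.96 * statistics.stdev(samples) / n ** 0.5
    return mean, half

m, h = estimate(x=110.0)
print(f"E[recourse cost] ~ {m:.2f} +/- {h:.2f}")
```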

  9. Stochastic bias correction of dynamically downscaled precipitation fields for Germany through copula-based integration of gridded observation data

    NASA Astrophysics Data System (ADS)

    Mao, G.; Vogl, S.; Laux, P.; Wagner, S.; Kunstmann, H.

    2014-07-01

    Dynamically downscaled precipitation fields from regional climate models (RCMs) often cannot be used directly for local climate change impact studies. Due to their inherent biases, i.e. systematic over- or underestimations compared to observations, several correction approaches have been developed. Most bias correction procedures, such as the quantile mapping approach, employ a transfer function that is based on the statistical differences between RCM output and observations. Apart from such transfer-function-based statistical correction algorithms, a stochastic bias correction technique based on the concept of copula theory is developed here and applied to correct precipitation fields from the Weather Research and Forecasting (WRF) model. As dynamically downscaled precipitation fields we used high-resolution (7 km, daily) WRF simulations for Germany driven by ERA40 reanalysis data for 1971-2000. The REGNIE data set from the German Weather Service is used as gridded observation data (1 km, daily) and rescaled to 7 km for this application. The 30-year time series are split into calibration (1971-1985) and validation (1986-2000) periods of equal length. Based on the estimated dependence structure between WRF and REGNIE data and the identified respective marginal distributions in the calibration period, analyzed separately for the different seasons, conditional distribution functions are derived for each time step in the validation period. This also provides information about the range of the statistically possible bias-corrected values. The results show that the copula-based approach efficiently corrects most of the errors in WRF-derived precipitation for all seasons. It is also found that the copula-based correction performs better for wet-bias correction than for dry-bias correction. In autumn and winter, the correction introduced a small dry bias in the northwest of Germany. The average relative bias of daily mean precipitation from WRF for the
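
    For contrast with the copula technique, the transfer-function baseline named in the abstract, quantile mapping, fits in a few lines (toy data; a real application would use long daily precipitation records per season and grid cell):

```python
# Empirical quantile mapping, the transfer-function baseline the copula
# approach is contrasted with (toy data; a real application would use long
# daily precipitation records per season and grid cell).
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """Map a model value to the observed value at the same empirical quantile."""
    q = bisect.bisect_left(model_sorted, value) / max(len(model_sorted) - 1, 1)
    idx = min(round(q * (len(obs_sorted) - 1)), len(obs_sorted) - 1)
    return obs_sorted[idx]

model = sorted([0.0, 0.2, 1.5, 3.0, 4.8, 7.1, 9.0])   # biased-wet model values
obs = sorted([0.0, 0.1, 0.9, 2.0, 3.5, 5.0, 6.2])     # observed values
print([quantile_map(v, model, obs) for v in (0.2, 4.8, 9.0)])
```

    Unlike this deterministic mapping, the copula-based scheme yields a full conditional distribution of corrected values at each time step.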

  10. Stochastic finite element model calibration based on frequency responses and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Vakilzadeh, Majid K.; Yaghoubi, Vahid; Johansson, Anders T.; Abrahamsson, Thomas J. S.

    2017-05-01

    A new stochastic finite element model calibration framework for estimation of the uncertainty in model parameters and predictions from the measured frequency responses is proposed in this paper. It combines the principles of bootstrapping with the technique of FE model calibration with damping equalization. The challenge for the calibration problem is to find an initial estimate of the parameters that is reasonably close to the global minimum of the deviation between model predictions and measurement data. The idea of model calibration with damping equalization is to formulate the calibration metric as the deviation between the logarithm of the frequency responses of FE model and a test data model found from measurement where the same level of modal damping is imposed on all modes. This formulation gives a smooth metric with a large radius of convergence to the global minimum. In this study, practical suggestions are made to improve the performance of this calibration procedure in dealing with noisy measurements. A dedicated frequency sampling strategy is suggested for measurement of frequency responses in order to improve the estimate of a test data model. The deviation metric at each frequency line is weighted using the signal-to-noise ratio of the measured frequency responses. The solution to the improved calibration procedure with damping equalization is viewed as a starting value for the optimization procedure used for uncertainty quantification. The experimental data is then resampled using the bootstrapping approach and the FE model calibration problem, initiating from the estimated starting value, is solved on each individual resampled dataset to produce uncertainty bounds on the model parameters and predictions. The proposed stochastic model calibration framework is demonstrated on a six degree-of-freedom spring-mass system prior to being applied to a general purpose satellite structure.

  11. Quantifying rock's structural fabric: a multi-scale hierarchical approach to natural fracture systems and stochastic modelling

    NASA Astrophysics Data System (ADS)

    Hardebol, Nico; Bertotti, Giovanni; Weltje, Gert Jan

    2014-05-01

    We propose the description of fracture-fault systems in terms of a multi-scale hierarchical network. In its most generic form, such an arrangement is referred to as a structural fabric and is applicable across the length-scale spectrum. The statistical characterisation combines the fracture length and orientation distributions with intersection-termination relationships. The aim is a parameterised description of the network that serves as input to stochastic network simulations, which should reproduce the essence of natural fracture networks and encompass their variability. The quality of the stochastically generated fabric is determined by comparison with the deterministic descriptions on which the model parameterisation is based. Both the deterministic and the stochastically derived fracture network descriptions can serve as input to fluid-flow or mechanical simulations that account explicitly for the discrete features, and the responses of the system can be compared. The deterministic description of our current study, in the framework of tight gas reservoirs, is obtained from coastal pavements that expose a horizontal slice through a fracture-fault network system in fine-grained sediments in Yorkshire, UK. Fracture hierarchies have often been described at one observation scale as a two-tier hierarchy in terms of 1st-order systematic joints and 2nd-order cross-joints. New in our description is the bridging between km-sized faults with notable displacement down to sub-meter-scale shear and opening-mode fractures. This study utilized a drone to obtain cm-resolution imagery of pavements from ~30 m altitude, with coverage up to 1 km achieved by flying at ~80 m. This unique set of images forms the basis for digitizing the fracture-fault pattern and helped determine the nested nature of the network as well as intersection and abutment relationships. Fracture sets were defined from the highest to the lowest hierarchical order, and probability density functions were defined for the length

  12. Stochastic gravity

    NASA Astrophysics Data System (ADS)

    Ross, D. K.; Moreau, William

    1995-08-01

    We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics (SED), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field $h_{\mu\nu}(x)$ composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation, $h(\omega)$, must be proportional to $1/\omega^2$. The proportionality constant is determined by the Planck condition that the energy density consist of $\hbar\omega/2$ per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition function $h(\omega) = \sqrt{16\pi}\,c^2 L_P/\omega^2$. As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function $\langle R_{i0j0}(O, \tau - \delta\tau/2)\, R_{k0l0}(O, \tau + \delta\tau/2)\rangle$ of the measurable geodesic deviation tensor field $R_{i0j0}$ for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order in $a\,\delta\tau/c$ provided that the temperature and acceleration satisfy the relation $kT = \hbar a/2\pi c$.

  13. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of the mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.

  14. Rejection properties of stochastic-resonance-based detectors of weak harmonic signals

    NASA Astrophysics Data System (ADS)

    Croce, R. P.; Demma, Th.; Galdi, V.; Pierro, V.; Pinto, I. M.; Postiglione, F.

    2004-06-01

    In [V. Galdi et al., Phys. Rev. E 57, 6470 (1998)] a thorough characterization in terms of receiver operating characteristics of stochastic-resonance detectors of weak harmonic signals of known frequency in additive Gaussian noise was given. It was shown that strobed sign-counting-based strategies can be used to achieve a nice trade-off between performance and cost, by comparison with noncoherent correlators. Here we discuss the more realistic case where, besides the sought signal (whose frequency is assumed known), further unwanted spectrally nearby signals with comparable amplitude are present. Rejection properties are discussed in terms of suitably defined false-alarm and false-dismissal probabilities for various values of interfering signal(s) strength and spectral separation.

  15. A Bloch decomposition-based stochastic Galerkin method for quantum dynamics with a random external potential

    SciTech Connect

    Wu, Zhizhang; Huang, Zhongyi

    2016-07-15

    In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.

  16. Stochastic parametrization of multiscale processes using a dual-grid approach.

    PubMed

    Shutts, Glenn; Allen, Thomas; Berner, Judith

    2008-07-28

    Some speculative proposals are made for extending current stochastic sub-gridscale parametrization methods using techniques adopted from the field of computer graphics and flow visualization. The idea is to emulate sub-filter-scale physical process organization and time evolution on a fine grid and couple the implied coarse-grained tendencies with a forecast model. A two-way interaction is envisaged so that fine-grid physics (e.g. deep convective clouds) responds to forecast model fields. The fine-grid model may be as simple as a two-dimensional cellular automaton or as computationally demanding as a cloud-resolving model, similar to the coupling strategy envisaged in 'super-parametrization'. Computer codes used in computer games and visualization software illustrate the potential for cheap but realistic simulation, where emphasis is placed on algorithmic stability and visual realism rather than pointwise accuracy in a predictive sense. In an ensemble prediction context, a computationally cheap technique would be essential, and some possibilities are outlined. An idealized proof-of-concept simulation is described, which highlights technical problems such as the nature of the coupling.
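
    A stripped-down sketch of the dual-grid idea (one-dimensional here for brevity, whereas the paper discusses two-dimensional automata): a stochastic cellular automaton plays the role of the fine-grid "physics", and its coarse-grained area fraction is the tendency handed back to the forecast model. The CA rule, grid sizes and probabilities are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(2)

        FINE = 64          # fine-grid cells per coarse cell
        COARSE = 4         # coarse (forecast-model) cells

        # Fine grid: binary "convective" cells evolving by a stochastic CA rule.
        fine = (rng.random(COARSE * FINE) < 0.05).astype(int)

        def ca_step(cells, birth_p, death_p, rng):
            """Cells ignite next to active neighbours and decay at random."""
            nbr = np.roll(cells, 1) + np.roll(cells, -1)
            birth = (nbr > 0) & (rng.random(cells.shape) < birth_p)
            death = rng.random(cells.shape) < death_p
            return np.where(death, 0, cells | birth.astype(int))

        for _ in range(50):
            fine = ca_step(fine, birth_p=0.3, death_p=0.1, rng=rng)

        # Coarse-grained tendency handed back to the forecast model:
        coarse_fraction = fine.reshape(COARSE, FINE).mean(axis=1)
        print("convective area fraction per coarse cell:", coarse_fraction)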

  17. On the stochastic approach to inflation and the initial conditions in the universe

    NASA Astrophysics Data System (ADS)

    Pollock, M. D.

    1988-03-01

    By the application of stochastic methods to a theory in which a potential V(ø) causes a period of quasi-exponential expansion of the universe, an expression for the probability distribution P(V) appropriate for chaotic inflation has recently been derived. The method was developed by Starobinsky and by Linde. Beyond some critical point øc, long-wavelength quantum fluctuations δø ~ H/2π cannot be ignored. The effect of these fluctuations in general relativity, for values of ø such that V(ø) > V(øc), has been considered by Linde, who concluded that most of the present universe arises as a result of the expansion of domains with the maximum possible value of ø, such that V(ømax) ~ mp⁴. We obtain the corresponding expression for P in a broken-symmetry theory of gravity, in which the Newtonian gravitational constant is replaced by G = (8πɛø²)⁻¹, and also for a theory which includes the higher-derivative terms γR² + βR² ln(R/μ²), so that the trace anomaly is Tanom ~ βR², and in which an effective inflaton field øe can be defined as øe² = 24γR. Conclusions analogous to those of Linde can be drawn in both these theories.

  18. Combined reflectance spectroscopy and stochastic modeling approach for noninvasive hemoglobin determination via palpebral conjunctiva

    PubMed Central

    Kim, Oleg; McMurdy, John; Jay, Gregory; Lines, Collin; Crawford, Gregory; Alber, Mark

    2014-01-01

    A combination of a stochastic photon propagation model in multilayered human eyelid tissue and reflectance spectroscopy was used to study palpebral conjunctiva spectral reflectance for hemoglobin (Hgb) determination. The developed model is the first biologically relevant model of eyelid tissue and was shown to provide a very good approximation to the measured spectra. Tissue optical parameters were defined using previous histological and microscopy studies of a human eyelid. After calibration of the model parameters, the responses of the reflectance spectra to Hgb level and blood oxygenation variations were calculated. The simulated reflectance spectra in adults with normal and low Hgb levels agreed well with experimental data for Hgb concentrations from 8.1 to 16.7 g/dL. The extracted Hgb levels were compared with in vitro Hgb measurements. The root mean square error of cross-validation was 1.64 g/dL. The method was shown to provide 86% sensitivity for clinically diagnosed anemia cases. A combination of the model with spectroscopy measurements provides a new tool for noninvasive study of the human conjunctiva to aid in diagnosing blood disorders such as anemia. PMID:24744871

  19. Stochastic approach to diffusion inside the chaotic layer of a resonance.

    PubMed

    Mestre, Martín F; Bazzani, Armando; Cincotta, Pablo M; Giordano, Claudia M

    2014-01-01

    We model chaotic diffusion in a symplectic four-dimensional (4D) map by using the result of a theorem that was developed for stochastically perturbed integrable Hamiltonian systems. We explicitly consider a map defined by a free rotator (FR) coupled to a standard map (SM). We focus on the diffusion process in the action I of the FR, obtaining a seminumerical method to compute the diffusion coefficient. We study two cases corresponding to a thick and a thin chaotic layer in the SM phase space, and we discuss a related conjecture stated in the past. In the first case, the numerically computed probability density function for the action I is well interpolated by the solution of a Fokker-Planck (FP) equation, whereas in the second case it presents a nonconstant time shift with respect to the corresponding FP solution, suggesting the presence of an anomalous diffusion time scale. The explicit calculation of a diffusion coefficient for a 4D symplectic map can be useful for understanding the slow diffusion observed in celestial mechanics and accelerator physics.
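
    For the standard-map building block used here, the simplest numerical estimate of an action diffusion coefficient comes from iterating an ensemble and measuring the growth of the mean-squared action displacement; the kick strength and ensemble sizes below are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(3)

        K = 8.0                      # strongly chaotic regime of the standard map
        N_ORBITS, N_STEPS = 2000, 1000

        theta = rng.uniform(0, 2 * np.pi, N_ORBITS)
        p = rng.uniform(0, 2 * np.pi, N_ORBITS)
        p0 = p.copy()

        for _ in range(N_STEPS):
            p = p + K * np.sin(theta)          # kick
            theta = (theta + p) % (2 * np.pi)  # rotation

        # Ensemble estimate of the diffusion coefficient in the action p:
        # D ~ <(p_n - p_0)^2> / (2 n).
        D_num = ((p - p0) ** 2).mean() / (2 * N_STEPS)
        D_ql = K ** 2 / 4                      # quasilinear prediction for large K
        print(f"D numerical = {D_num:.2f}, quasilinear = {D_ql:.2f}")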

  20. A stochastic approach to uncertainty in the equations of MHD kinematics

    SciTech Connect

    Phillips, Edward G.; Elman, Howard C.

    2015-03-01

    The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.
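
    The Monte Carlo mean/variance step can be conveyed with a drastic simplification of the kinematics model: treat a single Fourier mode of the induced field as decaying diffusively with an uncertain (lognormal) resistivity, and sample. The decay law and the distribution are illustrative stand-ins for the finite element model in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy surrogate for the kinematics model: diffusive decay of one Fourier
        # mode of the induced field, B(t) = B0 * exp(-eta * k^2 * t).
        B0, k, t = 1.0, 2.0, 0.5
        N_MC = 100_000

        # Uncertain resistivity modelled as a lognormal random variable (an assumption).
        eta = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=N_MC)
        B = B0 * np.exp(-eta * k**2 * t)

        print(f"mean(B) = {B.mean():.4f}, var(B) = {B.var():.2e}")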

  1. A Stochastic Foundation of the Approach to Equilibrium of Classical and Quantum Gases

    NASA Astrophysics Data System (ADS)

    Costantini, D.; Garibaldi, U.

    The Ehrenfest urn model is one of the most instructive models in the whole of Physics. It was thought to give a qualitative account of notions like reversibility, periodicity and the tendency to equilibrium. The model, often referred to as the Ehrenfest dog-flea model, is mentioned in almost every textbook on probability, stochastic processes and statistical physics. Ehrenfest's model need not be limited to classical particles: it can be extended to quantum particles. We make this extension in a purely probabilistic way. We do not refer to notions like (in)distinguishability which, in our opinion, have an epistemological and physical status far from clear. For all types of particles, we deduce the equilibrium probabilities in a purely probabilistic way. To accomplish our goal, we start by considering a set of probability conditions. On this basis, we deduce the formulae of the creation and destruction probabilities for classical particles, bosons and fermions. These enable the deduction of the transition probabilities we are interested in. Via the master equation, these transition probabilities enable us to derive the equilibrium distributions.
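
    For the classical case, the urn chain and its binomial equilibrium are easy to verify directly; the sketch below simulates the standard Ehrenfest transition probabilities (particle count and step number are arbitrary choices) and compares the visited-state histogram with the binomial law.

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(5)

        N = 20                    # total number of particles (fleas)
        STEPS = 200_000
        x = N // 2                # particles currently in urn A
        counts = np.zeros(N + 1)

        for _ in range(STEPS):
            # Pick a particle uniformly at random and move it to the other urn:
            # P(x -> x-1) = x/N, P(x -> x+1) = (N-x)/N.
            if rng.random() < x / N:
                x -= 1
            else:
                x += 1
            counts[x] += 1

        empirical = counts / counts.sum()
        theory = binom.pmf(np.arange(N + 1), N, 0.5)   # equilibrium distribution
        print("max |empirical - binomial| =", np.abs(empirical - theory).max())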

  2. A stochastic-dynamical approach to the study of the natural variability of the climate

    NASA Technical Reports Server (NTRS)

    Straus, D. M.; Halem, M.

    1981-01-01

    A method, suggested by Leith (1975), which employed stochastic-dynamic forecasts obtained from a general circulation model in such a way as to satisfy the definition of climatic noise, was used to validate assumptions accounting for the effects of external influences in estimating the climatic noise. Two assumptions were investigated: (1) that the weather fluctuations can be represented as a Markov process, and (2) that changing external conditions do not influence the atmosphere's statistical properties on short time scales. The general circulation model's simulation of the daily weather fluctuations was generated by performing integrations with prescribed climatological boundary conditions for random initial atmospheric states, with the resulting dynamical forecasts providing an ensemble of simulated data for the autoregressive modeling of weather fluctuations. The short time-scale assumption was used to estimate the climatic noise from the observational data (consisting of hourly values of sea level pressure and surface temperature at 54 U.S. stations for the month of January for the years 1949-1975). The simulated and observed data were found not to be consistent with either white noise or a Markov process of weather fluctuations. Good agreement was found between the results of the hypothesis testing of the simulated and the observed surface temperatures, and only partial support was found for the short time-scale assumption, i.e., for sea level pressure.

  3. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.

    PubMed

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-06-17

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emissions and operation cost of the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs under the UCS and the LCS.
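
    The two-stage structure can be illustrated with a deliberately tiny sample-average approximation: the first stage fixes the EV count, the second stage settles each Monte Carlo scenario of renewable output and demand, and the expected cost selects the penetration level. Every number and cost coefficient below is a made-up placeholder, not data from the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        N_SCEN = 5000
        demand = rng.normal(100.0, 10.0, N_SCEN)                  # MWh, hypothetical
        renewable = np.clip(rng.normal(130.0, 30.0, N_SCEN), 0, None)

        N_VEHICLES = 200          # fleet split between EVs and CVs
        EV_CHARGE = 0.3           # MWh of extra load per EV
        CV_EMISSION_COST = 5.0    # cost per remaining CV (emissions proxy)
        EV_COST = 4.7             # amortized cost per EV
        GRID_COST = 2.0           # cost per MWh bought from dispatchable generators

        def expected_cost(n_ev):
            """First stage fixes n_ev; second stage settles each MC scenario."""
            load = demand + EV_CHARGE * n_ev
            shortfall = np.clip(load - renewable, 0.0, None)
            first_stage = CV_EMISSION_COST * (N_VEHICLES - n_ev) + EV_COST * n_ev
            return first_stage + (GRID_COST * shortfall).mean()

        best = min(range(0, N_VEHICLES + 1, 10), key=expected_cost)
        print("best EV penetration:", best, "of", N_VEHICLES,
              "expected cost:", round(expected_cost(best), 1))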

  4. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid

    PubMed Central

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-01-01

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emissions and operation cost of the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs under the UCS and the LCS. PMID:27322281

  5. Stochastic Subspace-Based Structural Identification and Damage Detection and Localisation—Application to the Z24 Bridge Benchmark

    NASA Astrophysics Data System (ADS)

    Mevel, L.; Goursat, M.; Basseville, M.

    2003-01-01

    Numerical results from the application of new stochastic subspace-based structural identification and damage detection and localisation methods to the Z24 concrete bridge of EMPA are discussed. For this benchmark, particular emphasis is put on damage detection and localisation.

  6. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
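
    For reference, the classical SCS-CN event relation that this theory generalizes can be written down in a few lines (depths in inches, with the conventional initial abstraction ratio of 0.2):

        def scs_cn_runoff(P, CN, lam=0.2):
            """Classical SCS-CN event runoff (depths in inches).

            S  = 1000/CN - 10                potential maximum retention
            Ia = lam * S                     initial abstraction (lam = 0.2 by convention)
            Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
            """
            S = 1000.0 / CN - 10.0
            Ia = lam * S
            if P <= Ia:
                return 0.0
            return (P - Ia) ** 2 / (P - Ia + S)

        # Example: a 3-inch storm on a watershed with CN = 80 yields 1.25 inches.
        print(round(scs_cn_runoff(3.0, 80), 2), "inches of runoff")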

  7. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Wang, Jun

    2017-01-01

    We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity with the shortest passage time of duration, and can quantify the investment risk in financial markets. In an attempt to study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed from the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return time series and the VTRI series are investigated. Then, the complexity of the VTRI series of the real markets and of the proposed model is analyzed by Fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply the cross-Fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model exhibits complex behaviors similar to those of the actual markets, and indicate that the proposed VTRI series analysis and the financial model are meaningful and feasible to some extent.
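
    Of the two complexity measures used here, Lempel-Ziv complexity is the simpler to sketch; below is the standard LZ76 phrase-counting algorithm applied to a toy series binarized about its median (the binarization rule and the test data are assumptions for illustration).

        import numpy as np

        def lz76_complexity(s):
            """Count of distinct phrases in the Lempel-Ziv 1976 parsing."""
            n = len(s)
            i, k, l = 0, 1, 1
            k_max, c = 1, 1
            while True:
                if s[i + k - 1] == s[l + k - 1]:
                    k += 1
                    if l + k > n:
                        c += 1
                        break
                else:
                    k_max = max(k, k_max)
                    i += 1
                    if i == l:
                        c += 1
                        l += k_max
                        if l + 1 > n:
                            break
                        i, k, k_max = 0, 1, 1
                    else:
                        k = 1
            return c

        # Binarize a toy "return" series about its median before counting.
        rng = np.random.default_rng(7)
        x = rng.standard_normal(1000)
        bits = "".join("1" if v > np.median(x) else "0" for v in x)
        print("LZ complexity:", lz76_complexity(bits))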

  8. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function-mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process backwards through the chain, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
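
    The two-phase scheme can be condensed into a toy example: an offline backward Dynamic Programming sweep tabulates the optimal control for every discretized incoming state of each process, and the runtime phase is a simple look-up. The state transform, cost terms and grids below are hypothetical stand-ins for the paper's process models.

        import numpy as np

        # Discretized product state after each process, and candidate controls.
        STATES = np.linspace(0.0, 1.0, 21)
        CONTROLS = np.linspace(-0.5, 0.5, 11)
        TARGET = 0.8                                   # required final quality

        def transform(x, u):
            """Hypothetical process model: the control nudges the state."""
            return np.clip(x + u, 0.0, 1.0)

        # Phase 1 (offline): backward sweep storing the optimal control for every
        # possible incoming state of each process in the chain.
        n_proc = 2
        cost_to_go = (STATES - TARGET) ** 2            # terminal quality cost
        policy = np.zeros((n_proc, STATES.size))

        for p in reversed(range(n_proc)):
            new_ctg = np.empty_like(cost_to_go)
            for i, x in enumerate(STATES):
                costs = [u**2 + np.interp(transform(x, u), STATES, cost_to_go)
                         for u in CONTROLS]            # control effort + downstream cost
                j = int(np.argmin(costs))
                policy[p, i], new_ctg[i] = CONTROLS[j], costs[j]
            cost_to_go = new_ctg

        # Phase 2 (runtime): look up the control for the state actually observed.
        x = 0.35
        for p in range(n_proc):
            x = transform(x, np.interp(x, STATES, policy[p]))
        print("final state:", round(x, 3))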

  9. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system.

    PubMed

    Wang, Guochao; Wang, Jun

    2017-01-01

    We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity with the shortest passage time of duration, and can quantify the investment risk in financial markets. In an attempt to study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed from the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return time series and the VTRI series are investigated. Then, the complexity of the VTRI series of the real markets and of the proposed model is analyzed by Fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply the cross-Fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model exhibits complex behaviors similar to those of the actual markets, and indicate that the proposed VTRI series analysis and the financial model are meaningful and feasible to some extent.

  10. A mean field approach to the watershed response under stochastic seasonal forcing

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Rodriguez-Iturbe, I.; Porporato, A. M.

    2016-12-01

    Mean field theory (MFT) is commonly used in statistical physics when modeling the space-time behavior of complex systems. The main premise of MFT is to replace multi-component interactions with an effective interaction with an average (i.e. lumped) field value. Thus, a many-body problem is reduced to a one-body problem. In watershed hydrology, the numerous interactions between watershed points are reduced to points interacting with more tractable watershed (unit area) averages. Through MFT, we consistently link point-scale behavior to lumped (unit area) watershed behavior. We show that MFT links the local rainfall-runoff behavior to the runoff thresholds observed at both the watershed and hillslope scales of experimental catchments. The watershed-scale water balance, which includes the lumped local effects, may be coupled to a probabilistic description of seasonal rainfall. Based on this seasonal description, we find an analytical expression for the distribution of the average (unit area) soil water storage. In turn, this seasonal distribution provides analytical expressions for the seasonal distributions of watershed-scale evapotranspiration and runoff fluxes. Through MFT, we may disaggregate the average (unit area, lumped) fluxes into specific local values explicitly mapped to the watershed area. We map the spatial variation of these fluxes under different seasonal conditions. In comparison to fully distributed models, this approach is a simpler analytical alternative for testing and refining point-scale theories in relation to climatic changes and experimental measurements at the hillslope and watershed scales.

  11. Coupled molecular dynamics-stochastic model for thermal conductivity of ethylene glycol based copper nanofluid.

    PubMed

    Ghosh, M M; Rai, R K

    2014-04-01

    A coupled molecular dynamics (MD)-stochastic simulation based model has been proposed here for the thermal conductivity of ethylene glycol (EG) based copper nanofluid. The model is based on the thermal evolution of the nanoparticles dispersed in the nanofluid, which is in contact with a heat source. The nanoparticles dispersed in the nanofluid move randomly by Brownian motion and repeatedly collide with the heat source. During each collision the nanoparticles extract some heat by the conduction mode from the heat source, and this heat is dissipated to the base fluid during Brownian motion by a combination of conduction and microconvection modes. Thus, in addition to normal conductive heat transfer through the base fluid (EG) itself (without nanoparticles), some amount of heat is transferred by the collision of the nanoparticles with the heat source. The extent of this additional heat transfer has been computed in the present model to estimate the enhancement in thermal conductivity of the EG-based copper nanofluid as a function of the volume fraction loading of nanoparticles. The prediction of the present model has been compared with the experimental data available in the literature, showing reasonable agreement between the theoretical prediction and the experimental data.

  12. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized through local exploration. In the simulations and experiments, two kinds of discrete PSO, the S-PSO and binary PSO (BPSO), and a genetic algorithm (GA) are compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed both the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP.

  13. Stochastic longshore current dynamics

    NASA Astrophysics Data System (ADS)

    Restrepo, Juan M.; Venkataramani, Shankar

    2016-12-01

    We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, a stochastic parametrization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parametrizations are not directly tuned to match the statistics of the observations. Rather, stochastic parametrization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic but can still be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has only a very narrow range of realistic parameters leading to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that the loss of certainty inherent in the model due to its stochastic nature is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.
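
    A minimal sketch of the hybrid construction: a deterministic relaxation law for the current is augmented with an Ornstein-Uhlenbeck term standing in for the "missing physics", and integrating many realizations yields the ensemble statistics. The relaxation law, noise parameters and units are invented for illustration and are not the paper's model.

        import numpy as np

        rng = np.random.default_rng(8)

        # Deterministic core: relaxation of the longshore current v toward a
        # forcing balance v_eq; "missing physics" enters as an OU perturbation.
        v_eq, tau = 0.5, 2.0          # m/s equilibrium, hours relaxation time
        sigma, tau_n = 0.1, 1.0       # noise stationary std and correlation time
        dt, n_steps, n_ens = 0.05, 2000, 200

        v = np.zeros(n_ens)
        eta = np.zeros(n_ens)
        for _ in range(n_steps):
            # Euler-Maruyama step of the OU noise (stationary std = sigma) ...
            eta += (-eta / tau_n) * dt + sigma * np.sqrt(2 * dt / tau_n) * rng.standard_normal(n_ens)
            # ... driving the hybrid deterministic-plus-stochastic current model.
            v += ((v_eq - v) / tau + eta) * dt

        print(f"ensemble mean = {v.mean():.3f} m/s, spread = {v.std():.3f} m/s")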

  14. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-12-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability, complementing the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with the maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  15. Drought-related leaf phenology in tropical forests - Insights from a stochastic eco-hydrological approach

    NASA Astrophysics Data System (ADS)

    Vico, G.; Feng, X.; Dralle, D.; Thompson, S. E.; Manzoni, S.

    2016-12-01

    Drought deciduousness is a common phenological strategy for coping with water shortages during periodic dry spells or during the dry season in tropical forests. On the one hand, shedding leaves allows trees to avoid drought stress, but it implies leaf construction costs that evergreen species need to sustain less frequently. On the other hand, maintaining leaves during dry periods requires stable water sources, traits enabling leaves to remain active at low water potential, and carbon stores to sustain respiration costs in periods with little carbon uptake. Which of these strategies is the most competitive ultimately depends on the balance of carbon costs and gains in the long term. In turn, this balance is affected by the hydro-climatic conditions, in terms of both the length of the dry season and random rainfall occurrences during the wet season. To address the question of which hydro-climatic conditions favor the drought-deciduous vs. the evergreen leaf habit in tropical forests, we develop a stochastic eco-hydrological framework that provides probability density functions of the long-term carbon gain in tropical trees with a range of phenological strategies. From these distributions we compute the long-term mean carbon gain and use it as a measure of fitness and thus reproductive success. Finally, this measure is used to assess which phenological strategies are evolutionarily stable, providing an objective criterion to predict how likely a species with a certain phenological strategy is to invade a community dominated by another strategy. In general, we find that the deciduous habit is evolutionarily stable in more unpredictable climates for a given total rainfall, and in drier climates. However, a minimum annual rainfall is required for any strategy to achieve a positive carbon gain.

  16. Microbial and Organic Fine Particle Transport Dynamics in Streams - a Combined Experimental and Stochastic Modeling Approach

    NASA Astrophysics Data System (ADS)

    Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron

    2014-05-01

    Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling

  17. Influence of initial reservoir level and gate failure in dam safety analysis. Stochastic approach

    NASA Astrophysics Data System (ADS)

    Gabriel-Martin, Ivan; Sordo-Ward, Alvaro; Garrote, Luis; Castillo, Luis G.

    2017-07-01

    This study proposes a stochastic methodology to assess the influence of considering variable reservoir levels prior to the arrival of floods on hydrological dam safety, introducing probabilities associated with gate-failure scenarios. The methodology was applied to the Riaño dam (northern Spain) by analyzing the effects of incoming floods with return periods ranging from one to 10,000 years. We studied four scenarios with different gate-failure rates and compared the results obtained assuming an initial reservoir level equal to the maximum level allowed under normal operating conditions with those obtained considering variable initial reservoir levels. The ratio of the return periods associated with different reference levels reached in the reservoir, considering variable over constant initial levels, ranged from 2.0 to 4.1. The ratio of the return periods obtained assuming gate failure versus no failure for the same reference reservoir level ranged up to 93, 160 and 240, depending on the gate-failure rate assigned. The ratio of the return periods associated with different maximum spillway discharges, considering variable over constant initial reservoir levels, ranged from 2.5 to 6.1. However, the ratio of the return periods obtained assuming gate failure versus no failure for the same discharge ranged from 0.7 to 1.1, showing no influence of gate failure. For the study case, our analysis highlighted the importance of considering the fluctuation of initial reservoir levels and different gate-failure scenarios, emphasizing that the return periods of the maximum levels reached in the reservoir and of the maximum outflows are the variables that best represent dam and downstream hydrological safety.
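
    The structure of such an analysis can be conveyed with a deliberately crude Monte Carlo sketch: sample a yearly flood, a variable initial level and a Bernoulli gate-failure indicator, route them through a toy storage relation, and read return periods off the exceedance frequencies. Every distribution and coefficient below is a placeholder, not Riaño data.

        import numpy as np

        rng = np.random.default_rng(9)

        N = 200_000
        # Hypothetical yearly maxima: flood volume (hm^3), variable initial
        # level (m), and a Bernoulli gate-failure indicator.
        flood = rng.gumbel(loc=50.0, scale=15.0, size=N)
        z0 = rng.uniform(95.0, 100.0, N)            # initial reservoir level, m
        gate_fail = rng.random(N) < 0.01            # per-event failure probability

        # Toy routing: each working gate lowers the peak by a fixed equivalent head.
        relief = np.where(gate_fail, 0.5, 2.0)      # m of level reduction
        z_max = z0 + 0.05 * flood - relief          # hypothetical storage-level relation

        def return_period(level):
            p_exceed = (z_max > level).mean()
            return np.inf if p_exceed == 0 else 1.0 / p_exceed

        for z in (100.0, 101.0, 102.0):
            print(f"T(max level > {z} m) ~ {return_period(z):.0f} years")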

  18. An Individual-Based Diploid Model Predicts Limited Conditions Under Which Stochastic Gene Expression Becomes Advantageous

    PubMed Central

    Matsumoto, Tomotaka; Mineta, Katsuhiko; Osada, Naoki; Araki, Hitoshi

    2015-01-01

    Recent studies suggest the existence of stochasticity in gene expression (SGE) in many organisms, and its non-negligible effect on their phenotype and fitness. To date, however, how SGE affects the key parameters of population genetics is not well understood. SGE can increase the phenotypic variation and act as a load on individuals if they are at the adaptive optimum in a stable environment. On the other hand, part of the phenotypic variation caused by SGE might become advantageous if individuals at the adaptive optimum become genetically less adaptive, for example due to an environmental change. Furthermore, SGE of unimportant genes might have little or no fitness consequence. Thus, SGE can be advantageous, disadvantageous, or selectively neutral depending on its context. In addition, there might be a genetic basis that regulates the magnitude of SGE, often referred to as "modifier genes", but little is known about the conditions under which such an SGE-modifier gene evolves. In the present study, we conducted individual-based computer simulations to examine these conditions in a diploid model. In the simulations, we considered a single locus that determines organismal fitness, for simplicity, with SGE at that locus creating fitness variation in a stochastic manner. We also considered another locus that modifies the magnitude of SGE. Our results suggest that SGE was always deleterious in stable environments and increased the fixation probability of deleterious mutations in this model. Even under frequently changing environmental conditions, only very strong natural selection made SGE adaptive. These results suggest that the evolution of SGE-modifier genes requires a strict balance among the strength of natural selection, the magnitude of SGE, and the frequency of environmental changes. However, the degree of dominance affected the conditions under which SGE becomes advantageous, indicating a better opportunity for the evolution of SGE in different genetic
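
    The effect of expression noise on the fate of a deleterious mutation can be probed with a much-simplified haploid Wright-Fisher sketch (the paper's model is diploid and individual-based; the lognormal noise model, population size and selection coefficient here are arbitrary assumptions). Comparing the two printed fractions indicates how expression noise changes the fixation probability in this toy setting.

        import numpy as np

        rng = np.random.default_rng(10)

        def fixation_fraction(N=200, s=-0.01, sge_sd=0.0, trials=2000):
            """Fraction of runs in which a single mutant fixes.

            Each generation, every genotype's realized fitness is its
            genotypic fitness times a lognormal expression-noise factor
            of log-sd `sge_sd` (a crude stand-in for SGE).
            """
            fixed = 0
            for _ in range(trials):
                n_mut = 1
                while 0 < n_mut < N:
                    w_mut = (1.0 + s) * rng.lognormal(0.0, sge_sd)
                    w_wt = 1.0 * rng.lognormal(0.0, sge_sd)
                    p = n_mut * w_mut / (n_mut * w_mut + (N - n_mut) * w_wt)
                    n_mut = rng.binomial(N, p)
                fixed += (n_mut == N)
            return fixed / trials

        print("no SGE  :", fixation_fraction(sge_sd=0.0))
        print("with SGE:", fixation_fraction(sge_sd=0.5))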

  19. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle-material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.

  20. Microscale characterisation of stochastically reconstructed carbon fiber-based Gas Diffusion Layers; effects of anisotropy and resin content

    NASA Astrophysics Data System (ADS)

    Yiotis, Andreas G.; Kainourgiakis, Michael E.; Charalambopoulou, Georgia C.; Stubos, Athanassios K.

    2016-07-01

    A novel process-based methodology is proposed for the stochastic reconstruction and accurate characterisation of carbon-fiber-based matrices, which are commonly used as Gas Diffusion Layers in Proton Exchange Membrane Fuel Cells. The modeling approach efficiently complements standard methods used for the description of the anisotropic deposition of carbon fibers with a rigorous model simulating the spatial distribution of the graphitized resin that is typically used to enhance the structural properties and thermal/electrical conductivities of the composite Gas Diffusion Layer materials. The model uses as input typical pore- and continuum-scale properties (average porosity, fiber diameter, resin content and anisotropy) of such composites, which are obtained from X-ray computed microtomography measurements on commercially available carbon papers. This information is then used for the digital reconstruction of realistic composite fibrous matrices. By solving the corresponding conservation equations at the microscale in the obtained digital domains, their effective transport properties, such as Darcy permeabilities, effective diffusivities, thermal/electrical conductivities and void tortuosity, are determined, focusing primarily on the effects of medium anisotropy and resin content. The calculated properties match very well with those of Toray carbon papers for reasonable values of the model parameters that control the anisotropy of the fibrous skeleton and the material's resin content.

  1. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have recently been derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  2. A data-driven model of the generation of human EEG based on a spatially distributed stochastic wave equation.

    PubMed

    Galka, Andreas; Ozaki, Tohru; Muhle, Hiltrud; Stephani, Ulrich; Siniatchkin, Michael

    2008-06-01

    We discuss a model for the dynamics of the primary current density vector field within the grey matter of the human brain. The model is based on a linear damped wave equation, driven by a stochastic term. By employing a realistically shaped average brain model and an estimate of the matrix which maps the primary currents distributed over grey matter to the electric potentials at the surface of the head, the model can be put into relation with recordings of the electroencephalogram (EEG). Through this step it becomes possible to employ EEG recordings for the purpose of estimating the primary current density vector field, i.e., finding a solution of the inverse problem of EEG generation. As a technique for inferring the unobserved high-dimensional primary current density field from EEG data of much lower dimension, a linear state space modelling approach is suggested, based on a generalisation of Kalman filtering in combination with maximum-likelihood parameter estimation. The resulting algorithm for estimating dynamical solutions of the EEG inverse problem is applied to the task of localising the source of an epileptic spike from a clinical EEG data set; for comparison, we also apply a non-dynamical standard algorithm to the same task.
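
    The state-space machinery generalized here reduces, in the scalar linear-Gaussian case, to the textbook Kalman filter; the following self-contained toy (an AR(1) hidden source observed in noise, with invented coefficients) shows the predict-correct cycle on which such dynamical inverse solutions are built.

        import numpy as np

        rng = np.random.default_rng(11)

        # Toy linear state-space model: x_t = a x_{t-1} + w, y_t = h x_t + v.
        a, h = 0.95, 1.0
        q, r = 0.1, 0.5                      # process / observation noise variances

        # Simulate "EEG-like" observations of a hidden AR(1) source amplitude.
        T = 300
        x = np.zeros(T); y = np.zeros(T)
        for t in range(1, T):
            x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
            y[t] = h * x[t] + np.sqrt(r) * rng.standard_normal()

        # Kalman filter: predict, then correct with the innovation.
        xf, P = 0.0, 1.0
        est = np.zeros(T)
        for t in range(T):
            xp, Pp = a * xf, a * P * a + q               # prediction
            K = Pp * h / (h * Pp * h + r)                # Kalman gain
            xf = xp + K * (y[t] - h * xp)                # measurement update
            P = (1 - K * h) * Pp
            est[t] = xf

        print("RMSE:", np.sqrt(np.mean((est - x) ** 2)))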

  3. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-01-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have recently been derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  4. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in their threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining the low computational cost and ease of implementation compared to other conductance-based and Markovian stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.

  5. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of the SUPG over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  6. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  7. Stochastic P-bifurcation and stochastic resonance in a noisy bistable fractional-order system

    NASA Astrophysics Data System (ADS)

    Yang, J. H.; Sanjuán, Miguel A. F.; Liu, H. G.; Litak, G.; Li, X.

    2016-12-01

    We investigate the stochastic response of a noisy bistable fractional-order system when the fractional order lies in the interval (0, 2]. We focus mainly on the stochastic P-bifurcation and the phenomenon of stochastic resonance. We compare the generalized Euler algorithm and the predictor-corrector approach, which are commonly used for numerical calculations of fractional-order nonlinear equations. Based on the predictor-corrector approach, the stochastic P-bifurcation and the stochastic resonance are investigated. Both the fractional-order value and the noise intensity can induce a stochastic P-bifurcation. The fractional order may lead the stationary probability density function to turn from a single-peak mode to a double-peak mode, whereas the noise intensity may transform the stationary probability density function from a double-peak mode to a single-peak mode. The stochastic resonance is investigated thoroughly, according to the linear and the nonlinear response theory. In the linear response theory, the optimal stochastic resonance may occur when the value of the fractional order is larger than one. In previous works, the fractional order was usually limited to the interval (0, 1]. Moreover, the stochastic resonance at the subharmonic frequency and at the superharmonic frequency is investigated using the nonlinear response theory. When it occurs at the subharmonic frequency, the resonance may be strong and cannot be ignored. When it occurs at the superharmonic frequency, the resonance is weak. We believe that the results in this paper might be useful for the signal processing of nonlinear systems.

  8. Combined Deterministic and Stochastic Approach to Determine Spatial Distribution of Drought Frequency and Duration in the Great Hungarian Plain

    NASA Astrophysics Data System (ADS)

    Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.

    2009-04-01

    Drought is one of the major weather-driven natural hazards, with more harmful impacts on the environment, agriculture and hydrology than many other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", an area which has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently, and to provide guidance on how drought planning can be accomplished. With regard to this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. According to these aims, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it has been agreed that the most appropriate model structure for our purposes is a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm to estimate multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, and the vertical and horizontal distribution of soil moisture), taking the groundwater table as the lower and the hydrometeorological fields as the upper boundary condition; - and the MCMC-based stochastic component generates time series of daily weather
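
    The stochastic weather component can be illustrated by the simplest member of this model family, a two-state (wet/dry) Markov chain with exponential wet-day depths; the transition probabilities and mean depth below are invented placeholders, not calibrated Hungarian values.

        import numpy as np

        rng = np.random.default_rng(12)

        P_WW, P_DW = 0.65, 0.25    # P(wet | wet yesterday), P(wet | dry yesterday)
        MEAN_DEPTH = 6.0           # mm on a wet day (hypothetical)

        def daily_rainfall(n_days, rng):
            """Two-state Markov chain with exponential wet-day depths."""
            wet = False
            out = np.zeros(n_days)
            for d in range(n_days):
                wet = rng.random() < (P_WW if wet else P_DW)
                if wet:
                    out[d] = rng.exponential(MEAN_DEPTH)
            return out

        series = daily_rainfall(365 * 50, rng)        # 50 synthetic years
        dry_gaps = np.diff(np.flatnonzero(series > 0))
        print("mean annual rain:", round(series.sum() / 50, 1), "mm;",
              "longest dry spell:", dry_gaps.max() - 1, "days")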

  9. Ising computation based combinatorial optimization using spin-Hall effect (SHE) induced stochastic magnetization reversal

    NASA Astrophysics Data System (ADS)

    Shim, Yong; Jaiswal, Akhilesh; Roy, Kaushik

    2017-05-01

    The Ising spin model is considered an efficient computing method for solving combinatorial optimization problems, based on its natural tendency to converge towards a low-energy state. The basic functions underlying the Ising model can be categorized into two parts: annealing and majority vote. In this paper, we propose an Ising cell based on Spin Hall Effect (SHE) induced magnetization switching in a Magnetic Tunnel Junction (MTJ). The stochasticity of our proposed Ising cell, which stems from the SHE-induced MTJ switching, can implement the natural annealing process by preventing the system from getting stuck in local minima. Further, by controlling the current through the Heavy Metal (HM) underlying the MTJ, we can mimic the majority-vote function, which determines the next state of the individual spins. By solving coupled Landau-Lifshitz-Gilbert equations, we demonstrate that our Ising cell can be replicated to map certain combinatorial problems. We present results for two representative problems, Maximum-Cut and Graph Coloring, to illustrate the feasibility of the proposed device-circuit configuration in solving combinatorial problems. Our proposed solution using an HM-based MTJ device can be exploited to implement compact, fast, and energy-efficient Ising spin models.
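
    A software analogue of the annealing-plus-majority-vote dynamics described above is plain simulated annealing on an Ising energy. The sketch below maps Max-Cut onto an Ising model and anneals it; the graph, temperature schedule, and update rule are illustrative assumptions and do not model the SHE-MTJ device physics.

```python
import numpy as np

# Simulated-annealing Ising solver for Max-Cut. Mapping: every edge (i, j) gets
# coupling J_ij = +1, so the energy E = sum_{(i,j)} s_i * s_j equals
# (#uncut edges) - (#cut edges); minimizing E maximizes the cut.

def max_cut_anneal(edges, n_nodes, n_sweeps=2000, t_start=2.0, t_end=0.01, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n_nodes)
    neighbors = [[] for _ in range(n_nodes)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for T in np.geomspace(t_start, t_end, n_sweeps):
        for i in rng.permutation(n_nodes):
            # local field: the "majority vote" of the neighboring spins
            h = sum(s[j] for j in neighbors[i])
            dE = -2.0 * s[i] * h          # energy change if spin i flips
            if dE < 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]              # stochastic flip plays the role of annealing
    cut = sum(1 for i, j in edges if s[i] != s[j])
    return s, cut

# Example: a 5-node ring graph; the maximum cut contains 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
spins, cut_size = max_cut_anneal(edges, n_nodes=5)
print(spins, cut_size)
```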

  10. A Markovian event-based framework for stochastic spiking neural networks.

    PubMed

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to describe the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. Where the Markovian model can be developed, the transition probability is explicitly derived for such classical neural network models as linear integrate-and-fire neurons with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
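
    To make the event-based viewpoint concrete, the sketch below samples interspike intervals of a noisy leaky integrate-and-fire neuron by simulating the membrane SDE to threshold with Euler-Maruyama: given the post-spike reset state, the next spike time depends only on that state, which is the Markov property exploited above. All parameters are illustrative assumptions.

```python
import numpy as np

# Event-based sketch: advance the network from spike to spike by sampling the
# first-passage time of a noisy leaky integrate-and-fire (LIF) membrane,
#   dV = (mu - V)/tau dt + sigma dW,
# to the threshold v_th. Parameter values are illustrative assumptions.

def next_spike_time(v0, mu=1.5, tau=20.0, sigma=0.5, v_th=1.0,
                    dt=0.1, t_max=1000.0, rng=None):
    """Sample the first time a membrane started at v0 crosses v_th."""
    rng = rng or np.random.default_rng()
    v, t = v0, 0.0
    sq = sigma * np.sqrt(dt)
    while t < t_max:
        v += (mu - v) / tau * dt + sq * rng.standard_normal()
        t += dt
        if v >= v_th:
            return t
    return np.inf

# The interspike intervals define the transition structure of the spike-time
# Markov chain: the next spike time depends only on the post-spike reset state.
rng = np.random.default_rng(42)
isis = [next_spike_time(v0=0.0, rng=rng) for _ in range(1000)]
print(np.mean(isis), np.std(isis))
```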

  11. A spin-based true random number generator exploiting the stochastic precessional switching of nanomagnets

    NASA Astrophysics Data System (ADS)

    Rangarajan, Nikhil; Parthasarathy, Arun; Rakheja, Shaloo

    2017-06-01

    In this paper, we propose a spin-based true random number generator (TRNG) that uses the inherent stochasticity in nanomagnets as the source of entropy. In contrast to previous works on spin-based TRNGs, we focus on the precessional switching strategy in nanomagnets to generate a truly random sequence. Using the NIST SP 800-22 test suite for randomness, we demonstrate that the output of the proposed TRNG circuit is statistically random at a 99% confidence level. The effects of process and temperature variability are studied and shown not to degrade the quality of the device's randomness. To benchmark the performance of the TRNG in terms of area, throughput, and power, we use SPICE (Simulation Program with Integrated Circuit Emphasis) based models of the nanomagnet and combine them with CMOS device models at the 45 nm technology node. The throughput, power, and area footprints of the proposed TRNG are shown to be better than those of existing state-of-the-art TRNGs. We identify the optimal material and geometrical parameters of the nanomagnet to minimize the energy per bit at a given throughput of the TRNG circuit. Our results provide insights into the device-level modifications that can yield significant system-level improvements. Overall, the proposed spin-based TRNG circuit shows significant robustness, reliability, and fidelity, and therefore has potential for on-chip implementation.
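
    For reference, the simplest member of the cited NIST SP 800-22 suite is the monobit frequency test, sketched below. The full suite contains many more tests, and the bit source here is a software stand-in, not the proposed device.

```python
import math
import numpy as np

# NIST SP 800-22 monobit frequency test: for a truly random sequence the
# normalized difference between ones and zeros behaves like a standard normal
# sample. This is only one of the suite's tests; passing it alone is necessary
# but far from sufficient evidence of randomness.

def monobit_p_value(bits):
    s = np.sum(2 * np.asarray(bits) - 1)          # map {0,1} -> {-1,+1} and sum
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2.0))      # p >= 0.01 passes at 99%

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=1_000_000)         # stand-in for TRNG output
print(monobit_p_value(bits))
```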

  12. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We show that the result of one SKMF run may correspond to the average of several KMC runs, with the number of KMC runs inversely proportional to the square of the noise amplitude in SKMF. This also makes SKMF an ideal tool for statistical purposes.
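
    A heavily simplified illustration of the SKMF idea follows: deterministic mean-field exchange fluxes between neighboring atomic planes plus a dynamic Langevin noise term on each flux. The 1D geometry, uniform jump frequency, and noise scaling below are illustrative assumptions; the authors' open-source code at http://skmf.eu implements the full 3D model.

```python
import numpy as np

# Toy 1D sketch of the SKMF idea for a binary alloy: mean-field exchange
# fluxes between neighboring atomic planes plus dynamic Langevin noise.
# Jump frequency, time step, and noise amplitude are illustrative assumptions.

def skmf_step(c, gamma_jump=1.0, dt=1e-3, noise_amp=0.05, rng=None):
    """One explicit time step for plane concentrations c (values in (0, 1))."""
    rng = rng or np.random.default_rng()
    flux = np.zeros(len(c) - 1)
    for i in range(len(c) - 1):
        # net mean-field exchange between planes i and i+1
        det = gamma_jump * (c[i] * (1.0 - c[i + 1]) - c[i + 1] * (1.0 - c[i]))
        # dynamic Langevin noise; its amplitude controls how many averaged
        # KMC runs a single SKMF trajectory effectively corresponds to
        noise = noise_amp * rng.standard_normal() / np.sqrt(dt)
        flux[i] = det + noise
    c_new = c.copy()
    c_new[:-1] -= flux * dt
    c_new[1:] += flux * dt
    return np.clip(c_new, 1e-9, 1.0 - 1e-9)

# Example: interdiffusion across an initially sharp A/B interface.
c = np.concatenate([np.full(20, 0.9), np.full(20, 0.1)])
rng = np.random.default_rng(3)
for _ in range(5000):
    c = skmf_step(c, rng=rng)
print(c.round(2))
```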

  13. Impact of correlated magnetic noise on the detection of stochastic gravitational waves: Estimation based on a simple analytical model

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Taruya, Atsushi

    2017-07-01

    After the first direct detection of gravitational waves (GWs), detection of the stochastic background of GWs is an important next step, and the first GW event suggests that it is within reach of the second-generation ground-based GW detectors. Such a GW signal is typically tiny and can be detected by cross-correlating the data from two spatially separated detectors, provided the detector noise is uncorrelated. It has been pointed out, however, that the global magnetic fields in the Earth-ionosphere cavity produce environmental disturbances in low-frequency bands, known as Schumann resonances, which can potentially couple with GW detectors. In this paper, we present a simple analytical model to estimate their impact on the detection of stochastic GWs. The model depends crucially on the geometry of the detector pair through the directional coupling, and we investigate the basic properties of the correlated magnetic noise based on the analytic expressions. The model reproduces the major trend of the recently measured global correlation between GW detectors via magnetometers, and the estimated impact of the correlated noise matches that obtained from the measurement. Finally, we discuss the implications for the detection of stochastic GWs with upcoming detectors, KAGRA and LIGO India. The model suggests that the LIGO Hanford-Virgo and Virgo-KAGRA pairs are likely less sensitive to the correlated noise and can achieve a better sensitivity to the stochastic GW signal in the most pessimistic case.
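
    The sketch below illustrates, with synthetic data in arbitrary units, why correlated noise matters for such searches: the standard cross-correlation estimator is biased by any noise component common to both detectors, while independent noise averages away. All amplitudes and coupling constants are illustrative assumptions.

```python
import numpy as np

# Synthetic illustration (arbitrary units) of the correlated-noise bias in a
# stochastic-background search: the cross-correlation estimator picks up any
# noise component common to both detectors, such as globally coherent
# Schumann-resonance fields coupling into each instrument.

rng = np.random.default_rng(11)
n = 2_000_000
gw = 0.1 * rng.standard_normal(n)            # common stochastic GW signal
schumann = 0.05 * rng.standard_normal(n)     # globally correlated magnetic noise
kappa1, kappa2 = 1.0, 0.7                    # detector-specific magnetic couplings
s1 = gw + kappa1 * schumann + rng.standard_normal(n)   # detector 1 output
s2 = gw + kappa2 * schumann + rng.standard_normal(n)   # detector 2 output

estimate = np.mean(s1 * s2)                  # naive cross-correlation estimator
expected = np.var(gw) + kappa1 * kappa2 * np.var(schumann)
print(estimate, expected)                    # GW power plus correlated-noise bias
```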

  14. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    NASA Astrophysics Data System (ADS)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using the DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with a bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with the Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take in-flight uncertainties into account for robust intake performance. In this study, we focus on the effects of geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a lightweight analytical model, a radial basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using Sobol's analysis of variance (ANOVA) technique. This approach makes it possible to perform a generalized multi-input multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
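
    A minimal sketch of the two-stage procedure, with a toy test function standing in for the RANS solver, might look as follows: fit a Gaussian RBF surrogate to a small design of expensive evaluations, then estimate first-order Sobol indices by Monte Carlo on the cheap surrogate. Sample sizes, the kernel width, and the tiny ridge term are illustrative assumptions.

```python
import numpy as np

# Stage 1: Gaussian radial-basis-function (RBF) surrogate fitted to a small set
# of "expensive" evaluations. Stage 2: Saltelli-style Monte Carlo estimate of
# first-order Sobol indices on the surrogate. Everything below is a toy sketch.

def fit_rbf(X, y, eps=1.0):
    """Gaussian RBF interpolant; a tiny ridge keeps the solve well-conditioned."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-eps * d2) + 1e-10 * np.eye(len(y)), y)
    return lambda Z: np.exp(-eps * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

def sobol_first_order(model, dim, n=20_000, rng=None):
    """First-order Sobol indices on [0, 1]^dim via the Saltelli estimator."""
    rng = rng or np.random.default_rng()
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / total_var
    return S

def expensive_sim(X):                        # toy stand-in for the RANS solver
    return np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2

rng = np.random.default_rng(5)
X_train = rng.random((60, 2))                # small "adaptively sampled" design
surrogate = fit_rbf(X_train, expensive_sim(X_train))
print(sobol_first_order(surrogate, dim=2, rng=rng))   # input 1 should dominate
```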

  15. Hybrid stochastic ground motion modeling of the Mw 7.8 Gorkha, Nepal earthquake of 2015 based on InSAR inversion

    NASA Astrophysics Data System (ADS)

    Shen, Wenhao; Li, Yongsheng; Zhang, Jingfa

    2017-06-01

    We derive the coseismic slip distribution on a fault for the 2015, Mw 7.8 Gorkha earthquake based on ALOS-2 wide scan data and the inversion code SDM (Steepest Descent Method). The results show that the maximum slip is 4.7 m, and the total released seismic moment is 6.02 × 10²⁰ N m, equivalent to an earthquake of Mw ∼7.82. Static stress and slip heterogeneity analyses show that both the average stress drop and the corner wavenumber are at a low level. Additionally, we model the observed impulsive behavior at the near-source KATNP station using a hybrid stochastic approach, which combines an analytical approach at low frequencies with a stochastic approach at high frequencies. The good agreement between the hybrid modeling and the observed records reveals that the input parameters, such as stress drop and slip distribution, are suitable for the Gorkha earthquake. The success of the modeling indicates that, in addition to the smooth onset of the STF (slip-rate time function), the low stress drop and low degree of slip heterogeneity are also responsible for the low level of high-frequency ground motion during the Gorkha earthquake.
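
    The hybrid combination step can be illustrated as follows: merge a low-frequency deterministic synthetic with a high-frequency stochastic synthetic using complementary low-/high-pass filters at a crossover frequency. The toy input signals, the 1 Hz crossover, and the filter order are illustrative assumptions, not the paper's processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hybrid broadband sketch: a deterministic (analytical) synthetic carries the
# low-frequency ground motion, a stochastic synthetic carries the high
# frequencies, and the two are merged with complementary filters. The toy
# inputs and the 1 Hz crossover are illustrative assumptions.

fs = 100.0                 # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(9)
low_freq_synth = np.sin(2 * np.pi * 0.2 * t) * np.exp(-0.05 * t)   # analytical part
high_freq_synth = rng.standard_normal(t.size) * np.exp(-0.05 * t)  # stochastic part

f_cross = 1.0              # crossover frequency, Hz
b_lo, a_lo = butter(4, f_cross / (fs / 2), btype="low")
b_hi, a_hi = butter(4, f_cross / (fs / 2), btype="high")

# zero-phase filtering keeps the two parts aligned in time before summation
hybrid = filtfilt(b_lo, a_lo, low_freq_synth) + filtfilt(b_hi, a_hi, high_freq_synth)
```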

  16. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    PubMed

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period, and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H0 using different numbers of subjects and of samples per subject, with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, the type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%; type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests; however, caution is needed for small sample sizes and highly variable drugs.
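
    For orientation, the sketch below shows a Wald-type bioequivalence check on the log scale, in the spirit of the NCA-based test: bioequivalence is concluded when the 90% confidence interval of the log geometric-mean ratio lies within [log 0.8, log 1.25]. The simulated crossover data and variance components are illustrative assumptions, and period and sequence effects are ignored in this toy analysis.

```python
import numpy as np
from scipy import stats

# Wald-type bioequivalence check on log(AUC): the 90% CI for the treatment
# difference (log geometric-mean ratio) must lie within [log 0.8, log 1.25],
# equivalent to two one-sided tests at the 5% level. Toy data; period and
# sequence effects are ignored here.

rng = np.random.default_rng(2024)
n = 24                                        # subjects in a 2x2 crossover
subject = rng.normal(0.0, 0.5, n)             # between-subject random effect
log_auc_ref = 5.0 + subject + rng.normal(0, 0.2, n)           # reference period
log_auc_test = 5.0 + 0.02 + subject + rng.normal(0, 0.2, n)   # test period

diff = log_auc_test - log_auc_ref             # within-subject differences
est = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)          # 90% CI <=> two one-sided 5% tests
ci = (est - t_crit * se, est + t_crit * se)

bioequivalent = np.log(0.8) < ci[0] and ci[1] < np.log(1.25)
print(np.exp(est), np.exp(ci), bioequivalent)
```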

  17. Scalable estimation strategies based on stochastic approximations: Classical results and new insights.

    PubMed

    Airoldi, Edoardo M; Toulis, Panos

    2015-07-01

    Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then give an overview of modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become benchmark principled estimation procedures for large data sets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
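
    The distinction between explicit and implicit updates is easiest to see for least squares, where the implicit update has a closed form; the sketch below contrasts the two. The data, dimensions, and learning-rate schedule are illustrative assumptions.

```python
import numpy as np

# Standard vs. implicit stochastic gradient descent (SGD) on least squares.
# For the squared loss, the implicit update
#   theta_{n+1} = theta_n - gamma_n * grad_loss(theta_{n+1})
# has the closed form used below: the step is automatically damped by the
# factor 1/(1 + gamma_n * ||x||^2), which is what makes implicit SGD robust
# to the learning-rate choice. Data and schedule are illustrative assumptions.

rng = np.random.default_rng(0)
d, n_obs = 10, 50_000
theta_true = rng.standard_normal(d)
X = rng.standard_normal((n_obs, d))
y = X @ theta_true + 0.1 * rng.standard_normal(n_obs)

theta_sgd = np.zeros(d)
theta_isgd = np.zeros(d)
for n in range(n_obs):
    x, yn = X[n], y[n]
    gamma = 0.1 / (1.0 + 0.01 * n)            # decaying learning rate
    # standard (explicit) SGD update
    theta_sgd += gamma * (yn - x @ theta_sgd) * x
    # implicit SGD update, solved in closed form for the squared loss
    theta_isgd += gamma / (1.0 + gamma * (x @ x)) * (yn - x @ theta_isgd) * x

print(np.linalg.norm(theta_sgd - theta_true),
      np.linalg.norm(theta_isgd - theta_true))
```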

  18. Identifying influential nodes in dynamic social networks based on degree-corrected stochastic block model

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun

    2016-05-01

    Many real-world data can be represented as dynamic networks, i.e., evolving networks with timestamps. Analyzing dynamic attributes is important for understanding the structures and functions of these complex networks. In particular, studying influential nodes is significant for exploring and analyzing networks. In this paper, we propose a method to identify influential nodes in dynamic social networks by identifying such nodes in the temporal communities which make up the dynamic networks. Firstly, we detect the community structures of all the snapshot networks based on the degree-corrected stochastic block model (DCBM). After obtaining the community structures, we capture the evolution of every community in the dynamic network by an extended Jaccard coefficient defined to map communities across all the snapshot networks. Then we obtain the initial influential nodes of the dynamic network and aggregate them based on three widely used centrality metrics. Experiments on real-world and synthetic datasets demonstrate that our method can identify influential nodes in dynamic networks accurately; at the same time, we also observe some interesting phenomena and reach conclusions consistent with findings previously validated in complex-network and social-science research.
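
    The community-mapping step can be illustrated with a minimal Jaccard matcher between consecutive snapshots, as below; the paper's extended Jaccard coefficient may differ in detail, and the toy communities and threshold are illustrative assumptions.

```python
# Map communities across network snapshots with the Jaccard coefficient: each
# community in snapshot t is matched to the community in snapshot t+1 with
# which it shares the largest fraction of nodes (or to None if the overlap is
# too small). Toy data; the paper's extended coefficient may differ in detail.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def map_communities(comms_t, comms_t1, threshold=0.3):
    """Return {community index at t: best-matching index at t+1 (or None)}."""
    mapping = {}
    for i, c in enumerate(comms_t):
        scores = [jaccard(c, c1) for c1 in comms_t1]
        best = max(range(len(scores)), key=scores.__getitem__)
        mapping[i] = best if scores[best] >= threshold else None
    return mapping

snap_t = [{1, 2, 3, 4}, {5, 6, 7}]
snap_t1 = [{2, 3, 4, 8}, {5, 6, 9}, {10, 11}]
print(map_communities(snap_t, snap_t1))   # {0: 0, 1: 1}
```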

  19. Definition of efficient scarcity-based water pricing policies through stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.

    2015-09-01

    Finding ways to improve the efficiency of water use is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares River basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
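
    The post-processing step can be illustrated as follows: bin a time series of marginal water values by a system-state variable (here, reservoir storage fraction) and average within bins to obtain a step pricing policy. The synthetic marginal values, storage bands, and averaging rule below are illustrative assumptions, not the study's hydro-economic model output.

```python
import numpy as np

# Post-process marginal water values into a step pricing policy: one price per
# storage band. The synthetic series (marginal value rising as storage falls)
# and the band edges are illustrative assumptions.

rng = np.random.default_rng(8)
n_months = 600
storage = rng.uniform(0.05, 1.0, n_months)              # storage fraction
marginal_value = 0.5 / storage + rng.normal(0, 0.2, n_months)   # scarcity value

bands = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.01])       # storage bands
prices = np.empty(len(bands) - 1)
for k in range(len(bands) - 1):
    in_band = (storage >= bands[k]) & (storage < bands[k + 1])
    prices[k] = marginal_value[in_band].mean()           # step price for band k

def step_price(s):
    """Scarcity-based price charged when the storage fraction is s."""
    return prices[np.searchsorted(bands, s, side="right") - 1]

print(prices)            # step prices decrease as storage increases
print(step_price(0.15))  # highest price during scarcity
```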

  1. A stochastic HMM-based forecasting model for fuzzy time series.

    PubMed

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model, enhancing Sullivan and Woodall's work to handle two-factor forecasting problems. Moreover, to reflect the conjectural and random nature of forecasting more realistically, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index, forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to t