Stochastic Turing patterns: analysis of compartment-based approaches.
Cao, Yang; Erban, Radek
2014-12-01
Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.
Stochastic Functional Data Analysis: A Diffusion Model-based Approach
Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.
2011-01-01
Summary This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov Chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate specific antigen data, where we also show how the models can be used for forecasting. PMID:21418053
Jiménez-Hernández, Hugo; González-Barbosa, Jose-Joel; Garcia-Ramírez, Teresa
2010-01-01
This investigation demonstrates an unsupervised approach for modeling traffic flow and detecting abnormal vehicle behaviors at intersections. In the first stage, the approach reveals and records the different states of the system. These states are the result of coding and grouping the historical motion of vehicles as long binary strings. In the second stage, using sequences of the recorded states, a stochastic graph model based on a Markovian approach is built. A behavior is labeled abnormal when current motion pattern cannot be recognized as any state of the system or a particular sequence of states cannot be parsed with the stochastic model. The approach is tested with several sequences of images acquired from a vehicular intersection where the traffic flow and duration used in connection with the traffic lights are continuously changed throughout the day. Finally, the low complexity and the flexibility of the approach make it reliable for use in real time systems. PMID:22163616
NASA Astrophysics Data System (ADS)
Laborde, S.; Calvi, A.
2012-10-01
This article describes some results of the study "DYNAMITED". The study is funded by the European Space Agency (ESA) and performed by a consortium of European industries and university, led by EADS Astrium Satellites. One of the main objectives of the study is to assess and quantify the uncertainty in the spacecraft sine vibration test data. For a number of reasons as for example robustness and confidence in the notching of the input spectra and validation of the finite element model, it is important to study the effect of the sources of uncertainty on the test data including the frequency response functions and the modal parameters. In particular the paper provides an overview on the estimation of the scatter on the spacecraft dynamic response due to identified sources of test uncertainties and the calculation of a "notched" sine test input spectrum based on a stochastic methodology. By means of Monte Carlo simulation, a stochastic cloud of the output of interest can be generated and this provides an estimate of the global error on the test results. The cloud is generated by characterizing the assumed sources of test uncertainties by parameters of the structure finite element model and by quantifying the scatter of the parameters. The uncertain parameters are the input random variables of the Monte Carlo simulation. Some results on the application of the methods to telecom spacecraft sine vibration tests are illustrated.
NASA Astrophysics Data System (ADS)
Beneldjouzi, Mohamed; Laouami, Nasser
2015-12-01
Building codes have widely considered the shear wave velocity to make a reliable subsoil seismic classification, based on the knowledge of the mechanical properties of material deposits down to bedrock. This approach has limitations because geophysical data are often very expensive to obtain. Recently, other alternatives have been proposed based on measurements of background noise and estimation of the H/V amplification curve. However, the use of this technique needs a regulatory framework before it can become a realistic site classification procedure. This paper proposes a new formulation for characterizing design sites in accordance with the Algerian seismic building code (RPA99/ver.2003), through transfer functions, by following a stochastic approach combined to a statistical study. For each soil type, the deterministic calculation of the average transfer function is performed over a wide sample of 1-D soil profiles, where the average shear wave (S-W) velocity, V s, in soil layers is simulated using random field theory. Average transfer functions are also used to calculate average site factors and normalized acceleration response spectra to highlight the amplification potential of each site type, since frequency content of the transfer function is significantly similar to that of the H/V amplification curve. Comparison is done with the RPA99/ver.2003 and Eurocode8 (EC8) design response spectra, respectively. In the absence of geophysical data, the proposed classification approach together with micro-tremor measures can be used toward a better soil classification.
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.
Holistic irrigation water management approach based on stochastic soil water dynamics
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Appreciating the essential gap between fundamental unsaturated zone transport processes and soil and water management due to low effectiveness of some of monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. Interdependent hydrology of soil unsaturated and saturated zones is treated in a semi-analytical approach in two steps. At first step analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components assuming fixed level for shallow water table, while numerical Newton-Raphson procedure is employed at the second step to modify value of shallow water table level. Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from semi-analytical framework of the simulation model, the optimization-simulation method with significantly better computational performance compared to a numerical Mote-Carlo simulation-based technique has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing season time scale, one can use the developed model in large scale systems such as irrigation districts and agricultural catchments. Accordingly
Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach
Kaya, Yavuz; Safak, Erdal
2008-07-08
Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
NASA Astrophysics Data System (ADS)
Panu, U. S.; Ng, W.; Rasmussen, P. F.
2009-12-01
The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov Chain and Alternating Renewal Process (ARP)) of the precipitation occurrence processes generally assume the existence of short-term temporal-dependency between the neighboring states while implying the existence of long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by the fixed-length memory (e.g., the order of a Markov chain model), or by the reining states in segments (e.g., persistency of homogenous states within dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states could be an arduous task and a flexible modeling approach is required for the preservation of various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful ``words" (i.e., segments with a specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful “word”, in hydrology, can be referred to a segment of precipitation occurrence comprising of wet or dry states. Such flexibility would provide a unique advantage over the traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely, the alternating renewal process using Geometric distribution, the second-order Markov chain model, and the Dictionary approach have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles namely (i) the ability of models to preserve the short-term temporal-dependency in
NASA Astrophysics Data System (ADS)
Wang, Y. Y.; Huang, G. H.; Wang, S.; Li, W.; Guan, P. B.
2016-08-01
In this study, a risk-based interactive multi-stage stochastic programming (RIMSP) approach is proposed through incorporating the fractile criterion method and chance-constrained programming within a multi-stage decision-making framework. RIMSP is able to deal with dual uncertainties expressed as random boundary intervals that exist in the objective function and constraints. Moreover, RIMSP is capable of reflecting dynamics of uncertainties, as well as the trade-off between the total net benefit and the associated risk. A water allocation problem is used to illustrate applicability of the proposed methodology. A set of decision alternatives with different combinations of risk levels applied to the objective function and constraints can be generated for planning the water resources allocation system. The results can help decision makers examine potential interactions between risks related to the stochastic objective function and constraints. Furthermore, a number of solutions can be obtained under different water policy scenarios, which are useful for decision makers to formulate an appropriate policy under uncertainty. The performance of RIMSP is analyzed and compared with an inexact multi-stage stochastic programming (IMSP) method. Results of comparison experiment indicate that RIMSP is able to provide more robust water management alternatives with less system risks in comparison with IMSP.
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2015-01-01
Earthquake ground motion recordings are scarce in the central and eastern U.S. (CEUS) for large magnitude events and at close distances. We use two different simulation approaches, a deterministic physics-based model and a stochastic model, to simulate recordings from the 2011 Mineral, Virginia, 5.8 earthquake in the CEUS. We then use the 2001 Bhuj, India, 7.6 earthquake as a tectonic analog for a large CEUS earthquake and modify our simulations to develop models for generation of large magnitude earthquakes in the CEUS. Both models show a good fit to the observations from 0.1 to 10 Hz, and show a faster fall-off with distances beyond 500 km for the acceleration spectra compared to ground motion prediction models (GMPEs) for a 7.6 event.
Approach to Equilibrium for the Stochastic NLS
NASA Astrophysics Data System (ADS)
Lebowitz, J. L.; Mounaix, Ph.; Wang, W.-M.
2013-07-01
We study the approach to equilibrium, described by a Gibbs measure, for a system on a d-dimensional torus evolving according to a stochastic nonlinear Schrödinger equation (SNLS) with a high frequency truncation. We prove exponential approach to the truncated Gibbs measure both for the focusing and defocusing cases when the dynamics is constrained via suitable boundary conditions to regions of the Fourier space where the Hamiltonian is convex. Our method is based on establishing a spectral gap for the non self-adjoint Fokker-Planck operator governing the time evolution of the measure, which is uniform in the frequency truncation N. The limit N →∞ is discussed.
NASA Astrophysics Data System (ADS)
Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.
2012-04-01
In arid and semi-arid regions where water is scarce, farmers heavily rely on irrigation in order to grow crops and to produce agricultural commodities. The variable and often severely limited water supply thereby poses a serious challenge for farmers to cope with and demand sophisticated irrigation strategies that allow an efficient management of the available water resources. The general aim is to increase water productivity (WP) and one of these strategies to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is by defining soil water status specific threshold values (either in soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance and yet thresholds are likely chosen by trial and error and thus unreliable. Hence, for CDI to be effective systematic investigations for deriving reliable threshold values that account for different CDI strategies are needed. In this contribution, a method is presented that uses a simulation-based stochastic approach for estimating threshold values with a high reliability. The approach consist of a weather generator offering statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under limiting waters supply, and a crop model for simulating plant growth and water consumption. The study focuses on threshold values of soil tension for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based lies in their universal and soil type independent applicability. The investigated CDI strategies comprised schedules of constant threshold values, crop development stage dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as reference. The obtained threshold values were then tested in field
Master-equation approach to stochastic neurodynamics
NASA Astrophysics Data System (ADS)
Ohira, Toru; Cowan, Jack D.
1993-09-01
A master-equation approach to the stochastic neurodynamics proposed by Cowan [in Advances in Neural Information Processing Systems 3, edited by R. P. Lippman, J. E. Moody, and D. S. Touretzky (Morgan Kaufmann, San Mateo, 1991), p. 62] is investigated in this paper. We deal with a model neural network that is composed of two-state neurons obeying elementary stochastic transition rates. We show that such an approach yields concise expressions for multipoint moments and an equation of motion. We apply the formalism to a (1+1)-dimensional system. Exact and approximate expressions for various statistical parameters are obtained and compared with Monte Carlo simulations.
Stochastic approach based salient moving object detection using kernel density estimation
NASA Astrophysics Data System (ADS)
Tang, Peng; Liu, Zhifang; Gao, Lin; Sheng, Peng
2007-11-01
Background modeling techniques are important for object detection and tracking in video surveillances. Traditional background subtraction approaches are suffered from problems, such as persistent dynamic backgrounds, quick illumination changes, occlusions, noise etc. In this paper, we address the problem of detection and localization of moving objects in a video stream without apperception of background statistics. Three major contributions are presented. First, introducing the sequential Monte Carlo sampling techniques greatly reduce the computation complexity while compromise the expected accuracy. Second, the robust salient motion is considered when resampling the feature points by removing those who do not move in a relative constant velocity and emphasis those in consistent motion. Finally, the proposed joint feature model enforced spatial consistency. Promising results demonstrate the potentials of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Xiao, Jinghua; Yang, Yixian
2015-05-01
In this paper, exponential anti-synchronization in mean square of an uncertain memristor-based neural network is studied. The uncertain terms include non-modeled dynamics with boundary and stochastic perturbations. Based on the differential inclusions theory, linear matrix inequalities, Gronwall's inequality and adaptive control technique, an adaptive controller with update laws is developed to realize the exponential anti-synchronization. Adaptive controller can adjust itself behavior to get the best performance, according to the environment is changing or the environment has changed, which has the ability to adapt to environmental change. Furthermore, a numerical example is provided to validate the effectiveness of the proposed method.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and a chemical master equation based techniques with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation to the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different set of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
A stochastic approach to model validation
NASA Astrophysics Data System (ADS)
Luis, Steven J.; McLaughlin, Dennis
This paper describes a stochastic approach for assessing the validity of environmental models. In order to illustrate basic concepts we focus on the problem of modeling moisture movement through an unsaturated porous medium. We assume that the modeling objective is to predict the mean distribution of moisture content over time and space. The mean moisture content describes the large-scale flow behavior of most interest in many practical applications. The model validation process attempts to determine whether the model's predictions are acceptably close to the mean. This can be accomplished by comparing small-scale measurements of moisture content to the model's predictions. Differences between these two quantities can be attributed to three distinct 'error sources': (1) measurement error, (2) spatial heterogeneity, and (3) model error. If we adopt appropriate stochastic descriptions for the first two sources of error we can view model validation as a hypothesis testing problem where the null hypothesis states that model error is negligible. We illustrate this concept by comparing the predictions of a simple two-dimensional deterministic model to measurements collected during a field experiment carried out near Las Cruces, New Mexico. Preliminary results from this field test indicate that a stochastic approach to validation can identify model deficiencies and provide objective standards for model performance.
NASA Astrophysics Data System (ADS)
Haruna, T.; Nakajima, K.
2013-06-01
The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
Time-dependent stochastic Bethe-Salpeter approach
NASA Astrophysics Data System (ADS)
Rabani, Eran; Baer, Roi; Neuhauser, Daniel
2015-06-01
A time-dependent formulation for electron-hole excitations in extended finite systems, based on the Bethe-Salpeter equation (BSE), is developed using a stochastic wave function approach. The time-dependent formulation builds on the connection between time-dependent Hartree-Fock (TDHF) theory and the configuration-interaction with single substitution (CIS) method. This results in a time-dependent Schrödinger-like equation for the quasiparticle orbital dynamics based on an effective Hamiltonian containing direct Hartree
Bieda, Bogusław
2014-05-15
The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management.
Stochastic bias in multidimensional excursion set approaches
NASA Astrophysics Data System (ADS)
Castorina, Emanuele; Sheth, Ravi K.
2013-08-01
We describe a simple fully analytic model of the excursion set approach associated with two Gaussian random walks: the first walk represents the initial overdensity around a protohalo, and the second is a crude way of allowing for other factors which might influence halo formation. This model is richer than that based on a single walk, because it yields a distribution of heights at first crossing. We provide explicit expressions for the unconditional first crossing distribution which is usually used to model the halo mass function, the progenitor distributions from which merger rates are usually estimated and the conditional distributions from which correlations with environment are usually estimated. These latter exhibit perhaps the simplest form of what is often called non-local bias, and which we prefer to call stochastic bias, since the new bias effects arise from `hidden variables' other than density, but these may still be defined locally. We provide explicit expressions for these new bias factors. We also provide formulae for the distribution of heights at first crossing in the unconditional and conditional cases. In contrast to the first crossing distribution, these are exact, even for moving barriers, and for walks with correlated steps. The conditional distributions yield predictions for the distribution of halo concentrations at fixed mass and formation redshift. They also exhibit assembly bias like effects, even when the steps in the walks themselves are uncorrelated. Our formulae show that without prior knowledge of the physical origin of the second walk, the naive estimate of the critical density required for halo formation which is based on the statistics of the first crossing distribution will be larger than that based on the statistical distribution of walk heights at first crossing; both will be biased low compared to the value associated with the physics. Finally, we show how the predictions are modified if we add the requirement that haloes form
Zhang, Chunmei; Li, Wenxue; Wang, Ke
2015-08-01
In this paper, a novel class of stochastic coupled systems with Lévy noise on networks (SCSLNNs) is presented. Both white noise and Lévy noise are considered in the networks. By exploiting graph theory and Lyapunov stability theory, criteria ensuring p th moment exponential stability and stability in probability of these SCSLNNs are established, respectively. These principles are closely related to the topology of the network and the perturbation intensity of white noise and Lévy noise. Moreover, to verify the theoretical results, stochastic coupled oscillators with Lévy noise on a network and stochastic Volterra predator-prey system with Lévy noise are performed. Finally, a numerical example about oscillators' network is provided to illustrate the feasibility of our analytical results.
Geometrically consistent approach to stochastic DBI inflation
Lorenz, Larissa; Martin, Jerome; Yokoyama, Jun'ichi
2010-07-15
Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.
Two Different Approaches to Nonzero-Sum Stochastic Differential Games
Rainer, Catherine
2007-06-15
We make the link between two approaches to Nash equilibria for nonzero-sum stochastic differential games: the first one using backward stochastic differential equations and the second one using strategies with delay. We prove that, when both exist, the two notions of Nash equilibria coincide.
A stochastic collocation approach for efficient integrated gear health prognosis
NASA Astrophysics Data System (ADS)
Zhao, Fuqiong; Tian, Zhigang; Zeng, Yong
2013-08-01
Uncertainty quantification in damage growth is critical in equipment health prognosis and condition based maintenance. Integrated health prognostics has recently drawn growing attention due to its capability to produce more accurate predictions through integrating physical models and real-time condition monitoring data. In the existing literature, simulation is commonly used to account for the uncertainty in prognostics, which is inefficient. In this paper, instead of using simulation, a stochastic collocation approach is developed for efficient integrated gear health prognosis. Based on generalized polynomial chaos expansion, the approach is utilized to evaluate the uncertainty in gear remaining useful life prediction as well as the likelihood function in Bayesian inference. The collected condition monitoring data are incorporated into prognostics via Bayesian inference to update the distributions of uncertainties at given inspection times. Accordingly, the distribution of the remaining useful life is updated. Compared to conventional simulation methods, the stochastic collocation approach is much more efficient, and is capable of dealing with high dimensional probability space. An example is used to demonstrate the effectiveness and efficiency of the proposed approach.
The stochastic system approach for estimating dynamic treatments effect.
Commenges, Daniel; Gégout-Petit, Anne
2015-10-01
The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of a HAART on CD4 counts, where attribution of the treatment may depend on the observed marker values. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference however needs assumptions that we detail in this paper and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system and the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and reliable results.
Functional integral approach for multiplicative stochastic processes.
Arenas, Zochil González; Barci, Daniel G
2010-05-01
We present a functional formalism to derive a generating functional for correlation functions of a multiplicative stochastic process represented by a Langevin equation. We deduce a path integral over a set of fermionic and bosonic variables without performing any time discretization. The usual prescriptions to define the Wiener integral appear in our formalism in the definition of Green's functions in the Grassman sector of the theory. We also study nonperturbative constraints imposed by Becchi, Rouet and Stora symmetry (BRS) and supersymmetry on correlation functions. We show that the specific prescription to define the stochastic process is wholly contained in tadpole diagrams. Therefore, in a supersymmetric theory, the stochastic process is uniquely defined since tadpole contributions cancels at all order of perturbation theory.
2012-05-30
crosshole seismic tomography and borehole logging information. Bayesian approaches [e.g., Gelman et al., 2003] have been applied to integrate diverse...simulation [e.g., Deutsch and Journel, 1998] with the added use of Bayesian formula [e.g., Chen et al., 2001; Gelman et al., 2003]. The Bayesian...3-D STOCHASTIC ESTIMATION OF POROSITY W05553 12 of 13 Gelman , A., J. B. Carlin, H. S. Stern, and D. B. Rubin (2003), Bayesian Data Analysis, 668 pp
NASA Astrophysics Data System (ADS)
González-Garcia, Javier; Jessell, Mark
2016-09-01
The Ruiz-Tolima Volcanic Massif (RTVM) is an active volcanic complex in the Northern Andes, and understanding its geological structure is critical for hazard mitigation and guiding future geothermal exploration. However, the sparsity of data available to constrain the interpretation of this volcanic system hinders the application of standard 3D modelling techniques. Furthermore, some features related to the volcanic system are not entirely understood, such as the connectivity between the plutons present in its basement (i.e. Manizales Stock, El Bosque Batholith). We have developed a methodology where two independent working hypotheses were formulated and modelled independently (i.e. a case where both plutons constitute distinct bodies, and an alternative case where they form one single batholith). A Monte Carlo approach was used to characterise the geological uncertainty in each case. Bézier curve design was used to represent geological contacts on input cross sections. Systematic variations in the control points of these curves allows us to generate multiple realisations of geological interfaces, resulting in stochastic models that were grouped into suites used to apply quantitative estimators of uncertainty. This process results in a geological representation based on fuzzy logic and in maps of model uncertainty distribution. The results are consistent with expected regions of high uncertainty near under-constrained geological contacts, while the non-unique nature of the conceptual model indicates that the dominant source of uncertainty in the area is the nature of the batholith structure.
NASA Astrophysics Data System (ADS)
Chiaradia, Maria T.; Guerriero, Luciano; Refice, Alberto; Pasquariello, Guido; Satalino, Giuseppe; Stramaglia, Sebastiano
1998-10-01
2D phase unwrapping, a problem common to signal processing, optics, and interferometric radar topographic applications, consists in retrieving an absolute phase field from principal, noisy measurements. In this paper, we analyze the application of neural networks to this complex mathematical problem, formulating it as a learning-by-examples strategy, by training a multilayer perceptron to associate a proper correction pattern to the principal phase gradient configuration on local window. In spite of the high dimensionality of this problem the proposed MLP, trained on examples from simulated phase surfaces, shows to be able to correctly remove more than half the original number of pointlike inconsistencies on real noisy interferograms. Better efficiencies could be achieved by enlarging the processing window size, so as to exploit a greater amount of information. By pushing further this change of perspective, one passes from a local to a global point of view; problems of this kind are more effectively solved, rather than through learning strategies, by minimization procedures, for which we prose a powerful algorithm, based on a stochastic approach.
Stochastic modelling of evaporation based on copulas
NASA Astrophysics Data System (ADS)
Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko
2015-04-01
Evapotranspiration is an important process in the water cycle that represents a considerable amount of moisture lost through evaporation from the soil surface and transpiration from plants in a watershed. Therefore, an accurate estimate of evapotranspiration rates is necessary, along with precipitation data, for running hydrological models. Often, daily reference evapotranspiration is modelled based on the Penman, Priestley-Taylor or Hargraeves equation. However, each of these models requires extensive input data, such as daily mean temperature, wind speed, relative humidity and solar radiation. Yet, in design studies, such data is unavailable in case stochastically generated time series of precipitation are used to force a hydrologic model. In the latter case, an alternative model approach is needed that allows for generating evapotranspiration data that are consistent with the accompanying precipitation data. This contribution presents such an approach in which the statistical dependence between evapotranspiration, temperature and precipitation is described by three- and four-dimensional vine copulas. Based on a case study of 72 years of evapotranspiration, temperature and precipitation data, observed in Uccle, Belgium, it was found that canonical vine copulas (C-Vines) in which bivariate Frank copulas are employed perform very well in preserving the dependencies between variables. While 4-dimensional C-Vine copulas performed best in simulating time series of evapotranspiration, a 3-dimensional C-Vine copula (relating evapotranspiration, daily precipitation depth and temperature) still allows for modelling evapotranspiration, though with larger error statistics.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
Stochastic Optimal Control and Linear Programming Approach
Buckdahn, R.; Goreac, D.; Quincampoix, M.
2011-04-15
We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper lies in the fact that this optimal control problem is shown to have the same value as a linear optimization problem stated on some appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub) solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss relation with long-time average problems.
Wildhaber, Mark L.; Dey, Rima; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.
2015-01-01
In managing fish populations, especially at-risk species, realistic mathematical models are needed to help predict population response to potential management actions in the context of environmental conditions and changing climate while effectively incorporating the stochastic nature of real world conditions. We provide a key component of such a model for the endangered pallid sturgeon (Scaphirhynchus albus) in the form of an individual-based bioenergetics model influenced not only by temperature but also by flow. This component is based on modification of a known individual-based bioenergetics model through incorporation of: the observed ontogenetic shift in pallid sturgeon diet from marcroinvertebrates to fish; the energetic costs of swimming under flowing-water conditions; and stochasticity. We provide an assessment of how differences in environmental conditions could potentially alter pallid sturgeon growth estimates, using observed temperature and velocity from channelized portions of the Lower Missouri River mainstem. We do this using separate relationships between the proportion of maximum consumption and fork length and swimming cost standard error estimates for fish captured above and below the Kansas River in the Lower Missouri River. Critical to our matching observed growth in the field with predicted growth based on observed environmental conditions was a two-step shift in diet from macroinvertebrates to fish.
Computational approaches to stochastic systems in physics and biology
NASA Astrophysics Data System (ADS)
Jeraldo Maldonado, Patricio Rodrigo
In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving with the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I use then this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that also incorporates the excitations of the fluid and couples its dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the
NASA Astrophysics Data System (ADS)
Coppola, Antonio; Comegna, Alessandro; Dragonetti, Giovanna; Lamaddalena, Nicola; Zdruli, Pandi
2013-04-01
modelling approaches have been developed at small space scales. Their extension to the applicative macroscale of the regional model is not a simple task mainly because of the heterogeneity of vadose zone properties, as well as of non-linearity of hydrological processes. Besides, one of the problems when applying distributed models is that spatial and temporal scales for data to be used as input in the models vary on a wide range of scales and are not always consistent with the model structure. Under these conditions, a strictly deterministic response to questions about the fate of a pollutant in the soil is impossible. At best, one may answer "this is the average behaviour within this uncertainty band". Consequently, the extension of these equations to account for regional-scale processes requires the uncertainties of the outputs be taken into account if the pollution vulnerability maps that may be drawn are to be used as agricultural management tools. A map generated without a corresponding map of associated uncertainties has no real utility. The stochastic stream tube approach is a frequently used to the water flux and solute transport through the vadose zone at applicative scales. This approach considers the field soil as an ensemble of parallel and statistically independent tubes, assuming only vertical flow. The stream tubes approach is generally used in a probabilistic framework. Each stream tube defines local flow properties that are assumed to vary randomly between the different stream tubes. Thus, the approach allows average water and solute behaviour be described, along with the associated uncertainty bands. These stream tubes are usually considered to have parameters that are vertically homogeneous. This would be justified by the large difference between the horizontal and vertical extent of the spatial applicative scale. Vertical is generally overlooked. Obviously, all the model outputs are conditioned by this assumption. The latter, in turn, is more dictated by
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
Ma, Xiao; Dong, Jin; Djouadi, Seddik M; Nutaro, James J; Kuruganti, Teja
2015-01-01
The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air- Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by Semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency by utilizing the proposed control approach.
Implications of a stochastic approach to air-quality regulations
Witten, A.J.; Kornegay, F.C.; Hunsaker, D.B. Jr.; Long, E.C. Jr.; Sharp, R.D.; Walsh, P.J.; Zeighami, E.A.; Gordon, J.S.; Lin, W.L.
1982-09-01
This study explores the viability of a stochastic approach to air quality regulations. The stochastic approach considered here is one which incorporates the variability which exists in sulfur dioxide (SO/sub 2/) emissions from coal-fired power plants. Emission variability arises from a combination of many factors including variability in the composition of as-received coal such as sulfur content, moisture content, ash content, and heating value, as well as variability which is introduced in power plant operations. The stochastic approach as conceived in this study addresses variability by taking the SO/sub 2/ emission rate to be a random variable with specified statistics. Given the statistical description of the emission rate and known meteorological conditions, it is possible to predict the probability of a facility exceeding a specified emission limit or violating an established air quality standard. This study also investigates the implications of accounting for emissions variability by allowing compliance to be interpreted as an allowable probability of occurrence of given events. For example, compliance with an emission limit could be defined as the probability of exceeding a specified emission value, such as 1.2 lbs SO/sub 2//MMBtu, being less than 1%. In contrast, compliance is currently taken to mean that this limit shall never be exceeded, i.e., no exceedance probability is allowed. The focus of this study is on the economic benefits offered to facilities through the greater flexibility of the stochastic approach as compared with possible changes in air quality and health effects which could result.
A probabilistic graphical model based stochastic input model construction
Wan, Jiang; Zabaras, Nicholas
2014-09-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
NASA Astrophysics Data System (ADS)
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from responses variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problem solved with optimization usually brings about the issues of gradient computation, ill-conditionedness, and non-uniqueness. Moreover, the uncertainty present in response makes the inverse problem more complicated. In this study, Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of Bayesian approach for IUQ problem is that it solves IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), Bayesian approach is still computationally expensive since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee the convergence. Herein we reduce computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using delayed rejection adaptive Metropolis (DRAM) algorithm that incorporates local adaptive strategy with global adaptive strategy is employed for Bayesian inference. In addition, we propose the use of the powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
An approach to the residence time distribution for stochastic multi-compartment models.
Yu, Jihnhee; Wehrly, Thomas E
2004-10-01
Stochastic compartmental models are widely used in modeling processes such as drug kinetics in biological systems. This paper considers the distribution of the residence times for stochastic multi-compartment models, especially systems with non-exponential lifetime distributions. The paper first derives the moment generating function of the bivariate residence time distribution for the two-compartment model with general lifetimes and approximates the density of the residence time using the saddlepoint approximation. Then, it extends the distributional approach to the residence time for multi-compartment semi-Markov models combining the cofactor rule for a single destination and the analytic approach to the two-compartment model. This approach provides a complete specification of the residence time distribution based on the moment generating function and thus facilitates an easier calculation of high-order moments than the approach using the coefficient matrix. Applications to drug kinetics demonstrate the simplicity and usefulness of this approach.
Stochastic physical ecohydrologic-based model for estimating irrigation requirement
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods which could take this kind of uncertainty into account are of primal importance for management of ecosystems, especially agricultural ecosystems. One of the famous problems in these ecosystems is crop water requirement estimation under climatic uncertainty. Both deterministic physically-based methods and stochastic time series modeling have been utilized in the literature. Like other fields of hydroclimatic sciences, there is a vast area in irrigation process modeling for developing approaches integrating physics of the process and statistics aspects. This study is about deriving closed-form expressions for probability density function (p.d.f.) of irrigation water requirement using a stochastic physically-based model, which considers important aspects of plant, soil, atmosphere and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics at root zone, is employed as a basis for deriving the expressions considering temporal stochasticity of rainfall. Due to distinguished nature of stochastic processes of micro and traditional irrigation applications, two different methodologies have been used. Micro-irrigation application has been modeled through dichotomic process. Chapman-Kolomogrov equation of time integral of the dichotomic process for transient condition has been solved to derive analytical expressions for probability density function of seasonal irrigation requirement. For traditional irrigation, irrigation application during growing season has been modeled using a marked point process. Using the renewal theory, probability mass function of seasonal irrigation requirement, which is a discrete-value quantity, has been analytically derived. The methodology deals with estimation of statistical properties of the total water requirement in a growing season that
Barkhausen discontinuities and hysteresis of ferromagnetics: New stochastic approach
Vengrinovich, Valeriy
2014-02-18
The magnetization of ferromagnetic material is considered as periodically inhomogeneous Markov process. The theory assumes both statistically independent and correlated Barkhausen discontinuities. The model, based on the chain evolution-type process theory, assumes that the domain structure of a ferromagnet passes successively the steps of: linear growing, exponential acceleration and domains annihilation to zero density at magnetic saturation. The solution of stochastic differential Kolmogorov equation enables the hysteresis loop calculus.
Memristor-based neural networks: Synaptic versus neuronal stochasticity
NASA Astrophysics Data System (ADS)
Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil
2016-11-01
In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks are currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered as the main source of stochasticity of its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches of stochastic neural networks are investigated. Aside from the size and area perspective, the impact on the system performance, in terms of accuracy, recognition rates, and learning, among these two approaches and where the memristor would fall into place are the main comparison points to be considered.
Neural Network-Based Solutions for Stochastic Optimal Control Using Path Integrals.
Rajagopal, Karthikeyan; Balakrishnan, Sivasubramanya Nadar; Busemeyer, Jerome R
2017-03-01
In this paper, an offline approximate dynamic programming approach using neural networks is proposed for solving a class of finite horizon stochastic optimal control problems. There are two approaches available in the literature, one based on stochastic maximum principle (SMP) formalism and the other based on solving the stochastic Hamilton-Jacobi-Bellman (HJB) equation. However, in the presence of noise, the SMP formalism becomes complex and results in having to solve a couple of backward stochastic differential equations. Hence, current solution methodologies typically ignore the noise effect. On the other hand, the inclusion of noise in the HJB framework is very straightforward. Furthermore, the stochastic HJB equation of a control-affine nonlinear stochastic system with a quadratic control cost function and an arbitrary state cost function can be formulated as a path integral (PI) problem. However, due to curse of dimensionality, it might not be possible to utilize the PI formulation for obtaining comprehensive solutions over the entire operating domain. A neural network structure called the adaptive critic design paradigm is used to effectively handle this difficulty. In this paper, a novel adaptive critic approach using the PI formulation is proposed for solving stochastic optimal control problems. The potential of the algorithm is demonstrated through simulation results from a couple of benchmark problems.
Nonlinear Aeroelastic Analysis of UAVs: Deterministic and Stochastic Approaches
NASA Astrophysics Data System (ADS)
Sukut, Thomas Woodrow
Aeroelastic aspects of unmanned aerial vehicles (UAVs) is analyzed by treatment of a typical section containing geometrical nonlinearities. Equations of motion are derived and numerical integration of these equations subject to quasi-steady aerodynamic forcing is performed. Model properties are tailored to a high-altitude long-endurance unmanned aircraft. Harmonic balance approximation is employed based on the steady-state oscillatory response of the aerodynamic forcing. Comparisons are made between time integration results and harmonic balance approximation. Close agreement between forcing and displacement oscillatory frequencies is found. Amplitude agreement is off by a considerable margin. Additionally, stochastic forcing effects are examined. Turbulent flow velocities generated from the von Karman spectrum are applied to the same nonlinear structural model. Similar qualitative behavior is found between quasi-steady and stochastic forcing models illustrating the importance of considering the non-steady nature of atmospheric turbulence when operating near critical flutter velocity.
NASA Astrophysics Data System (ADS)
Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris
2016-04-01
Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints, associated with topographic limitations (e.g., terrain slope, proximity to road and electricity grid network, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended at the regional scale.
Stochastic pumping of heat: approaching the Carnot efficiency.
Segal, Dvira
2008-12-31
Random noise can generate a unidirectional heat current across asymmetric nano-objects in the absence (or against) a temperature gradient. We present a minimal model for a molecular-level stochastic heat pump that may operate arbitrarily close to the Carnot efficiency. The model consists a fluctuating molecular unit coupled to two solids characterized by distinct phonon spectral properties. Heat pumping persists for a broad range of system and bath parameters. Furthermore, by filtering the reservoirs' phonons the pump efficiency can approach the Carnot limit.
Analytic approaches to stochastic gene expression in multicellular systems.
Boettiger, Alistair Nicol
2013-12-17
Deterministic thermodynamic models of the complex systems, which control gene expression in metazoa, are helping researchers identify fundamental themes in the regulation of transcription. However, quantitative single cell studies are increasingly identifying regulatory mechanisms that control variability in expression. Such behaviors cannot be captured by deterministic models and are poorly suited to contemporary stochastic approaches that rely on continuum approximations, such as Langevin methods. Fortunately, theoretical advances in the modeling of transcription have assembled some general results that can be readily applied to systems being explored only through a deterministic approach. Here, I review some of the recent experimental evidence for the importance of genetically regulating stochastic effects during embryonic development and discuss key results from Markov theory that can be used to model this regulation. I then discuss several pairs of regulatory mechanisms recently investigated through a Markov approach. In each case, a deterministic treatment predicts no difference between the mechanisms, but the statistical treatment reveals the potential for substantially different distributions of transcriptional activity. In this light, features of gene regulation that seemed needlessly complex evolutionary baggage may be appreciated for their key contributions to reliability and precision of gene expression.
A simple approach for stochastic generation of spatial rainfall patterns
NASA Astrophysics Data System (ADS)
Tarpanelli, A.; Franchini, M.; Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.
2012-11-01
SummaryRainfall scenarios are of considerable interest for design flood and flood risk analysis. To this end, the stochastic generation of continuous rainfall sequences is often coupled with the continuous hydrological modelling. In this context, the spatial and the temporal rainfall variability represents a significant issue, especially for basins in which the rainfall field cannot be approximated through the use of a single station. Therefore, methodologies for the spatially and temporally correlated rainfall generation are welcome. An example of such a methodology is the well-established Spatial-Temporal Neyman-Scott Rectangular Pulse (STNSRP), a modification of the single-site Neyman-Scott Rectangular Pulse (NSRP) approach, designed to incorporate specific features to reproduce the rainfall spatial cross-correlation. In order to provide a simple alternative to the STNSRP, a new method of generating synthetic rainfall time series with pre-set spatial-temporal correlation is proposed herein. This approach relies on the single-site NSRP model, which is used to generate synthetic hourly independent rainfall time series at each rain gauge station with the required temporal autocorrelation (and several other appropriately selected statistics). The rank correlation method of Iman and Conover (IC) is then applied to these synthetic rainfall time series in order to introduce the same spatial cross-correlation that exists between the observed time series. This combination of the NSRP model with the IC method consents the reproduction of the observed spatial-temporal variability of a rainfall field. In order to verify the proposed procedure, four sub-basins of the Upper Tiber River basin are investigated whose basin areas range from 165 km2 to 2040 km2. Results show that the procedure is able to preserve both the rainfall temporal autocorrelation at single site and the rainfall spatial cross-correlation at basin scale, and its performance is comparable with that of the
Approaching complexity by stochastic methods: From biological systems to turbulence
NASA Astrophysics Data System (ADS)
Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2011-09-01
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
A simple approach for stochastic generation of spatial rainfall patterns
NASA Astrophysics Data System (ADS)
Tarpanelli, Angelica; Franchini, Marco; Camici, Stefania; Brocca, Luca; Melone, Florisa; Moramarco, Tommaso
2010-05-01
The high floods occurred in the last years in many regions of the world have increased the interest of local, national and international authorities on the flood and risk assessment. In this context, the estimation of the design flood to be adopted represents a crucial factor, mainly for ungauged or poorly gauged catchments where sufficiently long discharge time series are missing. Due to the wider availability of rainfall data, rainfall-runoff models represent a possible tool to reduce the relevant uncertainty involved in the flood frequency analysis. Recently, new methodologies based on the stochastic generation of rainfall and temperature data have been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of discharge and, hence, the flood frequency distribution at a given site. As far as the rainfall generation is concerned, for catchments of limited size, a single site model, as the Neyman-Scott Rectangular Pulses (NSRP), can be applied. It is characterized by a flexible structure in which the model parameters are broadly related to the underlying physical features observed in the rainfall field and the statistical properties of rainfall time series over a range of time scales are preserved. However, when larger catchments are considered, an extension into the two-dimensional space is required. This issue can be addressed by using the Spatial-Temporal Neyman-Scott Rectangular Pulses (STNSRP) model that, however, is not easy to be applied and requires a high computational effort. Therefore, simple techniques to obtain a spatial rainfall pattern starting from the more simple single-site NSRP are welcome. In this study, in order to take account of the spatial correlation that is needed when spatial rainfall patterns should be generated, the practical method of the rank correlation proposed by Iman and Conover (IC), was applied. The method is able to introduce a desired level of correlation
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
NASA Astrophysics Data System (ADS)
Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.
2017-01-01
Textile composites are composed of 3D complex architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out thanks to tomography. The present work addresses a numerical damage model dedicated to the simulation of the crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure of broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure all over the integration points in the meshes of woven composites. A stochastic approach was applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations performed have shown several variations of the stress as well as strain at failure and the failure modes of the yarn.
Stochastic control approaches for sensor management in search and exploitation
NASA Astrophysics Data System (ADS)
Hitchings, Darin Chester
new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of a RH controller for sensor allocation in the presence of moving objects We also consider an adaptive Search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve this problem. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor that partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on quality of supervisory control.
Majorana approach to the stochastic theory of line shapes
NASA Astrophysics Data System (ADS)
Komijani, Yashar; Coleman, Piers
2016-08-01
Motivated by recent Mössbauer experiments on strongly correlated mixed-valence systems, we revisit the Kubo-Anderson stochastic theory of spectral line shapes. Using a Majorana representation for the nuclear spin we demonstrate how to recast the classic line-shape theory in a field-theoretic and diagrammatic language. We show that the leading contribution to the self-energy can reproduce most of the observed line-shape features including splitting and line-shape narrowing, while the vertex and the self-consistency corrections can be systematically included in the calculation. This approach permits us to predict the line shape produced by an arbitrary bulk charge fluctuation spectrum providing a model-independent way to extract the local charge fluctuation spectrum of the surrounding medium. We also derive an inverse formula to extract the charge fluctuation from the measured line shape.
The meso-structured magnetic atmosphere. A stochastic polarized radiative transfer approach
NASA Astrophysics Data System (ADS)
Carroll, T. A.; Kopf, M.
2007-06-01
We present a general radiative transfer model which allows the Zeeman diagnostics of complex and unresolved solar magnetic fields. Present modeling techniques still rely to a large extent on a-priori assumptions about the geometry of the underlying magnetic field. In an effort to obtain a more flexible and unbiased approach we pursue a rigorous statistical description of the underlying atmosphere. Based on a Markov random field model the atmospheric structures are characterized in terms of probability densities and spatial correlations. This approach allows us to derive a stochastic transport equation for polarized light valid in a regime with an arbitrary fluctuating magnetic field on finite scales. One of the key ingredients of the derived stochastic transfer equation is the correlation length which provides an additional degree of freedom to the transport equation and can be used as a diagnostic parameter to estimate the characteristic length scale of the underlying magnetic field. It is shown that the stochastic transfer equation represents a natural extension of the (polarized) line formation under the micro- and macroturbulent assumption and contains both approaches as limiting cases. In particular, we show how in an inhomogeneous atmosphere asymmetric Stokes profiles develop and that the correlation length directly controls the degree of asymmetry and net circular polarization (NCP). In a number of simple numerical model calculations we demonstrate the importance of a finite correlation length for the polarized line formation and its impact on the resulting Stokes line profiles. Appendices are only available in electronic form at http://www.aanda.org
A microprocessor-based multichannel subsensory stochastic resonance electrical stimulator.
Chang, Gwo-Ching
2013-01-01
Stochastic resonance electrical stimulation is a novel intervention which provides potential benefits for improving postural control ability in the elderly, those with diabetic neuropathy, and stroke patients. In this paper, a microprocessor-based subsensory white noise electrical stimulator for the applications of stochastic resonance stimulation is developed. The proposed stimulator provides four independent programmable stimulation channels with constant-current output, possesses linear voltage-to-current relationship, and has two types of stimulation modes, pulse amplitude and width modulation.
Stochastic light-cone CTMRG: a new DMRG approach to stochastic models
NASA Astrophysics Data System (ADS)
Kemper, A.; Gendiar, A.; Nishino, T.; Schadschneider, A.; Zittartz, J.
2003-01-01
We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 105 shows a considerable improvement on the old stochastic TMRG algorithm.
ENISI SDE: A New Web-Based Tool for Modeling Stochastic Processes.
Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep
2015-01-01
Modeling and simulations approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool with stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) discussion of SDE as a generic approach for stochastic modeling in computational biology; (2) development of ENISI SDE, a web-based user-friendly SDE modeling tool that highly resembles regular ODE-based modeling; (3) applying ENISI SDE modeling tool through a use case for studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differential ODE model has been published [8] and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model and shows the effectiveness of SDE as a stochastic modeling approach in biology in general and immunology in particular and the power of ENISI SDE.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Chen, Guanrong; Ryashko, Lev
2013-10-01
In this paper, noise-induced destruction of self-sustained oscillations is studied for a stochastically-forced generator with hard excitement. The problem is to design a feedback regulator that can stabilize a limit cycle of the closed-loop system and to provide a required dispersion of the generated oscillations. The approach is based on the stochastic sensitivity function (SSF) technique and confidence domain method. A theory about the synthesis of assigned SSF is developed. For the case when this control problem is ill-posed, a regularization method is constructed. The effectiveness of the new method of confidence domain is demonstrated by stabilizing auto-oscillations in a randomly-forced generator with hard excitement.
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection - dispersion/attachment - detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance ofthe natural-logarithm of hydraulic conductivity (InK) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
A stochastic approach to the solution of magnetohydrodynamic equations
Floriani, E.; Vilela Mendes, R.
2013-06-01
The construction of stochastic solutions is a powerful method to obtain localized solutions in configuration or Fourier space and for parallel computation with domain decomposition. Here a stochastic solution is obtained for the magnetohydrodynamics equations. Some details are given concerning the numerical implementation of the solution which is illustrated by an example of generation of long-range magnetic fields by a velocity source.
Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach.
Plehn, Thomas; May, Volkhard
2017-01-21
The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.
Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach
NASA Astrophysics Data System (ADS)
Plehn, Thomas; May, Volkhard
2017-01-01
The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.
Campillo, Fabien; Champagnat, Nicolas; Fritsch, Coralie
2016-12-01
We present two approaches to study invasion in growth-fragmentation-death models. The first one is based on a stochastic individual based model, which is a piecewise deterministic branching process with a continuum of types, and the second one is based on an integro-differential model. The invasion of the population is described by the survival probability for the former model and by an eigenproblem for the latter one. We study these two notions of invasion fitness, giving different characterizations of the growth of the population, and we make links between these two complementary points of view. In particular we prove that the two approaches lead to the same criterion of possible invasion. Based on Krein-Rutman theory, we also give a proof of the existence of a solution to the eigenproblem, which satisfies the conditions needed for our study of the stochastic model, hence providing a set of assumptions under which both approaches can be carried out. Finally, we motivate our work in the context of adaptive dynamics in a chemostat model.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments
NASA Astrophysics Data System (ADS)
Kang, Yuncheol; Prabhu, Vittal
The science of preventing youth problems has significantly advanced in developing evidence-based prevention program (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
NASA Astrophysics Data System (ADS)
Pool, Maria; Carrera, Jesus; Alcolea, Andres
2014-05-01
Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for large aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of model fits to head and concentration data (available for nearly a century), plausibility of the estimated T fields and their ability to predict transport. We also address the impact of using T data from large scale (i.e., pumping test) and small scale (i.e., specific capacity) on the calibration of this regional coastal aquifer. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels) and yield better fits to calibration data than the much simpler geology-based deterministic model. However, the latter demonstrates much greater robustness for predicting sea water intrusion and for incorporating concentrations as calibration data. We conclude that qualitative geological information is extremely rich in identifying variability patterns and should be explicitly included in the calibration of stochastic models.
Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Carlen, Eric Anders
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.
Stochastic population forecasts based on conditional expert opinions
Billari, F C; Graziani, R; Melilli, E
2012-01-01
The paper develops and applies an expert-based stochastic population forecasting method, which can also be used to obtain a probabilistic version of scenario-based official forecasts. The full probability distribution of population forecasts is specified by starting from expert opinions on the future development of demographic components. Expert opinions are elicited as conditional on the realization of scenarios, in a two-step (or multiple-step) fashion. The method is applied to develop a stochastic forecast for the Italian population, starting from official scenarios from the Italian National Statistical Office. PMID:22879704
An intensity-based stochastic model for terrestrial laser scanners
NASA Astrophysics Data System (ADS)
Wujanz, D.; Burger, M.; Mettenleiter, M.; Neitzel, F.
2017-03-01
Up until now no appropriate models have been proposed that are capable to describe the stochastic characteristics of reflectorless rangefinders - the key component of terrestrial laser scanners. This state has to be rated as unsatisfactory especially from the perception of Geodesy where comprehensive knowledge about the precision of measurements is of vital importance, for instance to weigh individual observations or to reveal outliers. In order to tackle this problem, a novel intensity-based stochastic model for the reflectorless rangefinder of a Zoller + Fröhlich Imager 5006 h is experimentally derived. This model accommodates the influence of the interaction between the emitted signal and object surface as well as the acquisition configuration on distance measurements. Based on two different experiments the stochastic model has been successfully verified for three chosen sampling rates.
Bogen, K T
2007-05-11
A relatively simple, quantitative approach is proposed to address a specific, important gap in the appr approach recommended by the USEPA Guidelines for Cancer Risk Assessment to oach address uncertainty in carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient t to parameterize a biologically based model that reliably o extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low low-dose risk as a function of administere administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was i illustrated llustrated for a likely DMOA rodent carcinogen naphthalene, specifically to the issue of risk extrapolation from bioassay data on naphthalene naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based p pharmacokinetic and 2 harmacokinetic 2-stage
Stochastic Coloured Petrinet Based Healthcare Infrastructure Interdependency Model
NASA Astrophysics Data System (ADS)
Nukavarapu, Nivedita; Durbha, Surya
2016-06-01
The Healthcare Critical Infrastructure (HCI) protects all sectors of the society from hazards such as terrorism, infectious disease outbreaks, and natural disasters. HCI plays a significant role in response and recovery across all other sectors in the event of a natural or manmade disaster. However, for its continuity of operations and service delivery HCI is dependent on other interdependent Critical Infrastructures (CI) such as Communications, Electric Supply, Emergency Services, Transportation Systems, and Water Supply System. During a mass casualty due to disasters such as floods, a major challenge that arises for the HCI is to respond to the crisis in a timely manner in an uncertain and variable environment. To address this issue the HCI should be disaster prepared, by fully understanding the complexities and interdependencies that exist in a hospital, emergency department or emergency response event. Modelling and simulation of a disaster scenario with these complexities would help in training and providing an opportunity for all the stakeholders to work together in a coordinated response to a disaster. The paper would present interdependencies related to HCI based on Stochastic Coloured Petri Nets (SCPN) modelling and simulation approach, given a flood scenario as the disaster which would disrupt the infrastructure nodes. The entire model would be integrated with Geographic information based decision support system to visualize the dynamic behaviour of the interdependency of the Healthcare and related CI network in a geographically based environment.
Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks
2012-01-01
Background Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN22n) or O(nN2n) for a sparse matrix. Results This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL2n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for obtaining an evaluation accuracy approximately increases in a polynomial order with the number of genes, n, and the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, but not directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a network inferred from a T
A benders decomposition approach to multiarea stochastic distributed utility planning
NASA Astrophysics Data System (ADS)
McCusker, Susan Ann
Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have lead utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by a LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude use of LP if transmission capacity is available in a looped transmission system. This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three
Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo
2010-11-01
Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability, for the QSPRs, by analyzing regressions of novel indices for selected physicochemical properties of octane isomers (first round). General performance of the new descriptors in this QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and to the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertices-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) exposes a good behavior of our method in this QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity
Multi-objective reliability-based optimization with stochastic metamodels.
Coelho, Rajan Filomeno; Bouillard, Philippe
2011-01-01
This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is carried out by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
Bieda, Bogusław
2013-01-01
The paper is concerned with application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS written in MATLAB language is foundation to simulate and estimate the thickness of landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by the MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), linear thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that was used include available published figures as well as data concerning the Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and show that the presented approach is applicable for any MSW landfill compacted clay liner thickness design.
Revisiting the Cape Cod Bacteria Injection Experiment Using a Stochastic Modeling Approach
Maxwell, R M; Welty, C; Harvey, R W
2006-11-22
Bromide and resting-cell bacteria tracer tests carried out in a sand and gravel aquifer at the USGS Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach and Lagrangian particle tracking numerical methods. Bacteria transport was strongly coupled to colloid filtration through functional dependence of local-scale colloid transport parameters on hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Information on geostatistical characterization of the hydraulic conductivity (K) field from a nearby plot was utilized as input that was unavailable when the original analysis was carried out. A finite difference model for groundwater flow and a particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data using the aforementioned geostatistical parameters. An optimization routine was utilized to adjust the mean and variance of the lnK field over 100 realizations such that a best fit of a simulated, average bromide breakthrough curve is achieved. Once the optimal bromide fit was accomplished (based on adjusting the lnK statistical parameters in unconditional simulations), a stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of the mean bacteria breakthrough data were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech [1] equation for estimating single collector efficiency were compared to those using the Rajagopalan and Tien [2] model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions, with the Rajagopalan and Tien model yielding approximately a 30% lower peak concentration and less tailing than the Tufenkji and Elimelech formulation. Simulations using a distribution
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
NASA Astrophysics Data System (ADS)
Pool, M.; Carrera, J.; Alcolea, A.; Bocanegra, E. M.
2015-12-01
Inversion of the spatial variability of transmissivity (T) in groundwater models can be handled using either stochastic or deterministic (i.e., geology-based zonation) approaches. While stochastic methods predominate in scientific literature, they have never been formally compared to deterministic approaches, preferred by practitioners, for regional aquifer models. We use both approaches to model groundwater flow and solute transport in the Mar del Plata aquifer, where seawater intrusion is a major threat to freshwater resources. The relative performance of the two approaches is evaluated in terms of (i) model fits to head and concentration data (available for nearly a century), (ii) geological plausibility of the estimated T fields, and (iii) their ability to predict transport. We also address the impact of conditioning the estimated fields on T data coming from either pumping tests interpreted with the Theis method or specific capacity values from step-drawdown tests. We find that stochastic models, based upon conditional estimation and simulation techniques, identify some of the geological features (river deposit channels and low transmissivity regions associated to quartzite outcrops) and yield better fits to calibration data than the much simpler geology-based deterministic model, which cannot properly address model structure uncertainty. However, the latter demonstrates much greater robustness for predicting sea water intrusion and for incorporating concentrations as calibration data. We attribute the poor performance, and underestimated uncertainty, of the stochastic simulations to estimation bias introduced by model errors. Qualitative geological information is extremely rich in identifying large-scale variability patterns, which are identified by stochastic models only in data rich areas, and should be explicitly included in the calibration process.
Inversion of Robin coefficient by a spectral stochastic finite element approach
Jin Bangti Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
NASA Astrophysics Data System (ADS)
Bansal, Manik; Singh, I. V.; Mishra, B. K.; Sharma, Kamal; Khan, I. A.
2017-04-01
A stochastic XFEM model based on microstructural observations has been developed to evaluate the tensile strength of NBG-18 nuclear graphite. The nuclear graphite consists of pitch matrix, filler particles, pores and micro-cracks. The numerical simulations are performed at two length scales due to large difference in average size of filler particles and pores. Both deterministic and stochastic approaches have been implemented. The study intends to illustrate the variation in tensile strength due to heterogeneities modeled stochastically. The properties of pitch matrix and filler particles are assumed to be known at the constituent level. The material models for both pitch and fillers are assumed to be linear elastic. The stochastic size and spatial distribution of the pores and filler particles has been modeled during the micro and macro analysis respectively. The strength of equivalent porous pitch matrix evaluated at micro level has been distributed stochastically in the elemental domain along with filler particles for macro analysis. The effect of micro-cracks has been incorporated indirectly by considering fracture plane in each filler particle. Tensile strength of nuclear graphite is obtained by performing the simulations at macro-level. Statistical parameters evaluated using numerical tensile strength data agree well with experimentally obtained statistical parameters available in the literature.
Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.
2014-08-01
In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered and it has been shown that it is O(1/(n{sup 2}) ). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method by some examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pule functions method.
NASA Astrophysics Data System (ADS)
Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.
1988-10-01
Essential to the solution of ill posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than as probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs' distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallelestimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements which are configured in a Single-Instruction, Multiple Data stream architecture.
An integrated fuzzy-stochastic modeling approach for risk assessment of groundwater contamination.
Li, Jianbing; Huang, Gordon H; Zeng, Guangming; Maqsood, Imran; Huang, Yuefei
2007-01-01
An integrated fuzzy-stochastic risk assessment (IFSRA) approach was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with site conditions, environmental guidelines, and health impact criteria. The contaminant concentrations in groundwater predicted from a numerical model were associated with probabilistic uncertainties due to the randomness in modeling input parameters, while the consequences of contaminant concentrations violating relevant environmental quality guidelines and health evaluation criteria were linked with fuzzy uncertainties. The contaminant of interest in this study was xylene. The environmental quality guideline was divided into three different strictness categories: "loose", "medium" and "strict". The environmental-guideline-based risk (ER) and health risk (HR) due to xylene ingestion were systematically examined to obtain the general risk levels through a fuzzy rule base. The ER and HR risk levels were divided into five categories of "low", "low-to-medium", "medium", "medium-to-high" and "high", respectively. The general risk levels included six categories ranging from "low" to "very high". The fuzzy membership functions of the related fuzzy events and the fuzzy rule base were established based on a questionnaire survey. Thus the IFSRA integrated fuzzy logic, expert involvement, and stochastic simulation within a general framework. The robustness of the modeling processes was enhanced through the effective reflection of the two types of uncertainties as compared with the conventional risk assessment approaches. The developed IFSRA was applied to a petroleum-contaminated groundwater system in western Canada. Three scenarios with different environmental quality guidelines were analyzed, and reasonable results were obtained. The risk assessment approach developed in this study offers a unique tool for systematically quantifying various uncertainties in contaminated site management, and it also
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibriated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
Vibrational deactivation of a highly excited diatomic - a stochastic approach
NASA Astrophysics Data System (ADS)
Sceats, Mark G.
1988-10-01
A formula for the average energy transfer from a highly excited Morse oscillator is derived from linear coupling stochastic theory. The results are in reasonable agreement with the simulations of Nesbitt and Hynes for I 2 in He, Ar and Xe, and can be improved over the entire oscillator energy range by including the Kelley-Wolfsberg kinematic factor to account for non-linear coupling at low oscillator energies.
NASA Astrophysics Data System (ADS)
Dean, D. W.; Illangasekare, T. H.; Turner, A.; Russell, T. F.
2004-12-01
Modeling of the complex behavior of DNAPLs in naturally heterogeneous subsurface formations poses many challenges. Even though considerable progress have been made in developing improved numerical schemes to solve the governing partial differential equations, most of these methods still rely on deterministic description of the processes. This research explores the use of stochastic differential equations to model multiphase flow in heterogeneous aquifers, specifically the flow of DNAPLs in saturated soils. The models developed are evaluated using experimental data generated in two-dimensional test systems. A fundamental assumption used in the model formulation is that the movement of a fluid particle in each phase is described by a stochastic process and that the positions of all fluid particles over time are governed by a specific law. It is this law, which we seek to determine. The approach results in a nonlinear stochastic differential equation describing the position of the non-wetting phase fluid particle. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the position of the fluid particles in the problem domain. The concept of a fluid particle is central to the development of the proposed model. Expressions for both saturation and volumetric fraction are developed using this concept of fluid particle. Darcy's law and the continuity equation are used to derive a Fokker-Planck equation governing flow. The Ito calculus is then applied to derive a stochastic differential equation(SDE) for the non-wetting phase. This SDE has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDE's are typically used to model diffusion processes. However, these models, in their usual form
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
A stochastic approach to the hadron spectrum. III
Aron, J.C.
1986-12-01
The connection with the quarks of the stochastic model proposed in the two preceding papers is studied; the slopes of the baryon trajectories are calculated with reference to the quarks. Suggestions are made for the interpretation of the model (quadratic or linear addition of the contributions to the mass, dependence of the decay on the quantum numbers of the hadrons involved, etc.) and concerning its link with the quarkonium model, which describes the mesons with charm or beauty. The controversial question of the ''subquantum level'' is examined.
Modified stochastic variational approach to non-Hermitian quantum systems
NASA Astrophysics Data System (ADS)
Kraft, Daniel; Plessas, Willibald
2016-08-01
The stochastic variational method has proven to be a very efficient and accurate tool to calculate especially bound states of quantum-mechanical few-body systems. It relies on the Rayleigh-Ritz variational principle for minimizing real eigenenergies of Hermitian Hamiltonians. From molecular to atomic, nuclear, and particle physics there is actually a great demand of describing also resonant states to a high degree of reliance. This is especially true with regard to hadron resonances, which have to be treated in a relativistic framework. So far standard methods of dealing with quantum chromodynamics have not yet succeeded in describing hadron resonances in a realistic manner. Resonant states can be handled by non-Hermitian quantum Hamiltonians. These states correspond to poles in the lower half of the unphysical sheet of the complex energy plane and are therefore intimately connected with complex eigenvalues. Consequently the Rayleigh-Ritz variational principle cannot be employed in the usual manner. We have studied alternative selection principles for the choice of test functions to treat resonances along the stochastic variational method. We have found that a stationarity principle for the complex energy eigenvalues provides a viable method for selecting test functions for resonant states in a constructive manner. We discuss several variants thereof and exemplify their practical efficiencies.
Intervention-Based Stochastic Disease Eradication
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira
2013-03-01
Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
A stochastic control approach to Slotted-ALOHA random access protocol
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio
2013-12-01
ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a grade of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
The conventional optimization methods were based on a deterministic approach, since their purpose is to find out an exact solution. However, these methods have initial condition dependence and risk of falling into local solution. In this paper, we propose a new optimization method based on a concept of path integral method used in quantum mechanics. The method obtains a solutions as an expected value (stochastic average) using a stochastic process. The advantages of this method are not to be affected by initial conditions and not to need techniques based on experiences. We applied the new optimization method to a design of the hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has a sufficient performance.
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach since their purpose is to find out an exact solution. However, such methods have initial condition dependence and the risk of falling into local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experiences. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that performance of the method is sufficient for practical use.
Integral-based event triggering controller design for stochastic LTI systems via convex optimisation
NASA Astrophysics Data System (ADS)
Mousavi, S. H.; Marquez, H. J.
2016-07-01
The presence of measurement noise in the event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise and effectively reduces the flow of communication between plant and controller, and also improves output performance. Using a Lyapunov approach, stability in the mean square sense is proved. A simulated example illustrates the properties of our approach.
Intervention-Based Stochastic Disease Eradication
Billings, Lora; Mier-y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira B.
2013-01-01
Disease control is of paramount importance in public health, with infectious disease extinction as the ultimate goal. Although diseases may go extinct due to random loss of effective contacts where the infection is transmitted to new susceptible individuals, the time to extinction in the absence of control may be prohibitively long. Intervention controls are typically defined on a deterministic schedule. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and rate of infection spread. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. PMID:23940548
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
Stochastic queueing-theory approach to human dynamics
NASA Astrophysics Data System (ADS)
Walraevens, Joris; Demoor, Thomas; Maertens, Tom; Bruneel, Herwig
2012-02-01
Recently, numerous studies have shown that human dynamics cannot be described accurately by exponential laws. For instance, Barabási [Nature (London)NATUAS0028-083610.1038/nature03459 435, 207 (2005)] demonstrates that waiting times of tasks to be performed by a human are more suitably modeled by power laws. He presumes that these power laws are caused by a priority selection mechanism among the tasks. Priority models are well-developed in queueing theory (e.g., for telecommunication applications), and this paper demonstrates the (quasi-)immediate applicability of such a stochastic priority model to human dynamics. By calculating generating functions and by studying them in their dominant singularity, we prove that nonexponential tails result naturally. Contrary to popular belief, however, these are not necessarily triggered by the priority selection mechanism.
Two-state approach to stochastic hair bundle dynamics
NASA Astrophysics Data System (ADS)
Clausznitzer, Diana; Lindner, Benjamin; Jülicher, Frank; Martin, Pascal
2008-04-01
Hair cells perform the mechanoelectrical transduction of sound signals in the auditory and vestibular systems of vertebrates. The part of the hair cell essential for this transduction is the so-called hair bundle. In vitro experiments on hair cells from the sacculus of the American bullfrog have shown that the hair bundle comprises active elements capable of producing periodic deflections like a relaxation oscillator. Recently, a continuous nonlinear stochastic model of the hair bundle motion [Nadrowski , Proc. Natl. Acad. Sci. U.S.A. 101, 12195 (2004)] has been shown to reproduce the experimental data in stochastic simulations faithfully. Here, we demonstrate that a binary filtering of the hair bundle's deflection (experimental data and continuous hair bundle model) does not change significantly the spectral statistics of the spontaneous as well as the periodically driven hair bundle motion. We map the continuous hair bundle model to the FitzHugh-Nagumo model of neural excitability and discuss the bifurcations between different regimes of the system in terms of the latter model. Linearizing the nullclines and assuming perfect time-scale separation between the variables we can map the FitzHugh-Nagumo system to a simple two-state model in which each of the states corresponds to the two possible values of the binary-filtered hair bundle trajectory. For the two-state model, analytical expressions for the power spectrum and the susceptibility can be calculated [Lindner and Schimansky-Geier, Phys. Rev. E 61, 6103 (2000)] and show the same features as seen in the experimental data as well as in simulations of the continuous hair bundle model.
Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry
NASA Astrophysics Data System (ADS)
Pei, Yezhe
This work is about simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. Primordial metal-poor interstellar medium (ISM) is of our interest and the socalled “Population-II” stars could have been formed in this environment during the “Epoch of Reionization” in the baby universe. We build a gas phase model using the RE scheme to describe the ionization-powered interstellar chemistry. We demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain surface reactions play an important role in the studies of astrochemistry. But the lack of an accurate yet effective simulation method still presents a challenge, especially for large, practical gas-grain system. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain network to model astrochemical reactions in the interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations to treat the surface chemistry and deterministic rate equations to treat the gas phase chemistry, to simulate astrochemical systems as of the ISM in the Milky Way, the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations and present a discussion about how moment equations improve our theoretical modeling and how the abundances of the assorted species are changed by varied metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observation than the pure RE method.
NASA Astrophysics Data System (ADS)
Loch-Dehbi, S.; Dehbi, Y.; Gröger, G.; Plümer, L.
2016-10-01
This paper introduces a novel method for the automatic derivation of building floorplans and indoor models. Our approach is based on a logical and stochastic reasoning using sparse observations such as building room areas. No further sensor observations like 3D point clouds are needed. Our method benefits from an extensive prior knowledge of functional dependencies and probability density functions of shape and location parameters of rooms depending on their functional use. The determination of posterior beliefs is performed using Bayesian Networks. Stochastic reasoning is complex since the problem is characterized by a mixture of discrete and continuous parameters that are in turn correlated by non-linear constraints. To cope with this kind of complexity, the proposed reasoner combines statistical methods with constraint propagation. It generates a limited number of hypotheses in a model-based top-down approach. It predicts floorplans based on a-priori localised windows. The use of Gaussian mixture models, constraint solvers and stochastic models helps to cope with the a-priori infinite space of the possible floorplan instantiations.
NASA Astrophysics Data System (ADS)
Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.
2017-03-01
In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The real experimental results show significant differences from the plant components, mainly in terms of use of energy, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.
Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.
2009-12-01
We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method of long-period (LP) and tremor using their amplitudes. The deterministic approach of the waveform inversion is useful to constrain the source mechanism and location, but is basically only applicable to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism better explained the VLP signals than an isotropic mechanism. These results indicated that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may be categorized as a stochastic approach based on the nature of scattering waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis
Network capacity with probit-based stochastic user equilibrium problem
Lu, Lili; Wang, Jian; Zheng, Pengjun; Wang, Wei
2017-01-01
Among different stochastic user equilibrium (SUE) traffic assignment models, the Logit-based stochastic user equilibrium (SUE) is extensively investigated by researchers. It is constantly formulated as the low-level problem to describe the drivers’ route choice behavior in bi-level problems such as network design, toll optimization et al. The Probit-based SUE model receives far less attention compared with Logit-based model albeit the assignment result is more consistent with drivers’ behavior. It is well-known that due to the identical and irrelevant alternative (IIA) assumption, the Logit-based SUE model is incapable to deal with route overlapping problem and cannot account for perception variance with respect to trips. This paper aims to explore the network capacity with Probit-based traffic assignment model and investigate the differences of it is with Logit-based SUE traffic assignment models. The network capacity is formulated as a bi-level programming where the up-level program is to maximize the network capacity through optimizing input parameters (O-D multiplies and signal splits) while the low-level program is the Logit-based or Probit-based SUE problem formulated to model the drivers’ route choice. A heuristic algorithm based on sensitivity analysis of SUE problem is detailed presented to solve the proposed bi-level program. Three numerical example networks are used to discuss the differences of network capacity between Logit-based SUE constraint and Probit-based SUE constraint. This study finds that while the network capacity show different results between Probit-based SUE and Logit-based SUE constraints, the variation pattern of network capacity with respect to increased level of travelers’ information for general network under the two type of SUE problems is the same, and with certain level of travelers’ information, both of them can achieve the same maximum network capacity. PMID:28178284
A stochastic process approach of the drake equation parameters
NASA Astrophysics Data System (ADS)
Glade, Nicolas; Ballet, Pascal; Bastien, Olivier
2012-04-01
The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step to quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provides any error estimation about the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that will be able to provide both a temporal structure to the Drake equation (i.e. introduce time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-07
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited for a small number of reactions, saving computation time and without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal) which provides a detailed description of the shape and the size of area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model, resulting from the integration of different variables. Second, as non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identifying the most relevant variables conditioning the presence of wildfire and allowed us generating a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions which represent the susceptibility of the study area to fires. Results obtained applying both the methodologies for wildfire susceptibility mapping, as well as of wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allow us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging.
Sansalone, Vittorio; Gagliardi, Davide; Desceliers, Christophe; Bousson, Valérie; Laredo, Jean-Denis; Peyrin, Françoise; Haïat, Guillaume; Naili, Salah
2016-02-01
Accurate and reliable assessment of bone quality requires predictive methods which could probe bone microstructure and provide information on bone mechanical properties. Multiscale modelling and simulation represent a fast and powerful way to predict bone mechanical properties based on experimental information on bone microstructure as obtained through X-ray-based methods. However, technical limitations of experimental devices used to inspect bone microstructure may produce blurry data, especially in in vivo conditions. Uncertainties affecting the experimental data (input) may question the reliability of the results predicted by the model (output). Since input data are uncertain, deterministic approaches are limited and new modelling paradigms are required. In this paper, a novel stochastic multiscale model is developed to estimate the elastic properties of bone while taking into account uncertainties on bone composition. Effective elastic properties of cortical bone tissue were computed using a multiscale model based on continuum micromechanics. Volume fractions of bone components (collagen, mineral, and water) were considered as random variables whose probabilistic description was built using the maximum entropy principle. The relevance of this approach was proved by analysing a human bone sample taken from the inferior femoral neck. The sample was imaged using synchrotron radiation micro-computed tomography. 3-D distributions of Haversian porosity and tissue mineral density extracted from these images supplied the experimental information needed to build the stochastic models of the volume fractions. Thus, the stochastic multiscale model provided reliable statistical information (such as mean values and confidence intervals) on bone elastic properties at the tissue scale. Moreover, the existence of a simpler "nominal model", accounting for the main features of the stochastic model, was investigated. It was shown that such a model does exist, and its relevance
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production managers must deal with the impact of stochastic events to improve performances and reduce costs. In fact, production and logistics systems are generally designed considering some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, and especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
Zhang, Jinjing; Zhang, Tao
2015-02-15
The parameter-induced stochastic resonance based on spectral entropy (PSRSE) method is introduced for the detection of a very weak signal in the presence of strong noise. The effect of stochastic resonance on the detection is optimized using parameters obtained in spectral entropy analysis. Upon processing employing the PSRSE method, the amplitude of the weak signal is enhanced and the noise power is reduced, so that the frequency of the signal can be estimated with greater precision through spectral analysis. While the improvement in the signal-to-noise ratio is similar to that obtained using the Duffing oscillator algorithm, the computational cost reduces from O(N{sup 2}) to O(N). The PSRSE approach is applied to the frequency measurement of a weak signal made by a vortex flow meter. The results are compared with those obtained applying the Duffing oscillator algorithm.
NASA Astrophysics Data System (ADS)
Chang, Ching-Min; Yeh, Hund-Der
2009-01-01
This paper describes a stochastic analysis of steady state flow in a bounded, partially saturated heterogeneous porous medium subject to distributed infiltration. The presence of boundary conditions leads to non-uniformity in the mean unsaturated flow, which in turn causes non-stationarity in the statistics of velocity fields. Motivated by this, our aim is to investigate the impact of boundary conditions on the behavior of field-scale unsaturated flow. Within the framework of spectral theory based on Fourier-Stieltjes representations for the perturbed quantities, the general expressions for the pressure head variance, variance of log unsaturated hydraulic conductivity and variance of the specific discharge are presented in the wave number domain. Closed-form expressions are developed for the simplified case of statistical isotropy of the log hydraulic conductivity field with a constant soil pore-size distribution parameter. These expressions allow us to investigate the impact of the boundary conditions, namely the vertical infiltration from the soil surface and a prescribed pressure head at a certain depth below the soil surface. It is found that the boundary conditions are critical in predicting uncertainty in bounded unsaturated flow. Our analytical expression for the pressure head variance in a one-dimensional, heterogeneous flow domain, developed using a nonstationary spectral representation approach [Li S-G, McLaughlin D. A nonstationary spectral method for solving stochastic groundwater problems: unconditional analysis. Water Resour Res 1991;27(7):1589-605; Li S-G, McLaughlin D. Using the nonstationary spectral method to analyze flow through heterogeneous trending media. Water Resour Res 1995; 31(3):541-51], is precisely equivalent to the published result of Lu et al. [Lu Z, Zhang D. Analytical solutions to steady state unsaturated flow in layered, randomly heterogeneous soils via Kirchhoff transformation. Adv Water Resour 2004;27:775-84].
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman's agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics.
Linking agent-based models and stochastic models of financial markets.
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene
2012-05-29
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Image-based histologic grade estimation using stochastic geometry analysis
NASA Astrophysics Data System (ADS)
Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.
2011-03-01
Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
An LMI approach to discrete-time observer design with stochastic resilience
NASA Astrophysics Data System (ADS)
Yaz, Edwin Engin; Jeong, Chung Seop; Yaz, Yvonne Ilke
2006-04-01
Much of the recent work on robust control or observer design has focused on preservation of stability of the controlled system or the convergence of the observer in the presence of parameter perturbations in the plant or the measurement model. The present work addresses the important problem of stochastic resilience or non-fragility of a discrete-time Luenberger observer which is the maintenance of convergence and/or performance when the observer is erroneously implemented possibly due to computational errors i.e. round off errors in digital implementation or sensor errors, etc. A common linear matrix inequality framework is presented to address the stochastic resilient design problem for various performance criteria in the implementation based on the knowledge of an upper bound on the variance of the random error in the observer gain. Present results are compared to earlier designs for stochastic robustness. Illustrative examples are given to complement the theoretical results.
TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza
2016-01-01
Background: To determine the hospital required beds using stochastic simulation approach in cardiac surgery departments. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: First, collection data from 649 patients in cardiac surgery departments of two large teaching hospitals (in Tehran, Iran). Second, statistical analysis and formulate a multivariate linier regression model to determine factors that affect patient's length of stay. Third, develop a stochastic simulation system (from admission to discharge) based on key parameters to estimate required bed capacity. Results: Current cardiac surgery department with 33 beds can only admit patients in 90.7% of days. (4535 d) and will be required to over the 33 beds only in 9.3% of days (efficient cut off point). According to simulation method, studied cardiac surgery department will requires 41–52 beds for admission of all patients in the 12 next years. Finally, one-day reduction of length of stay lead to decrease need for two hospital beds annually. Conclusion: Variation of length of stay and its affecting factors can affect required beds. Statistic and stochastic simulation model are applied and useful methods to estimate and manage hospital beds based on key hospital parameters. PMID:27957466
Stochastic structural and reliability based optimization of tuned mass damper
NASA Astrophysics Data System (ADS)
Mrabet, E.; Guedri, M.; Ichchou, M. N.; Ghanmi, S.
2015-08-01
The purpose of the current work is to present and discuss a technique for optimizing the parameters of a vibration absorber in the presence of uncertain bounded structural parameters. The technique used in the optimization is an interval extension based on a Taylor expansion of the objective function. The technique permits the transformation of the problem, initially non-deterministic, into two independents deterministic sub-problems. Two optimization strategies are considered: the Stochastic Structural Optimization (SSO) and the Reliability Based Optimization (RBO). It has been demonstrated through two different structures that the technique is valid for the SSO problem, even for high levels of uncertainties and it is less suitable for the RBO problem, especially when considering high levels of uncertainties.
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. The
Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations
NASA Astrophysics Data System (ADS)
Junaid, Ali Khan; Muhammad, Asif Zahoor Raja; Ijaz Mansoor, Qureshi
2011-02-01
We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with genetic algorithm and local search by pattern search technique. The applicability of this approach ranges from single order NLODEs, to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval unlike the other numerical techniques with comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in the execution speed.
A Stochastic Approach For Extending The Dimensionality Of Observed Datasets
NASA Technical Reports Server (NTRS)
Varnai, Tamas
2002-01-01
This paper addresses the problem that in many cases, observations cannot provide complete fields of the measured quantities, because they yield data only along a single cross-section through the examined fields. The paper describes a new Fourier-adjustment technique that allows existing fractal models to build realistic surroundings to the measured cross-sections. This new approach allows more representative calculations of cloud radiative processes and may be used in other areas as well.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
On the Performance of Stochastic Model-Based Image Segmentation
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Sewchand, Wilfred
1989-11-01
A new stochastic model-based image segmentation technique for X-ray CT image has been developed and has been extended to the more general nondiffraction CT images which include MRI, SPELT, and certain type of ultrasound images [1,2]. The nondiffraction CT image is modeled by a Finite Normal Mixture. The technique utilizes the information theoretic criterion to detect the number of the region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over/under-estimate the number of the region images? What is the probability of errors in the segmentation of this technique? This paper addresses these two problems and is a continuation of [1,2].
Stochastic Approach to Phonon-Assisted Optical Absorption
NASA Astrophysics Data System (ADS)
Zacharias, Marios; Patrick, Christopher E.; Giustino, Feliciano
2015-10-01
We develop a first-principles theory of phonon-assisted optical absorption in semiconductors and insulators which incorporates the temperature dependence of the electronic structure. We show that the Hall-Bardeen-Blatt theory of indirect optical absorption and the Allen-Heine theory of temperature-dependent band structures can be derived from the present formalism by retaining only one-phonon processes. We demonstrate this method by calculating the optical absorption coefficient of silicon using an importance sampling Monte Carlo scheme, and we obtain temperature-dependent line shapes and band gaps in good agreement with experiment. The present approach opens the way to predictive calculations of the optical properties of solids at finite temperature.
Pacini, Simone
2014-01-01
Mesenchymal stromal cells (MSCs) have enormous intrinsic clinical value due to their multi-lineage differentiation capacity, support of hemopoiesis, immunoregulation and growth factors/cytokines secretion. MSCs have thus been the object of extensive research for decades. After completion of many pre-clinical and clinical trials, MSC-based therapy is now facing a challenging phase. Several clinical trials have reported moderate, non-durable benefits, which caused initial enthusiasm to wane, and indicated an urgent need to optimize the efficacy of therapeutic, platform-enhancing MSC-based treatment. Recent investigations suggest the presence of multiple in vivo MSC ancestors in a wide range of tissues, which contribute to the heterogeneity of the starting material for the expansion of MSCs. This variability in the MSC culture-initiating cell population, together with the different types of enrichment/isolation and cultivation protocols applied, are hampering progress in the definition of MSC-based therapies. International regulatory statements require a precise risk/benefit analysis, ensuring the safety and efficacy of treatments. GMP validation allows for quality certification, but the prediction of a clinical outcome after MSC-based therapy is correlated not only to the possible morbidity derived by cell production process, but also to the biology of the MSCs themselves, which is highly sensible to unpredictable fluctuation of isolating and culture conditions. Risk exposure and efficacy of MSC-based therapies should be evaluated by pre-clinical studies, but the batch-to-batch variability of the final medicinal product could significantly limit the predictability of these studies. The future success of MSC-based therapies could lie not only in rational optimization of therapeutic strategies, but also in a stochastic approach during the assessment of benefit and risk factors. PMID:25364757
A wavelet-based computational method for solving stochastic Itô–Volterra integral equations
Mohammadi, Fakhrodin
2015-10-01
This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix are applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method some numerical examples are included.
A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models
Zimmer, Christoph; Cohen, Ted
2017-01-01
Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics. PMID:28095403
A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models.
Zimmer, Christoph; Yaesoubi, Reza; Cohen, Ted
2017-01-01
Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics.
Runoff modelling using radar data and flow measurements in a stochastic state space approach.
Krämer, S; Grum, M; Verworn, H R; Redder, A
2005-01-01
In urban drainage the estimation of runoff with the help of models is a complex task. This is in part due to the fact that rainfall, the most important input to urban drainage modelling, is highly uncertain. Added to the uncertainty of rainfall is the complexity of performing accurate flow measurements. In terms of deterministic modelling techniques these are needed for calibration and evaluation of the applied model. Therefore, the uncertainties of rainfall and flow measurements have a severe impact on the model parameters and results. To overcome these problems a new methodology has been developed which is based on simple rain plane and runoff models that are incorporated into a stochastic state space model approach. The state estimation is done by using the extended Kalman filter in combination with a maximum likelihood criterion and an off-line optimization routine. This paper presents the results of this new methodology with respect to the combined consideration of uncertainties in distributed rainfall derived from radar data and uncertainties in measured flows in an urban catchment within the Emscher river basin, Germany.
Kolevatov, R. S.; Boreskov, K. G.
2013-04-15
We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single difractive cross sections with account of all Pomeron loops are obtained.
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
On a stochastic approach to a code performance estimation
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.
2016-06-01
The main goal of an efficient profiling of software is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during the work, affect the performance of the system itself. One of important aspect of an overhead arises from the randomness of variability in the context in which the application is embedded, e.g., due to possible cache misses, etc. Such uncertainty needs to be taken into account in the design phase. In order to overcome these difficulties we propose to investigate this issue through the analysis of the probability distribution of the difference between profiler's times for the same code. The approximating model is based on the finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler using plotting of 3D surfaces by the function surf. The idea can be used for an estimating of a program efficiency.
Efficient entropy estimation based on doubly stochastic models for quantized wavelet image data.
Gaubatz, Matthew D; Hemami, Sheila S
2007-04-01
Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several of state-of-the-art coders.
NASA Astrophysics Data System (ADS)
Kapranov, Sergey V.; Kouzaev, Guennadi A.
2013-06-01
The motion of a dipole in external electric fields is considered in the framework of nonlinear pendulum dynamics. A stochastic layer is formed near the separatrix of the dipole pendulum in a restoring static electric field under the periodic perturbation by plane-polarized electric fields. The width of the stochastic layer depends on the direction of the forcing field variation, and this width can be evaluated as a function of perturbation frequency, amplitude, and duration. A numerical simulation of the approximate stochastic layer width of a perturbed pendulum yields a multi-peak frequency spectrum. It is described well enough at high perturbation amplitudes by an analytical estimation based on the separatrix map with an introduced expression of the most effective perturbation phase. The difference in the fractal dimensions of the phase spaces calculated geometrically and using the time-delay reconstruction is attributed to the predominant development of periodic and chaotic orbits, respectively. The correlation of the stochastic layer width with the phase space fractal dimensions is discussed.
Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe
2014-01-01
There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based models outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patters of behaviour in the system, which was not obtained by the Gillespie algorithm.
Stochastic switching in slow-fast systems: a large-fluctuation approach.
Heckman, Christoffer R; Schwartz, Ira B
2014-02-01
In this paper we develop a perturbation method to predict the rate of occurrence of rare events for singularly perturbed stochastic systems using a probability density function approach. In contrast to a stochastic normal form approach, we model rare event occurrences due to large fluctuations probabilistically and employ a WKB ansatz to approximate their rate of occurrence. This results in the generation of a two-point boundary value problem that models the interaction of the state variables and the most likely noise force required to induce a rare event. The resulting equations of motion of describing the phenomenon are shown to be singularly perturbed. Vastly different time scales among the variables are leveraged to reduce the dimension and predict the dynamics on the slow manifold in a deterministic setting. The resulting constrained equations of motion may be used to directly compute an exponent that determines the probability of rare events. To verify the theory, a stochastic damped Duffing oscillator with three equilibrium points (two sinks separated by a saddle) is analyzed. The predicted switching time between states is computed using the optimal path that resides in an expanded phase space. We show that the exponential scaling of the switching rate as a function of system parameters agrees well with numerical simulations. Moreover, the dynamics of the original system and the reduced system via center manifolds are shown to agree in an exponentially scaling sense.
Relative frequencies of constrained events in stochastic processes: An analytical approach
NASA Astrophysics Data System (ADS)
Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They relies on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. Analytical representations of a PDF are difficult to specify in advance, in many real life applications. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of order of the sample size (at least ≈104 ). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
Relative frequencies of constrained events in stochastic processes: An analytical approach.
Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They relies on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. Analytical representations of a PDF are difficult to specify in advance, in many real life applications. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of order of the sample size (at least ≈10(4)). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches
NASA Astrophysics Data System (ADS)
Egging, Rudolf Gerardus
This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. 1 The results provide insights in how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however using BD did not result in
Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.
2009-01-01
Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinase, etc. are often the basis for many systems biology models. Low dose radiation effects present a unique set of challenges to these models including the importance of stochastic effects due to the nature of radiation tracks and small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF -Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low dose radiation cancer risks. Modeling challenges include introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation the development of modular pathway approach is of interest.
NASA Astrophysics Data System (ADS)
Maiti, Sumit Kumar; Roy, Sankar Kumar
2016-05-01
In this paper, a Multi-Choice Stochastic Bi-Level Programming Problem (MCSBLPP) is considered where all the parameters of constraints are followed by normal distribution. The cost coefficients of the objective functions are multi-choice types. At first, all the probabilistic constraints are transformed into deterministic constraints using stochastic programming approach. Further, a general transformation technique with the help of binary variables is used to transform the multi-choice type cost coefficients of the objective functions of Decision Makers(DMs). Then the transformed problem is considered as a deterministic multi-choice bi-level programming problem. Finally, a numerical example is presented to illustrate the usefulness of the paper.
Non-perturbative approach for curvature perturbations in stochastic δ N formalism
Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro E-mail: kawasaki@icrr.u-tokyo.ac.jp
2014-10-01
In our previous paper [1], we have proposed a new algorithm to calculate the power spectrum of the curvature perturbations generated in inflationary universe with use of the stochastic approach. Since this algorithm does not need the perturbative expansion with respect to the inflaton fields on super-horizon scale, it works even in highly stochastic cases. For example, when the curvature perturbations are very large or the non-Gaussianities of the curvature perturbations are sizable, the perturbative expansion may break down but our algorithm enables to calculate the curvature perturbations. We apply it to two well-known inflation models, chaotic and hybrid inflation, in this paper. Especially for hybrid inflation, while the potential is very flat around the critical point and the standard perturbative computation is problematic, we successfully calculate the curvature perturbations.
Scott, Bobby, R., Ph.D.
2003-06-27
OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one¡¯s risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach haw been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide supports in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.
Hung, Shao-Ming; Givigi, Sidney N
2017-01-01
In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
NASA Astrophysics Data System (ADS)
Kim, Y.; Katz, R. W.; Rajagopalan, B.; Podesta, G. P.
2009-12-01
Climate forecasts and climate change scenarios are typically provided in the form of monthly or seasonally aggregated totals or means. But time series of daily weather (e.g., precipitation amount, minimum and maximum temperature) are commonly required for use in agricultural decision-making. Stochastic weather generators constitute one technique to temporally downscale such climate information. The recently introduced approach for stochastic weather generators, based generalized linear modeling (GLM), is convenient for this purpose, especially with covariates to account for seasonality and teleconnections (e.g., with the El Niño phenomenon). Yet one important limitation of stochastic weather generators is a marked tendency to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this “overdispersion” phenomenon, we incorporate time series of seasonal total precipitation and seasonal mean minimum and maximum temperature in the GLM weather generator as covariates. These seasonal time series are smoothed using locally weighted scatterplot smoothing (LOESS) to avoid introducing underdispersion. Because the aggregate variables appear explicitly in the weather generator, downscaling to daily sequences can be readily implemented. The proposed method is applied to time series of daily weather at Pergamino and Pilar in the Argentine Pampas. Seasonal precipitation and temperature forecasts produced by the International Research Institute for Climate and Society (IRI) are used as prototypes. In conjunction with the GLM weather generator, a resampling scheme is used to translate the uncertainty in the seasonal forecasts (the IRI format only specifies probabilities for three categories: below normal, near normal, and above normal) into the corresponding uncertainty for the daily weather statistics. The method is able to generate potentially useful shifts in the probability distributions of seasonally aggregated precipitation and
NASA Astrophysics Data System (ADS)
Lai, Zhi-hui; Leng, Yong-gang
2016-12-01
Stochastic resonance (SR) is an important approach to detect weak vibration signals from heavy background noise and further realize mechanical incipient fault diagnosis. The stochastic resonance of a bistable Duffing oscillator is limited by strict small-parameter conditions, i.e., SR can only take place under small values of signal parameters (signal amplitude, frequency, and noise intensity). We propose a method to treat the large-parameter SR for this oscillator. The linear amplitude-transformed, time/frequency scale-transformed, and parameter-adjusted methods are presented and used to produce SR for signals with large-amplitude, large-frequency and/or large-intensity noise. Furthermore, we propose the weak-signal detection approach based on large-parameter SR in the oscillator. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in incipient fault diagnosis.
Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data.
ERIC Educational Resources Information Center
Tirri, Henry; Silander, Tomi
A new information-theoretically justified approach to missing data estimation for multivariate categorical data was studied. The approach is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in this case is the set of multinomial models with…
Stochastic boundary approaches to many-particle systems coupled to a particle reservoir
NASA Astrophysics Data System (ADS)
Taniguchi, Tooru; Sawada, Shin-ichi
2017-01-01
Stochastic boundary conditions for interactions with a particle reservoir are discussed in many-particle systems. We introduce the boundary conditions with the injection rate and the momentum distribution of particles coming from a particle reservoir in terms of the pressure and the temperature of the reservoir. It is shown that equilibrium ideal gases and hard-disk systems with these boundary conditions reproduce statistical-mechanical properties based on the corresponding grand canonical distributions. We also apply the stochastic boundary conditions to a hard-disk model with a steady particle current escaping from a particle reservoir in an open tube, and discuss its nonequilibrium properties such as a chemical potential dependence of the current and deviations from the local equilibrium hypothesis.
A probabilistic graphical model approach to stochastic multiscale partial differential equations
Wan, Jiang; Zabaras, Nicholas
2013-10-01
We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.
Dynamic response of mechanical systems to impulse process stochastic excitations: Markov approach
NASA Astrophysics Data System (ADS)
Iwankiewicz, R.
2016-05-01
Methods for determination of the response of mechanical dynamic systems to Poisson and non-Poisson impulse process stochastic excitations are presented. Stochastic differential and integro-differential equations of motion are introduced. For systems driven by Poisson impulse process the tools of the theory of non-diffusive Markov processes are used. These are: the generalized Itô’s differential rule which allows to derive the differential equations for response moments and the forward integro-differential Chapman-Kolmogorov equation from which the equation governing the probability density of the response is obtained. The relation of Poisson impulse process problems to the theory of diffusive Markov processes is given. For systems driven by a class of non-Poisson (Erlang renewal) impulse processes an exact conversion of the original non-Markov problem into a Markov one is based on the appended Markov chain corresponding to the introduced auxiliary pure jump stochastic process. The derivation of the set of integro-differential equations for response probability density and also a moment equations technique are based on the forward integro-differential Chapman-Kolmogorov equation. An illustrating numerical example is also included.
Stochastic resonance-enhanced laser-based particle detector.
Dutta, A; Werner, C
2009-01-01
This paper presents a Laser-based particle detector whose response was enhanced by modulating the Laser diode with a white-noise generator. A Laser sheet was generated to cast a shadow of the object on a 200 dots per inch, 512 x 1 pixels linear sensor array. The Laser diode was modulated with a white-noise generator to achieve stochastic resonance. The white-noise generator essentially amplified the wide-bandwidth (several hundred MHz) noise produced by a reverse-biased zener diode operating in junction-breakdown mode. The gain in the amplifier in the white-noise generator was set such that the Receiver Operating Characteristics plot provided the best discriminability. A monofiber 40 AWG (approximately 80 microm) wire was detected with approximately 88% True Positive rate and approximately 19% False Positive rate in presence of white-noise modulation and with approximately 71% True Positive rate and approximately 15% False Positive rate in absence of white-noise modulation.
Atzori, A S; Tedeschi, L O; Cannas, A
2013-05-01
The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21
Karagiannis, Georgios Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas
2014-06-01
This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog function, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that were not analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as a patients' case-mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact throughout both cost and technical efficiency: participation in disease management programs led to an increase in both, technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method.more » Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.« less
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Paschalidis, Ioannis Ch; Shen, Yang; Vakili, Pirooz; Vajda, Sandor
2007-04-01
This paper introduces a new stochastic global optimization method targeting protein-protein docking problems, an important class of problems in computational structural biology. The method is based on finding general convex quadratic underestimators to the binding energy function that is funnel-like. Finding the optimum underestimator requires solving a semidefinite programming problem, hence the name semidefinite programming-based underestimation (SDU). The underestimator is used to bias sampling in the search region. It is established that under appropriate conditions SDU locates the global energy minimum with probability approaching one as the sample size grows. A detailed comparison of SDU with a related method of convex global underestimator (CGU), and computational results for protein-protein docking problems are provided.
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of nonlinear model are estimated based on its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, estimation of stability indicators of mappings specified by the model, and estimation the degree of coincidence for results of internal and external shocks' effects on macroeconomic indicators on the basis of the estimated nonlinear model and its log-linearization. On the base of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates)
A copula-based stochastic generator for coupled precipitation and evaporation time series
NASA Astrophysics Data System (ADS)
Verhoest, Niko; Vernieuwe, Hilde; Pham, Minh Tu; Willems, Patrick; De Baets, Bernard
2015-04-01
In hydrologic design, one can make use of stochastic rainfall time series as input to hydrological models in order to assess extreme statistics on e.g. discharge. However, precipitation is not the only important forcing variable, also evaporation is, requiring the need for evaporation time series together with precipitation time series as input to these rainfall-runoff models. Given the fact that precipitation and evaporation are correlated, one should thus provide an evaporation time series that is not in conflict with the stochastic rainfall time series. In this presentation, a framework is developed that allows for generating coupled precipitation and evaporation time series based on vine copulas. This framework requires (1) the stochastic modelling of a precipitation time series, for which a Bartlett-Lewis model is used, (2) the stochastic modelling of a daily temperature model, for which a vine copula is built based on dependencies between daily temperature, the daily total precipitation (obtained from the Bartlett-Lewis modelled time series) and the temperature of previous day, and (3) a stochastic evaporation model, based on a vine copula that makes use of precipitation statistics (from the Bartlett-Lewis modelled time series) and daily temperature (based on the stochastic temperature model). The models are calibrated based on 10-minute precipitation, daily temperature and daily evaporation records from a 72-year period are available at Uccle (Belgium). Based on ensemble statistics, the models are evaluated and uncertainty assessments are made.
Kryvohuz, Maksym Mukamel, Shaul
2015-06-07
Generalized nonlinear response theory is presented for stochastic dynamical systems. Experiments in which multiple measurements of dynamical quantities are used along with multiple perturbations of parameters of dynamical systems are described by generalized response functions (GRFs). These constitute a new type of multidimensional measures of stochastic dynamics either in the time or the frequency domains. Closed expressions for GRFs in stochastic dynamical systems are derived and compared with numerical non-equilibrium simulations. Several types of perturbations are considered: impulsive and periodic perturbations of temperature and impulsive perturbations of coordinates. The present approach can be used to study various types of stochastic processes ranging from single-molecule conformational dynamics to chemical kinetics of finite-size reactors such as biocells.
A stochastic approach for quantifying immigrant integration: the Spanish test case
NASA Astrophysics Data System (ADS)
Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia
2014-10-01
We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
The Influence of Ecohydrologic Dynamics on Landscape Evolution: a Stochastic Approach
NASA Astrophysics Data System (ADS)
Deal, E.; Favre Pugin, A. C.; Botter, G.; Braun, J.
2015-12-01
The stream power incision model (SPIM) has a long history of use in modeling landscape evolution. Despite simplifications made in its formulation, it has emerged over the last 30 years as a powerful tool to interpret the histories of tectonically active landscapes and to understand how they evolve over millions of years. However, intense interest in the relationship between climate and erosion has revealed that the standard SPIM has some significant shortcomings. First, it fails to account for the role of erosion thresholds, which have been shown to be important and require an approach that addresses the variable or stochastic nature of erosion processes and drivers. Second, the standard SPIM does not address the influence of catchment hydrology, which modulates the incoming precipitation to produce discharge that in turn drives fluvial erosion. Hydrological processes alter in particular the frequency and magnitude of extreme events which are highly relevant for landscape erosion. To address these weaknesses we introduce a new analytical stochastic-threshold formulation of the stream power incision model that is driven by probabilistic hydrology. The hydrological model incorporates a stochastic description of soil moisture which takes into account the random nature of the rainfall forcing and the dynamics of the soil layer. The soil layer dynamics include infiltration and evapotranspiration which are both modelled as being dependent on the time varying soil moisture level (state dependent). The stochastic approach allows us to integrate these effects over long periods of time to understand their influence on the longterm average erosion rate without the need to explicitly model processes on the short timescales where they are relevant. Our model can therefore represent the role of soil properties (thickness, porosity) and vegetation (through evapotranspiration rates) in the longterm catchment-wide water balance, and in turn the longterm erosion rate. We identify
Stochastic Extended LQR for Optimization-based Motion Planning Under Uncertainty.
Sun, Wen; van den Berg, Jur; Alterovitz, Ron
2016-04-01
We introduce a novel optimization-based motion planner, Stochastic Extended LQR (SELQR), which computes a trajectory and associated linear control policy with the objective of minimizing the expected value of a user-defined cost function. SELQR applies to robotic systems that have stochastic non-linear dynamics with motion uncertainty modeled by Gaussian distributions that can be state- and control-dependent. In each iteration, SELQR uses a combination of forward and backward value iteration to estimate the cost-to-come and the cost-to-go for each state along a trajectory. SELQR then locally optimizes each state along the trajectory at each iteration to minimize the expected total cost, which results in smoothed states that are used for dynamics linearization and cost function quadratization. SELQR progressively improves the approximation of the expected total cost, resulting in higher quality plans. For applications with imperfect sensing, we extend SELQR to plan in the robot's belief space. We show that our iterative approach achieves fast and reliable convergence to high-quality plans in multiple simulated scenarios involving a car-like robot, a quadrotor, and a medical steerable needle performing a liver biopsy procedure.
Consentaneous Agent-Based and Stochastic Model of the Financial Markets
Gontis, Vygintas; Kononovicius, Aleksejus
2014-01-01
We are looking for the agent-based treatment of the financial markets considering necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding as one of the behavior peculiarities considered in the behavioral finance is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: exogenous one, related to the information flow, and endogenous one, arising form the complex stochastic dynamics of agents. As a result we propose a three state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes underlying macroscopic dynamics of agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364
Consentaneous agent-based and stochastic model of the financial markets.
Gontis, Vygintas; Kononovicius, Aleksejus
2014-01-01
We are looking for the agent-based treatment of the financial markets considering necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding as one of the behavior peculiarities considered in the behavioral finance is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: exogenous one, related to the information flow, and endogenous one, arising form the complex stochastic dynamics of agents. As a result we propose a three state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes underlying macroscopic dynamics of agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, both in spacial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spacial points via (1) Bayesian model average or (2) medial probability model, and their construction as functions on the spacial domain via spline interpolation. The former accounts the model uncertainty and provides Bayes-optimal predictions; while the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases when represented as a gPC expansion. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D in random space elliptic stochastic partial differential equations.
A deterministic-stochastic approach to compute the Boltzmann collision integral in O(MN) operations
NASA Astrophysics Data System (ADS)
Alekseenko, Alexander; Nguyen, Truong; Wood, Aihua
2016-11-01
We developed and implemented a numerical algorithm for evaluating the Boltzmann collision operator with O(MN) operations, where N is the number of the discrete velocity points and M < N. The approach is formulated using a bilinear convolution form of the Galerkin projection of the collision operator and discontinuous Galerkin (DG) discretizations of the collision operator. Key ingredients of the new approach are singular value decomposition (SVD) compression of the collision kernel and approximations of the solution by a sum of Maxwellian streams using a stochastic likelihood maximization algorithm. The developed method is significantly faster than the full deterministic DG velocity discretization of the collision integral. Accuracy of the method is established on solutions to the problem of spatially homogeneous relaxation.
Doubrawa, Paula; Barthelmie, Rebecca J.; Wang, Hui; Churchfield, Matthew J.
2016-08-04
Understanding the detailed dynamics of wind turbine wakes is critical to predicting the performance and maximizing the efficiency of wind farms. This knowledge requires atmospheric data at a high spatial and temporal resolution, which are not easily obtained from direct measurements. Therefore, research is often based on numerical models, which vary in fidelity and computational cost. The simplest models produce axisymmetric wakes and are only valid beyond the near wake. Higher-fidelity results can be obtained by solving the filtered Navier-Stokes equations at a resolution that is sufficient to resolve the relevant turbulence scales. This work addresses the gap between these two extremes by proposing a stochastic model that produces an unsteady asymmetric wake. The model is developed based on a large-eddy simulation (LES) of an offshore wind farm. Because there are several ways of characterizing wakes, the first part of this work explores different approaches to defining global wake characteristics. From these, a model is developed that captures essential features of a LES-generated wake at a small fraction of the cost. The synthetic wake successfully reproduces the mean characteristics of the original LES wake, including its area and stretching patterns, and statistics of the mean azimuthal radius. The mean and standard deviation of the wake width and height are also reproduced. This preliminary study focuses on reproducing the wake shape, while future work will incorporate velocity deficit and meandering, as well as different stability scenarios.
NASA Astrophysics Data System (ADS)
Nataraju, Madhura; Johnson, Timothy J.; Adams, Douglas E.
2003-07-01
Environmental and operational variability due to changes in the excitation or any other variable can mimic or altogether obscure evidence of structural defects in measured data leading to false positive/negative diagnoses of damage and conservative/tolerant predictions of remaining useful life in structural health monitoring system. Diagnostic and prognostic errors like these in many types of commercial and defense-related applications must be eliminated if health monitoring is to be widely implemented in these applications. A theoretical framework of "dynamic similiarity" in which two sets of mathematical operators are utilized in one system/data model to distinguish damage from nonlinear, time-varying and stochastic events in the measured data is discussed in this paper. Because structural damage initiation, evolution and accumulation are nonlinear processes, the challenge here is to distinguish damage from nonlinear, time-varying and stochastic events in the measured data is discussed in this paper. Because structural damage initiation, evolution and accumulation are nonlinear processes, the challenge here is to distinguish abnormal from normal nonlinear dynamics, which are accentuated by physically or statistically non-stationary events in the operating environment. After discussing several examples of structural diagnosis and prognosis involving dynamic similarity, a simplifeid numerical finite element model of a helicopter blade with time-varying flexural stiffness on a nonlinear aerodynamic elastic foundation that is subjected to a stochastic base excitation is utilized to introduce and examine the effects of dynamic similarity on health monitoring systems. It is shown that environmental variability can be distinguished from structural damage using a physics-based model in conjunction with the dynamic similarity operators to develop more robust damage detection algorithms, which may prove to be more accurate and precise when operating conditions fluctuate.
Fixation of Cs to marine sediments estimated by a stochastic modelling approach.
Børretzen, Peer; Salbu, Brit
2002-01-01
irreversible sediment phase. while about 12.5 years are needed before 99.7% of the Cs ions are fixed. Thus, according to the model estimates the contact time between 137Cs ions leached from dumped waste and the Stepovogo Fjord sediment should be about 3 years before the sediment will act as an efficient permanent sink. Until then a significant fraction of 137Cs should be considered mobile. The stochastic modelling approach provides useful tools when assessing sediment-seawater interactions over time, and should be easily applicable to all sediment-seawater systems including a sink term.
NASA Astrophysics Data System (ADS)
Innocenti, Alessio; Marchioli, Cristian; Chibbaro, Sergio
2016-11-01
The Eulerian-Lagrangian approach based on Large-Eddy Simulation (LES) is one of the most promising and viable numerical tools to study particle-laden turbulent flows, when the computational cost of Direct Numerical Simulation (DNS) becomes too expensive. The applicability of this approach is however limited if the effects of the Sub-Grid Scales (SGSs) of the flow on particle dynamics are neglected. In this paper, we propose to take these effects into account by means of a Lagrangian stochastic SGS model for the equations of particle motion. The model extends to particle-laden flows the velocity-filtered density function method originally developed for reactive flows. The underlying filtered density function is simulated through a Lagrangian Monte Carlo procedure that solves a set of Stochastic Differential Equations (SDEs) along individual particle trajectories. The resulting model is tested for the reference case of turbulent channel flow, using a hybrid algorithm in which the fluid velocity field is provided by LES and then used to advance the SDEs in time. The model consistency is assessed in the limit of particles with zero inertia, when "duplicate fields" are available from both the Eulerian LES and the Lagrangian tracking. Tests with inertial particles were performed to examine the capability of the model to capture the particle preferential concentration and near-wall segregation. Upon comparison with DNS-based statistics, our results show improved accuracy and considerably reduced errors with respect to the case in which no SGS model is used in the equations of particle motion.
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time- varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352
Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
NASA Astrophysics Data System (ADS)
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
SummarySeveral parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications on the health and economy of the region. Domestic rainwater harvesting (DRWH) is proposed as a potential mechanism for water supply enhancement, especially for the poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option for unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
NASA Astrophysics Data System (ADS)
Sari, Mehmet; Ghasemi, Ebrahim; Ataei, Mohammad
2014-03-01
Backbreak is an undesirable side effect of bench blasting operations in open pit mines. A large number of parameters affect backbreak, including controllable parameters (such as blast design parameters and explosive characteristics) and uncontrollable parameters (such as rock and discontinuities properties). The complexity of the backbreak phenomenon and the uncertainty in terms of the impact of various parameters makes its prediction very difficult. The aim of this paper is to determine the suitability of the stochastic modeling approach for the prediction of backbreak and to assess the influence of controllable parameters on the phenomenon. To achieve this, a database containing actual measured backbreak occurrences and the major effective controllable parameters on backbreak (i.e., burden, spacing, stemming length, powder factor, and geometric stiffness ratio) was created from 175 blasting events in the Sungun copper mine, Iran. From this database, first, a new site-specific empirical equation for predicting backbreak was developed using multiple regression analysis. Then, the backbreak phenomenon was simulated by the Monte Carlo (MC) method. The results reveal that stochastic modeling is a good means of modeling and evaluating the effects of the variability of blasting parameters on backbreak. Thus, the developed model is suitable for practical use in the Sungun copper mine. Finally, a sensitivity analysis showed that stemming length is the most important parameter in controlling backbreak.
Li, Xiao Ji, Guanghua Zhang, Hui
2015-02-15
We use the stochastic Cahn–Hilliard equation to simulate the phase transitions of the macromolecular microsphere composite (MMC) hydrogels under a random disturbance. Based on the Flory–Huggins lattice model and the Boltzmann entropy theorem, we develop a reticular free energy suit for the network structure of MMC hydrogels. Taking the random factor into account, with the time-dependent Ginzburg-Landau (TDGL) mesoscopic simulation method, we set up a stochastic Cahn–Hilliard equation, designated herein as the MMC-TDGL equation. The stochastic term in the equation is constructed appropriately to satisfy the fluctuation-dissipation theorem and is discretized on a spatial grid for the simulation. A semi-implicit difference scheme is adopted to numerically solve the MMC-TDGL equation. Some numerical experiments are performed with different parameters. The results are consistent with the physical phenomenon, which verifies the good simulation of the stochastic term.
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, a seasonal and non-seasonal prediction of boron concentrations time series data for the period of 1996-2004 from Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurements locations do not show significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of boron data series, different ARIMA models are identified. The model gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscadastic. For the model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models show that the boron model from ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic
Ground Movement Analysis Based on Stochastic Medium Theory
Fei, Meng; Li-chun, Wu; Jia-sheng, Zhang; Guo-dong, Deng; Zhi-hui, Ni
2014-01-01
In order to calculate the ground movement induced by displacement piles driven into horizontal layered strata, an axisymmetric model was built and then the vertical and horizontal ground movement functions were deduced using stochastic medium theory. Results show that the vertical ground movement obeys normal distribution function, while the horizontal ground movement is an exponential function. Utilizing field measured data, parameters of these functions can be obtained by back analysis, and an example was employed to verify this model. Result shows that stochastic medium theory is suitable for calculating the ground movement in pile driving, and there is no need to consider the constitutive model of soil or contact between pile and soil. This method is applicable in practice. PMID:24701184
A nonparametric stochastic optimizer for TDMA-based neuronal signaling.
Suzuki, Junichi; Phan, Dũng H; Budiman, Harry
2014-09-01
This paper considers neurons as a physical communication medium for intrabody networks of nano/micro-scale machines and formulates a noisy multiobjective optimization problem for a Time Division Multiple Access (TDMA) communication protocol atop the physical layer. The problem is to find the Pareto-optimal TDMA configurations that maximize communication performance (e.g., latency) by multiplexing a given neuronal network to parallelize signal transmissions while maximizing communication robustness (i.e., unlikeliness of signal interference) against noise in neuronal signaling. Using a nonparametric significance test, the proposed stochastic optimizer is designed to statistically determine the superior-inferior relationship between given two solution candidates and seek the optimal trade-offs among communication performance and robustness objectives. Simulation results show that the proposed optimizer efficiently obtains quality TDMA configurations in noisy environments and outperforms existing noise-aware stochastic optimizers.
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.
2009-12-01
Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools tends to decrease particularly as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very sparse limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherited within the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying uncertainty on the reduced probabilistic distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on the flow and transport using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and
Mustafa, G.
1989-01-01
This study presents a comprehensive physically based stochastic dynamic optimization model to assist planners in making decisions concerning mine soil depths and soil mixture ratios required to achieve successful revegetation of mine lands at different probability levels of success, subject to an uncertain weather regime. A perennial grass growth model was modified and validated for predicting vegetation growth in reclaimed mine soils. The plant growth model is based on continuous relationships between plant growth, air temperature, dry length, leaf area, photoperiod and plant-soil-moisture stresses. A plant available soil moisture model was adopted to estimate daily soil moisture for mine soils. A general probability model was developed to estimate the probability of successful revegetation in a 5-year bond release period. The probability model considers five possible bond release criteria in mine soil reclamation planning. A stochastic dynamic optimization model (SDOM) was developed to find the optimum combination of soil depth and soil mixture ratios that met the successful vegetation standard under non-irrigated conditions with weather as the only random element of the system. The SDOM was applied for Wise County, Virginia, and the model found that 2:1 sandstone/siltstone soil mixture required the minimum soil depth to achieve successful revegetation. These results were also supported by field data. The developed model allows the planners to better manage lands drastically disturbed by surface mining.
The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.
Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.
2013-09-01
The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating betweenbehind the border' trade costs by China and theimplicit beyond the border costs' of China's trading partners. Results indicate that the significance level of the independent variables change dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due tobehind the border' trade costs almost tripled over the time period. Finally, looking atimplicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew
2016-08-07
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of
NASA Astrophysics Data System (ADS)
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; Li, Bo; McCammon, J. Andrew
2016-08-01
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of
A stochastic context free grammar based framework for analysis of protein sequences
Dyrka, Witold; Nebel, Jean-Christophe
2009-01-01
Background In the last decade, there have been many applications of formal language theory in bioinformatics such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of relationship between amino acids have mainly limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. However, these grammars, like other state of the art methods, cannot cover any higher-order dependencies such as nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences where grammars are induced using a genetic algorithm. Results This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight in their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. Results show the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free Grammar based framework has been
Hahl, Sayuri K.; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
Hahl, Sayuri K; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
Strategy based on information entropy for optimizing stochastic functions.
Schmidt, Tobias Christian; Ries, Harald; Spirkl, Wolfgang
2007-02-01
We propose a method for the global optimization of stochastic functions. During the course of the optimization, a probability distribution is built up for the location and the value of the global optimum. The concept of information entropy is used to make the optimization as efficient as possible. The entropy measures the information content of a probability distribution, and thus gives a criterion for decisions: From several possibilities we choose the one which yields the most information concerning location and value of the global maximum sought.
Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach
2012-09-30
COVERED - 4 . TITLE AND SUBTITLE Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach 5a. CONTRACT...mwheeler/maproom/RMM/ 4 • As the Madden-Julian oscillation (MJO) moves eastward from the Indian to the Pacific ocean, it typically accelerates, becomes
Stochastic volatility of the futures prices of emission allowances: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2017-01-01
Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
NASA Astrophysics Data System (ADS)
Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
Hu, Yan; Wen, Jing-Ya; Li, Xiao-Li; Wang, Da-Zhou; Li, Yu
2013-10-15
A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. Then, the probability of violating environmental guideline (Pv) can be determined by comparison between the modeled concentrations and the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, simulated from 1985 to 2020. Four scenarios were analyzed, including "residential land" and "industrial land" environmental guidelines under "strict" and "loose" strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission in soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in the dynamic multimedia integrated environmental risk assessment and the contaminated site management.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio
2014-10-01
This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed.
Stochastic approach to correlations beyond the mean field with the Skyrme interaction
Fukuoka, Y.; Nakatsukasa, T.; Funaki, Y.; Yabana, K.
2012-10-20
Large-scale calculation based on the multi-configuration Skyrme density functional theory is performed for the light N=Z even-even nucleus, {sup 12}C. Stochastic procedures and the imaginary-time evolution are utilized to prepare many Slater determinants. Each state is projected on eigenstates of parity and angular momentum. Then, performing the configuration mixing calculation with the Skyrme Hamiltonian, we obtain low-lying energy-eigenstates and their explicit wave functions. The generated wave functions are completely free from any assumption and symmetry restriction. Excitation spectra and transition probabilities are well reproduced, not only for the ground-state band, but for negative-parity excited states and the Hoyle state.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
Spatial distribution of important geotechnical parameter named compression modulus Es contributes considerably to the understanding of the underlying geological processes and the adequate assessment of the Es mechanics effects for differential settlement of large continuous structure foundation. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration test (CPT) with borehole experiments. To achieve such a task, the Es distribution of stratum of silty clay in region A of China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently multi-precision of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. Spatial prior multivariate probability density function (PDF) and likelihood PDF of the CPT positions were built by borehole experiments and the potential value of the prediction point, then, preceding numerical integration on the CPT probability density curves, the posterior probability density curve of the prediction point would be calculated by the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings of normal distribution and simulated probability density curve based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into interpolation process, whereas more informative estimations are generated by considering CPT Uncertainty for the estimation points. Calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. This characterization results will provide a multi
Stochastic margin-based structure learning of Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael
2013-02-01
The margin criterion for parameter learning in graphical models gained significant impact over the last years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires less score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve similar classification performance as support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Stochastic margin-based structure learning of Bayesian network classifiers
Pernkopf, Franz; Wohlmayr, Michael
2013-01-01
The margin criterion for parameter learning in graphical models gained significant impact over the last years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires less score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve similar classification performance as support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first. PMID:24511159
Cherepanov, D A; Krishtalik, L I; Mulkidjanian, A Y
2001-01-01
Relaxation processes in proteins range in time from picoseconds to seconds. Correspondingly, biological electron transfer (ET) could be controlled by slow protein relaxation. We used the Langevin stochastic approach to describe this type of ET dynamics. Two different types of kinetic behavior were revealed, namely: oscillating ET (that could occur at picoseconds) and monotonically relaxing ET. On a longer time scale, the ET dynamics can include two different kinetic components. The faster one reflects the initial, nonadiabatic ET, whereas the slower one is governed by the medium relaxation. We derived a simple relation between the relative extents of these components, the change in the free energy (DeltaG), and the energy of the slow reorganization Lambda. The rate of ET was found to be determined by slow relaxation at -DeltaG < or = Lambda. The application of the developed approach to experimental data on ET in the bacterial photosynthetic reaction centers allowed a quantitative description of the oscillating features in the primary charge separation and yielded values of Lambda for the slower low-exothermic ET reactions. In all cases but one, the obtained estimates of Lambda varied in the range of 70-100 meV. Because the vast majority of the biological ET reactions are only slightly exothermic (DeltaG > or = -100 meV), the relaxationally controlled ET is likely to prevail in proteins. PMID:11222272
A new stochastic control approach to multireservoir operation problems with uncertain forecasts
NASA Astrophysics Data System (ADS)
Wang, Jinwen
2010-02-01
This paper presents a new stochastic control approach (NSCA) for determining the optimal weekly operation policy of multiple hydroplants. This originally involves solving an optimization problem at the beginning of each week to derive the optimal storage trajectory that maximizes the energy production during a study horizon plus the water value stored at the end of the study horizon. Then the derived optimal storage at the end of the upcoming week is used as the target to operate the reservoir. This paper describes the inflow as a forecast-dependent white noise and demonstrates that the optimal target storage at the end of the upcoming week can be equivalently determined by solving a real-time model. The real-time model derives the optimal storage trajectory that converges to the optimal annually cycling storage trajectory (OACST) at the end of a real-time horizon, with the OACST determined by solving an annually cycling model. The numerical examples with one, two, three, and seven reservoirs are studied in detail. For systems of no more than three reservoirs, the NSCA obtains results similar to those obtained with SDP even using a simple inflow forecasting model AR (1). A hypothetical numerical example with 21 reservoirs is also tested. The NSCA is conceptually superior to the other approaches for problems that are computationally intractable due to the number of reservoirs in the system.
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
Hybrid approaches for multiple-species stochastic reaction–diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
Ganscha, Stefan; Claassen, Manfred
2016-01-01
Stochastic chemical reaction networks constitute a model class to quantitatively describe dynamics and cell-to-cell variability in biological systems. The topology of these networks typically is only partially characterized due to experimental limitations. Current approaches for refining network topology are based on the explicit enumeration of alternative topologies and are therefore restricted to small problem instances with almost complete knowledge. We propose the reactionet lasso, a computational procedure that derives a stepwise sparse regression approach on the basis of the Chemical Master Equation, enabling large-scale structure learning for reaction networks by implicitly accounting for billions of topology variants. We have assessed the structure learning capabilities of the reactionet lasso on synthetic data for the complete TRAIL induced apoptosis signaling cascade comprising 70 reactions. We find that the reactionet lasso is able to efficiently recover the structure of these reaction systems, ab initio, with high sensitivity and specificity. With only < 1% false discoveries, the reactionet lasso is able to recover 45% of all true reactions ab initio among > 6000 possible reactions and over 102000 network topologies. In conjunction with information rich single cell technologies such as single cell RNA sequencing or mass cytometry, the reactionet lasso will enable large-scale structure learning, particularly in areas with partial network structure knowledge, such as cancer biology, and thereby enable the detection of pathological alterations of reaction networks. We provide software to allow for wide applicability of the reactionet lasso. PMID:27923064
Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.
Li, Xiao-Jian; Yang, Guang-Hong
2017-02-01
This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.
Stochastic investigation of two-dimensional cross sections of rocks based on the climacogram
NASA Astrophysics Data System (ADS)
Kalamioti, Anna; Dimitriadis, Panayiotis; Tzouka, Katerina; Lerias, Eleutherios; Koutsoyiannis, Demetris
2016-04-01
The statistical properties of soil and rock formations are essential for the characterization of the porous medium geological structure as well as for the prediction of its transport properties in groundwater modelling. We investigate two-dimensional cross sections of rocks in terms of stochastic structure of its morphology quantified by the climacogram (i.e., variance of the averaged process vs. scale). The analysis is based both in microscale and macroscale data, specifically from Scanning Electron Microscope (SEM) pictures and from field photos, respectively. We identify and quantify the stochastic properties with emphasis on the large scale type of decay (exponentially or power type, else known as Hurst-Kolmogorov behaviour). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
NASA Astrophysics Data System (ADS)
Menafoglio, A.; Guadagnini, A.; Secchi, P.
2016-08-01
We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curves and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) to develop a simulation method for PSCs based upon a suitable and well defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
Beam Based Measurements for Stochastic Cooling Systems at Fermilab
Lebedev, V.A.; Pasquinelli, R.J.; Werkema, S.J.; /Fermilab
2007-09-13
Improvement of antiproton stacking rates has been pursued for the last twenty years at Fermilab. The last twelve months have been dedicated to improving the computer model of the Stacktail system. The production of antiprotons encompasses the use of the entire accelerator chain with the exception of the Tevatron. In the Antiproton Source two storage rings, the Debuncher and Accumulator are responsible for the accumulation of antiprotons in quantities that can exceed 2 x 10{sup 12}, but more routinely, stacks of 5 x 10{sup 11} antiprotons are accumulated before being transferred to the Recycler ring. Since the beginning of this recent enterprise, peak accumulation rates have increased from 2 x 10{sup 11} to greater than 2.3 x 10{sup 11} antiprotons per hour. A goal of 3 x 10{sup 11} per hour has been established. Improvements to the stochastic cooling systems are but a part of this current effort. This paper will discuss Stacktail system measurements and experienced system limitations.
Stochastic quantum Zeno-based detection of noise correlations
Müller, Matthias M.; Gherardini, Stefano; Caruso, Filippo
2016-01-01
A system under constant observation is practically freezed to the measurement subspace. If the system driving is a random classical field, the survival probability of the system in the subspace becomes a random variable described by the Stochastic Quantum Zeno Dynamics (SQZD) formalism. Here, we study the time and ensemble average of this random survival probability and demonstrate how time correlations in the noisy environment determine whether the two averages do coincide or not. These environment time correlations can potentially generate non-Markovian dynamics of the quantum system depending on the structure and energy scale of the system Hamiltonian. We thus propose a way to detect time correlations of the environment by coupling a quantum probe system to it and observing the survival probability of the quantum probe in a measurement subspace. This will further contribute to the development of new schemes for quantum sensing technologies, where nanodevices may be exploited to image external structures or biological molecules via the surface field they generate. PMID:27941889
Stochastic quantum Zeno-based detection of noise correlations
NASA Astrophysics Data System (ADS)
Müller, Matthias M.; Gherardini, Stefano; Caruso, Filippo
2016-12-01
A system under constant observation is practically freezed to the measurement subspace. If the system driving is a random classical field, the survival probability of the system in the subspace becomes a random variable described by the Stochastic Quantum Zeno Dynamics (SQZD) formalism. Here, we study the time and ensemble average of this random survival probability and demonstrate how time correlations in the noisy environment determine whether the two averages do coincide or not. These environment time correlations can potentially generate non-Markovian dynamics of the quantum system depending on the structure and energy scale of the system Hamiltonian. We thus propose a way to detect time correlations of the environment by coupling a quantum probe system to it and observing the survival probability of the quantum probe in a measurement subspace. This will further contribute to the development of new schemes for quantum sensing technologies, where nanodevices may be exploited to image external structures or biological molecules via the surface field they generate.
NASA Astrophysics Data System (ADS)
Pandey, Ras B.
1998-03-01
A stochastic cellular automata (SCA) approach is introduced to study the growth and decay of cellular population in an immune response model relevant to HIV. Four cell types are considered: macrophages (M), helper cells (H), cytotoxic cells (C), and viral infected cells (V). Mobility of the cells is introduced and viral mutation is considered probabilistically. In absence of mutation, the population of the host cells, helper (N_H) and cytotxic (N_C) cells in particular, dominates over the viral population (N_V), i.e., N_H, NC > N_V, the immune system wins over the viral infection. Variation of cellular population with time exhibits oscillations. The amplitude of oscillations in variation of N_H, NC and NV with time decreases at high mobility even at low viral mutation; the rate of viral growth is nonmonotonic with NV > N_H, NC in the long time regime. The viral population is much higher than that of the host cells at higher mutation rate, a possible cause of AIDS.
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights: > A new fractional programming (SLFP) method is developed for waste management. > SLFP can solve ratio optimization problems associated with random inputs. > A case study of waste flow allocation demonstrates its applicability. > SLFP helps compare objectives of two aspects and reflect system efficiency. > This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
NASA Astrophysics Data System (ADS)
Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.
2015-03-01
In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of two traveling waves provided by the JPC can work as a which-path information eraser. By using a stochastic master approach we demonstrate the probabilistic production of heralded entangled states and that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of driving fields. Additionally, we will derive measurement rates, measurement optimization strategies and discuss the effects of finite amplification gain, cavity losses, and qubit relaxations and dephasing. Work supported by IARPA, ARO and NSF.
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory system and information processing by animals impels one to revise classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in group of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
Random-walk-based stochastic modeling of three-dimensional fiber systems.
Altendorf, Hellen; Jeulin, Dominique
2011-04-01
For the simulation of fiber systems, there exist several stochastic models: systems of straight nonoverlapping fibers, systems of overlapping bending fibers, or fiber systems created by sedimentation. However, there is a lack of models providing dense, nonoverlapping fiber systems with a given random orientation distribution and a controllable level of bending. We introduce a new stochastic model in this paper that generalizes the force-biased packing approach to fibers represented as chains of balls. The starting configuration is modeled using random walks, where two parameters in the multivariate von Mises-Fisher orientation distribution control the bending. The points of the random walk are associated with a radius and the current orientation. The resulting chains of balls are interpreted as fibers. The final fiber configuration is obtained as an equilibrium between repulsion forces avoiding crossing fibers and recover forces ensuring the fiber structure. This approach provides high volume fractions up to 72.0075%.
Comparison of stochastic parametrization approaches in a single-column model.
Ball, Michael A; Plant, Robert S
2008-07-28
We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared with deterministic ensembles describing initial condition uncertainty and also with combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.
High-order distance-based multiview stochastic learning in image classification.
Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng
2014-12-01
How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like support vector machine (supervised) and transductive support vector machine (semi-supervised), are invaluable tools for the applications of content-based image retrieval, pose estimation, and optical character recognition. However, these methods only can handle the images represented by single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. It is inappropriate for the traditionally concatenating schema to link features of different views into a long vector. The reason is each view has its specific statistical property and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with the existing strategies, our approach adopts the high-order distance obtained from the hypergraph to replace pairwise distance in estimating the probability matrix of data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternative optimization is designed to solve the objective functions of HD-MSL and obtain different views on coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.
Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason
2017-03-21
Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a non-linear mixed effects model in which the random effects associated with absorption represent a Wiener process. The present work compares, 1) stochastic deconvolution, and 2) numerical deconvolution, using clinical pharmacokinetic data generated for an IVIVC study of extended release (ER) formulations of a BCS class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally-determined unit impulse response parameters. In a separate analysis a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis.
Stochastic segmentation models for array-based comparative genomic hybridization data analysis.
Lai, Tze Leung; Xing, Haipeng; Zhang, Nancy
2008-04-01
Array-based comparative genomic hybridization (array-CGH) is a high throughput, high resolution technique for studying the genetics of cancer. Analysis of array-CGH data typically involves estimation of the underlying chromosome copy numbers from the log fluorescence ratios and segmenting the chromosome into regions with the same copy number at each location. We propose for the analysis of array-CGH data, a new stochastic segmentation model and an associated estimation procedure that has attractive statistical and computational properties. An important benefit of this Bayesian segmentation model is that it yields explicit formulas for posterior means, which can be used to estimate the signal directly without performing segmentation. Other quantities relating to the posterior distribution that are useful for providing confidence assessments of any given segmentation can also be estimated by using our method. We propose an approximation method whose computation time is linear in sequence length which makes our method practically applicable to the new higher density arrays. Simulation studies and applications to real array-CGH data illustrate the advantages of the proposed approach.
Quantum stochastic approach for molecule/surface scattering. I. Atom-phonon interactions
NASA Astrophysics Data System (ADS)
Bittner, Eric R.; Light, John C.
1993-11-01
We present a general, fully quantum mechanical theory for molecule surface scattering at finite temperature within the time dependent Hartree (TDH) factorization. We show the formal manipulations which reduce the total molecule-surface-bath Schrödinger equation into a form which is computationally convenient to use. Under the TDH factorization, the molecular portion of the wavefunction evolves according to a mean-field Hamiltonian which is dependent upon both time and temperature. The temporal and thermal dependence is due to stochastic and dissipative terms that appear in the Heisenberg equations of motion for the phonon operators upon averaging over the bath states. The resulting equations of motion are solved in one dimension self consistently using quantum wavepackets and the discrete variable representation. We compute energy transfer to the phonons as a function of surface temperature and initial energy and compare our results to results obtained using other mean-field models, namely an averaged mean-field model and a fully quantum model based upon a dissipative form of the quantum Liouville equation. It appears that the model presented here provides a better estimation of energy transfer between the molecule and the surface.
Hybrid approaches for multiple-species stochastic reaction–diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-01-01
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. PMID:26478601
Hybrid approaches for multiple-species stochastic reaction-diffusion models.
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen
2015-10-15
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Hybrid approaches for multiple-species stochastic reaction-diffusion models
NASA Astrophysics Data System (ADS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
Based on the Gegenbauer polynomial expansion theory and regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and random parameters in time domain and the forward model of dynamic load identification is established through the discretized convolution integral of loads and the corresponding unit-pulse response functions of system. Random parameters are approximated through the random variables with λ-probability density function (PDFs) or their derivative PDFs. For this kind of random variables, Gegenbauer polynomial expansion is the unique correct choice to transform the problem of load identification for a stochastic structure into its equivalent deterministic system. Just via its equivalent deterministic system, the load identification problem of a stochastic structure can be solved by any available deterministic methods. With measured responses containing noise, the improved regularization operator is adopted to overcome the ill-posedness of load reconstruction and to obtain the stable and approximate solutions of certain inverse problems and the valid assessments of the statistics of identified loads. Numerical simulations demonstrate that with regard to stochastic structures, the identification and assessment of dynamic loads are achieved steadily and effectively by the presented method.
NASA Astrophysics Data System (ADS)
Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene
2016-04-01
Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile on the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution are accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are approximated as moment magnitudes on the fault plane based on empirical scaling laws with the ensemble producing a magnitude range of 7.8 - 9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth and magnitude dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant
Desynchronization of stochastically synchronized chemical oscillators
Snari, Razan; Tinsley, Mark R. E-mail: kshowalt@wvu.edu; Faramarzi, Sadegh; Showalter, Kenneth E-mail: kshowalt@wvu.edu; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan
2015-12-15
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.
1993-01-01
Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container ``failure``, and the use of ``rules-of-thumb`` to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
NASA Astrophysics Data System (ADS)
Kalpana, M.; Balasubramaniam, P.
2013-07-01
We investigate the stochastic asymptotical synchronization of chaotic Markovian jumping fuzzy cellular neural networks (MJFCNNs) with discrete, unbounded distributed delays, and the Wiener process based on sampled-data control using the linear matrix inequality (LMI) approach. The Lyapunov—Krasovskii functional combined with the input delay approach as well as the free-weighting matrix approach is employed to derive several sufficient criteria in terms of LMIs to ensure that the delayed MJFCNNs with the Wiener process is stochastic asymptotical synchronous. Restrictions (e.g., time derivative is smaller than one) are removed to obtain a proposed sampled-data controller. Finally, a numerical example is provided to demonstrate the reliability of the derived results.
Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.
2009-05-04
After the success of longitudinal stochastic cooling of bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.
NASA Astrophysics Data System (ADS)
Stojković, Milan; Kostić, Srđan; Plavšić, Jasna; Prohaska, Stevan
2017-01-01
The authors present a detailed procedure for modelling of mean monthly flow time-series using records of the Great Morava River (Serbia). The proposed procedure overcomes a major challenge of other available methods by disaggregating the time series in order to capture the main properties of the hydrologic process in both long-run and short-run. The main assumption of the conducted research is that a time series of monthly flow rates represents a stochastic process comprised of deterministic, stochastic and random components, the former of which can be further decomposed into a composite trend and two periodic components (short-term or seasonal periodicity and long-term or multi-annual periodicity). In the present paper, the deterministic component of a monthly flow time-series is assessed by spectral analysis, whereas its stochastic component is modelled using cross-correlation transfer functions, artificial neural networks and polynomial regression. The results suggest that the deterministic component can be expressed solely as a function of time, whereas the stochastic component changes as a nonlinear function of climatic factors (rainfall and temperature). For the calibration period, the results of the analysis infers a lower value of Kling-Gupta Efficiency in the case of transfer functions (0.736), whereas artificial neural networks and polynomial regression suggest a significantly better match between the observed and simulated values, 0.841 and 0.891, respectively. It seems that transfer functions fail to capture high monthly flow rates, whereas the model based on polynomial regression reproduces high monthly flows much better because it is able to successfully capture a highly nonlinear relationship between the inputs and the output. The proposed methodology that uses a combination of artificial neural networks, spectral analysis and polynomial regression for deterministic and stochastic components can be applied to forecast monthly or seasonal flow rates.
Development of the Microstructure Based Stochastic Life Prediction Model
1993-08-01
the formulation of preliminary life prediction models[l]. In the microstruc - 3 tural characterization part of the program we have concentrated on the...microstructural models may be needed to describe behavior during different stages of fatigue life3 and intend to integrate them using Markov chain approach. -- B...precipitate phases present in the studied alloy the obtained diffraction patterns were compared with those found in the literature on 7075 and 7050 alloys. The
Ultra-Fast Data-Mining Hardware Architecture Based on Stochastic Computing
Oliver, Antoni; Alomar, Miquel L.
2015-01-01
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists in the hardware implementation of a parallel architecture implementing a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274
Ultra-fast data-mining hardware architecture based on stochastic computing.
Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L
2015-01-01
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists in the hardware implementation of a parallel architecture implementing a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area.
Sampling-Based RBDO Using Stochastic Sensitivity and Dynamic Kriging for Broader Army Applications
2011-08-09
AND DYNAMIC KRIGING FOR BROADER ARMY APPLICATIONS K.K. Choi, Ikjin Lee, Liang Zhao, and Yoojeong Noh Department of Mechanical and Industrial...Thus, for broader Army applications, a sampling-based RBDO method using surrogate model has been developed recently. The Dynamic Kriging (DKG) method...Uuing Stochastic Sensitivity and Dynamic Kriging for Broader Army Applications 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6
NASA Astrophysics Data System (ADS)
Maerker, Michael; Bolus, Michael
2014-05-01
We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between the site locations and environmental indices. In order to assess the relations between the Neanderthal sites and environmental variables as described above we applied a boosted regression tree approach (TREENET) a statistical mechanics approach (MAXENT) and support vector machines. The stochastic models employ a learning algorithm to identify a model that best fits the relationship between the attribute set (predictor variables (environmental variables) and the classified response variable which is in this case the types of Neanderthal sites. A quantitative evaluation of model performance was done by determining the suitability of the model for the geo-archaeological applications and by helping to identify those aspects of the methodology that need improvements. The models' predictive performances were assessed by constructing the Receiver Operating Characteristics (ROC) curves for each Neanderthal class, both for training and test data. In a ROC curve the Sensitivity is plotted over the False Positive Rate (1-Specificity) for all possible cut-off points. The quality of a ROC curve is quantified by the measure of the parameter area under the ROC curve. The dependent variable or target variable in this study are the locations of Neanderthal sites described by latitude and longitude. The information on the site location was collected from literature and own research. All sites were checked for site accuracy using high resolution maps and google earth. The study illustrates that the models show a distinct ranking in model performance with TREENET outperforming the other approaches. Moreover Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show a specific spatial distribution. However, all models show a wide correspondence in the selection of the most important predictor variables generally showing less
Poisson-Vlasov in a strong magnetic field: A stochastic solution approach
Vilela Mendes, R.
2010-04-15
Stochastic solutions are obtained for the Maxwell-Vlasov equation in the approximation where magnetic field fluctuations are neglected and the electrostatic potential is used to compute the electric field. This is a reasonable approximation for plasmas in a strong external magnetic field. Both Fourier and configuration space solutions are constructed.
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri
2015-01-01
Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces.
NASA Astrophysics Data System (ADS)
Francipane, A.; Fatichi, S.; Ivanov, V. Y.; Noto, L. V.
2015-03-01
Hydrologic and geomorphic responses of watersheds to changes in climate are difficult to assess due to projection uncertainties and nonlinearity of the processes that are involved. Yet such assessments are increasingly needed and call for mechanistic approaches within a probabilistic framework. This study employs an integrated hydrology-geomorphology model, the Triangulated Irregular Network-based Real-time Integrated Basin Simulator (tRIBS)-Erosion, to analyze runoff and erosion sensitivity of seven semiarid headwater basins to projected climate conditions. The Advanced Weather Generator is used to produce two climate ensembles representative of the historic and future climate conditions for the Walnut Gulch Experimental Watershed located in the southwest U.S. The former ensemble incorporates the stochastic variability of the observed climate, while the latter includes the stochastic variability and the uncertainty of multimodel climate change projections. The ensembles are used as forcing for tRIBS-Erosion that simulates runoff and sediment basin responses leading to probabilistic inferences of future changes. The results show that annual precipitation for the area is generally expected to decrease in the future, with lower hourly intensities and similar daily rates. The smaller hourly rainfall generally results in lower mean annual runoff. However, a non-negligible probability of runoff increase in the future is identified, resulting from stochastic combinations of years with low and high runoff. On average, the magnitudes of mean and extreme events of sediment yield are expected to decrease with a very high probability. Importantly, the projected variability of annual sediment transport for the future conditions is comparable to that for the historic conditions, despite the fact that the former account for a much wider range of possible climate "alternatives." This result demonstrates that the historic natural climate variability of sediment yield is already so
The design and testing of a first-order logic-based stochastic modeling language.
Pless, Daniel J.; Rammohan, Roshan; Chakrabarti, Chayan; Luger, George F.
2005-06-01
We have created a logic-based, Turing-complete language for stochastic modeling. Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic. Traditional Bayesian networks have limited expressive power, basically constrained to finite domains as in the propositional calculus. Our language contains variables that can capture general classes of situations, events and relationships. A first-order language is also able to reason about potentially infinite classes and situations using constructs such as hidden Markov models(HMMs). Our language uses an Expectation-Maximization (EM) type learning of parameters. This has a natural fit with the Loopy Belief Propagation used for inference since both can be viewed as iterative message passing algorithms. We present the syntax and theoretical foundations for our Loopy Logic language. We then demonstrate three examples of stochastic modeling and diagnosis that explore the representational power of the language. A mechanical fault detection example displays how Loopy Logic can model time-series processes using an HMM variant. A digital circuit example exhibits the probabilistic modeling capabilities, and finally, a parameter fitting example demonstrates the power for learning unknown stochastic values.
Suboptimal stochastic controller for an n-body spacecraft
NASA Technical Reports Server (NTRS)
Larson, V.
1973-01-01
The problem is studied of determining a stochastic optimal controller for an n-body spacecraft. The approach used in obtaining the stochastic controller involves the application, interpretation, and combination of advanced dynamical principles and the theoretical aspects of modern control theory. The stochastic controller obtained for a complicated model of a spacecraft uses sensor angular measurements associated with the base body to obtain smoothed estimates of the entire state vector, can be easily implemented, and enables system performance to be significantly improved.
[Stewart's acid-base approach].
Funk, Georg-Christian
2007-01-01
In addition to paCO(2), Stewart's acid base model takes into account the influence of albumin, inorganic phosphate, electrolytes and lactate on acid-base equilibrium. It allows a comprehensive and quantitative analysis of acid-base disorders. Particularly simultaneous and mixed metabolic acid-base disorders, which are common in critically ill patients, can be assessed. Stewart's approach is therefore a valuable tool in addition to the customary acid-base approach based on bicarbonate or base excess. However, some chemical aspects of Stewart's approach remain controversial.
Receptance-based structural health monitoring approach for bridge structures
NASA Astrophysics Data System (ADS)
Jang, S. A.; Spencer, B. F., Jr.
2009-03-01
A number of structural health monitoring strategies have been proposed recently that can be implemented in smart sensor networks. Many are based on changes in the experimentally determined flexibility matrix for the structure under consideration. However, the flexibility matrix contains only static information; much richer information is potentially available by considering the dynamic flexibility, or receptance, of the structure. Recently, the stochastic dynamic DLV method was proposed based on the changes in the dynamic flexibility matrix employing centrally collected output-only measurements. This paper extends the stochastic dynamic DLV method so that it can be implemented on a decentralized network of smart sensors. New damage indices are derived that provide robustness estimates of damage location. The smart sensor network is emulated with wired sensors to demonstrate the potential of the proposed method. The efficacy of the proposed approach is demonstrated experimentally using a model truss structure.
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2016-07-20
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia
NASA Astrophysics Data System (ADS)
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2014-12-01
This paper describes and demonstrates a method for relative risk estimation which is based on the stochastic SIR-SI vector-borne infectious disease transmission model specifically for Dengue and Chikungunya diseases in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations) is presented. This is followed by the explanations on the stochastic SIR-SI model which involve the Bayesian description. This stochastic model then is used in the relative risk formulation in order to obtain the posterior relative risk estimation. Then, this relative estimation model is demonstrated using Dengue and Chikungunya data of Malaysia. The viruses of these diseases are transmitted by the same type of female vector mosquito named Aedes Aegypti and Aedes Albopictus. Finally, the findings of the analysis of relative risk estimation for both Dengue and Chikungunya diseases are presented, compared and displayed in graphs and maps. The distribution from risk maps show the high and low risk area of Dengue and Chikungunya diseases occurrence. This map can be used as a tool for the prevention and control strategies for both diseases.
Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2014-12-04
This paper describes and demonstrates a method for relative risk estimation which is based on the stochastic SIR-SI vector-borne infectious disease transmission model specifically for Dengue and Chikungunya diseases in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations) is presented. This is followed by the explanations on the stochastic SIR-SI model which involve the Bayesian description. This stochastic model then is used in the relative risk formulation in order to obtain the posterior relative risk estimation. Then, this relative estimation model is demonstrated using Dengue and Chikungunya data of Malaysia. The viruses of these diseases are transmitted by the same type of female vector mosquito named Aedes Aegypti and Aedes Albopictus. Finally, the findings of the analysis of relative risk estimation for both Dengue and Chikungunya diseases are presented, compared and displayed in graphs and maps. The distribution from risk maps show the high and low risk area of Dengue and Chikungunya diseases occurrence. This map can be used as a tool for the prevention and control strategies for both diseases.
Liver segmentation in MRI: A fully automatic method based on stochastic partitions.
López-Mir, F; Naranjo, V; Angulo, J; Alcañiz, M; Luna, L
2014-04-01
There are few fully automated methods for liver segmentation in magnetic resonance images (MRI) despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, liver segmentation in MRI has been carried out. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marker-controlled algorithm. To improve accuracy of selected contours, the gradient of the original image is successfully enhanced by applying a new variant of stochastic watershed. Moreover, a final classifier is performed in order to obtain the final liver mask. Optimal parameters of the method are tuned using a training dataset and then they are applied to the rest of studies (17 datasets). The obtained results (a Jaccard coefficient of 0.91 ± 0.02) in comparison to other methods demonstrate that the new variant of stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.
Synchronization and stochastic resonance of the small-world neural network based on the CPG.
Lu, Qiang; Tian, Juan
2014-06-01
According to biological knowledge, the central nervous system controls the central pattern generator (CPG) to drive the locomotion. The brain is a complex system consisting of different functions and different interconnections. The topological properties of the brain display features of small-world network. The synchronization and stochastic resonance have important roles in neural information transmission and processing. In order to study the synchronization and stochastic resonance of the brain based on the CPG, we establish the model which shows the relationship between the small-world neural network (SWNN) and the CPG. We analyze the synchronization of the SWNN when the amplitude and frequency of the CPG are changed and the effects on the CPG when the SWNN's parameters are changed. And we also study the stochastic resonance on the SWNN. The main findings include: (1) When the CPG is added into the SWNN, there exists parameters space of the CPG and the SWNN, which can make the synchronization of the SWNN optimum. (2) There exists an optimal noise level at which the resonance factor Q gets its peak value. And the correlation between the pacemaker frequency and the dynamical response of the network is resonantly dependent on the noise intensity. The results could have important implications for biological processes which are about interaction between the neural network and the CPG.
A Path Integral Approach to Option Pricing with Stochastic Volatility: Some Exact Results
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.
1997-12-01
The Black-Scholes formula for pricing options on stocks and other securities has been generalized by Merton and Garman to the case when stock volatility is stochastic. The derivation of the price of a security derivative with stochastic volatility is reviewed starting from the first principles of finance. The equation of Merton and Garman is then recast using the path integration technique of theoretical physics. The price of the stock option is shown to be the analogue of the Schrödinger wavefunction of quantum mechanics and the exact Hamiltonian and Lagrangian of the system is obtained. The results of Hull and White are generalized to the case when stock price and volatility have non-zero correlation. Some exact results for pricing stock options for the general correlated case are derived.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure contain a collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
Ding, Bo; Fang, Huajing
2017-03-31
This paper is concerned with the fault prediction for the nonlinear stochastic system with incipient faults. Based on the particle filter and the reasonable assumption about the incipient faults, the modified fault estimation algorithm is proposed, and the system state is estimated simultaneously. According to the modified fault estimation, an intuitive fault detection strategy is introduced. Once each of the incipient fault is detected, the parameters of which are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by the simulations of the Three-tank system.
NASA Astrophysics Data System (ADS)
Cross, David; Onof, Christian; Bernardara, Pietro
2016-04-01
With the COP21 drawing to a close in December 2015, storms Desmond, Eva and Frank which swept across the UK and Ireland causing widespread flooding and devastation have acted as a timely reminder of the need for reliable estimation of rainfall extremes in a changing climate. The frequency and intensity of rainfall extremes are predicted to increase in the UK under anthropogenic climate change, and it is notable that the UK's 24 hour rainfall record of 316mm set in Seathwaite, Cumbria in 2009 was broken on the 5 December 2015 with 341mm by storm Desmond at Honister Pass also in Cumbria. Immediate analysis of the latter by the Centre for Ecology and Hydrology (UK) on the 8 December 2015 estimated that this is approximately equivalent to a 1300 year return period event (Centre for Ecology & Hydrology, 2015). Rainfall extremes are typically estimated using extreme value analysis and intensity duration frequency curves. This study investigates the potential for using stochastic rainfall simulation with mechanistic rectangular pulse models for estimation of extreme rainfall. These models have been used since the late 1980s to generate synthetic rainfall time-series at point locations for scenario analysis in hydrological studies and climate impact assessment at the catchment scale. Routinely they are calibrated to the full historical hyetograph and used for continuous simulation. However, their extremal performance is variable with a tendency to underestimate short duration (hourly and sub-hourly) rainfall extremes which are often associated with heavy convective rainfall in temporal climates such as the UK. Focussing on hourly and sub-hourly rainfall, a censored modelling approach is proposed in which rainfall below a low threshold is set to zero prior to model calibration. It is hypothesised that synthetic rainfall time-series are poor at estimating extremes because the majority of the training data are not representative of the climatic conditions which give rise to
NASA Astrophysics Data System (ADS)
Llopis-Albert, Carlos; Merigó, José M.; Xu, Yejun
2016-09-01
This paper presents an alternative approach to deal with seawater intrusion problems, that overcomes some of the limitations of previous works, by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named GC method. On the one hand, the SWI2 allows a vertically integrated variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems, and a reduction in number of required model cells and the elimination of the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method allows dealing with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing reliability in meeting the environmental standards. The methodology has been successfully applied to a transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal aquifer system, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also allows partially overcoming the neglected diffusion and dispersion processes after the conditioning process since the uncertainty is reduced and results are closer to available data.
A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty
Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...
2016-11-21
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach tomore » address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.« less
A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty
Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; Newlands, Nathaniel K.
2016-11-21
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
Skull base approaches in neurosurgery
2010-01-01
The skull base surgery is one of the most demanding surgeries. There are different structures that can be injured easily, by operating in the skull base. It is very important for the neurosurgeon to choose the right approach in order to reach the lesion without harming the other intact structures. Due to the pioneering work of Cushing, Hirsch, Yasargil, Krause, Dandy and other dedicated neurosurgeons, it is possible to address the tumor and other lesions in the anterior, the mid-line and the posterior cranial base. With the transsphenoidal, the frontolateral, the pterional and the lateral suboccipital approach nearly every region of the skull base is exposable. In the current state many different skull base approaches are described for various neurosurgical diseases during the last 20 years. The selection of an approach may differ from country to country, e.g., in the United States orbitozygomaticotomy for special lesions of the anterior skull base or petrosectomy for clivus meningiomas, are found more frequently than in Europe. The reason for writing the review was the question: Are there keyhole approaches with which someone can deal with a vast variety of lesions in the neurosurgical field? In my opinion the different surgical approaches mentioned above cover almost 95% of all skull base tumors and lesions. In the following text these approaches will be described. These approaches are: 1) pterional approach 2) frontolateral approach 3) transsphenoidal approach 4) suboccipital lateral approach These approaches can be extended and combined with each other. In the following we want to enhance this philosophy. PMID:20602753
van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes
2014-01-01
The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators. But such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi on the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data gained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure the antibodies to O. ostertagi and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional form. To assess the efficiency scores, milk production was considered as the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection in the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9L/cow per year for Farm Accountancy Data Network farms and 63, 49, and
Definition of scarcity-based water pricing policies through hydro-economic stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2014-05-01
One of the greatest current issues in integrated water resources management is to find and apply efficient and flexible management policies. Efficient management is needed to deal with increased water scarcity and river basin closure. Flexible policies are required to handle the stochastic nature of the water cycle. Scarcity-based pricing policies are one of the most promising alternatives, which deal not only with the supply costs, but also consider the opportunity costs associated with the allocation of water. The opportunity cost of water, which varies dynamically with space and time according to the imbalances between supply and demand, can be assessed using hydro-economic models. This contribution presents a procedure to design a pricing policy based on hydro-economic modelling and on the assessment of the Marginal Resource Opportunity Cost (MROC). Firstly, MROC time series associated to the optimal operation of the system are derived from a stochastic hydro-economic model. Secondly, these MROC time series must be post-processed in order to combine the different space-and-time MROC values into a single generalized indicator of the marginal opportunity cost of water. Finally, step scarcity-based pricing policies are determined after establishing a relationship between the MROC and the corresponding state of the system at the beginning of the time period (month). The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists in two reservoirs in series and four agricultural demand sites currently managed using historical (XIVth century) rights. A hydro-economic model of the system has been built using stochastic dynamic programming. A reoptimization procedure is then implemented using SDP-derived benefit-to-go functions and historical flows to produce the time series of MROC values. MROC values are then aggregated and a statistical analysis is carried out to define (i) pricing policies and (ii) the relationship between MROC and
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen
2015-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MACGMC composite material analysis code. The resulting code is called FEAMACCARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMACCARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMACCARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu
2015-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna
2016-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-03-01
Many real-life systems, such as computer systems, manufacturing systems and logistics systems, are modelled as stochastic-flow networks (SFNs) to evaluate network reliability. Here, network reliability, defined as the probability that the network successfully transmits d units of data/commodity from an origin to a destination, is a performance indicator of the systems. Network reliability maximization is a particular objective, but is costly for many system supervisors. This article solves the multi-objective problem of reliability maximization and cost minimization by finding the optimal component assignment for SFN, in which a set of multi-state components is ready to be assigned to the network. A two-stage approach integrating Non-dominated Sorting Genetic Algorithm II and simple additive weighting are proposed to solve this problem, where network reliability is evaluated in terms of minimal paths and recursive sum of disjoint products. Several practical examples related to computer networks are utilized to demonstrate the proposed approach.
Reboul, Cyril F; Bonnet, Frederic; Elmlund, Dominika; Elmlund, Hans
2016-06-07
A critical step in the analysis of novel cryogenic electron microscopy (cryo-EM) single-particle datasets is the identification of homogeneous subsets of images. Methods for solving this problem are important for data quality assessment, ab initio 3D reconstruction, and analysis of population diversity due to the heterogeneous nature of macromolecules. Here we formulate a stochastic algorithm for identification of homogeneous subsets of images. The purpose of the method is to generate improved 2D class averages that can be used to produce a reliable 3D starting model in a rapid and unbiased fashion. We show that our method overcomes inherent limitations of widely used clustering approaches and proceed to test the approach on six publicly available experimental cryo-EM datasets. We conclude that, in each instance, ab initio 3D reconstructions of quality suitable for initialization of high-resolution refinement are produced from the cluster centers.
Partial derivative approach for option pricing in a simple stochastic volatility model
NASA Astrophysics Data System (ADS)
Montero, M.
2004-11-01
We study a market model in which the volatility of the stock may jump at a random time from a fixed value to another fixed value. This model has already been introduced in the literature. We present a new approach to the problem, based on partial differential equations, which gives a different perspective to the issue. Within our framework we can easily consider several forms for the market price of volatility risk, and interpret their financial meaning. We thus recover solutions previously mentioned in the literature as well as obtaining new ones.
A Stochastic Approach to Diffeomorphic Point Set Registration With Landmark Constraints
Kolesov, Ivan; Lee, Jehoon; Sharp, Gregory; Vela, Patricio; Tannenbaum, Allen
2016-01-01
This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm’s robustness to noise and missing information. PMID:26761731
Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew
2011-01-01
Abstract: The randomness of transient events, and the variability in factors which influence the magnitudes of resultant pressure fluctuations, ensures that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; and then the statistical characteristics and probability distributions of annual maximum waterhammer pressures are simulated using Monte Carlo method and verified by the analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, sustaining period and probability distribution) of hydraulic loads for both steady and transient states are discussed. Illustrating with an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. Particularly, the normative load (usually the unfavorable value at 95-percentage point) for steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.
A multiplier-based method of generating stochastic areal rainfall from point rainfalls
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.
Catchment modelling for water resources assessment is still mainly based on rain gauge measurements as these are more easily available and cover longer periods than radar and satellite-based measurements. Rain gauges however measure the rain falling on an extremely small proportion of the catchment and the areal rainfall obtained from these point measurements are consequently substantially uncertain. These uncertainties in areal rainfall estimation are generally ignored and the need to assess their impact on catchment modelling and water resources assessment is therefore imperative. A method that stochastically generates daily areal rainfall from point rainfall using multiplicative perturbations as a means of dealing with these uncertainties is developed and tested on the Berg catchment in the Western Cape of South Africa. The differences in areal rainfall obtained by alternately omitting some of the rain gauges are used to obtain a population of plausible multiplicative perturbations. Upper bounds on the applicable perturbations are set to prevent the generation of unrealistically large rainfall and to obtain unbiased stochastic rainfall. The perturbations within the set bounds are then fitted into probability density functions to stochastically generate the perturbations to impose on areal rainfall. By using 100 randomly-initialized calibrations of the AWBM catchment model and Sequent Peak Analysis, the effects of incorporating areal rainfall uncertainties on storage-yield-reliability analysis are assessed. Incorporating rainfall uncertainty is found to reduce the required storage by up to 20%. Rainfall uncertainty also increases flow-duration variability considerably and reduces the median flow-duration values by an average of about 20%.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry, radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-Eddy Simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-clod interaction, hurricane forecasts
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
NASA Astrophysics Data System (ADS)
Vlad, Marcel O.; Moran, Federico; Ross, John
2003-02-01
We introduce a stochastic approach for the computation of lifetime distributions of various chemical states in single-molecule kinetics, where the rate coefficients of the process are random functions of time. We consider a given realization for the rate coefficients and derive a partial differential equation for the instantaneous, fluctuating values of lifetime distributions and solve it by using the method of characteristics. The overall lifetime distributions are dynamic averages of the fluctuating lifetime distributions over all possible values of the rate coefficients. We develop methods of evaluating these dynamical averages for various types of stochastic processes, which describe the fluctuations of the rate coefficients. For a single molecule with two chemical states, of which one is fluorescent and the other not, the lifetime distributions are the same as the on and off time distributions, which are experimental observables. In this case we discuss in detail the relationships between intramolecular dynamics and the on and off time distributions. We develop methods for extracting quantitative and qualitative information about intramolecular fluctuations from measured values of on and of time distributions, for determining if the intramolecular fluctuations are long range or short range, and for the evaluation of the statistical properties of the fluctuating rate coefficients.
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
NASA Astrophysics Data System (ADS)
Dong, Bing; Ren, De-Qing; Zhang, Xi
2011-08-01
An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noises in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point source imaging optimization is given. The feasibility and good performance of the SPGD algorithm is demonstrated by an experimental system featured with a 140-actuator deformable mirror and a Hartmann-Shark wavefront sensor. Then the SPGD based AO is applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment and the image contrast shows improvement from 10-3 to 10-4.5 at an angular distance of 2λ/D after being corrected by SPGD based AO.
Detailed numerical investigation of the dissipative stochastic mechanics based neuron model.
Güler, Marifi
2008-10-01
Recently, a physical approach for the description of neuronal dynamics under the influence of ion channel noise was proposed in the realm of dissipative stochastic mechanics (Güler, Phys Rev E 76:041918, 2007). Led by the presence of a multiple number of gates in an ion channel, the approach establishes a viewpoint that ion channels are exposed to two kinds of noise: the intrinsic noise, associated with the stochasticity in the movement of gating particles between the inner and the outer faces of the membrane, and the topological noise, associated with the uncertainty in accessing the permissible topological states of open gates. Renormalizations of the membrane capacitance and of a membrane voltage dependent potential function were found to arise from the mutual interaction of the two noisy systems. The formalism therein was scrutinized using a special membrane with some tailored properties giving the Rose-Hindmarsh dynamics in the deterministic limit. In this paper, the resultant computational neuron model of the above approach is investigated in detail numerically for its dynamics using time-independent input currents. The following are the major findings obtained. The intrinsic noise gives rise to two significant coexisting effects: it initiates spiking activity even in some range of input currents for which the corresponding deterministic model is quiet and causes bursting in some other range of input currents for which the deterministic model fires tonically. The renormalization corrections are found to augment the above behavioral transitions from quiescence to spiking and from tonic firing to bursting, and, therefore, the bursting activity is found to take place in a wider range of input currents for larger values of the correction coefficients. Some findings concerning the diffusive behavior in the voltage space are also reported.
Reduction of stochastic conductance-based neuron models with time-scales separation.
Wainrib, Gilles; Thieullen, Michèle; Pakdaman, Khashayar
2012-04-01
We introduce a method for systematically reducing the dimension of biophysically realistic neuron models with stochastic ion channels exploiting time-scales separation. Based on a combination of singular perturbation methods for kinetic Markov schemes with some recent mathematical developments of the averaging method, the techniques are general and applicable to a large class of models. As an example, we derive and analyze reductions of different stochastic versions of the Hodgkin Huxley (HH) model, leading to distinct reduced models. The bifurcation analysis of one of the reduced models with the number of channels as a parameter provides new insights into some features of noisy discharge patterns, such as the bimodality of interspike intervals distribution. Our analysis of the stochastic HH model shows that, besides being a method to reduce the number of variables of neuronal models, our reduction scheme is a powerful method for gaining understanding on the impact of fluctuations due to finite size effects on the dynamics of slow fast systems. Our analysis of the reduced model reveals that decreasing the number of sodium channels in the HH model leads to a transition in the dynamics reminiscent of the Hopf bifurcation and that this transition accounts for changes in characteristics of the spike train generated by the model. Finally, we also examine the impact of these results on neuronal coding, notably, reliability of discharge times and spike latency, showing that reducing the number of channels can enhance discharge time reliability in response to weak inputs and that this phenomenon can be accounted for through the analysis of the reduced model.
Stochastic approach to the generalized Schrödinger equation: A method of eigenfunction expansion.
Tsuchida, Satoshi; Kuratsuji, Hiroshi
2015-05-01
Using a method of eigenfunction expansion, a stochastic equation is developed for the generalized Schrödinger equation with random fluctuations. The wave field ψ is expanded in terms of eigenfunctions: ψ=∑(n)a(n)(t)ϕ(n)(x), with ϕ(n) being the eigenfunction that satisfies the eigenvalue equation H(0)ϕ(n)=λ(n)ϕ(n), where H(0) is the reference "Hamiltonian" conventionally called the "unperturbed" Hamiltonian. The Langevin equation is derived for the expansion coefficient a(n)(t), and it is converted to the Fokker-Planck (FP) equation for a set {a(n)} under the assumption of Gaussian white noise for the fluctuation. This procedure is carried out by a functional integral, in which the functional Jacobian plays a crucial role in determining the form of the FP equation. The analyses are given for the FP equation by adopting several approximate schemes.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. Discrete Time Markov Chain based simulator is implemented in Matlab. The simulator capable of simulating SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network based epidemic simulator and can be used in the network based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
Solution of stochastic media transport problems using a numerical quadrature-based method
Pautz, S. D.; Franke, B. C.; Prinja, A. K.; Olson, A. J.
2013-07-01
We present a new conceptual framework for analyzing transport problems in random media. We decompose such problems into stratified subproblems according to the number of material pseudo-interfaces within realizations. For a given subproblem we assign pseudo-interface locations in each realization according to product quadrature rules, which allows us to deterministically generate a fixed number of realizations. Quadrature integration of the solutions of these realizations thus approximately solves each subproblem; the weighted superposition of solutions of the subproblems approximately solves the general stochastic media transport problem. We revisit some benchmark problems to determine the accuracy and efficiency of this approach in comparison to randomly generated realizations. We find that this method is very accurate and fast when the number of pseudo-interfaces in a problem is generally low, but that these advantages quickly degrade as the number of pseudo-interfaces increases. (authors)
2016-01-01
In today's world, Public expenditures on health are one of the most important issues for governments. These increased expenditures are putting pressure on public budgets. Therefore, health policy makers have focused on the performance of their health systems and many countries have introduced reforms to improve the performance of their health systems. This study investigates the most important determinants of healthcare efficiency for OECD countries using second stage approach for Bayesian Stochastic Frontier Analysis (BSFA). There are two steps in this study. First we measure 29 OECD countries' healthcare efficiency by BSFA using the data from the OECD Health Database. At second stage, we expose the multiple relationships between the healthcare efficiency and characteristics of healthcare systems across OECD countries using Bayesian beta regression. PMID:27118987
Unifying Vertical and Nonvertical Evolution: A Stochastic ARG-based Framework
Bloomquist, Erik W.; Suchard, Marc A.
2010-01-01
Evolutionary biologists have introduced numerous statistical approaches to explore nonvertical evolution, such as horizontal gene transfer, recombination, and genomic reassortment, through collections of Markov-dependent gene trees. These tree collections allow for inference of nonvertical evolution, but only indirectly, making findings difficult to interpret and models difficult to generalize. An alternative approach to explore nonvertical evolution relies on phylogenetic networks. These networks provide a framework to model nonvertical evolution but leave unanswered questions such as the statistical significance of specific nonvertical events. In this paper, we begin to correct the shortcomings of both approaches by introducing the “stochastic model for reassortment and transfer events” (SMARTIE) drawing upon ancestral recombination graphs (ARGs). ARGs are directed graphs that allow for formal probabilistic inference on vertical speciation events and nonvertical evolutionary events. We apply SMARTIE to phylogenetic data. Because of this, we can typically infer a single most probable ARG, avoiding coarse population dynamic summary statistics. In addition, a focus on phylogenetic data suggests novel probability distributions on ARGs. To make inference with our model, we develop a reversible jump Markov chain Monte Carlo sampler to approximate the posterior distribution of SMARTIE. Using the BEAST phylogenetic software as a foundation, the sampler employs a parallel computing approach that allows for inference on large-scale data sets. To demonstrate SMARTIE, we explore 2 separate phylogenetic applications, one involving pathogenic Leptospirochete and the other Saccharomyces. PMID:20525618
NASA Astrophysics Data System (ADS)
Della Lunga, Giovanni; Pezzato, Michela; Baratto, Maria Camilla; Pogni, Rebecca; Basosi, Riccardo
2003-09-01
In the slow-motion region, ESR spectra cannot be expressed as a sum of simple Lorentzian lines. Studies of Freed and co-workers, on nitroxides in liquids gained information on the microscopic models of rotational dynamics, relying much on computer programs for simulation of ESR spectra based on the stochastic Liouville equation (SLE). However, application of Freed's method to copper system of biological interest has been for a long time precluded by lack of a full program able to simulate ESR spectra containing more than one hyperfine interaction. Direct extension of the Freed's approach in order to include superhyperfine interaction is not difficult from a theoretical point of view but the resulting algorithm is problematical because it leads to substantial increase in the dimensions of the matrix related to the spin-hamiltonian operator. In this paper preliminary results of a new program, written in C, which includes the superhyperfine interactions are presented. This preliminary version of the program does not take into account a restoring potential, so it can be used only in isotropic diffusion conditions. A comparison with an approximate method previously developed in our laboratory, based on a post-convolution approach, is discussed.
Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization
Liu, Guodong; Xu, Yan; Tomsovic, Kevin
2016-01-01
In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization
Liu, Guodong; Xu, Yan; Tomsovic, Kevin
2016-01-01
In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total costmore » of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.« less
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/ Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
NASA Astrophysics Data System (ADS)
Zhong, Kai; Zhu, Song; Yang, Qiqi
2016-11-01
In recent years, the stability problems of memristor-based neural networks have been studied extensively. This paper not only takes the unavoidable noise into consideration but also investigates the global exponential stability of stochastic memristor-based neural networks with time-varying delays. The obtained criteria are essentially new and complement previously known ones, which can be easily validated with the parameters of system itself. In addition, the study of the nonlinear dynamics for the addressed neural networks may be helpful in qualitative analysis for general stochastic systems. Finally, two numerical examples are provided to substantiate our results.
Radhika, Thirunavukkarasu; Nagamani, Gnaneswaran
2016-01-01
In this paper, based on the knowledge of memristor-based recurrent neural networks (MRNNs), the model of the stochastic MRNNs with discrete and distributed delays is established. In real nervous systems and in the implementation of very large-scale integration (VLSI) circuits, noise is unavoidable, which leads to the stochastic model of the MRNNs. In this model, the delay interval is decomposed into two subintervals by using the tuning parameter α such that 0 < α < 1. By constructing proper Lyapunov-Krasovskii functional and employing direct delay decomposition technique, several sufficient conditions are given to guarantee the dissipativity and passivity of the stochastic MRNNs with discrete and distributed delays in the sense of Filippov solutions. Using the stochastic analysis theory and Itô's formula for stochastic differential equations, we establish sufficient conditions for dissipativity criterion. The dissipativity and passivity conditions are presented in terms of linear matrix inequalities, which can be easily solved by using Matlab Tools. Finally, three numerical examples with simulations are presented to demonstrate the effectiveness of the theoretical results.
Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process
NASA Astrophysics Data System (ADS)
Golosovsky, Michael; Solomon, Sorin
2012-08-01
We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
Jiang, Kuosheng.; Xu, Guanghua.; Liang, Lin.; Tao, Tangfei.; Gu, Fengshou.
2014-01-01
In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. This proposed method is verified by comparing its detection results with that of a morphological filter based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to failure test. PMID:25076220
Developing Itô stochastic differential equation models for neuronal signal transduction pathways.
Manninen, Tiina; Linne, Marja-Leena; Ruohonen, Keijo
2006-08-01
Mathematical modeling and simulation of dynamic biochemical systems are receiving considerable attention due to the increasing availability of experimental knowledge of complex intracellular functions. In addition to deterministic approaches, several stochastic approaches have been developed for simulating the time-series behavior of biochemical systems. The problem with stochastic approaches, however, is the larger computational time compared to deterministic approaches. It is therefore necessary to study alternative ways to incorporate stochasticity and to seek approaches that reduce the computational time needed for simulations, yet preserve the characteristic behavior of the system in question. In this work, we develop a computational framework based on the Itô stochastic differential equations for neuronal signal transduction networks. There are several different ways to incorporate stochasticity into deterministic differential equation models and to obtain Itô stochastic differential equations. Two of the developed models are found most suitable for stochastic modeling of neuronal signal transduction. The best models give stable responses which means that the variances of the responses with time are not increasing and negative concentrations are avoided. We also make a comparative analysis of different kinds of stochastic approaches, that is the Itô stochastic differential equations, the chemical Langevin equation, and the Gillespie stochastic simulation algorithm. Different kinds of stochastic approaches can be used to produce similar responses for the neuronal protein kinase C signal transduction pathway. The fine details of the responses vary slightly, depending on the approach and the parameter values. However, when simulating great numbers of chemical species, the Gillespie algorithm is computationally several orders of magnitude slower than the Itô stochastic differential equations and the chemical Langevin equation. Furthermore, the chemical
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a ``manageable`` number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of ``real-world`` multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
NASA Astrophysics Data System (ADS)
Hardebol, Nico; Bertotti, Giovanni; Weltje, Gert Jan
2014-05-01
We propose the description of fracture-fault systems in terms of a multi-scale hierarchical network. In most generic form, such arrangement is referred to as a structural fabric and applicable across the length scale spectrum. The statistical characterisation combines the fracture length and orientation distributions and intersection-termination relationships. The aim is a parameterised description of the network that serves as input in stochastic network simulations that should reproduce the essence of natural fracture networks and encompass its variability. The quality of the stochastically generated fabric is determined by comparison with deterministic descriptions on which the model parameterisation is based. Both the deterministic and stochastic derived fracture network description can serve as input in fluid flow or mechanical simulations that accounts explicitly for the discrete features and the response of the system can be compared. The deterministic description of our current study in the framework of tight gas reservoirs is obtained from coastal pavements that expose a horizontal slice through a fracture-fault network system in fine grained sediments in Yorkshire, UK. Fracture hierarchies have often been described at one observation scale as a two-tier hierarchy in terms of 1st order systematic joints and 2nd order cross-joints. New in our description is the bridging between km-sized faults with notable displacement down to sub-meter scale shear and opening mode fractures. This study utilized a drone to obtain cm-resolution imagery of pavements from ~30m altitude and the large coverage up to 1-km by flying at a ~80m. This unique set of images forms the basis for the digitizing of the fracture-fault pattern and helped determining the nested nature of the network as well as intersection and abutment relationships. Fracture sets were defined from the highest to lowest hierarchical order and probability density functions were defined for the length
Stochastic parametrization of multiscale processes using a dual-grid approach.
Shutts, Glenn; Allen, Thomas; Berner, Judith
2008-07-28
Some speculative proposals are made for extending current stochastic sub-gridscale parametrization methods using the techniques adopted from the field of computer graphics and flow visualization. The idea is to emulate sub-filter-scale physical process organization and time evolution on a fine grid and couple the implied coarse-grained tendencies with a forecast model. A two-way interaction is envisaged so that fine-grid physics (e.g. deep convective clouds) responds to forecast model fields. The fine-grid model may be as simple as a two-dimensional cellular automaton or as computationally demanding as a cloud-resolving model similar to the coupling strategy envisaged in 'super-parametrization'. Computer codes used in computer games and visualization software illustrate the potential for cheap but realistic simulation where emphasis is placed on algorithmic stability and visual realism rather than pointwise accuracy in a predictive sense. In an ensemble prediction context, a computationally cheap technique would be essential and some possibilities are outlined. An idealized proof-of-concept simulation is described, which highlights technical problems such as the nature of the coupling.
On the stochastic approach to inflation and the initial conditions in the universe
NASA Astrophysics Data System (ADS)
Pollock, M. D.
1988-03-01
By the application of stochastic methods to a theory in which a potential V(ø) causes a period of quasi-exponential expansion of the universe, an expression for the probability distribution P(V) appropriate for chaotic inflation has recently been derived. The method was developed by Starobinsky and by Linde. Beyond some critical point øc, long-wavelength quantum fluctuations δø ~H/2π cannot be ignored. The effect of these fluctuation in general relativity for values of ø such that V(ø)>V(ø) has been considered by Linde, who concluded that most of the present universe arises as a result of expansion of domains with a domains with a maximum possible value of ø, such that V(ømax ~ mp4. We obtain the corresponding expression for P in a broken-symmetry theory of gravity, in which the newtonian gravitational constant is replaced by G = (8πɛø2)-1, and also for a theory which includes higher-derivative terms R2 = γR2 + βR2 1n(R/μ2), so that the trace anomaly is Tanom ~βR2 , in which an effective inflation field øe can be defined as øe2 = 24γR. Conclusions analogous to those of Linde can be drawn in both these theories. Present address: Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Bombay 400.005, India.
Kim, Oleg; McMurdy, John; Jay, Gregory; Lines, Collin; Crawford, Gregory; Alber, Mark
2014-01-01
Abstract A combination of stochastic photon propagation model in a multilayered human eyelid tissue and reflectance spectroscopy was used to study palpebral conjunctiva spectral reflectance for hemoglobin (Hgb) determination. The developed model is the first biologically relevant model of eyelid tissue, which was shown to provide very good approximation to the measured spectra. Tissue optical parameters were defined using previous histological and microscopy studies of a human eyelid. After calibration of the model parameters the responses of reflectance spectra to Hgb level and blood oxygenation variations were calculated. The stimulated reflectance spectra in adults with normal and low Hgb levels agreed well with experimental data for Hgb concentrations from 8.1 to 16.7 g/dL. The extracted Hgb levels were compared with in vitro Hgb measurements. The root mean square error of cross‐validation was 1.64 g/dL. The method was shown to provide 86% sensitivity estimates for clinically diagnosed anemia cases. A combination of the model with spectroscopy measurements provides a new tool for noninvasive study of human conjunctiva to aid in diagnosing blood disorders such as anemia. PMID:24744871
Stochastic approach to diffusion inside the chaotic layer of a resonance.
Mestre, Martín F; Bazzani, Armando; Cincotta, Pablo M; Giordano, Claudia M
2014-01-01
We model chaotic diffusion in a symplectic four-dimensional (4D) map by using the result of a theorem that was developed for stochastically perturbed integrable Hamiltonian systems. We explicitly consider a map defined by a free rotator (FR) coupled to a standard map (SM). We focus on the diffusion process in the action I of the FR, obtaining a seminumerical method to compute the diffusion coefficient. We study two cases corresponding to a thick and a thin chaotic layer in the SM phase space and we discuss a related conjecture stated in the past. In the first case, the numerically computed probability density function for the action I is well interpolated by the solution of a Fokker-Planck (FP) equation, whereas it presents a nonconstant time shift with respect to the concomitant FP solution in the second case suggesting the presence of an anomalous diffusion time scale. The explicit calculation of a diffusion coefficient for a 4D symplectic map can be useful to understand the slow diffusion observed in celestial mechanics and accelerator physics.
A Stochastic Foundation of the Approach to Equilibrium of Classical and Quantum Gases
NASA Astrophysics Data System (ADS)
Costantini, D.; Garibaldi, U.
The Ehrenfest urn model is one of the most instructive models in the whole of Physics. It was thought to give a qualitative account for notions like reversibility, periodicity and tendency to equilibrium. The model, often referred to as the Ehrenfest dog-flea model, is mentioned in almost every textbook of probability, stochastic processes and statistical physics. Ehrenfest's model must not be limited to classical particles, but it can be extended to quantum particles. We make this extention in a purely probabilistic way. We do not refer to notions like (in)distinguishability that, in our opinion, have an epistemological and physical status far from clear. For all types of particles, we deduce the equilibrium probabilities in a purely probabilistic way. To accomplish our goal, we start by considering a set of probability conditions. On this basis, we deduce the formulae of creation and destruction probabilities for classical particles, bosons and fermions. These enable the deduction of the transition probabilities we are interested in. Via the master equation, these transition probabilities enable us to derive the equilibrium distributions.
NASA Astrophysics Data System (ADS)
Wang, Guochao; Wang, Jun
2017-01-01
We make an approach on investigating the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which constitutes the maximal variation range of volatility intensity and shortest passage time of duration, and can quantify the investment risk in financial markets. In an attempt to study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed by the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of return time series and VTRI series are investigated. Then, the complexity of VTRI series of the real markets and the proposed model is analyzed by Fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply the cross-Fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model has the similar complex behaviors with the actual markets and indicate that the proposed stock VTRI series analysis and the financial model are meaningful and feasible to some extent.
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite of variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected implementing the F-test evaluation and design of experiments, and then the incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting of the RSM combined with Monte Carlo simulation (MCS), reduces the calculation amount and the rapid random sampling becomes possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through hybrid of particle-swarm and Nelder-Mead simplex optimization method, thus the better correlation between simulation and test is achieved. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
Wu, Zhizhang Huang, Zhongyi
2016-07-15
In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.
NASA Astrophysics Data System (ADS)
Ross, D. K.; Moreau, William
1995-08-01
We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics ( sed), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field,h µυ (x), composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation,h(ω), must be proportional to 1/ω 2. The proportionality constant is determined by the Planck condition that the energy density consist ofħω/2 per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition functionh(ω) =√16πc 2Lp/ω2. As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function (R iojo(Oτ-δτ/2)R kolo(O,τ+δτ/2)) of the measureable geodesic deviation tensor field,R iojo, for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order inaδτ/c provided that the temperature and acceleration satisfy the relationkT=ħa/2πc.
NASA Astrophysics Data System (ADS)
Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron
2014-05-01
Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling
Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.
Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu
2016-06-17
Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected with the utility grid. The Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO 2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MGs, we develop two scheduling schemes, namely Unlimited Coordinated Scheme (UCS) and Limited Coordinated Scheme (LCS), respectively. An extensive simulation study based on a modified 9 bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO 2 emissions and operation costs in UCS and LCS.
Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid
Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu
2016-01-01
Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected with the utility grid. The Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MGs, we develop two scheduling schemes, namely Unlimited Coordinated Scheme (UCS) and Limited Coordinated Scheme (LCS), respectively. An extensive simulation study based on a modified 9 bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in UCS and LCS. PMID:27322281
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kind of discrete PSOs-S-PSO and binary PSO (BPSO)-and a genetic algorithm (GA) are compared and examined using tested benchmarks that simulate a real-world metropolis. We observed that the S-PSO outperformed the BPSO and the GA thoroughly. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-12-01
The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability that complements factory information obtained. The information used emerged from technicians' productivity and earned-values using the concept of multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.
Stochastic longshore current dynamics
NASA Astrophysics Data System (ADS)
Restrepo, Juan M.; Venkataramani, Shankar
2016-12-01
We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, stochastic parameterization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observation, stochastic parametrization are not directly tuned to match the statistics of the observations. Rather, stochastic parameterization combines deterministic, i.e physics based models with stochastic models for the "missing physics" to create hybrid models, that are stochastic, but yet can be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, that we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of tuning of parameters and has a very narrow range of realistic parameters leading to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that, the loss of certainty inherent in model due to its stochastic nature is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observation that were not used in tuning the model.
NASA Astrophysics Data System (ADS)
Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.
2017-01-01
We investigate the dynamics of large-scale interacting neural populations, composed of conductance based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear, system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite size networks, actually shows that the statistical measures as obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
Galka, Andreas; Ozaki, Tohru; Muhle, Hiltrud; Stephani, Ulrich; Siniatchkin, Michael
2008-06-01
We discuss a model for the dynamics of the primary current density vector field within the grey matter of human brain. The model is based on a linear damped wave equation, driven by a stochastic term. By employing a realistically shaped average brain model and an estimate of the matrix which maps the primary currents distributed over grey matter to the electric potentials at the surface of the head, the model can be put into relation with recordings of the electroencephalogram (EEG). Through this step it becomes possible to employ EEG recordings for the purpose of estimating the primary current density vector field, i.e. finding a solution of the inverse problem of EEG generation. As a technique for inferring the unobserved high-dimensional primary current density field from EEG data of much lower dimension, a linear state space modelling approach is suggested, based on a generalisation of Kalman filtering, in combination with maximum-likelihood parameter estimation. The resulting algorithm for estimating dynamical solutions of the EEG inverse problem is applied to the task of localising the source of an epileptic spike from a clinical EEG data set; for comparison, we apply to the same task also a non-dynamical standard algorithm.
NASA Astrophysics Data System (ADS)
Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.
2017-03-01
We investigate the dynamics of large-scale interacting neural populations, composed of conductance based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear, system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite size networks, actually shows that the statistical measures as obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
NASA Astrophysics Data System (ADS)
Yiotis, Andreas G.; Kainourgiakis, Michael E.; Charalambopoulou, Georgia C.; Stubos, Athanassios K.
2016-07-01
A novel process-based methodology is proposed for the stochastic reconstruction and accurate characterisation of Carbon fiber-based matrices, which are commonly used as Gas Diffusion Layers in Proton Exchange Membrane Fuel Cells. The modeling approach is efficiently complementing standard methods used for the description of the anisotropic deposition of carbon fibers, with a rigorous model simulating the spatial distribution of the graphitized resin that is typically used to enhance the structural properties and thermal/electrical conductivities of the composite Gas Diffusion Layer materials. The model uses as input typical pore and continuum scale properties (average porosity, fiber diameter, resin content and anisotropy) of such composites, which are obtained from X-ray computed microtomography measurements on commercially available carbon papers. This information is then used for the digital reconstruction of realistic composite fibrous matrices. By solving the corresponding conservation equations at the microscale in the obtained digital domains, their effective transport properties, such as Darcy permeabilities, effective diffusivities, thermal/electrical conductivities and void tortuosity, are determined focusing primarily on the effects of medium anisotropy and resin content. The calculated properties are matching very well with those of Toray carbon papers for reasonable values of the model parameters that control the anisotropy of the fibrous skeleton and the materials resin content.
NASA Astrophysics Data System (ADS)
Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.
2009-04-01
Drought is one of the major weather driven natural hazards, which has most harm impacts on environment, agricultural and hydrological factors than the other hazards. In spite of the fact that Hungary - that country is situated in Central Europe - belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and this weather conditions should be favourable for agricultural production, the drought is a serious risk factor in Hungary, especially on the so called "Great Hungarian Plain", which area has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently and to provide guidance on how drought planning can be accomplished. With regard to this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for the further works. According to these aims, first we initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it has been agreed that the most appropriate model structure for our purposes using a spatially distributed physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm for estimate multi-year drought frequency and duration. In this framework: - the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as: interception, snow-accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, vertical and horizontal distribution of soil moisture, etc.) taking the groundwater table as lower, and the hydrometeorological fields as upper boundary conditions into account; - and the MCMC based stochastic component generates time series of daily weather
Matsumoto, Tomotaka; Mineta, Katsuhiko; Osada, Naoki; Araki, Hitoshi
2015-01-01
Recent studies suggest the existence of a stochasticity in gene expression (SGE) in many organisms, and its non-negligible effect on their phenotype and fitness. To date, however, how SGE affects the key parameters of population genetics are not well understood. SGE can increase the phenotypic variation and act as a load for individuals, if they are at the adaptive optimum in a stable environment. On the other hand, part of the phenotypic variation caused by SGE might become advantageous if individuals at the adaptive optimum become genetically less-adaptive, for example due to an environmental change. Furthermore, SGE of unimportant genes might have little or no fitness consequences. Thus, SGE can be advantageous, disadvantageous, or selectively neutral depending on its context. In addition, there might be a genetic basis that regulates magnitude of SGE, which is often referred to as “modifier genes,” but little is known about the conditions under which such an SGE-modifier gene evolves. In the present study, we conducted individual-based computer simulations to examine these conditions in a diploid model. In the simulations, we considered a single locus that determines organismal fitness for simplicity, and that SGE on the locus creates fitness variation in a stochastic manner. We also considered another locus that modifies the magnitude of SGE. Our results suggested that SGE was always deleterious in stable environments and increased the fixation probability of deleterious mutations in this model. Even under frequently changing environmental conditions, only very strong natural selection made SGE adaptive. These results suggest that the evolution of SGE-modifier genes requires strict balance among the strength of natural selection, magnitude of SGE, and frequency of environmental changes. However, the degree of dominance affected the condition under which SGE becomes advantageous, indicating a better opportunity for the evolution of SGE in different genetic
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0] (sub 8), [10] (sub 8), [45] (sub 8), and [90] (sub 8); cross-ply laminates [0 (sub 2) divided by 90 (sub 2),]s; angled-ply laminates [plus 45 (sub 2) divided by -45 (sub 2), ]s; doubled-edge-notched [0] (sub 8), laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour or microscopic fluctuations of the node of Ranvier's voltage-dependent sodium channels of which the intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: it is independent of membrane voltage, and it is unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter and a decrease in latency with an increase in stimulus intensity.
NASA Astrophysics Data System (ADS)
Papadopoulos, Vissarion; Kalogeris, Ioannis
2016-05-01
The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (pde's) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned to the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one dimensional pde. Two Galerkin finite element method schemes are proposed for the numerical solution of the resulting pde's, namely a time-marching discontinuous Galerkin scheme and the StreamlineUpwind/Petrov Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods and in particular the SUPG over finite difference schemes, like the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.
Stochastic P-bifurcation and stochastic resonance in a noisy bistable fractional-order system
NASA Astrophysics Data System (ADS)
Yang, J. H.; Sanjuán, Miguel A. F.; Liu, H. G.; Litak, G.; Li, X.
2016-12-01
We investigate the stochastic response of a noisy bistable fractional-order system when the fractional-order lies in the interval (0, 2]. We focus mainly on the stochastic P-bifurcation and the phenomenon of the stochastic resonance. We compare the generalized Euler algorithm and the predictor-corrector approach which are commonly used for numerical calculations of fractional-order nonlinear equations. Based on the predictor-corrector approach, the stochastic P-bifurcation and the stochastic resonance are investigated. Both the fractional-order value and the noise intensity can induce an stochastic P-bifurcation. The fractional-order may lead the stationary probability density function to turn from a single-peak mode to a double-peak mode. However, the noise intensity may transform the stationary probability density function from a double-peak mode to a single-peak mode. The stochastic resonance is investigated thoroughly, according to the linear and the nonlinear response theory. In the linear response theory, the optimal stochastic resonance may occur when the value of the fractional-order is larger than one. In previous works, the fractional-order is usually limited to the interval (0, 1]. Moreover, the stochastic resonance at the subharmonic frequency and the superharmonic frequency are investigated respectively, by using the nonlinear response theory. When it occurs at the subharmonic frequency, the resonance may be strong and cannot be ignored. When it occurs at the superharmonic frequency, the resonance is weak. We believe that the results in this paper might be useful for the signal processing of nonlinear systems.
Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance
NASA Astrophysics Data System (ADS)
Kato, H.; Ito, K.
2009-01-01
A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is de- signed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis pro- gram by taking into account various operating require- ments of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier- Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input- multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France
2011-09-20
In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope®; (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin®; (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample size and highly variable drug.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun
2016-05-01
Many real-world data can be represented as dynamic networks which are the evolutionary networks with timestamps. Analyzing dynamic attributes is important to understanding the structures and functions of these complex networks. Especially, studying the influential nodes is significant to exploring and analyzing networks. In this paper, we propose a method to identify influential nodes in dynamic social networks based on identifying such nodes in the temporal communities which make up the dynamic networks. Firstly, we detect the community structures of all the snapshot networks based on the degree-corrected stochastic block model (DCBM). After getting the community structures, we capture the evolution of every community in the dynamic network by the extended Jaccard’s coefficient which is defined to map communities among all the snapshot networks. Then we obtain the initial influential nodes of the dynamic network and aggregate them based on three widely used centrality metrics. Experiments on real-world and synthetic datasets demonstrate that our method can identify influential nodes in dynamic networks accurately, at the same time, we also find some interesting phenomena and conclusions for those that have been validated in complex network or social science.
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
Definition of efficient scarcity-based water pricing policies through stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.
2015-09-01
Finding ways to improve the efficiency in water usage is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares River basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during the scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
Definition of efficient scarcity-based water pricing policies through stochastic programming
NASA Astrophysics Data System (ADS)
Macian-Sorribes, H.; Pulido-Velazquez, M.; Tilmant, A.
2015-01-01
Finding ways to improve the efficiency in water usage is one of the most important challenges in integrated water resources management. One of the most promising solutions is the use of scarcity-based pricing policies. This contribution presents a procedure to design efficient pricing policies based on the opportunity cost of water at the basin scale. Time series of the marginal value of water are obtained using a stochastic hydro-economic model. Those series are then post-processed to define step pricing policies, which depend on the state of the system at each time step. The case study of the Mijares river basin system (Spain) is used to illustrate the method. The results show that the application of scarcity-based pricing policies increases the economic efficiency of water use in the basin, allocating water to the highest-value uses and generating an incentive for water conservation during the scarcity periods. The resulting benefits are close to those obtained with the economically optimal decisions.
Stochastic Physicochemical Dynamics
NASA Astrophysics Data System (ADS)
Tsekov, R.
2001-02-01
fluctuations. The range of validity of the Boltzmann-Einstein principle is also discussed and a generalized alternative is proposed. Both expressions coincide in the small fluctuation limit, providing a normal distribution density. Fluctuation Stability of Thin Liquid Films: Memory effect of Brownian motion in an incompressible fluid is studied. The reasoning is based on the Mori-Zwanzig formalism and a new formulation of the Langevin force as a result of collisions between an effective and the Brownian particles. Thus, the stochastic force autocorrelation function with finite dispersion and the corresponding Brownian particle velocity autocorrelation function are obtained. It is demonstrated that the dynamic structure is very important for the rate of drainage of a thin liquid film and it can be effectively taken into account by a dynamic fractal dimension. It is shown that the latter is a powerful tool for description of the film drainage and classifies all the known results from the literature. The obtained general expression for the thinning rate is a heuristic one and predicts variety of drainage models, which are even difficult to simulate in practice. It is a typical example of a scaling law, which explains the origin of the complicate dependence of the thinning rate on the film radius. On the basis of the theory of stochastic processes the evolution of the spatial correlation function of the surface waves on a thin liquid film as well as the corresponding root mean square amplitude A(t) and number of uncorrelated subdomains N(t) are obtained. A formulation of the life time of unstable nonthinning films is proposed, based on the evolution of A and N. It is shown that the presence of uncorrelated subdomains shortens the life time of the film. Some numerical results for A(t) and N(t) at different film thicknesses h and areas S, are demonstrated, taking into account only van der Waals and capillary forces. Resonant Diffusion in Molecular Solid Structures: A new approach to
Airoldi, Edoardo M; Toulis, Panos
2015-07-01
Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then overview modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become benchmark principled estimation procedures for large data sets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
On the use of stochastic process-based methods for the analysis of hyperspectral data
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1992-01-01
Further development in remote sensing technology requires refinement of information system design aspects, i.e., the ability to specify precisely the data to collect and the means to extract increasing amounts of information from the increasingly rich and complex data stream created. One of the principal directions of advance is that data from much larger numbers of spectral bands can be collected, but with significantly increased signal-to-noise ratio. The theory of stochastic or random processes may be applied to the modeling of second-order variations. A multispectral data set with a large number of spectral bands is analyzed using standard pattern recognition techniques. The data were classified using first a single spectral feature, then two, and continuing on with greater and greater numbers of features. Three different classification schemes are used: a standard maximum likelihood Gaussian scheme; the same approach with the mean values of all classes adjusted to be the same; and the use of a minimum distance to means scheme such that mean differences are used.
Stochastic description of quantum Brownian dynamics
NASA Astrophysics Data System (ADS)
Yan, Yun-An; Shao, Jiushu
2016-08-01
Classical Brownian motion has well been investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution, rather they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, but not efficient for long-time dynamics as the statistical errors go very fast. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems
Research in Stochastic Processes.
1985-09-01
appear. G. Kallianpur, Finitely additive approach to nonlinear filtering, Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida , ed., Springer, to...Nov. 85, in Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida , ed., Springer, to appear. i. Preparation T. Hsing, Extreme value theory for...1507 Carroll, R.J., Spiegelman, C.H., Lan, K.K.G., Bailey , K.T. and Abbott, R.D., Errors in-variables for binary regression models, Aug.82. 1508
A general stochastic approach to unavailability analysis of standby safety systems
Van Der Weide, H.; Pandey, M. D.
2013-07-01
The paper presents a general analytical framework to analyze unavailability caused by latent failures in standby safety systems used in nuclear plants. The proposed approach is general in a sense that it encompasses a variety of inspection and maintenance policies and relaxes restrictive assumptions regarding the distributions of time to failure (or aging) and duration of repair. A key result of the paper is a general integral equation for point unavailability, which can be tailored to any specific maintenance policy. (authors)
NASA Astrophysics Data System (ADS)
Song, Jiyun; Wang, Zhi-Hua
2016-05-01
Urban land-atmosphere interactions can be captured by numerical modeling framework with coupled land surface and atmospheric processes, while the model performance depends largely on accurate input parameters. In this study, we use an advanced stochastic approach to quantify parameter uncertainty and model sensitivity of a coupled numerical framework for urban land-atmosphere interactions. It is found that the development of urban boundary layer is highly sensitive to surface characteristics of built terrains. Changes of both urban land use and geometry impose significant impact on the overlying urban boundary layer dynamics through modification on bottom boundary conditions, i.e., by altering surface energy partitioning and surface aerodynamic resistance, respectively. Hydrothermal properties of conventional and green roofs have different impacts on atmospheric dynamics due to different surface energy partitioning mechanisms. Urban geometry (represented by the canyon aspect ratio), however, has a significant nonlinear impact on boundary layer structure and temperature. Besides, managing rooftop roughness provides an alternative option to change the boundary layer thermal state through modification of the vertical turbulent transport. The sensitivity analysis deepens our insight into the fundamental physics of urban land-atmosphere interactions and provides useful guidance for urban planning under challenges of changing climate and continuous global urbanization.
Stochastic mapping of the Michaelis-Menten mechanism.
Dóka, Éva; Lente, Gábor
2012-02-07
The Michaelis-Menten mechanism is an extremely important tool for understanding enzyme-catalyzed transformation of substrates into final products. In this work, a computationally viable, full stochastic description of the Michaelis-Menten kinetic scheme is introduced based on a stochastic equivalent of the steady-state assumption. The full solution derived is free of restrictions on amounts of substance or parameter values and is used to create stochastic maps of the Michaelis-Menten mechanism, which show the regions in the parameter space of the scheme where the use of the stochastic kinetic approach is inevitable. The stochastic aspects of recently published examples of single-enzyme kinetic studies are analyzed using these maps.
A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type
Hosking, John Joseph Absalom
2012-12-15
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.
NASA Astrophysics Data System (ADS)
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computation model for complex structures, the stochastic model updating (SMU) strategy was proposed by combining the improved response surface model (IRSM) and the advanced Monte Carlo (MC) method based on experimental static test, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with the emphasis on moving least-square method, and the advanced MC simulation method is studied based on Latin hypercube sampling method as well. And then the SMU procedure was presented with experimental static test for complex structure. The SMUs of simply-supported beam and aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy hold high computational precision and efficiency for the SMUs of complex structural system; (2) the IRSM is demonstrated to be an effective model due to its SMU time is far less than that of traditional response surface method, which is promising to improve the computational speed and accuracy of SMU; (3) the advanced MC method observably decrease the samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structure and enrich the theory of model updating.
NASA Astrophysics Data System (ADS)
Xu, Hongyi; Zhu, Min; Marcicki, James; Yang, Xiao Guang
2017-03-01
A microstructure-based modeling method is developed to predict the mechanical behaviors of lithium-ion battery separators. Existing battery separator modeling methods cannot capture the structural features on the microscale. To overcome this issue, we propose an image-based microstructure Representative Volume Element (RVE) modeling method, which facilitates the understanding of the separators' complex macro mechanical behaviors from the perspective of microstructural features. A generic image processing workflow is developed to identify different phases in the microscopic image. The processed RVE image supplies microstructural information to the Finite Element Analysis (FEA). Both mechanical behavior and microstructure evolution are obtained from the simulation. The evolution of microstructure features is quantified using the stochastic microstructure characterization methods. The proposed method successfully captures the anisotropic behavior of the separator under tensile test, and provides insights into the microstructure deformation, such as the growth of voids. We apply the proposed method to a commercially available separator as the demonstration. The analysis results are validated using experimental testing results that are reported in literature.
Adaptive stochastic resonance method for impact signal detection based on sliding window
NASA Astrophysics Data System (ADS)
Li, Jimeng; Chen, Xuefeng; He, Zhengjia
2013-04-01
Aiming at solving the existing sharp problems in impact signal detection by using stochastic resonance (SR) in the fault diagnosis of rotating machinery, such as the measurement index selection of SR and the detection of impact signal with different impact amplitudes, the present study proposes an adaptive SR method for impact signal detection based on sliding window by analyzing the SR characteristics of impact signal. This method can not only achieve the optimal selection of system parameters by means of weighted kurtosis index constructed through using kurtosis index and correlation coefficient, but also achieve the detection of weak impact signal through the algorithm of data segmentation based on sliding window, even though the differences between different impact amplitudes are great. The algorithm flow of adaptive SR method is given and effectiveness of the method has been verified by the contrastive results between the proposed method and the traditional SR method of simulation experiments. Finally, the proposed method has been applied to a gearbox fault diagnosis in a hot strip finishing mill in which two local faults located on the pinion are obtained successfully. Therefore, it can be concluded that the proposed method is of great practical value in engineering.
A Stochastic Approach To Human Health Risk Assessment Due To Groundwater Contamination
NASA Astrophysics Data System (ADS)
de Barros, F. P.; Rubin, Y.
2006-12-01
We present a probabilistic framework to addressing adverse human health effects due to groundwater contamination. One of the main challenges in health risk assessment is in relating it to subsurface data acquisition and to improvement in our understanding of human physiological responses to contamination. In this paper we propose to investigate this problem through an approach that integrates flow, transport and human health risk models with hydrogeological characterization. A human health risk cumulative distribution function is analytically developed to account for both uncertainty and variability in hydrogeological as well as human physiological parameters. With our proposed approach, we investigate under which conditions the reduction of uncertainties from flow physics, human physiology and exposure related parameters might contribute to a better understanding of human health risk assessment. Results indicate that the human health risk cumulative distribution function is sensitive to physiological parameters at low risk values associated with longer travel times. The results show that the worth of hydrogeological characterization in human health risk is dependent on the residence time of the contaminant plume in the aquifer and on the exposure duration of the population to certain chemicals.
Value of Geographic Diversity of Wind and Solar: Stochastic Geometry Approach; Preprint
Diakov, V.
2012-08-01
Based on the available geographically dispersed data for the continental U.S. (excluding Alaska), we analyze to what extent the geographic diversity of these resources can offset their variability. A geometric model provides a convenient measure for resource variability, shows the synergy between wind and solar resources.
Luo, B; Li, J B; Huang, G H; Li, H L
2006-05-15
This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural non-point source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and "off-site" water quality concern under random effluent discharge for a land retirement scheme through minimizing the expected value of long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with the interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-related sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, a best and worst land retirement scheme was obtained for the study watershed under various uncertainties.
Stochastic dual-plane on-axis digital holography based on Mach-Zehnder interferometer
NASA Astrophysics Data System (ADS)
Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie
2016-09-01
For traditional dual-plane on-axis digital holography, the robustness is lower because it is difficult to maintain the stability of the phase difference between the object beam and the reference beam, and it may be invalid when the objects are on the surface of a medium with uneven thickness. An improved dual-plane digital holographic method based on Mach-Zehnder interferometer is presented to address these problems. Two holograms are recorded at two different planes separated by a small distance. Then, the zero-order image and conjugated image are eliminated by Fourier domain processing. In order to enhance the robustness of the system, the object is illuminated by a stochastic beam that is a speckle wave produced by a diffuser. Simulated and experimental results are shown to demonstrate that the proposed method has greater robustness than the traditional dual-plane on-axis digital holography and it can be used to imaging on the irregular surface of a transparent medium.
Upper bound for the average entropy production based on stochastic entropy extrema
NASA Astrophysics Data System (ADS)
Limkumnerd, Surachate
2017-03-01
The second law of thermodynamics, which asserts the non-negativity of the average total entropy production of a combined system and its environment, is a direct consequence of applying Jensen's inequality to a fluctuation relation. It is also possible, through this inequality, to determine an upper bound of the average total entropy production based on the entropies along the most extreme stochastic trajectories. In this work, we construct an upper bound inequality of the average of a convex function over a domain whose average is known. When applied to the various fluctuation relations, the upper bounds of the average total entropy production are established. Finally, by employing the result of Neri, Roldán, and Jülicher [Phys. Rev. X 7, 011019 (2017)], 10.1103/PhysRevX.7.011019, we are able to show that the average total entropy production is bounded only by the total entropy production supremum, and vice versa, for a general nonequilibrium stationary system.
1988-01-01
Two central features of polymorphonuclear leukocyte chemosensory movement behavior demand fundamental theoretical understanding. In uniform concentrations of chemoattractant, these cells exhibit a persistent random walk, with a characteristic "persistence time" between significant changes in direction. In chemoattractant concentration gradients, they demonstrate a biased random walk, with an "orientation bias" characterizing the fraction of cells moving up the gradient. A coherent picture of cell movement responses to chemoattractant requires that both the persistence time and the orientation bias be explained within a unifying framework. In this paper, we offer the possibility that "noise" in the cellular signal perception/response mechanism can simultaneously account for these two key phenomena. In particular, we develop a stochastic mathematical model for cell locomotion based on kinetic fluctuations in chemoattractant/receptor binding. This model can simulate cell paths similar to those observed experimentally, under conditions of uniform chemoattractant concentrations as well as chemoattractant concentration gradients. Furthermore, this model can quantitatively predict both cell persistence time and dependence of orientation bias on gradient size. Thus, the concept of signal "noise" can quantitatively unify the major characteristics of leukocyte random motility and chemotaxis. The same level of noise large enough to account for the observed frequency of turning in uniform environments is simultaneously small enough to allow for the observed degree of directional bias in gradients. PMID:3339093
Strategy improvement for concurrent reachability and turn-based stochastic safety games.
Chatterjee, Krishnendu; de Alfaro, Luca; Henzinger, Thomas A
2013-08-01
We consider concurrent games played on graphs. At every round of a game, each player simultaneously and independently selects a move; the moves jointly determine the transition to a successor state. Two basic objectives are the safety objective to stay forever in a given set of states, and its dual, the reachability objective to reach a given set of states. First, we present a simple proof of the fact that in concurrent reachability games, for all [Formula: see text], memoryless ε-optimal strategies exist. A memoryless strategy is independent of the history of plays, and an ε-optimal strategy achieves the objective with probability within ε of the value of the game. In contrast to previous proofs of this fact, our proof is more elementary and more combinatorial. Second, we present a strategy-improvement (a.k.a. policy-iteration) algorithm for concurrent games with reachability objectives. Finally, we present a strategy-improvement algorithm for turn-based stochastic games (where each player selects moves in turns) with safety objectives. Our algorithms yield sequences of player-1 strategies which ensure probabilities of winning that converge monotonically (from below) to the value of the game.
Throughput assurance of wireless body area networks coexistence based on stochastic geometry
Wang, Yinglong; Shu, Minglei; Wu, Shangbin
2017-01-01
Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBANs distribution density (λp), transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to varying surrounding environment. We obtain expressions for transmission success probability and throughput adopting this strategy. Using numerical examples, we show that joint carrier-sensing thresholds and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks. PMID:28141841
Task fMRI data analysis based on supervised stochastic coordinate coding.
Lv, Jinglei; Lin, Binbin; Li, Qingyang; Zhang, Wei; Zhao, Yu; Jiang, Xi; Guo, Lei; Han, Junwei; Hu, Xintao; Guo, Christine; Ye, Jieping; Liu, Tianming
2017-02-20
Task functional magnetic resonance imaging (fMRI) has been widely employed for brain activation detection and brain network analysis. Modeling rich information from spatially-organized collection of fMRI time series is challenging because of the intrinsic complexity. Hypothesis-driven methods, such as the general linear model (GLM), which regress exterior stimulus from voxel-wise functional brain activity, are limited due to overlooking the complexity of brain activities and the diversity of concurrent brain networks. Recently, sparse representation and dictionary learning methods have attracted increasing interests in task fMRI data analysis. The major advantage of this methodology is its promise in reconstructing concurrent brain networks systematically. However, this data-driven strategy is, to some extent, arbitrary and does not sufficiently utilize the prior information of task design and neuroscience knowledge. To bridge this gap, we here propose a novel supervised sparse representation and dictionary learning framework based on stochastic coordinate coding (SCC) algorithm for task fMRI data analysis, in which certain brain networks are learned with known information such as pre-defined temporal patterns and spatial network patterns, and at the same time other networks are learned automatically from data. Our proposed method has been applied to two independent task fMRI datasets, and qualitative and quantitative evaluations have shown that our method provides a new and effective framework for task fMRI data analysis.
NASA Astrophysics Data System (ADS)
Wächtler, Christopher W.; Strasberg, Philipp; Brandes, Tobias
2016-11-01
In the derivation of fluctuation relations, and in stochastic thermodynamics in general, it is tacitly assumed that we can measure the system perfectly, i.e., without measurement errors. We here demonstrate for a driven system immersed in a single heat bath, for which the classic Jarzynski equality < {{{e}}}-β (W-{{Δ }F)}> =1 holds, how to relax this assumption. Based on a general measurement model akin to Bayesian inference we derive a general expression for the fluctuation relation of the measured work and we study the case of an overdamped Brownian particle and of a two-level system in particular. We then generalize our results further and incorporate feedback in our description. We show and argue that, if measurement errors are fully taken into account by the agent who controls and observes the system, the standard Jarzynski-Sagawa-Ueda relation should be formulated differently. We again explicitly demonstrate this for an overdamped Brownian particle and a two-level system where the fluctuation relation of the measured work differs significantly from the efficacy parameter introduced by Sagawa and Ueda. Instead, the generalized fluctuation relation under feedback control, < {{{e}}}-β (W-{{Δ }F)-I}> =1, holds only for a superobserver having perfect access to both the system and detector degrees of freedom, independently of whether or not the detector yields a noisy measurement record and whether or not we perform feedback.
NASA Astrophysics Data System (ADS)
Zhou, Peng; Lu, Siliang; Liu, Fang; Liu, Yongbin; Li, Guihua; Zhao, Jiwen
2017-03-01
Stochastic resonance (SR), which is characterized by the fact that proper noise can be utilized to enhance weak periodic signals, has been widely applied in weak signal detection. SR is a nonlinear parameterized filter, and the output signal relies on the system parameters for the deterministic input signal. The most commonly used index for parameter tuning in the SR procedure is the signal-to-noise ratio (SNR). However, using the SNR index to evaluate the denoising effect of SR quantitatively is insufficient when the target signal frequency cannot be estimated accurately. To address this issue, six different indexes, namely, power spectral kurtosis of the SR output signal, correlation coefficient between the SR output and the original signal, peak SNR, structural similarity, root mean square error, and smoothness, are constructed in this study to measure the SR output quantitatively. These six quantitative indexes are fused into a new synthetic quantitative index (SQI) via a back propagation neural network to guide the adaptive parameter selection of the SR procedure. The index fusion procedure reduces the instability of each index and thus improves the robustness of parameter tuning. In addition, genetic algorithm is utilized to quickly select the optimal SR parameters. The efficiency of bearing fault diagnosis is thus further improved. The effectiveness and efficiency of the proposed SQI-based adaptive SR method for bearing fault diagnosis are verified through numerical and experiment analyses.
Throughput assurance of wireless body area networks coexistence based on stochastic geometry.
Liu, Ruixia; Wang, Yinglong; Shu, Minglei; Wu, Shangbin
2017-01-01
Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBANs distribution density (λp), transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to varying surrounding environment. We obtain expressions for transmission success probability and throughput adopting this strategy. Using numerical examples, we show that joint carrier-sensing thresholds and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton-Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
Eigenvalue density of linear stochastic dynamical systems: A random matrix approach
NASA Astrophysics Data System (ADS)
Adhikari, S.; Pastur, L.; Lytova, A.; Du Bois, J.
2012-02-01
Eigenvalue problems play an important role in the dynamic analysis of engineering systems modeled using the theory of linear structural mechanics. When uncertainties are considered, the eigenvalue problem becomes a random eigenvalue problem. In this paper the density of the eigenvalues of a discretized continuous system with uncertainty is discussed by considering the model where the system matrices are the Wishart random matrices. An analytical expression involving the Stieltjes transform is derived for the density of the eigenvalues when the dimension of the corresponding random matrix becomes asymptotically large. The mean matrices and the dispersion parameters associated with the mass and stiffness matrices are necessary to obtain the density of the eigenvalues in the frameworks of the proposed approach. The applicability of a simple eigenvalue density function, known as the Marenko-Pastur (MP) density, is investigated. The analytical results are demonstrated by numerical examples involving a plate and the tail boom of a helicopter with uncertain properties. The new results are validated using an experiment on a vibrating plate with randomly attached spring-mass oscillators where 100 nominally identical samples are physically created and individually tested within a laboratory framework.
NASA Astrophysics Data System (ADS)
Ross, Steven M.
A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.
A retrodictive stochastic simulation algorithm
Vaughan, T.G. Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Existence Theory for Stochastic Power Law Fluids
NASA Astrophysics Data System (ADS)
Breit, Dominic
2015-06-01
We consider the equations of motion for an incompressible non-Newtonian fluid in a bounded Lipschitz domain during the time interval (0, T) together with a stochastic perturbation driven by a Brownian motion W. The balance of momentum reads as where v is the velocity, the pressure and f an external volume force. We assume the common power law model and show the existence of martingale weak solution provided . Our approach is based on the -truncation and a harmonic pressure decomposition which are adapted to the stochastic setting.
A novel approach to phylogenetic tree construction using stochastic optimization and clustering
Qin, Ling; Chen, Yixin; Pan, Yi; Chen, Ling
2006-01-01
Background The problem of inferring the evolutionary history and constructing the phylogenetic tree with high performance has become one of the major problems in computational biology. Results A new phylogenetic tree construction method from a given set of objects (proteins, species, etc.) is presented. As an extension of ant colony optimization, this method proposes an adaptive phylogenetic clustering algorithm based on a digraph to find a tree structure that defines the ancestral relationships among the given objects. Conclusion Our phylogenetic tree construction method is tested to compare its results with that of the genetic algorithm (GA). Experimental results show that our algorithm converges much faster and also achieves higher quality than GA. PMID:17217517
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10(9) $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.
Zheng, Ying; Wong, David Shan-Hill; Wang, Yan-Wei; Fang, Huajing
2014-07-01
In many batch-based industrial manufacturing processes, feedback run-to-run control is used to improve production quality. However, measurements may be expensive and cannot always be performed online. Thus, the measurement delay always exists. The metrology delay will affect the stability and performance of the process. Moreover, since quality measurements are performed offline, delay is not fixed but is stochastic in nature. In this paper, a modeling approach Takagi-Sugeno (T-S) model is presented to handle stochastic metrology delay in both single-product and mixed-product processes. Based on the Markov characteristics of the delay, the membership of the T-S model is derived. Performance indices such as the mean and the variance of the closed-loop output of the exponentially weighted moving average (EWMA) control algorithm can be derived. A steady-state error of the process output always exists, which leads the output deviating from the target. To remove the steady-state error, an algorithm called compensatory EWMA run-to-run (COM-EWMA-RtR) algorithm is proposed. The validity of the T-S model analysis and the efficiency of the proposed COM-EWMA-RtR algorithm are confirmed by simulation.
Solan, Eilon; Vieille, Nicolas
2015-01-01
In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883
Enhanced detection of rolling element bearing fault based on stochastic resonance
NASA Astrophysics Data System (ADS)
Zhang, Xiaofei; Hu, Niaoqing; Cheng, Zhe; Hu, Lei
2012-11-01
Early bearing faults can generate a series of weak impacts. All the influence factors in measurement may degrade the vibration signal. Currently, bearing fault enhanced detection method based on stochastic resonance(SR) is implemented by expensive computation and demands high sampling rate, which requires high quality software and hardware for fault diagnosis. In order to extract bearing characteristic frequencies component, SR normalized scale transform procedures are presented and a circuit module is designed based on parameter-tuning bistable SR. In the simulation test, discrete and analog sinusoidal signals under heavy noise are enhanced by SR normalized scale transform and circuit module respectively. Two bearing fault enhanced detection strategies are proposed. One is realized by pure computation with normalized scale transform for sampled vibration signal, and the other is carried out by designed SR hardware with circuit module for analog vibration signal directly. The first strategy is flexible for discrete signal processing, and the second strategy demands much lower sampling frequency and less computational cost. The application results of the two strategies on bearing inner race fault detection of a test rig show that the local signal to noise ratio of the characteristic components obtained by the proposed methods are enhanced by about 50% compared with the band pass envelope analysis for the bearing with weaker fault. In addition, helicopter transmission bearing fault detection validates the effectiveness of the enhanced detection strategy with hardware. The combination of SR normalized scale transform and circuit module can meet the need of different application fields or conditions, thus providing a practical scheme for enhanced detection of bearing fault.
NASA Astrophysics Data System (ADS)
Al-Azzawi, Waleed; Al-Akaidi, Marwan
2015-04-01
In this paper, the robust stability analysis of solar wireless networked control systems (SWNCSs) with stochastic time delays and packet dropout is investigated. The robust model predictive controller (RMPC) technique for the SWNCS is discussed using the linear matrix inequality (LMI) technique. Based on the SWNCS model, the RMPC (a full state feedback controller) can be constructed by using the Lyapunov functional method. Both sensor-to-controller and controller-to-actuator time delays of the SWNCS are considered as stochastic variables controlled by a Markov chain. A discrete-time Markovian jump linear system (MJLS) with norm bounded time delay is presented to model the SWNCSs. Conditions for H∞-norm is used to evaluate stability and stabilization of the fundamental systems derived via LMIs formulation. Finally, an illustrative numerical example is given to demonstrate the effectiveness of the proposed techniques.
Simulation of quantum dynamics based on the quantum stochastic differential equation.
Li, Ming
2013-01-01
The quantum stochastic differential equation derived from the Lindblad form quantum master equation is investigated. The general formulation in terms of environment operators representing the quantum state diffusion is given. The numerical simulation algorithm of stochastic process of direct photodetection of a driven two-level system for the predictions of the dynamical behavior is proposed. The effectiveness and superiority of the algorithm are verified by the performance analysis of the accuracy and the computational cost in comparison with the classical Runge-Kutta algorithm.
Simulation of Quantum Dynamics Based on the Quantum Stochastic Differential Equation
2013-01-01
The quantum stochastic differential equation derived from the Lindblad form quantum master equation is investigated. The general formulation in terms of environment operators representing the quantum state diffusion is given. The numerical simulation algorithm of stochastic process of direct photodetection of a driven two-level system for the predictions of the dynamical behavior is proposed. The effectiveness and superiority of the algorithm are verified by the performance analysis of the accuracy and the computational cost in comparison with the classical Runge-Kutta algorithm. PMID:23781156
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Roeck, Guido De
2008-04-01
The modal analysis of mechanical or civil engineering structures consists of three steps: data collection, system identification and modal parameter estimation. The system identification step plays a crucial role in the quality of the modal parameters, that are derived from the identified system model, as well as in the number of modal parameters that can be determined. This explains the increasing interest in sophisticated system identification methods for both experimental and operational modal analysis. In purely operational or output-only modal analysis, absolute scaling of the obtained mode shapes is not possible and the frequency content of the ambient forces could be narrow banded so that only a limited number of modes are obtained. This drives the demand for system identification methods that take both artificial and ambient excitation into account so that the amplitude of the artificial excitation can be small compared to that of the ambient excitation. An accurate, robust and efficient system identification method that meets this requirements is combined deterministic-stochastic subspace identification. It can be used both for experimental modal analysis and for operational modal analysis with deterministic inputs. In this paper, the method is generalized to a reference-based version which is faster and, if the chosen reference outputs have the highest SNR values, more accurate than the classical algorithm. The algorithm is validated with experimental data from the Z24 bridge that overpassing the A1 highway between Bern and Zurich in Switzerland, that have been proposed as a benchmark for the assessment of system identification methods for the modal analysis of large structures. With the presented algorithm, the most complete set of modes reported so far is obtained.
Boedicker, J.; Li, L; Kline, T; Ismagilov, R
2008-01-01
This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as stochastic confinement. Using the microfluidic hybrid method, this technology was used to determine the antibiogram - or chart of antibiotic sensitivity - of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.
Boedicker, James Q; Li, Liang; Kline, Timothy R; Ismagilov, Rustem F
2008-08-01
This article describes plug-based microfluidic technology that enables rapid detection and drug susceptibility screening of bacteria in samples, including complex biological matrices, without pre-incubation. Unlike conventional bacterial culture and detection methods, which rely on incubation of a sample to increase the concentration of bacteria to detectable levels, this method confines individual bacteria into droplets nanoliters in volume. When single cells are confined into plugs of small volume such that the loading is less than one bacterium per plug, the detection time is proportional to plug volume. Confinement increases cell density and allows released molecules to accumulate around the cell, eliminating the pre-incubation step and reducing the time required to detect the bacteria. We refer to this approach as 'stochastic confinement'. Using the microfluidic hybrid method, this technology was used to determine the antibiogram - or chart of antibiotic sensitivity - of methicillin-resistant Staphylococcus aureus (MRSA) to many antibiotics in a single experiment and to measure the minimal inhibitory concentration (MIC) of the drug cefoxitin (CFX) against this strain. In addition, this technology was used to distinguish between sensitive and resistant strains of S. aureus in samples of human blood plasma. High-throughput microfluidic techniques combined with single-cell measurements also enable multiple tests to be performed simultaneously on a single sample containing bacteria. This technology may provide a method of rapid and effective patient-specific treatment of bacterial infections and could be extended to a variety of applications that require multiple functional tests of bacterial samples on reduced timescales.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method enables to collapse those summations to a one-dimensional summation only. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
NASA Astrophysics Data System (ADS)
Pivovarov, Dmytro; Steinmann, Paul
2016-12-01
In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.
Materiality in a Practice-Based Approach
ERIC Educational Resources Information Center
Svabo, Connie
2009-01-01
Purpose: The paper aims to provide an overview of the vocabulary for materiality which is used by practice-based approaches to organizational knowing. Design/methodology/approach: The overview is theoretically generated and is based on the anthology Knowing in Organizations: A Practice-based Approach edited by Nicolini, Gherardi and Yanow. The…
Wang, Xugao; Wiegand, Thorsten; Kraft, Nathan J B; Swenson, Nathan G; Davies, Stuart J; Hao, Zhanqing; Howe, Robert; Lin, Yiching; Ma, Keping; Mi, Xiangcheng; Su, Sheng-Hsin; Sun, I-fang; Wolf, Amy
2016-02-01
Recent theory predicts that stochastic dilution effects may result in species-rich communities with statistically independent species spatial distributions, even if the underlying ecological processes structuring the community are driven by deterministic niche differences. Stochastic dilution is a consequence of the stochastic geometry of biodiversity where the identities of the nearest neighbors of individuals of a given species are largely unpredictable. Under such circumstances, the outcome of deterministic species interactions may vary greatly among individuals of a given species. Consequently, nonrandom patterns in the biotic neighborhoods of species, which might be expected from coexistence or community assembly theory (e.g., individuals of a given species are neighbored by phylogenetically similar species), are weakened or do not emerge, resulting in statistical independence of species spatial distributions. We used data on phylogenetic and functional similarity of tree species in five large forest dynamics plots located across a gradient of species richness to test predictions of the stochastic dilution hypothesis. To quantify the biotic neighborhood of a focal species we used the mean phylogenetic (or functional) dissimilarity of the individuals of the focal species to all species within a local neighborhood. We then compared the biotic neighborhood of species to predictions from stochastic null models to test if a focal species was surrounded by more or less similar species than expected by chance. The proportions of focal species that showed spatial independence with respect to their biotic neighborhoods increased with total species richness. Locally dominant, high-abundance species were more likely to be surrounded by species that were statistically more similar or more dissimilar than expected by chance. Our results suggest that stochasticity may play a stronger role in shaping the spatial structure of species rich tropical forest communities than it
2014-01-01
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations. PMID:24939084
NASA Technical Reports Server (NTRS)
Hsia, Wei-Shen
1987-01-01
A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification through Maximum Entropy (ME) principle adopted in Hyland's method was presented. Using ORACLS, a computer program was implemented for this purpose. Four models were then tested and the results presented.
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano
2011-12-01
The evaluation of the expected peak ground motion caused by an earthquake is an important problem in earthquake seismology. It is particularly important for regions where strong-motion data are lacking. With the approach presented in this study of using data from small earthquakes, it is possible to extrapolate the peak motion parameters beyond the magnitude range of the weak-motion data set on which they are calculated. To provide a description of the high frequency attenuation and ground motion parameters in southern Italy we used seismic recordings coming from two different projects: the SAPTEX (Southern Apennines Tomography Experiment) and the CAT/SCAN (Calabria Apennine Tyrrhenian - Subduction Collision Accretion Network). We used about 10,000 records with magnitudes between M=2.5 and M=4.7. Using regression model with the large number of weak-motion data, the regional propagation and the absolute source scaling were determined. To properly calibrate the source scaling it was necessary to compute moment magnitudes of several events in the data set. We computed the moment tensor solutions using the "Cut And Paste" and the SLUMT methods. Both methods determine the source depth, moment magnitude and focal mechanisms using a grid search technique. The methods provide quality solutions in the area in a magnitude range (2.5-4.5) that has been too small to be included in the Italian national earthquake catalogues. The derived database of focal mechanisms allowed us to better detail the transitional area in the Messina Strait between the extensional domain related to subduction trench retreat (southern Calabria) and the compressional one associated with continental collision (central-western Sicily). Stochastic simulations are generated for finite-fault ruptures using the derived propagation parameters to predict the absolute peaks of the ground acceleration for several faults, magnitude, and distance range, as well as beyond the magnitude range of the weak
A stochastic model for the polygonal tundra based on Poisson-Voronoi Diagrams
NASA Astrophysics Data System (ADS)
Cresto Aleina, F.; Brovkin, V.; Muster, S.; Boike, J.; Kutzbach, L.; Sachs, T.; Zuyev, S.
2012-12-01
Sub-grid and small scale processes occur in various ecosystems and landscapes (e.g., periglacial ecosystems, peatlands and vegetation patterns). These local heterogeneities are often important or even fundamental to better understand general and large scale properties of the system, but they are either ignored or poorly parameterized in regional and global models. Because of their small scale, the underlying generating processes can be well explained and resolved only by local mechanistic models, which, on the other hand, fail to consider the regional or global influences of those features. A challenging problem is then how to deal with these interactions across different spatial scales, and how to improve our understanding of the role played by local soil heterogeneities in the climate system. This is of particular interest in the northern peatlands, because of the huge amount of carbon stored in these regions. Land-atmosphere greenhouse gas fluxes vary dramatically within these environments. Therefore, to correctly estimate the fluxes a description of the small scale soil variability is needed. Applications of statistical physics methods could be useful tools to upscale local features of the landscape, relating them to large-scale properties. To test this approach we considered a case study: the polygonal tundra. Cryogenic polygons, consisting mainly of elevated dry rims and wet low centers, pattern the terrain of many subartic regions and are generated by complex crack-and-growth processes. Methane, carbon dioxide and water vapor fluxes vary largely within the environment, as an effect of the small scale processes that characterize the landscape. It is then essential to consider the local heterogeneous behavior of the system components, such as the water table level inside the polygon wet centers, or the depth at which frozen soil thaws. We developed a stochastic model for this environment using Poisson-Voronoi diagrams, which is able to upscale statistical
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that presented model has far less constraints compared to stochastic model with nonanticipative constraints, which means our proposed model reduces the computation complexity. PMID:26180842
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that presented model has far less constraints compared to stochastic model with nonanticipative constraints, which means our proposed model reduces the computation complexity.
An optimal local active noise control method based on stochastic finite element models
NASA Astrophysics Data System (ADS)
Airaksinen, T.; Toivanen, J.
2013-12-01
A new method is presented to obtain a local active noise control that is optimal in stochastic environment. The method uses numerical acoustical modeling that is performed in the frequency domain by using a sequence of finite element discretizations of the Helmholtz equation. The stochasticity of domain geometry and primary noise source is considered. Reference signals from an array of microphones are mapped to secondary loudspeakers, by an off-line optimized linear mapping. The frequency dependent linear mapping is optimized to minimize the expected value of error in a quiet zone, which is approximated by the numerical model and can be interpreted as a stochastic virtual microphone. A least squares formulation leads to a quadratic optimization problem. The presented active noise control method gives robust and efficient noise attenuation, which is demonstrated by a numerical study in a passenger car cabin. The numerical results demonstrate that a significant, stable local noise attenuation of 20-32 dB can be obtained at lower frequencies (<500 Hz) by two microphones, and 8-36 dB attenuation at frequencies up to 1000 Hz, when 8 microphones are used.
Stochastic demographic forecasting.
Lee, R D
1992-11-01
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary.... Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."
Garcia-Gomez, Juan Miguel; Benedi, Jose Miguel; Vicente, Javier; Robles, Montserrat
2005-01-01
In this paper, a new method for modelling tRNA secondary structures is presented. This method is based on the combination of stochastic context-free grammars (SCFG) and Hidden Markov Models (HMM). HMM are used to capture the local relations in the loops of the molecule (nonstructured regions) and SCFG are used to capture the long term relations between nucleotides of the arms (structured regions). Given annotated public databases, the HMM and SCFG models are learned by means of automatic inductive learning methods. Two SCFG learning methods have been explored. Both of them take advantage of the structural information associated with the training sequences: one of them is based on a stochastic version of the Sakakibara algorithm and the other one is based on a Corpus based algorithm. A final model is then obtained by merging of the HMM of the nonstructured regions and the SCFG of the structured regions. Finally, the performed experiments on the tRNA sequence corpus and the non-tRNA sequence corpus give significant results. Comparative experiments with another published method are also presented.
NASA Astrophysics Data System (ADS)
Shioya, Tsubasa; Fujimoto, Yasutaka
In this paper, we introduce a simulator for ice thermal storage systems. Basically, the refrigeration system is modeled as a linear discrete-time system. For system identifications, the least square method is used. However, it is difficult to identify the switching time of the electromagnetic valve of brine pipes attached at showcases accurately by this method. In order to overcome this difficulty, a simulator based on the stochastic switched ARX model is developed. The data obtained from the simulator are compared with actual data. We verify the effectiveness of the proposed simulator.
Stochastic Prognostics for Rolling Element Bearings
NASA Astrophysics Data System (ADS)
Li, Y.; Kurfess, T. R.; Liang, S. Y.
2000-09-01
The capability to accurately predict the remaining life of a rolling element bearing is prerequisite to the optimal maintenance of rotating machinery performance in terms of cost and productivity. Due to the probabilistic nature of bearing integrity and operation condition, reliable estimation of a bearing's remaining life presents a challenging aspect in the area of maintenance optimisation and catastrophic failure avoidance. Previous study has developed an adaptive prognostic methodology to estimate the rate of bearing defect growth based on a deterministic defect-propagation model. However, deterministic models are inadequate in addressing the stochastic nature of defect-propagation. In this paper, a stochastic defect-propagation model is established by instituting a lognormal random variable in a deterministic defect-propagation rate model. The resulting stochastic model is calibrated on-line by a recursive least-squares (RLS) approach without the requirement of a priori knowledge on bearing characteristics. An augmented stochastic differential equation vector is developed with the consideration of model uncertainties, parameter estimation errors, and diagnostic model inaccuracies. It involves two ordinary differential equations for the first and second moments of its random variables. Solving the two equations gives the mean path of defect propagation and its dispersion at any instance. This approach is suitable for on-line monitoring, remaining life prediction, and decision making for optimal maintenance scheduling. The methodology has been verified by numerical simulations and the experimental testing of bearing fatigue life.
Robust stochastic mine production scheduling
NASA Astrophysics Data System (ADS)
Kumral, Mustafa
2010-06-01
The production scheduling of open pit mines aims to determine the extraction sequence of blocks such that the net present value (NPV) of a mining project is maximized under capacity and access constraints. This sequencing has significant effect on the profitability of the mining venture. However, given that the values of coefficients in the optimization procedure are obtained in a medium of sparse data and unknown future events, implementations based on deterministic models may lead to destructive consequences to the company. In this article, a robust stochastic optimization (RSO) approach is used to deal with mine production scheduling in a manner such that the solution is insensitive to changes in input data. The approach seeks a trade off between optimality and feasibility. The model is demonstrated on a case study. The findings showed that the approach can be used in mine production scheduling problems efficiently.
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
Sotero, Roberto C; Shmuel, Amir
2012-06-01
Several studies posit energy as a constraint on the coding and processing of information in the brain due to the high cost of resting and evoked cortical activity. This suggestion has been addressed theoretically with models of a single neuron and two coupled neurons. Neural mass models (NMMs) address mean-field based modeling of the activity and interactions between populations of neurons rather than a few neurons. NMMs have been widely employed for studying the generation of EEG rhythms, and more recently as frameworks for integrated models of neurophysiology and functional MRI (fMRI) responses. To date, the consequences of energy constraints on the activity and interactions of ensembles of neurons have not been addressed. Here we aim to study the impact of constraining energy consumption during the resting-state on NMM parameters. To this end, we first linearized the model, then used stochastic control theory by introducing a quadratic cost function, which transforms the NMM into a stochastic linear quadratic regulator (LQR). Solving the LQR problem introduces a regime in which the NMM parameters, specifically the effective connectivities between neuronal populations, must vary with time. This is in contrast to current NMMs, which assume a constant parameter set for a given condition or task. We further simulated energy-constrained stochastic control of a specific NMM, the Wilson and Cowan model of two coupled neuronal populations, one of which is excitatory and the other inhibitory. These simulations demonstrate that with varying weights of the energy-cost function, the NMM parameters show different time-varying behavior. We conclude that constraining NMMs according to energy consumption may create more realistic models. We further propose to employ linear NMMs with time-varying parameters as an alternative to traditional nonlinear NMMs with constant parameters.
Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.
NASA Astrophysics Data System (ADS)
Sapin, J. R.; Saito, L.; Rajagopalan, B.; Caldwell, R. J.
2013-12-01
Preservation of the Chinook salmon fishery on the Sacramento River in California has been a major concern since the winter-run Chinook was listed as threatened in 1989. The construction of Shasta Dam and Reservoir in 1945 prevented the salmon from reaching their native cold-water spawning habitat, resulting in severe population declines. The temperature control device (TCD) installed at Shasta Dam in 1997 provides increased capabilities of supplying cold-water habitat downstream of the dam to stimulate salmon spawning. However, increased air temperatures due to climate change could make it more difficult to meet downstream temperature targets with the TCD. By coupling stochastic hydroclimatology generation with two-dimensional hydrodynamic modeling of the reservoir we can simulate TCD operations under extreme climate conditions. This is accomplished by stochastically generating climate and inflow scenarios (created with historical data from NOAA, USGS and USBR) as input into a CE-QUAL-W2 model of the reservoir that can simulate TCD operations. Simulations will investigate if selective withdrawal from multiple gates of the TCD are capable of meeting temperature targets downstream of the dam under extreme hydroclimatic conditions. Moreover, our non-parametric methods for stochastically generating climate and inflow scenarios are capable of producing statistically representative years of extreme wet or extreme dry conditions beyond what is seen in the historical record. This allows us to simulate TCD operations for unprecedented hydroclimatic conditions with implications for climate changes in the watershed. Preliminary results of temperature outputs from simulations of TCD operations under extreme climate conditions with CE-QUAL-W2 will be presented. The conditions chosen for simulation are grounded to real-world managerial concerns by utilizing collaborative workshops with reservoir managers to establish which hydroclimatic scenarios would be of most concern for
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for the general stochastic nonlinear systems modeled by Itô type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on the stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely since the initial time, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that the results of universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain the universal fuzzy integral sliding-mode controllers, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
Modeling of chemotactic steering of bacteria-based microrobot using a population-scale approach
Cho, Sunghoon; Choi, Young Jin; Zheng, Shaohui; Han, Jiwon; Ko, Seong Young; Park, Jong-Oh; Park, Sukho
2015-01-01
The bacteria-based microrobot (Bacteriobot) is one of the most effective vehicles for drug delivery systems. The bacteriobot consists of a microbead containing therapeutic drugs and bacteria as a sensor and an actuator that can target and guide the bacteriobot to its destination. Many researchers are developing bacteria-based microrobots and establishing the model. In spite of these efforts, a motility model for bacteriobots steered by chemotaxis remains elusive. Because bacterial movement is random and should be described using a stochastic model, bacterial response to the chemo-attractant is difficult to anticipate. In this research, we used a population-scale approach to overcome the main obstacle to the stochastic motion of single bacterium. Also known as Keller-Segel's equation in chemotaxis research, the population-scale approach is not new. It is a well-designed model derived from transport theory and adaptable to any chemotaxis experiment. In addition, we have considered the self-propelled Brownian motion of the bacteriobot in order to represent its stochastic properties. From this perspective, we have proposed a new numerical modelling method combining chemotaxis and Brownian motion to create a bacteriobot model steered by chemotaxis. To obtain modeling parameters, we executed motility analyses of microbeads and bacteriobots without chemotactic steering as well as chemotactic steering analysis of the bacteriobots. The resulting proposed model shows sound agreement with experimental data with a confidence level <0.01. PMID:26487902
Modeling of chemotactic steering of bacteria-based microrobot using a population-scale approach.
Cho, Sunghoon; Choi, Young Jin; Zheng, Shaohui; Han, Jiwon; Ko, Seong Young; Park, Jong-Oh; Park, Sukho
2015-09-01
The bacteria-based microrobot (Bacteriobot) is one of the most effective vehicles for drug delivery systems. The bacteriobot consists of a microbead containing therapeutic drugs and bacteria as a sensor and an actuator that can target and guide the bacteriobot to its destination. Many researchers are developing bacteria-based microrobots and establishing the model. In spite of these efforts, a motility model for bacteriobots steered by chemotaxis remains elusive. Because bacterial movement is random and should be described using a stochastic model, bacterial response to the chemo-attractant is difficult to anticipate. In this research, we used a population-scale approach to overcome the main obstacle to the stochastic motion of single bacterium. Also known as Keller-Segel's equation in chemotaxis research, the population-scale approach is not new. It is a well-designed model derived from transport theory and adaptable to any chemotaxis experiment. In addition, we have considered the self-propelled Brownian motion of the bacteriobot in order to represent its stochastic properties. From this perspective, we have proposed a new numerical modelling method combining chemotaxis and Brownian motion to create a bacteriobot model steered by chemotaxis. To obtain modeling parameters, we executed motility analyses of microbeads and bacteriobots without chemotactic steering as well as chemotactic steering analysis of the bacteriobots. The resulting proposed model shows sound agreement with experimental data with a confidence level <0.01.
Yang, Xinsong; Cao, Jinde; Qiu, Jianlong
2015-05-01
This paper concerns the pth moment synchronization in an array of generally coupled memristor-based neural networks with time-varying discrete delays, unbounded distributed delays, as well as stochastic perturbations. Hybrid controllers are designed to cope with the uncertainties caused by the state-dependent parameters: (a) state feedback controllers combined with delayed impulsive controller; (b) adaptive controller combined with delayed impulsive controller. Based on an impulsive differential inequality, the properties of random variables, the framework of Filippov solution, and Lyapunov functional method, sufficient conditions are derived to guarantee that the considered coupled memristor-based neural networks can be pth moment globally exponentially synchronized onto an isolated node under both of the two classes of hybrid impulsive controllers. Finally, numerical simulations are given to show the effectiveness of the theoretical results.
NASA Astrophysics Data System (ADS)
Shirata, Kento; Inden, Yuki; Kasai, Seiya; Oya, Takahide; Hagiwara, Yosuke; Kaeriyama, Shunichi; Nakamura, Hideyuki
2016-04-01
We investigated the robust detection of surface electromyogram (EMG) signals based on the stochastic resonance (SR) phenomenon, in which the response to weak signals is optimized by adding noise, combined with multiple surface electrodes. Flexible carbon nanotube composite paper (CNT-cp) was applied to the surface electrode, which showed good performance that is comparable to that of conventional Ag/AgCl electrodes. The SR-based EMG signal system integrating an 8-Schmitt-trigger network and the multiple-CNT-cp-electrode array successfully detected weak EMG signals even when the subject’s body is in the motion, which was difficult to achieve using the conventional technique. The feasibility of the SR-based EMG detection technique was confirmed by demonstrating its applicability to robot hand control.
On impulsive integrated pest management models with stochastic effects
Akman, Olcay; Comar, Timothy D.; Hrozencik, Daniel
2015-01-01
We extend existing impulsive differential equation models for integrated pest management (IPM) by including stage structure for both predator and prey as well as by adding stochastic elements in the birth rate of the prey. Based on our model, we propose an approach that incorporates various competing stochastic components. This approach enables us to select a model with optimally determined weights for maximum accuracy and precision in parameter estimation. This is significant in the case of IPM because the proposed model accommodates varying unknown environmental and climatic conditions, which affect the resources needed for pest eradication. PMID:25954144
On impulsive integrated pest management models with stochastic effects.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2015-01-01
We extend existing impulsive differential equation models for integrated pest management (IPM) by including stage structure for both predator and prey as well as by adding stochastic elements in the birth rate of the prey. Based on our model, we propose an approach that incorporates various competing stochastic components. This approach enables us to select a model with optimally determined weights for maximum accuracy and precision in parameter estimation. This is significant in the case of IPM because the proposed model accommodates varying unknown environmental and climatic conditions, which affect the resources needed for pest eradication.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
Stochastic averaging based on generalized harmonic functions for energy harvesting systems
NASA Astrophysics Data System (ADS)
Jiang, Wen-An; Chen, Li-Qun
2016-09-01
A stochastic averaging method is proposed for nonlinear vibration energy harvesters subject to Gaussian white noise excitation. The generalized harmonic transformation scheme is applied to decouple the electromechanical equations, and then obtained an equivalent nonlinear system which is uncoupled to an electric circuit. The frequency function is given through the equivalent potential energy which is independent of the total energy. The stochastic averaging method is developed by using the generalized harmonic functions. The averaged Itô equations are derived via the proposed procedure, and the Fokker-Planck-Kolmogorov (FPK) equations of the decoupled system are established. The exact stationary solution of the averaged FPK equation is used to determine the probability densities of the amplitude and the power of the stationary response. The procedure is applied to three different type Duffing vibration energy harvesters under Gaussian white excitations. The effects of the system parameters on the mean-square voltage and the output power are examined. It is demonstrated that quadratic nonlinearity only and quadratic combined with properly cubic nonlinearities can increase the mean-square voltage and the output power, respectively. The approximate analytical outcomes are qualitatively and quantitatively supported by the Monte Carlo simulations.
[Orbitozygomatic approaches to the skull base].
Cherekaev, V A; Gol'bin, D A; Belov, A I; Radchenkov, N S; Lasunin, N V; Vinokurov, A G
2015-01-01
The paper is written in the lecture format and dedicated to one of the main basal approaches, the orbitozygomatic approach, that has been widely used by neurosurgeons for several decades. The authors describe the historical background of the approach development and the surgical technique features and also analyze the published data about application of the orbitozygomatic approach in surgery for skull base tumors and cerebral aneurysms.
Levin, Pavel; Lefebvre, Jérémie; Perkins, Theodore J
2012-12-07
Many biomolecular systems depend on orderly sequences of chemical transformations or reactions. Yet, the dynamics of single molecules or small-copy-number molecular systems are significantly stochastic. Here, we propose state sequence analysis--a new approach for predicting or visualizing the behaviour of stochastic molecular systems by computing maximum probability state sequences, based on initial conditions or boundary conditions. We demonstrate this approach by analysing the acquisition of drug-resistance mutations in the human immunodeficiency virus genome, which depends on rare events occurring on the time scale of years, and the stochastic opening and closing behaviour of a single sodium ion channel, which occurs on the time scale of milliseconds. In both cases, we find that our approach yields novel insights into the stochastic dynamical behaviour of these systems, including insights that are not correctly reproduced in standard time-discretization approaches to trajectory analysis.
Lux, Slawomir A; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin
2016-01-01
The paper reports application of a Markov-like stochastic process agent-based model and a "virtual farm" concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a "bottom-up ethological" approach and emulates behavior of the "primary IPM actors"-large cohorts of individual insects-within seasonally changing mosaics of spatiotemporally complex faming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the "virtual farm" approach-were discussed.
NASA Astrophysics Data System (ADS)
Goda, Katsuichiro; Yasuda, Tomohiro; Mori, Nobuhito; Mai, P. Martin
2015-06-01
The sensitivity and variability of spatial tsunami inundation footprints in coastal cities and towns due to a megathrust subduction earthquake in the Tohoku region of Japan are investigated by considering different fault geometry and slip distributions. Stochastic tsunami scenarios are generated based on the spectral analysis and synthesis method with regards to an inverted source model. To assess spatial inundation processes accurately, tsunami modeling is conducted using bathymetry and elevation data with 50 m grid resolutions. Using the developed methodology for assessing variability of tsunami hazard estimates, stochastic inundation depth maps can be generated for local coastal communities. These maps are important for improving disaster preparedness by understanding the consequences of different situations/conditions, and by communicating uncertainty associated with hazard predictions. The analysis indicates that the sensitivity of inundation areas to the geometrical parameters (i.e., top-edge depth, strike, and dip) depends on the tsunami source characteristics and the site location, and is therefore complex and highly nonlinear. The variability assessment of inundation footprints indicates significant influence of slip distributions. In particular, topographical features of the region, such as ria coast and near-shore plain, have major influence on the tsunami inundation footprints.
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate what we believe to be the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo.
Jingyi, Zhu
2015-01-01
The detecting mechanism of carbon nanotubes gas sensor based on multi-stable stochastic resonance (MSR) model was studied in this paper. A numerically stimulating model based on MSR was established. And gas-ionizing experiment by adding electronic white noise to induce 1.65 MHz periodic component in the carbon nanotubes gas sensor was performed. It was found that the signal-to-noise ratio (SNR) spectrum displayed 2 maximal values, which accorded to the change of the broken-line potential function. The experimental results of gas-ionizing experiment demonstrated that periodic component of 1.65 MHz had multiple MSR phenomena, which was in accordance with the numerical stimulation results. In this way, the numerical stimulation method provides an innovative method for the detecting mechanism research of carbon nanotubes gas sensor.
NASA Astrophysics Data System (ADS)
Liu, Zhiyuan; Meng, Qiang
2014-05-01
This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares to time unit, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions have caused randomness to the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem with consideration of interactions between auto flows and transit (bus) flows. Then, a fixed-point model with unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested by a network example.
Hipp, John R; Wang, Cheng; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M
2015-05-01
Although stochastic actor based models (e.g., as implemented in the SIENA software program) are growing in popularity as a technique for estimating longitudinal network data, a relatively understudied issue is the consequence of missing network data for longitudinal analysis. We explore this issue in our research note by utilizing data from four schools in an existing dataset (the AddHealth dataset) over three time points, assessing the substantive consequences of using four different strategies for addressing missing network data. The results indicate that whereas some measures in such models are estimated relatively robustly regardless of the strategy chosen for addressing missing network data, some of the substantive conclusions will differ based on the missing data strategy chosen. These results have important implications for this burgeoning applied research area, implying that researchers should more carefully consider how they address missing data when estimating such models.
Koh, Wonryull; Blackwell, Kim T.
2011-01-01
Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
Relative risk estimation for malaria disease mapping based on stochastic SIR-SI model in Malaysia
NASA Astrophysics Data System (ADS)
Samat, Nor Azah; Ma'arof, Syafiqah Husna Mohd Imam
2016-10-01
Disease mapping is a study on the geographical distribution of a disease to represent the epidemiology data spatially. The production of maps is important to identify areas that deserve closer scrutiny or more attention. In this study, a mosquito-borne disease called Malaria is the focus of our application. Malaria disease is caused by parasites of the genus Plasmodium and is transmitted to people through the bites of infected female Anopheles mosquitoes. Precautionary steps need to be considered in order to avoid the malaria virus from spreading around the world, especially in the tropical and subtropical countries, which would subsequently increase the number of Malaria cases. Thus, the purpose of this paper is to discuss a stochastic model employed to estimate the relative risk of malaria disease in Malaysia. The outcomes of the analysis include a Malaria risk map for all 16 states in Malaysia, revealing the high and low risk areas of Malaria occurrences.
Variance-based sensitivity indices for stochastic models with correlated inputs
Kala, Zdeněk
2015-03-10
The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
Stochastic extension of cellular manufacturing systems: a queuing-based analysis
NASA Astrophysics Data System (ADS)
Fardis, Fatemeh; Zandi, Afagh; Ghezavati, Vahidreza
2013-07-01
Clustering parts and machines into part families and machine cells is a major decision in the design of cellular manufacturing systems which is defined as cell formation. This paper presents a non-linear mixed integer programming model to design cellular manufacturing systems which assumes that the arrival rate of parts into cells and machine service rate are stochastic parameters and described by exponential distribution. Uncertain situations may create a queue behind each machine; therefore, we will consider the average waiting time of parts behind each machine in order to have an efficient system. The objective function will minimize summation of idleness cost of machines, sub-contracting cost for exceptional parts, non-utilizing machine cost, and holding cost of parts in the cells. Finally, the linearized model will be solved by the Cplex solver of GAMS, and sensitivity analysis will be performed to illustrate the effectiveness of the parameters.
NASA Astrophysics Data System (ADS)
Zalys-Geller, E.; Hatridge, M.; Silveri, M.; Narla, A.; Sliwa, K. M.; Shankar, S.; Girvin, S. M.; Devoret, M. H.
2015-03-01
Remote entanglement of two superconducting qubits may be accomplished by first entangling them with flying coherent microwave pulses, and then erasing the which-path information of these pulses by using a non-degenerate parametric amplifier such as the Josephson Parametric Converter (JPC). Crucially, this process requires no direct interaction between the two qubits. The JPC, however, will fail to completely erase the which-path information if the flying microwave pulses encode any difference in dynamics of the two qubit-cavity systems. This which-path information can easily arise from mismatches in the cavity linewidths and the cavity dispersive shifts from their respective qubits. Through analysis of the Stochastic Master Equation for this system, we have found a strategy for shaping the measurement pulses to eliminate the effect of these mismatches on the entangling measurement. We have then confirmed the effectiveness of this strategy by numerical simulation. Work supported by: IARPA, ARO, and NSF.
A biophysically based neural model of matching law behavior: melioration by stochastic synapses.
Soltani, Alireza; Wang, Xiao-Jing
2006-04-05
In experiments designed to uncover the neural basis of adaptive decision making in a foraging environment, neuroscientists have reported single-cell activities in the lateral intraparietal cortex (LIP) that are correlated with choice options and their subjective values. To investigate the underlying synaptic mechanism, we considered a spiking neuron model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This general model is tested in a matching task in which rewards on two targets are scheduled randomly with different rates. Our main results are threefold. First, we show that plastic synapses provide a natural way to integrate past rewards and estimate the local (in time) "return" of a choice. Second, our model reproduces the matching behavior (i.e., the proportional allocation of choices matches the relative reinforcement obtained on those choices, which is achieved through melioration in individual trials). Our model also explains the observed "undermatching" phenomenon and points to biophysical constraints (such as finite learning rate and stochastic neuronal firing) that set the limits to matching behavior. Third, although our decision model is an attractor network exhibiting winner-take-all competition, it captures graded neural spiking activities observed in LIP, when the latter were sorted according to the choices and the difference in the returns for the two targets. These results suggest that neurons in LIP are involved in selecting the oculomotor responses, whereas rewards are integrated and stored elsewhere, possibly by plastic synapses and in the form of the return rather than income of choice options.
Hossain, Md Kamrul; Kamil, Anton Abdulbasah; Baten, Md Azizul; Mustafa, Adli
2012-01-01
The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro) throughout the most recent data available comprising the period 1989-2008. Results indicate that technical efficiency was observed as higher for Boro among the three types of rice, but the overall technical efficiency of rice production was found around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same levels for both Translog SFA with half normal distribution and DEA. Estimated TFP from SFA is forecasted with ARIMA (2, 0, 0) model. ARIMA (1, 0, 0) model is used to forecast TFP of Aman from DEA estimation.
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.
Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H
2015-02-01
The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.170) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed for rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
Debris-flow risk analysis in a managed torrent based on a stochastic life-cycle performance.
Ballesteros Cánovas, J A; Stoffel, M; Corona, C; Schraml, K; Gobiet, A; Tani, S; Sinabell, F; Fuchs, S; Kaitna, R
2016-07-01
Two key factors can affect the functional ability of protection structures in mountains torrents, namely (i) infrastructure maintenance of existing infrastructures (as a majority of existing works is in the second half of their life cycle), and (ii) changes in debris-flow activity as a result of ongoing and expected future climatic changes. Here, we explore the applicability of a stochastic life-cycle performance to assess debris-flow risk in the heavily managed Wartschenbach torrent (Lienz region, Austria) and to quantify associated, expected economic losses. We do so by considering maintenance costs to restore infrastructure in the aftermath of debris-flow events as well as by assessing the probability of check dam failure (e.g., as a result of overload). Our analysis comprises two different management strategies as well as three scenarios defining future changes in debris-flow activity resulting from climatic changes. At the study site, an average debris-flow frequency of 21 events per decade was observed for the period 1950-2000; activity at the site is projected to change by +38% to -33%, according to the climate scenario used. Comparison of the different management alternatives suggests that the current mitigation strategy will allow to reduce expected damage to infrastructure and population almost fully (89%). However, to guarantee a comparable level of safety, maintenance costs is expected to increase by 57-63%, with an increase of maintenance costs by ca. 50% for each intervention. Our analysis therefore also highlights the importance of taking maintenance costs into account for risk assessments realized in managed torrent systems, as they result both from progressive and event-related deteriorations. We conclude that the stochastic life-cycle performance adopted in this study represents indeed an integrated approach to assess the long-term effects and costs of prevention structures in managed torrents.
Stochastic stage-structured modeling of the adaptive immune system
Chao, D. L.; Davenport, M. P.; Forrest, S.; Perelson, Alan S.,
2003-01-01
We have constructed a computer model of the cytotoxic T lymphocyte (CTL) response to antigen and the maintenance of immunological memory. Because immune responses often begin with small numbers of cells and there is great variation among individual immune systems, we have chosen to implement a stochastic model that captures the life cycle of T cells more faithfully than deterministic models. Past models of the immune response have been differential equation based, which do not capture stochastic effects, or agent-based, which are computationally expensive. We use a stochastic stage-structured approach that has many of the advantages of agent-based modeling but is more efficient. Our model can provide insights into the effect infections have on the CTL repertoire and the response to subsequent infections.
Aquifer Structure Identification Using Stochastic Inversion
Harp, Dylan R; Dai, Zhenxue; Wolfsberg, Andrew V; Vrugt, Jasper A
2008-01-01
This study presents a stochastic inverse method for aquifer structure identification using sparse geophysical and hydraulic response data. The method is based on updating structure parameters from a transition probability model to iteratively modify the aquifer structure and parameter zonation. The method is extended to the adaptive parameterization of facies hydraulic parameters by including these parameters as optimization variables. The stochastic nature of the statistical structure parameters leads to nonconvex objective functions. A multi-method genetically adaptive evolutionary approach (AMALGAM-SO) was selected to perform the inversion given its search capabilities. Results are obtained as a probabilistic assessment of facies distribution based on indicator cokriging simulation of the optimized structural parameters. The method is illustrated by estimating the structure and facies hydraulic parameters of a synthetic example with a transient hydraulic response.
Estimation of turbulent channel flow based on the wall measurement with a statistical approach
NASA Astrophysics Data System (ADS)
Hasegawa, Yosuke; Suzuki, Takao
2016-11-01
A turbulent channel flow at Ret au = 100 with periodic boundary conditions is estimated with linear stochastic estimation only based on the wall measurement, i.e. the shear-stress in the streamwise and spanwise directions as well as the pressure over the entire wavenumbers. The results reveal that instantaneous measurement on the wall governs the success of the estimation in y+ < 20. Degrees of agreement are equivalent to those reported by Chevalier et al. (2006) using a data-assimilation approach. This suggests that the instantaneous wall information dictates the estimation rather than the estimator solving the dynamical system. We feed the velocity components from the linear stochastic estimation via the body-force term into the Navier-Stokes system; however, the estimation slightly improves in the log layer, indicating some benefit of involving a dynamical system but over-suppression of turbulent kinetic energy beyond the viscous sublayer by the linear stochastic estimation. Motions inaccurately estimated in the buffer layer prevent from further reconstruction toward the centerline even if we relax the feedback forcing and let the flow evolve nonlinearly through the estimator. We also argue the inherent limitation of turbulent flow estimation based on the wall measurement.
The Tutor's Approach in Base Groups (PBL)
ERIC Educational Resources Information Center
Silen, Charlotte
2006-01-01
In this article, the concept of approach related to tutor functioning in problem-based learning (PBL) is explored and the significance of a phenomenological perspective of the body in relation to learning and tutoring is investigated. The aim has been to understand the concept of approach in a context where the individual, thoughts, emotions and…
Computer-Based Training: An Institutional Approach.
ERIC Educational Resources Information Center
Barker, Philip; Manji, Karim
1992-01-01
Discussion of issues related to computer-assisted learning (CAL) and computer-based training (CBT) describes approaches to electronic learning; principles underlying courseware development to support these approaches; and a plan for creation of a CAL/CBT development center, including its functional role, campus services, staffing, and equipment…
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc
2014-05-01
The process of hydrological model parameter calibration is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe). This is because each algorithm is better suited for one type of optimization problem rather than another. This research project's aim was twofold. First, it was sought upon to find various features in the calibration problem fitness landscapes to map the encountered problem types to the best possible optimization algorithm. Second, the optimal number of model evaluations in order to minimize resources usage and maximize overall model quality was investigated. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was performed using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was performed 30 times for statistical analysis. With every parameter set tested during the calibration process, the validation value was taken on a separate period. It was then possible to outline the calibration skill versus the validation skill for the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms) and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration score of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this
NASA Astrophysics Data System (ADS)
Gao, Leitao; Zhao, Guangshe; Li, Guoqi; Yang, Zhaoxu
2017-03-01
The leader selection problem refers to determining a predefined number of agents as leaders in order to minimize the mean-square deviation from consensus in stochastically forced networks. The original leader selection problem is formulated as a non-convex optimization problem where matrix variables are involved. By relaxing the constraints, a convex optimization model can be obtained. By introducing a chain rule of matrix differentiation, we can obtain the gradient of the cost function which consists matrix variables. We develop a "revisited projected gradient method" (RPGM) and a "probabilistic projected gradient method" (PPGM) to solve the two formulated convex and non-convex optimization problems, respectively. The convergence property of both methods is established. For convex optimization model, the global optimal solution can be achieved by RPGM, while for the original non-convex optimization model, a suboptimal solution is achieved by PPGM. Simulation results ranging from the synthetic to real-life networks are provided to show the effectiveness of RPGM and PPGM. This works will deepen the understanding of leader selection problems and enable applications in various real-life distributed control problems.
A Nonlinear Dynamical Systems based Model for Stochastic Simulation of Streamflow
NASA Astrophysics Data System (ADS)
Erkyihun, S. T.; Rajagopalan, B.; Zagona, E. A.
2014-12-01
Traditional time series methods model the evolution of the underlying process as a linear or nonlinear function of the autocorrelation. These methods capture the distributional statistics but are incapable of providing insights into the dynamics of the process, the potential regimes, and predictability. This work develops a nonlinear dynamical model for stochastic simulation of streamflows. In this, first a wavelet spectral analysis is employed on the flow series to isolate dominant orthogonal quasi periodic timeseries components. The periodic bands are added denoting the 'signal' component of the time series and the residual being the 'noise' component. Next, the underlying nonlinear dynamics of this combined band time series is recovered. For this the univariate time series is embedded in a d-dimensional space with an appropriate lag T to recover the state space in which the dynamics unfolds. Predictability is assessed by quantifying the divergence of trajectories in the state space with time, as Lyapunov exponents. The nonlinear dynamics in conjunction with a K-nearest neighbor time resampling is used to simulate the combined band, to which the noise component is added to simulate the timeseries. We demonstrate this method by applying it to the data at Lees Ferry that comprises of both the paleo reconstructed and naturalized historic annual flow spanning 1490-2010. We identify interesting dynamics of the signal in the flow series and epochal behavior of predictability. These will be of immense use for water resources planning and management.
Hui, Guohua; Zhang, Jianfeng; Li, Jian; Zheng, Le
2016-04-15
Quantitative and qualitative determination of sucrose from complex tastant mixtures using Cu foam electrode was investigated in this study. Cu foam was prepared and its three-dimensional (3-D) mesh structure was characterized by scanning electron microscopy (SEM). Cu foam was utilized as working electrode in three-electrode electrochemical system. Cyclic voltammetry (CV) scanning results exhibited the oxidation procedure of sucrose on Cu foam electrode. Amperometric i-t scanning results indicated that Cu foam electrode selectively responded to sucrose from four tastant mixtures with low limit of detection (LOD) of 35.34 μM, 49.85 μM, 45.89 μM, and 26.81 μM, respectively. The existence of quinine, NaCl, citric acid (CA) and their mixtures had no effect on sucrose detection. Furthermore, mixtures containing different tastants could be discriminated by non-linear double-layered cascaded series stochastic resonance (DCSSR) output signal-to-noise ratio (SNR) eigen peak parameters of CV measurement data. The proposed method provides a promising way for sweetener analysis of commercial food.
Lux, Slawomir A.; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin
2016-01-01
The paper reports application of a Markov-like stochastic process agent-based model and a “virtual farm” concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a “bottom-up ethological” approach and emulates behavior of the “primary IPM actors”—large cohorts of individual insects—within seasonally changing mosaics of spatiotemporally complex faming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the “virtual farm” approach—were discussed. PMID:27602000
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-01-01
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
A consistent concept for high- and low-frequency dynamics based on stochastic modal analysis
NASA Astrophysics Data System (ADS)
Pradlwarter, H. J.; Schuëller, G. I.
2005-12-01
Accurate expressions for the kinetic energy in substructure excited by white noise and broad-band spectra, based on classical random vibration theory and modal analysis, are presented. The approach is accurate, general and valid for all frequency ranges, since no simplifying are needed to arrive at the presented power flow relations. Strong coupling, local energies and energies in substructures can be analyzed for uncorrelated as well as for correlated excitation. The results are compared with statistical energy analysis (SEA) which is applicable for the high-frequency range. It is shown, that the SEA representations is only suitable for very weak coupling between substructures, while an inverse representation does not show the observed limitations. Energies in substructures are not sensitive to variations of the eigenfrequencies or mode shapes due to the summation over frequency ranges and over the domain of the substructure. Hence, modal analysis will lead to accurate estimates even in case FE analysis fails to provide accurate eigenfrequencies and mode shapes, since the coupling between substructures is still represented with acceptable accuracy. Uncertain structural properties will affect the coupling between substructures and therefore the power flow. It is suggested to assess this influence by using Monte Carlo simulation.
Stochastic dynamics and chaos in the 3D Hindmarsh-Rose model
NASA Astrophysics Data System (ADS)
Ryashko, Lev; Bashkirtseva, Irina; Slepukhina, Evdokia; Fedotov, Sergei
2016-12-01
We study the effect of random disturbances on the three-dimensional Hindmarsh-Rose model of neural activity. In a parametric zone, where the only attractor of the system is a stable equilibrium, a stochastic generation of bursting oscillations is observed. For a sufficiently small noise, random states concentrate near the equilibrium. With an increase of the noise intensity, along with small-amplitude oscillations around the equilibrium, bursts are observed. The relationship of the noise-induced generation of bursts with system transitions from order to chaos is discussed. For a quantitative analysis of these stochastic phenomena, an approach based on the stochastic sensitivity function technique is suggested.
Jaffré, Malo; Le Galliard, Jean-François
2016-12-01
Integral projection models (IPM) make it possible to study populations structured by continuous traits. Recently, Vindenes et al. (Ecology 92:1146-1156, 2011) proposed an extended IPM to analyse the dynamics of small populations in stochastic environments, but this model has not yet been used to conduct population viability analyses. Here, we used the extended IPM to analyse the stochastic dynamics of IPM of small size-structured populations in one plant and one animal species (evening primrose and common lizard) including demographic stochasticity in both cases and environmental stochasticity in the lizard model. We also tested the accuracy of a diffusion approximation of the IPM for the two empirical systems. In both species, the elasticity for λ was higher with respect to parameters linked to body growth and size-dependent reproduction rather than survival. An analytical approach made it possible to quantify demographic and environmental variance to calculate the average stochastic growth rate. Demographic variance was further decomposed to gain insights into the most important size classes and demographic components. A diffusion approximation provided a remarkable fit to the stochastic dynamics and cumulative extinction risk, except for very small populations where stochastic growth rate was biased upward or downward depending on the model. These results confirm that the extended IPM provides a powerful tool to assess the conservation status and compare the stochastic demography of size-structured species, but should be complemented with individual based models to obtain unbiased estimates for very small populations of conservation concern.
Stochastic Phase Resetting: a Theory for Deep Brain Stimulation
NASA Astrophysics Data System (ADS)
Tass, Peter A.
2000-03-01
A stochastic approach to phase resetting in clusters of interacting oscillators is presented. This theory explains how a stimulus, especially a single pulse, induces synchronization and desynchronization processes. The theory is used to design a new technique for deep brain stimulation in patients suffering from Parkinson's disease or essential tremor that do no longer respond to drug therapy. This stimulation mode is a feedback controlled single pulse stimulation. The feedback signal is registered with the deep brain electrode, and the desynchronizing pulses are administered via the same electrode. The stochastic phase resetting theory is used as a starting point of a model based design of intelligent and gentle deep brain stimulation techniques.
Stochastic Satbility and Performance Robustness of Linear Multivariable Systems
NASA Technical Reports Server (NTRS)
Ryan, Laurie E.; Stengel, Robert F.
1990-01-01
Stochastic robustness, a simple technique used to estimate the robustness of linear, time invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters are discussed. The application of stochastic performance robustness, and the relationship between performance objectives and design parameters are demonstrated by means of example. The results prove stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
Approaches to lunar base life support
NASA Technical Reports Server (NTRS)
Brown, M. F.; Edeen, M. A.
1990-01-01
Various approaches to reliable, low maintenance, low resupply regenerative long-term life support for lunar base application are discussed. The first approach utilizes Space Station Freedom physiochemical systems technology which has closed air and water loops with approximately 99 and 90 percent closure respectively, with minor subsystem changes to the SSF baseline improving the level of water resupply for the water loop. A second approach would be a physiochemical system, including a solid waste processing system and improved air and water loop closure, which would require only food and nitrogen for resupply. A hybrid biological/physiochemical life support system constitutes the third alternative, incorporating some level of food production via plant growth into the life support system. The approaches are described in terms of mass, power, and resupply requirements; and the potential evolution of a small, initial outpost to a large, self-sustaining base is discussed.
Guo, P.; Huang, G.H.
2010-03-15
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their
Basharov, A. M.
2012-09-15
It is shown that the effective Hamiltonian representation, as it is formulated in author's papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics in a single-mode cavity without losses on the mirrors is considered, which contains identical intracavity atoms coupled to the external vacuum electromagnetic field. For some atomic densities, the photons of the cavity mode are 'locked' inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.
NASA Astrophysics Data System (ADS)
Zhong, Dongzhou; Luo, Wei; Xu, Geliang
2016-09-01
Using the dynamical properties of the polarization bistability that depends on the detuning of the injected light, we propose a novel approach to implement reliable all-optical stochastic logic gates in the cascaded vertical cavity surface emitting lasers (VCSELs) with optical-injection. Here, two logic inputs are encoded in the detuning of the injected light from a tunable CW laser. The logic outputs are decoded from the two orthogonal polarization lights emitted from the optically injected VCSELs. For the same logic inputs, under electro-optic modulation, we perform various digital signal processing (NOT, AND, NAND, XOR, XNOR, OR, NOR) in the all-optical domain by controlling the logic operation of the applied electric field. Also we explore their delay storages by using the mechanism of the generalized chaotic synchronization. To quantify the reliabilities of these logic gates, we further demonstrate their success probabilities. Project supported by the National Natural Science Foundation of China (Grant No. 61475120) and the Innovative Projects in Guangdong Colleges and Universities, China (Grant Nos. 2014KTSCX134 and 2015KTSCX146).
NASA Astrophysics Data System (ADS)
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-07-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-01-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses. PMID:27405788
Network motif identification in stochastic networks
NASA Astrophysics Data System (ADS)
Jiang, Rui; Tu, Zhidong; Chen, Ting; Sun, Fengzhu
2006-06-01
Network motifs have been identified in a wide range of networks across many scientific disciplines and are suggested to be the basic building blocks of most complex networks. Nonetheless, many networks come with intrinsic and/or experimental uncertainties and should be treated as stochastic networks. The building blocks in these networks thus may also have stochastic properties. In this article, we study stochastic network motifs derived from families of mutually similar but not necessarily identical patterns of interconnections. We establish a finite mixture model for stochastic networks and develop an expectation-maximization algorithm for identifying stochastic network motifs. We apply this approach to the transcriptional regulatory networks of Escherichia coli and Saccharomyces cerevisiae, as well as the protein-protein interaction networks of seven species, and identify several stochastic network motifs that are consistent with current biological knowledge. expectation-maximization algorithm | mixture model | transcriptional regulatory network | protein-protein interaction network
Blaskiewicz, M.
2011-01-01
Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
Monte Carlo Ray Tracing (MCRT) method is a versatile application for simulating radiative transfer regime of the Solar - Atmosphere - Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, as an example like a forest area. Due to its robustness to the complexity of the 3-D scene altering, MCRT method is also employed for simulating canopy radiative transfer regime as the validation source of other radiative transfer models. In MCRT modeling within vegetation, one basic step is the canopy scene set up. 3-D scanning application was used for representing canopy structure as accurately as possible, but it is time consuming. Botanical growth function can be used to model the single tree growth, but cannot be used to express the impaction among trees. L-System is also a functional controlled tree growth simulation model, but it costs large computing memory. Additionally, it only models the current tree patterns rather than tree growth during we simulate the radiative transfer regime. Therefore, it is much more constructive to use regular solid pattern like ellipsoidal, cone, cylinder etc. to indicate single canopy. Considering the allelopathy phenomenon in some open forest optical images, each tree in its own `domain' repels other trees. According to this assumption a stochastic circle packing algorithm is developed to generate the 3-D canopy scene in this study. The canopy coverage (%) and the tree amount (N) of the 3-D scene are declared at first, similar to the random open forest image. Accordingly, we randomly generate each canopy radius (rc). Then we set the circle central coordinate on XY-plane as well as to keep circles separate from each other by the circle packing algorithm. To model the individual tree, we employ the Ishikawa's tree growth regressive model to set the tree parameters including DBH (dt), tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
Physics-based approach to haptic display
NASA Technical Reports Server (NTRS)
Brown, J. Michael; Colgate, J. Edward
1994-01-01
This paper addresses the implementation of complex multiple degree of freedom virtual environments for haptic display. We suggest that a physics based approach to rigid body simulation is appropriate for hand tool simulation, but that currently available simulation techniques are not sufficient to guarantee successful implementation. We discuss the desirable features of a virtual environment simulation, specifically highlighting the importance of stability guarantees.
Sensitivity of footbridge vibrations to stochastic walking parameters
NASA Astrophysics Data System (ADS)
Pedersen, Lars; Frier, Christian
2010-06-01
Some footbridges are so slender that pedestrian traffic can cause excessive vibrations and serviceability problems. Design guidelines outline procedures for vibration serviceability checks, but it is noticeable that they rely on the assumption that the action is deterministic, although in fact it is stochastic as different pedestrians generate different dynamic forces. For serviceability checks of footbridge designs it would seem reasonable to consider modelling the stochastic nature of the main parameters describing the excitation, such as for instance the load amplitude and the step frequency of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some of the walking parameters, the significance of modelling the various walking parameters stochastically rather than deterministically is also investigated providing insight into which modelling efforts need to be made for arriving at reliable estimates of statistical distributions of footbridge vibrations. The studies for the paper are based on numerical simulations of footbridge responses and on the use of Monte Carlo simulations for modelling the stochastic nature of
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej
2015-01-01
Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the costal-based station to be monitored with limited transition speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
Facial Translocation Approach to the Cranial Base
Arriaga, Moises A.; Janecka, Ivo P.
1991-01-01
Surgical exposure of the nasopharyngeal region of the cranial base is difficult because of its proximity to key anatomic structures. Our laboratory study outlines the anatomic basis for a new approach to this complex topography. Dissections were performed on eight cadaver halves and two fresh specimens injected with intravascular silicone rubber compound. By utilizing facial soft tissue translocation combined with craniofacial osteotomies; a wide surgical field can be obtained at the skull base. The accessible surgical field extends from the contralateral custachian tube to the ipsilateral geniculate ganglion, including the nasopharyax; clivus, sphonoid, and cavernous sinuses, the entire infratemporal fossa, and superior orbital fissure. The facial translocation approach offers previously unavailable wide and direct exposure, with a potential for immediate reconstruction, of this complex region of the cranial base. ImagesFigure 4Figure 5Figure 7Figure 8Figure 9 PMID:17170817
Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling
2016-12-13
Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment.
Tonini, Francesco; Hochmair, Hartwig H; Scheffrahn, Rudolf H; Deangelis, Donald L
2013-06-01
Invasive termites are destructive insect pests that cause billions of dollars in property damage every year. Termite species can be transported overseas by maritime vessels. However, only if the climatic conditions are suitable will the introduced species flourish. Models predicting the areas of infestation following initial introduction of an invasive species could help regulatory agencies develop successful early detection, quarantine, or eradication efforts. At present, no model has been developed to estimate the geographic spread of a termite infestation from a set of surveyed locations. In the current study, we used actual field data as a starting point, and relevant information on termite species to develop a spatially-explicit stochastic individual-based simulation to predict areas potentially infested by an invasive termite, Nasutitermes corniger (Motschulsky), in Dania Beach, FL. The Monte Carlo technique is used to assess outcome uncertainty. A set of model realizations describing potential areas of infestation were considered in a sensitivity analysis, which showed that the model results had greatest sensitivity to number of alates released from nest, alate survival, maximum pheromone attraction distance between heterosexual pairs, and mean flight distance. Results showed that the areas predicted as infested in all simulation runs of a baseline model cover the spatial extent of all locations recently discovered. The model presented in this study could be applied to any invasive termite species after proper calibration of parameters. The simulation herein can be used by regulatory authorities to define most probable quarantine and survey zones.
Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling
2016-01-01
Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment. PMID:27983585
NASA Astrophysics Data System (ADS)
Landrock, Clinton K.
Falls are the leading cause of all external injuries. Outcomes of falls include the leading cause of traumatic brain injury and bone fractures, and high direct medical costs in the billions of dollars. This work focused on developing three areas of enabling component technology to be used in postural control monitoring tools targeting the mitigation of falls. The first was an analysis tool based on stochastic fractal analysis to reliably measure levels of motor control. The second focus was on thin film wearable pressure sensors capable of relaying data for the first tool. The third was new thin film advanced optics for improving phototherapy devices targeting postural control disorders. Two populations, athletes and elderly, were studied against control groups. The results of these studies clearly show that monitoring postural stability in at-risk groups can be achieved reliably, and an integrated wearable system can be envisioned for both monitoring and treatment purposes. Keywords: electro-active polymer, ionic polymer-metal composite, postural control, motor control, fall prevention, sports medicine, fractal analysis, physiological signals, wearable sensors, phototherapy, photobiomodulation, nano-optics.
Veeraraghavan, Rengasayee; Gourdie, Robert G
2016-11-07
The spatial association between proteins is crucial to understanding how they function in biological systems. Colocalization analysis of fluorescence microscopy images is widely used to assess this. However, colocalization analysis performed on two-dimensional images with diffraction-limited resolution merely indicates that the proteins are within 200-300 nm of each other in the xy-plane and within 500-700 nm of each other along the z-axis. Here we demonstrate a novel three-dimensional quantitative analysis applicable to single-molecule positional data: stochastic optical reconstruction microscopy-based relative localization analysis (STORM-RLA). This method offers significant advantages: 1) STORM imaging affords 20-nm resolution in the xy-plane and <50 nm along the z-axis; 2) STORM-RLA provides a quantitative assessment of the frequency and degree of overlap between clusters of colabeled proteins; and 3) STORM-RLA also calculates the precise distances between both overlapping and nonoverlapping clusters in three dimensions. Thus STORM-RLA represents a significant advance in the high-throughput quantitative assessment of the spatial organization of proteins.
NASA Astrophysics Data System (ADS)
Wang, Wei; Liu, Wenqing; Zhang, Tianshu; Ren, Manyan
2013-03-01
The focus of the paper is application of an inverse-dispersion technique based on a backward Lagrangian stochastic (bLS) model in order to calculate gas-emission rates from industrial complexes. While the bLS technique is attractive for these types of sources, the bLS calculation must assume a spatial configuration for the source. Therefore, results are presented herein of numerical simulations designed to study the sensitivity of emissions calculations to the assumption of source configuration for complex industrial sources. We discuss how measurement fetch, concentration sensor height, and optical path length influence the accuracy of emission estimation. Through simulations, we identify an improved sensor configuration in order to reduce emission-calculation errors caused by an incorrect source-configuration assumption. It is concluded that, with respect to our defined source, the optimal measurement fetch may be between 200 m and 300 m; also, the ideal measurement height is probably between 2.0 m and 2.5 m. With choices within these two ranges, a path length of about 200 m is adequate, and greater path lengths, above 200 m, result in no substantial improvement in emission calculations.
Veeraraghavan, Rengasayee; Gourdie, Robert G.
2016-01-01
The spatial association between proteins is crucial to understanding how they function in biological systems. Colocalization analysis of fluorescence microscopy images is widely used to assess this. However, colocalization analysis performed on two-dimensional images with diffraction-limited resolution merely indicates that the proteins are within 200–300 nm of each other in the xy-plane and within 500–700 nm of each other along the z-axis. Here we demonstrate a novel three-dimensional quantitative analysis applicable to single-molecule positional data: stochastic optical reconstruction microscopy–based relative localization analysis (STORM-RLA). This method offers significant advantages: 1) STORM imaging affords 20-nm resolution in the xy-plane and <50 nm along the z-axis; 2) STORM-RLA provides a quantitative assessment of the frequency and degree of overlap between clusters of colabeled proteins; and 3) STORM-RLA also calculates the precise distances between both overlapping and nonoverlapping clusters in three dimensions. Thus STORM-RLA represents a significant advance in the high-throughput quantitative assessment of the spatial organization of proteins. PMID:27307586
Entity-based Stochastic Analysis of Search Results for Query Expansion and Results Re-Ranking
2015-11-20
introduce a method for exploiting entities from the emerging Web of Data for enhancing various In- formation Retrieval (IR) services. The approach is...low-ranked hits in higher positions. 1. INTRODUCTION The Web has now evolved to an information space where both unstructured documents (e.g. Web ...persons, locations, etc.) occur in all kinds of artifacts: Web pages, database cells, RDF triples, etc. A generic hypothesis that we investigate is
Stochastic analysis of transport in tubes with rough walls
Tartakovsky, Daniel M. . E-mail: dmt@lanl.gov; Xiu Dongbin . E-mail: dxiu@math.purdue.edu
2006-09-01
Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.
A network approach based on cliques
NASA Astrophysics Data System (ADS)
Fadigas, I. S.; Pereira, H. B. B.
2013-05-01
The characterization of complex networks is a procedure that is currently found in several research studies. Nevertheless, few studies present a discussion on networks in which the basic element is a clique. In this paper, we propose an approach based on a network of cliques. This approach consists not only of a set of new indices to capture the properties of a network of cliques but also of a method to characterize complex networks of cliques (i.e., some of the parameters are proposed to characterize the small-world phenomenon in networks of cliques). The results obtained are consistent with results from classical methods used to characterize complex networks.
Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa
2012-11-01
Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared.
Stochastic Vorticity and Associated Filtering Theory
Amirdjanova, A.; Kallianpur, G.
2002-12-19
The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.
Stochastic Analysis of Chemical Reaction Networks Using Linear Noise Approximation.
Cardelli, Luca; Kwiatkowska, Marta; Laurenti, Luca
2016-10-28
Stochastic evolution of Chemical Reactions Networks (CRNs) over time is usually analysed through solving the Chemical Master Equation (CME) or performing extensive simulations. Analysing stochasticity is often needed, particularly when some molecules occur in low numbers. Unfortunately, both approaches become infeasible if the system is complex and/or it cannot be ensured that initial populations are small. We develop a probabilistic logic for CRNs that enables stochastic analysis of the evolution of populations of molecular species. We present an approximate model checking algorithm based on the Linear Noise Approximation (LNA) of the CME, whose computational complexity is independent of the population size of each species and polynomial in the number of different species. The algorithm requires the solution of first order polynomial differential equations. We prove that our approach is valid for any CRN close enough to the thermodynamical limit. However, we show on four case studies that it can still provide good approximation even for low molecule counts. Our approach enables rigorous analysis of CRNs that are not analyzable by solving the CME, but are far from the deterministic limit. Moreover, it can be used for a fast approximate stochastic characterization of a CRN.
Stochastic analysis of Chemical Reaction Networks using Linear Noise Approximation.
Cardelli, Luca; Kwiatkowska, Marta; Laurenti, Luca
2016-11-01
Stochastic evolution of Chemical Reactions Networks (CRNs) over time is usually analyzed through solving the Chemical Master Equation (CME) or performing extensive simulations. Analysing stochasticity is often needed, particularly when some molecules occur in low numbers. Unfortunately, both approaches become infeasible if the system is complex and/or it cannot be ensured that initial populations are small. We develop a probabilistic logic for CRNs that enables stochastic analysis of the evolution of populations of molecular species. We present an approximate model checking algorithm based on the Linear Noise Approximation (LNA) of the CME, whose computational complexity is independent of the population size of each species and polynomial in the number of different species. The algorithm requires the solution of first order polynomial differential equations. We prove that our approach is valid for any CRN close enough to the thermodynamical limit. However, we show on four case studies that it can still provide good approximation even for low molecule counts. Our approach enables rigorous analysis of CRNs that are not analyzable by solving the CME, but are far from the deterministic limit. Moreover, it can be used for a fast approximate stochastic characterization of a CRN.
NASA Astrophysics Data System (ADS)
Lu, Z.; Porporato, A. M.
2012-12-01
seasonally dry areas, which are widely distributed in the world, are usually facing an intensive disparity between the lack of natural resource and the great demand of social development. In dry seasons of such areas, the distribution/allocation of water resource is an extremely critical and sensitive issue, and conflicts often occur due to lack of appropriate water allocation scheme. Among the many uses of water, the need of agricultural irrigation water is highly elastic, but this factor has not yet been made full use to free up water from agriculture use. The primary goal of this work is to design an optimal distribution scheme of water resource for dry seasons to maximize benefits from precious water resources, considering the high elasticity of agriculture water demand due to the dynamic of soil moisture affected by the uncertainty of precipitation and other factors like canopy interception. A dynamic programming model will be used to figure out an appropriate allocation of water resources among agricultural irrigation and other purposes like drinking water, industry, and hydropower, etc. In this dynamic programming model, we analytically quantify the dynamic of soil moisture in the agricultural fields by describing the interception with marked Poisson process and describing the rainfall depth with exponential distribution. Then, we figure out a water-saving irrigation scheme, which regulates the timetable and volumes of water in irrigation, in order to minimize irrigation water requirement under the premise of necessary crop yield (as a constraint condition). And then, in turn, we provide a scheme of water resource distribution/allocation among agriculture and other purposes, taking aim at maximizing benefits from precious water resources, or in other words, make best use of limited water resource.
Network-based stochastic competitive learning approach to disambiguation in collaborative networks
NASA Astrophysics Data System (ADS)
Christiano Silva, Thiago; Raphael Amancio, Diego
2013-03-01
Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns are strongly dependent on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role on a myriad of scientific contexts. In special, we use an unsupervised technique which relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguating model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule exclusively depends on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it will force them to revisit nodes that are already dominated by them, rather than exploring rival territories. Computer simulations conducted on the networks extracted from the arXiv repository of preprint papers and also from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods.
Network-based stochastic competitive learning approach to disambiguation in collaborative networks.
Christiano Silva, Thiago; Raphael Amancio, Diego
2013-03-01
Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns are strongly dependent on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role on a myriad of scientific contexts. In special, we use an unsupervised technique which relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguating model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule exclusively depends on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it will force them to revisit nodes that are already dominated by them, rather than exploring rival territories. Computer simulations conducted on the networks extracted from the arXiv repository of preprint papers and also from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods.
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
Stochastic model for market stocks with floors
NASA Astrophysics Data System (ADS)
Villarroel, Javier
2007-08-01
We present a model to describe the stochastic evolution of stocks that show a strong resistance at some level and generalize to this situation the evolution based upon geometric Brownian motion. If volatility and drift are related in a certain way we show that our model can be integrated in an exact way. The related problem of how to prize general securities that pay dividends at a continuous rate and earn a terminal payoff at maturity T is solved via the martingale probability approach.
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan
2010-01-01
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.