Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
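As a toy illustration of the generalized scheme (not the authors' code), the sketch below runs SAMC over a two-dimensional histogram for a system of independent spins, where E1 and E2 count the up-spins in each half of the chain; the spin-flip proposal, bin layout, and gain sequence t0/max(t0, t) are illustrative assumptions:

```python
import numpy as np

def samc_2d(n_half=3, n_iter=50000, t0=1000.0, seed=1):
    """Estimate log g(E1, E2), up to an additive constant, for a toy
    system of 2*n_half independent spins, where E1/E2 count the up-spins
    in each half (a stand-in for two competing energy terms)."""
    rng = np.random.default_rng(seed)
    n_bins = n_half + 1
    theta = np.zeros((n_bins, n_bins))   # running estimate of log g
    visits = np.zeros((n_bins, n_bins), dtype=int)
    state = rng.integers(0, 2, size=2 * n_half)
    e = (state[:n_half].sum(), state[n_half:].sum())
    for t in range(1, n_iter + 1):
        i = rng.integers(0, 2 * n_half)      # propose a single spin flip
        new = state.copy()
        new[i] ^= 1
        e_new = (new[:n_half].sum(), new[n_half:].sum())
        # flat-histogram acceptance: favor under-explored (E1, E2) bins
        if np.log(rng.random()) < theta[e] - theta[e_new]:
            state, e = new, e_new
        gamma = t0 / max(t0, t)              # decreasing gain sequence
        theta[e] += gamma                    # penalize the current bin
        visits[e] += 1
    return theta - theta.min(), visits

theta, visits = samc_2d()
```

With this flat-histogram drive, even the rare corner bins (e.g. all spins down) are revisited repeatedly, which is the property the abstract exploits for multidimensional densities of states.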
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
Successful combination of the stochastic linearization and Monte Carlo methods
NASA Technical Reports Server (NTRS)
Elishakoff, I.; Colombi, P.
1993-01-01
A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, a reduction of the error of conventional stochastic linearization by a factor of 4.6.
Bayesian phylogeny analysis via stochastic approximation Monte Carlo.
Cheon, Sooyoung; Liang, Faming
2009-11-01
Monte Carlo methods have received much attention in the recent literature on phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes: among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.
NASA Astrophysics Data System (ADS)
Newell, Quentin Thomas
The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the stochastic uncertainty propagation of the flux uncertainty, introduced by the Monte Carlo method, to the number densities for the different isotopes in spent nuclear fuel due to multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ψN term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method. The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide
Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians
Mason, D R; Rudd, R E; Sutton, A P
2003-10-13
We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
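The first order residence-time (n-fold way) building block that this work extends can be sketched as follows; the ring-hopping model and unit rates are hypothetical stand-ins, not the paper's alloy Hamiltonian:

```python
import math
import random

def kmc_nfold(rates_fn, state, n_steps, rng=random.Random(0)):
    """First order residence-time (n-fold way / BKL) kinetic Monte Carlo.
    rates_fn(state) -> list of (rate, new_state). Every step executes an
    event (rejection-free) and advances the clock by an exponential
    waiting time governed by the total rate."""
    t = 0.0
    for _ in range(n_steps):
        events = rates_fn(state)
        total = sum(r for r, _ in events)
        x = rng.random() * total            # tower sampling over the rates
        acc = 0.0
        for r, new_state in events:
            acc += r
            if x < acc:
                state = new_state
                break
        t += -math.log(rng.random()) / total  # exponential waiting time
    return state, t

# toy example: a vacancy hopping on a ring of 5 sites, rate 1 per hop
hop = lambda s: [(1.0, (s - 1) % 5), (1.0, (s + 1) % 5)]
site, t = kmc_nfold(hop, 0, 1000)
```

The second order variant in the abstract replaces single flickering events between a metastable pair with one composite move, so each expensive rate evaluation covers many such steps.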
Stochastic modelling of power reactor fuel behavior
NASA Astrophysics Data System (ADS)
Mirza, Shahid Nawaz
An understanding of the in-reactor behavior of nuclear fuel is essential to the safe and economic operation of a nuclear power plant. It is no longer possible to achieve this without computer code calculations. A state of art computer code, FRODO, for Fuel ROD Operation, has been developed to model the steady state behavior of fuel pins in a light water reactor and to do sensitivity analysis. FRODO concentrates on the thermal performance, fission product release and pellet-clad interaction and can be used to predict the fuel failure under the prevailing conditions. FRODO incorporates the numerous uncertainties involved in fuel behavior modeling, using statistical methods, to ascertain fuel failures and their causes. Sensitivity of fuel failure to different fuel parameters and reactor conditions can be easily evaluated. FRODO has been used to analyze the sensitivities of fuel failures to coolant flow reductions. It is found that the uncertainties have pronounced effects on conclusions about fuel failures and their causes.
Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application
Blunt, N. S.; Kersten, J. A. F.; Smart, Simon D.; Spencer, J. S.; Booth, George H.; Alavi, Ali
2015-05-14
We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
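A minimal sketch of the underlying stochastic approximation idea, assuming a generic noisy score function (the procedure in the abstract embeds an MCMC sampler for the missing data, which is not reproduced here):

```python
import random

def robbins_monro_mle(sample_score, theta0, n_iter=5000, a0=1.0,
                      rng=random.Random(42)):
    """Generic stochastic-approximation driver: move theta along a noisy
    Monte Carlo estimate of the score (gradient of the log-likelihood)
    with a decreasing step size a_t = a0 / t, which satisfies the usual
    conditions sum a_t = inf and sum a_t^2 < inf for convergence."""
    theta = theta0
    for t in range(1, n_iter + 1):
        theta += (a0 / t) * sample_score(theta, rng)
    return theta

# toy complete-data check: for N(mu, 1) data the score at theta is the
# expectation of (x - theta); the root is the true mean mu = 2
mu = 2.0
score = lambda th, rng: rng.gauss(mu, 1.0) - th
est = robbins_monro_mle(score, 0.0)
```

With the 1/t gain this iteration reduces to a running mean of the draws, which makes the convergence to the root of the score equation easy to see.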
Stochastic sensitivity analysis of the biosphere model for Canadian nuclear fuel waste management
Reid, J.A.K.; Corbett, B.J. (Whiteshell Labs.)
1993-01-01
The biosphere model, BIOTRAC, was constructed to assess Canada's concept for nuclear fuel waste disposal in a vault deep in crystalline rock at some as yet undetermined location in the Canadian Shield. The model is therefore very general and based on the shield as a whole. BIOTRAC is made up of four linked submodels for surface water, soil, atmosphere, and food chain and dose. The model simulates physical conditions and radionuclide flows from the discharge of a hypothetical nuclear fuel waste disposal vault through groundwater, a well, a lake, air, soil, and plants to a critical group of individuals, i.e., those who are most exposed and therefore receive the highest dose. This critical group is totally self-sufficient and is represented by the International Commission on Radiological Protection Reference Man for dose prediction. BIOTRAC is a dynamic model that assumes steady-state physical conditions for each simulation, and deals with variation and uncertainty through Monte Carlo simulation techniques. This paper describes SENSYV, a technique for analyzing pathway and parameter sensitivities for the BIOTRAC code run in stochastic mode. Results are presented for ¹²⁹I from the disposal of used fuel, and they confirm the importance of doses via the soil/plant/man and the air/plant/man ingestion pathways. The results also indicate that the lake/well water use switch, the aquatic iodine mass loading parameter, the iodine soil evasion rate, and the iodine plant/soil concentration ratio are important parameters.
ERIC Educational Resources Information Center
Gold, Michael Steven; Bentler, Peter M.
2000-01-01
Describes a Monte Carlo investigation of four methods for treating incomplete data: (1) resemblance based hot-deck imputation (RBHDI); (2) iterated stochastic regression imputation; (3) structured model expectation maximization; and (4) saturated model expectation maximization. Results favored the expectation maximization methods. (SLD)
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected through F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
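The RSM-plus-MCS forward step can be illustrated as follows; the quadratic feature set, toy model, and parameter distribution are assumptions for the sketch (the paper's incomplete fourth-order RSM and inverse optimization loop are not reproduced):

```python
import numpy as np

def quad_features(X):
    """Design matrix for an incomplete quadratic response surface:
    intercept, linear, and pure square terms (no cross terms)."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

def fit_rsm(model, lo, hi, n_train=200, rng=np.random.default_rng(2)):
    """Fit the surrogate to model() evaluations at random design points."""
    X = rng.uniform(lo, hi, size=(n_train, len(lo)))
    y = np.array([model(x) for x in X])
    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return beta

# toy stand-in for an expensive FE model: quadratic in 2 parameters
model = lambda p: 3.0 + 2.0 * p[0] - 0.5 * p[1] ** 2
beta = fit_rsm(model, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))

# cheap MC propagation through the surrogate instead of the full model
rng = np.random.default_rng(4)
P = rng.normal(0.0, 0.3, size=(100000, 2))
samples = quad_features(P) @ beta
```

Once the surrogate is fitted, the 100000 random samples cost only a matrix product, which is the source of the speedup the abstract claims.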
Müller, Eike H; Scheichl, Rob; Shardlow, Tony
2015-04-08
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
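The substitution of discrete random variables for Gaussian increments in weak approximation can be sketched like this; the drift, diffusion, and geometric-Brownian-motion test case are illustrative choices, not the paper's Langevin integrator:

```python
import math
import random

def weak_euler(mu, sigma, x0, t_end, n_steps, rng, discrete=True):
    """Weak Euler-Maruyama step for dX = mu(X) dt + sigma(X) dW, with the
    Gaussian increment optionally replaced by a two-point (+/- sqrt(dt))
    variable whose first and second moments match -- enough for weak
    order 1, and cheaper to sample than a Gaussian."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        if discrete:
            dw = math.sqrt(dt) if rng.random() < 0.5 else -math.sqrt(dt)
        else:
            dw = rng.gauss(0.0, math.sqrt(dt))
        x += mu(x) * dt + sigma(x) * dw
    return x

# weak test on geometric Brownian motion dX = 0.05 X dt + 0.2 X dW:
# E[X_T] = X_0 exp(0.05 T) regardless of the increment distribution
rng = random.Random(9)
paths = [weak_euler(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1.0, 32, rng)
         for _ in range(20000)]
mean = sum(paths) / len(paths)
```

Only expectations of the solution are matched, not individual paths, which is exactly the weak-approximation setting in which the discrete increments pay off.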
Monte Carlo simulations of two-component drop growth by stochastic coalescence
NASA Astrophysics Data System (ADS)
Alfonso, L.; Raga, G. B.; Baumgardner, D.
2009-02-01
The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by comparison with the analytical solutions found by Lushnikov (1975) and Golovin (1963) and with finite difference solutions of the two-component kinetic collection equation obtained for the Golovin (sum) and hydrodynamic kernels. Very good agreement was observed between the Monte Carlo simulations and the analytical and numerical solutions. A simulation for realistic initial conditions is presented for the hydrodynamic kernel. As expected, the aerosol mass is shifted from small to large particles due to the collection process. This algorithm could be extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging, and drop breakup.
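A minimal Gillespie-style stochastic coalescence sketch for a single-component population (the study treats two components and much larger populations):

```python
import math
import random

def gillespie_coalescence(masses, t_end, kernel, rng=random.Random(7)):
    """Gillespie-style stochastic coalescence: repeatedly pick a droplet
    pair with probability proportional to the collection kernel, merge
    it, and advance time by an exponential waiting time."""
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        pairs = [(i, j) for i in range(len(masses))
                 for j in range(i + 1, len(masses))]
        rates = [kernel(masses[i], masses[j]) for i, j in pairs]
        total = sum(rates)
        dt = -math.log(rng.random()) / total
        if t + dt > t_end:
            break
        t += dt
        x = rng.random() * total            # tower sampling over pairs
        acc = 0.0
        for (i, j), r in zip(pairs, rates):
            acc += r
            if x < acc:
                masses[i] += masses[j]      # coalescence conserves mass
                del masses[j]
                break
    return masses

# Golovin (sum) kernel acting on 50 unit droplets
out = gillespie_coalescence([1.0] * 50, t_end=0.01,
                            kernel=lambda a, b: a + b)
```

Total mass is conserved by construction, which is the basic sanity check used when comparing against the Golovin analytical solution.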
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; Barbier, Charlotte N.
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
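The smoothed-indicator idea can be sketched with a toy telescoping estimator; the sigmoid width, level coupling, and sample allocation are illustrative assumptions, not the paper's calibrated smoothing function:

```python
import math
import random

def smooth_indicator(x, s, eps):
    """Smooth surrogate for the indicator 1{x <= s}: a sigmoid of width
    eps, which restores variance decay across MLMC levels near s."""
    return 1.0 / (1.0 + math.exp((x - s) / eps))

def mlmc_cdf(sample_level, s, n_per_level, eps=0.05, rng=random.Random(3)):
    """Telescoping MLMC estimator of F(s) = P(Q <= s):
    E[g(Q_L)] = E[g(Q_0)] + sum_l E[g(Q_l) - g(Q_{l-1})],
    with coupled coarse/fine samples on each correction level."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            q_fine, q_coarse = sample_level(level, rng)  # coupled pair
            acc += smooth_indicator(q_fine, s, eps)
            if level > 0:
                acc -= smooth_indicator(q_coarse, s, eps)
        est += acc / n
    return est

# toy model: Q_l is a level-l approximation of a standard normal; the
# coarse and fine samples share the same underlying draw (the coupling)
def sampler(level, rng):
    z = rng.gauss(0.0, 1.0)
    bias = lambda l: 1.0 / 2 ** (l + 2)   # discretization bias decays
    return z + bias(level), z + bias(level - 1)

f = mlmc_cdf(sampler, s=0.0, n_per_level=[4000, 1000, 250])
```

Because the coupled coarse and fine samples rarely straddle the threshold once the indicator is smoothed, the correction levels have small variance and need far fewer samples than level 0.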
NASA Astrophysics Data System (ADS)
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy is proposed by combining an improved response surface model (IRSM) and an advanced Monte Carlo (MC) method, based on experimental static tests, prior information, and uncertainties. First, the IRSM and its mathematical model are developed with emphasis on the moving least-squares method, and the advanced MC simulation method is developed based on Latin hypercube sampling. The SMU procedure is then presented with an experimental static test for a complex structure. SMUs of a simply supported beam and of an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for SMUs of complex structural systems; (2) the IRSM is demonstrated to be an effective model because its SMU time is far less than that of the traditional response surface method, which promises to improve the computational speed and accuracy of SMU; and (3) the advanced MC method markedly decreases the number of samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
NASA Astrophysics Data System (ADS)
Shin, Seungho; Kim, Ah-Reum; Um, Sukkee
2016-02-01
A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
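A two-dimensional sketch of the random-pore generation and connectivity screening, assuming that connectivity to the inlet face defines the "effective" pore volume (the study's percolation criterion and 95% confidence-level procedure are not reproduced):

```python
import random
from collections import deque

def effective_porosity(n=40, p=0.5, rng=random.Random(11)):
    """Monte Carlo catalyst-layer sketch: mark each cell of an n x n grid
    as void with probability p, then keep only the void cells reachable
    from the top (inlet) face -- a simple proxy for the transport-
    effective pore volume."""
    void = [[rng.random() < p for _ in range(n)] for _ in range(n)]

    def reach(start_row):
        """Flood fill (BFS) through 4-connected void cells."""
        seen = {(start_row, c) for c in range(n) if void[start_row][c]}
        q = deque(seen)
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < n and 0 <= cc < n and void[rr][cc]
                        and (rr, cc) not in seen):
                    seen.add((rr, cc))
                    q.append((rr, cc))
        return seen

    connected = reach(0)                      # inlet-connected pore network
    superficial = sum(map(sum, void)) / n ** 2
    return superficial, len(connected) / n ** 2

superficial, effective = effective_porosity()
```

The gap between the superficial and effective fractions mirrors the abstract's finding that only part of the nominal 0.5 porosity forms inter-connected clusters.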
A Monte Carlo based spent fuel analysis safeguards strategy assessment
Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P
2009-01-01
Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials, and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA) including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the effective merits and faults of 12 different detection techniques in order to integrate a few techniques and to down-select among the techniques in preparation for experiments. The process involves generating a basis burnup/enrichment/cooling time dependent spent fuel assembly library, creating diversion scenarios, developing detector models and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in the couplings of data transitions for the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials, and generate a MCNPX fixed source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques. We present here the generalized
Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel
Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz
2002-03-15
Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated k_eff with the measurements. The criticality experiment was performed in 1998 at the "Jozef Stefan" Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of ~3%, corresponding to 1.3 MWd of energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with the in-house-developed TRIGLAV two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to the ORIGEN2 calculations. Extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned ²³⁵U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The k_eff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking of WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can compute the radiation distribution over a complex landscape configuration, such as a forest area. Because it is robust to changes in the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure as accurately as possible, but it is time consuming. A botanical growth function can model single-tree growth but cannot express the interaction among trees. The L-System is also a function-controlled tree-growth simulation model, but it requires large computing memory; additionally, it models only the current tree pattern rather than tree growth while the radiative transfer regime is simulated. It is therefore much more practical to use regular solids such as ellipsoids, cones, and cylinders to represent single canopies. Considering the allelopathy phenomenon observed in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle-packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, each canopy radius (rc) is generated randomly. The circle center coordinates are then set on the XY-plane while the circle-packing algorithm keeps the circles separate from each other. To model each individual tree, Ishikawa's regressive tree-growth model is used to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs
1999-01-01
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
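A stochastic EnKF analysis step of the MC-based flavor being compared can be sketched as follows; the toy two-state system and observation setup are assumptions for illustration:

```python
import numpy as np

def enkf_update(X, H, d, R, rng=np.random.default_rng(5)):
    """Stochastic EnKF analysis step. X is the (n_state, n_ens) forecast
    ensemble, H the (n_obs, n_state) observation operator, d the
    observation vector, R the (n_obs, n_obs) observation-error
    covariance. Each member is nudged toward a perturbed copy of the
    data via the ensemble-estimated Kalman gain."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    return X + K @ (D - H @ X)

# toy: 2-state system, one accurate observation of state 0
rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(2, 200))             # forecast ensemble
H = np.array([[1.0, 0.0]])
Xa = enkf_update(X, H, d=np.array([3.0]), R=np.array([[0.01]]))
```

The sample covariance P is where filter inbreeding enters for small ensembles; the moment-equations approach in the abstract replaces this MC estimate with directly computed ensemble moments.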
A selective hybrid stochastic strategy for fuel-cell multi-parameter identification
NASA Astrophysics Data System (ADS)
Guarnieri, Massimo; Negro, Enrico; Di Noto, Vito; Alotto, Piergiorgio
2016-11-01
The in situ identification of fuel-cell material parameters is crucial both for guiding the research for advanced functionalized materials and for fitting multiphysics models, which can be used in fuel cell performance evaluation and optimization. However, this identification still remains challenging when dealing with direct measurements. This paper presents a method for achieving this aim by stochastic optimization. Such techniques have been applied to the analysis of fuel cells for ten years, but typically to specific problems and by means of semi-empirical models, with an increased number of articles published in the last years. We present an original formulation that makes use of an accurate zero-dimensional multi-physical model of a polymer electrolyte membrane fuel cell and of two cooperating stochastic algorithms, particle swarm optimization and differential evolution, to extract multiple material parameters (exchange current density, mass transfer coefficient, diffusivity, conductivity, activation barriers …) from the experimental data of polarization curves (i.e. in situ measurements) under some controlled temperature, gas back pressure and humidification. The method is suitable for application in other fields where fitting of multiphysics nonlinear models is involved.
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton D
2007-02-01
A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
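Under the stated assumption of in-basin equilibration, the residence time and exit probabilities can be sketched as below. The Boltzmann-weighted occupation times an Arrhenius escape rate is an illustrative rate model; energies, prefactors, and barriers are placeholders.

```python
import math

def basin_escape(basin_energies, escape_channels, kT=0.025):
    """Escape statistics for an equilibrated basin of trapping states.

    basin_energies[i]: energy of basin state i (eV)
    escape_channels:   list of (source_state, prefactor, barrier) hops
                       leading out of the basin
    Returns the exit-channel probabilities and the mean residence time.
    """
    weights = [math.exp(-E / kT) for E in basin_energies]
    Z = sum(weights)
    occupation = [w / Z for w in weights]          # Boltzmann occupation in basin
    rates = [occupation[i] * nu * math.exp(-barrier / kT)
             for i, nu, barrier in escape_channels]
    total = sum(rates)
    probs = [r / total for r in rates]             # where the system exits
    residence_time = 1.0 / total                   # mean time spent before escape
    return probs, residence_time
```

In a KMC driver, one would draw the exit channel from `probs` and advance the clock by an exponential deviate with mean `residence_time`, instead of simulating every in-basin hop.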
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate what we believe to be the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo.
Viral load and stochastic mutation in a Monte Carlo simulation of HIV
NASA Astrophysics Data System (ADS)
Ruskin, H. J.; Pandey, R. B.; Liu, Y.
2002-08-01
Viral load is examined, as a function of the primary viral growth factor (Pg) and mutation, through a computer simulation model of the HIV immune response. Cell-mediated immune response is considered on a cubic lattice with four cell types: macrophage (M), helper (H), cytotoxic (C), and virus (V). Rule-based interactions are used with random sequential update of the binary cellular states. The relative viral load (the concentration of virus with respect to helper cells) is found to increase with the primary viral growth factor above a critical value (Pc), leading to a phase transition from an immuno-competent to an immuno-deficient state. The critical growth factor (Pc) seems to depend on mobility and mutation. The stochastic growth due to mutation is found to depend non-monotonically on the relative viral load, with a maximum at a characteristic load which is lower for stronger viral growth.
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation, a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed, including the use of the Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking players.
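The game-level Newton-Keller result that such simulations build on can be checked directly against Monte Carlo play-out of individual points. This sketch assumes iid points with server win probability p, which is exactly the assumption the dissertation's non-iid experiments relax.

```python
import random

def game_win_prob(p):
    """Closed-form probability that the server wins a game when each
    point is won independently with probability p (Newton-Keller form)."""
    q = 1.0 - p
    deuce = p * p / (1.0 - 2.0 * p * q)          # win probability from deuce
    return p ** 4 * (1 + 4 * q + 10 * q * q) + 20 * p ** 3 * q ** 3 * deuce

def simulate_game(p, rng):
    """Play one game point by point; win by reaching 4 points with a 2-point lead."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False
```

A short Monte Carlo run reproduces the analytic value, and the same point-by-point engine extends naturally to sets, matches, and tournaments.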
Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method
NASA Astrophysics Data System (ADS)
Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.
2000-07-01
This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used to analyze aspects of the dynamical MC algorithm and demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and then apply one of them to the SIRS model. The chosen working method is based on the Poisson process, whose basic requirements are a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in light of the herd-immunity concept.
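A Poisson-process-based dynamical MC of the kind described can be sketched for a well-mixed SIRS model as below: exponential waiting times between events, with each event chosen in proportion to its rate. The rate constants are illustrative, and the space-dependent extension is not shown.

```python
import math
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_max, seed=1):
    """Dynamical MC for SIRS: infection S->I, removal I->R, immunity loss R->S."""
    rng = random.Random(seed)
    t, N = 0.0, S + I + R
    history = [(t, S, I, R)]
    while t < t_max and I > 0:
        rates = [beta * S * I / N,   # infection
                 gamma * I,          # removal
                 xi * R]             # loss of immunity
        total = sum(rates)
        if total == 0.0:
            break
        t += -math.log(rng.random()) / total   # Poisson waiting time
        pick = rng.random() * total            # choose event by its rate
        if pick < rates[0]:
            S, I = S - 1, I + 1
        elif pick < rates[0] + rates[1]:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
        history.append((t, S, I, R))
    return history
```

Averaging many such trajectories is what gets compared against the deterministic Runge-Kutta solution in studies of this type.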
A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization
NASA Astrophysics Data System (ADS)
Chakraborty, Uttara
2012-10-01
This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to determine the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and a (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
The costs of production of alternative jet fuel: A harmonized stochastic assessment.
Bann, Seamus J; Malina, Robert; Staples, Mark D; Suresh, Pooja; Pearlson, Matthew; Tyner, Wallace E; Hileman, James I; Barrett, Steven
2017-03-01
This study quantifies and compares the costs of production for six alternative jet fuel pathways using consistent financial and technical assumptions. Uncertainty was propagated through the analysis using Monte Carlo simulations. The six processes assessed were HEFA, advanced fermentation, Fischer-Tropsch, aqueous phase processing, hydrothermal liquefaction, and fast pyrolysis. The results indicate that none of the six processes would be profitable in the absence of government incentives, with HEFA using yellow grease, HEFA using tallow, and FT revealing the lowest mean jet fuel prices at $0.91/liter ($0.66/liter-$1.24/liter), $1.06/liter ($0.79/liter-$1.42/liter), and $1.15/liter ($0.95/liter-$1.39/liter), respectively. This study also quantifies plant performance in the United States with a Renewable Fuel Standard policy analysis. Results indicate that some pathways could achieve positive NPV with relatively high likelihood under existing policy supports, with HEFA and FPH revealing the highest probability of positive NPV at 94.9% and 99.7%, respectively, in the best-case scenario.
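The Monte Carlo propagation of price uncertainty into plant profitability can be sketched as below. The single-variable cash-flow model and every number in it are illustrative assumptions, far simpler than the paper's harmonized techno-economic model.

```python
import random

def npv_positive_prob(price_mean, price_sd, capex, opex, output_lpy,
                      years=20, rate=0.10, n=20000, seed=2):
    """Monte Carlo estimate of P(NPV > 0) for a fuel plant, drawing the
    jet-fuel selling price ($/liter) from a normal distribution."""
    rng = random.Random(seed)
    # Present-value factor for a constant annual cash flow over the plant life
    annuity = sum(1.0 / (1.0 + rate) ** y for y in range(1, years + 1))
    positive = 0
    for _ in range(n):
        price = rng.gauss(price_mean, price_sd)
        cash = price * output_lpy - opex          # annual net cash flow
        if cash * annuity - capex > 0.0:
            positive += 1
    return positive / n
```

A full assessment would also sample feedstock costs, yields, and capital costs, and add policy incentives to the revenue side, as the study does.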
Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
NASA Astrophysics Data System (ADS)
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
NASA Astrophysics Data System (ADS)
McNab, Walt W.
2001-02-01
Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel
DeHart, M.D.
2001-08-24
This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.
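The sampling idea can be sketched with a toy surrogate: isotopic densities are drawn from normal distributions and propagated through an assumed linear sensitivity model that stands in for a real criticality calculation. All isotope names, variances, and sensitivities below are hypothetical.

```python
import random

def keff_uncertainty(isotopes, rel_sd, sensitivity, n=5000, seed=3):
    """Monte Carlo propagation of isotopic uncertainty to k-eff.

    rel_sd[iso]:      relative 1-sigma uncertainty of isotope iso's density
    sensitivity[iso]: assumed dk/k per unit relative density change
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = 1.0
        for iso in isotopes:
            frac = rng.gauss(1.0, rel_sd[iso])     # sampled / nominal density
            k += sensitivity[iso] * (frac - 1.0)   # first-order k-eff effect
        samples.append(k)
    mean = sum(samples) / n
    sd = (sum((k - mean) ** 2 for k in samples) / (n - 1)) ** 0.5
    return mean, sd
```

In the actual method each sample would feed a full eigenvalue calculation, so the resulting spread captures the total uncertainty directly rather than combining per-isotope terms additively.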
NASA Astrophysics Data System (ADS)
Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun
2016-09-01
This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
NASA Astrophysics Data System (ADS)
Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song
2015-09-01
An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that the liftoff length varies depending on the fuel cetane number, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.
Bieda, Bogusław
2014-05-15
The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the CrystalBall® (CB) software, which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. It is further concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management.
Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A
2011-02-01
The description of energy deposition by high charge and energy (HZE) nuclei is of importance for space radiation risk assessment and because of their use in hadrontherapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a (56)Fe(26+) ion of 1 GeV u(-1) revealed zones of high-energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43% of the energy was deposited in the penumbra. These 3D stochastic simulations, combined with a visualisation interface, are a powerful tool for biophysicists and may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.
NASA Astrophysics Data System (ADS)
Iqbal, M. Javed; Mirza, Nasir M.; Mirza, Sikander M.
2008-01-01
During normal operation of PWRs, routine fuel rod failures result in the release of radioactive fission products (RFPs) into the primary coolant. In this work, a stochastic model has been developed to simulate failure time sequences and release rates for the estimation of fission product activity in the primary coolant of a typical PWR under power perturbations. In the first part, a stochastic approach is developed, based on the generation of fuel failure event sequences by sampling time-dependent intensity functions. Then the three-stage model-based deterministic methodology of the FPCART code has been extended to include failure sequences and random release rates in a computer code, FPCART-ST, which uses the state-of-the-art LEOPARD and ODMUG codes as subroutines. The 131I activity in the primary coolant predicted by the FPCART-ST code has been found to be in good agreement with corresponding values measured at the ANGRA-1 nuclear power plant. The predictions of the FPCART-ST code with the constant-release option have also been found to agree well with corresponding experimental values for time-dependent 135I, 135Xe and 89Kr concentrations in the primary coolant measured during the EDITHMOX-1 experiments.
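Sampling failure event sequences from a time-dependent intensity function can be sketched with Lewis-Shedler thinning: candidate events are drawn from a constant majorizing rate and accepted with probability lambda(t)/lambda_max. The intensity function below is an arbitrary stand-in, not the FPCART-ST model.

```python
import math
import random

def failure_times(intensity, t_max, lam_max, seed=4):
    """Event times of a nonhomogeneous Poisson process on (0, t_max].

    intensity(t) must satisfy intensity(t) <= lam_max everywhere.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # Candidate event from a homogeneous process of rate lam_max
        t += -math.log(rng.random()) / lam_max
        if t > t_max:
            return events
        # Thinning: accept with probability intensity(t) / lam_max
        if rng.random() < intensity(t) / lam_max:
            events.append(t)
```

Each accepted time would then be paired with a sampled release rate to build one stochastic realization of the coolant activity history.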
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show that this method yields better solutions (in terms of resolution) to the particular problem than those of a standard analog program, and demonstrate the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
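A creeping random search of the kind described can be sketched as follows: perturb the current parameter vector at random, keep only improving moves, clip to the parameter constraints, and shrink the perturbation radius when progress stalls. The step-shrinking schedule and the test objective are illustrative assumptions.

```python
import random

def creeping_random_search(f, x0, bounds, step=1.0, iters=5000, seed=5):
    """Minimize f over box constraints by sequential random perturbation."""
    rng = random.Random(seed)
    x, best = list(x0), f(x0)
    fails = 0
    for _ in range(iters):
        # Random perturbation, clipped to the parameter constraints
        cand = [min(hi, max(lo, xi + rng.uniform(-step, step)))
                for xi, (lo, hi) in zip(x, bounds)]
        val = f(cand)
        if val < best:
            x, best, fails = cand, val, 0
        else:
            fails += 1
            if fails >= 50:      # creep: shrink the search radius when stuck
                step *= 0.5
                fails = 0
    return x, best
```

Because only feasible, improving moves are accepted, infeasible solutions never arise, matching the constraint-handling behavior described in the abstract.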
NASA Astrophysics Data System (ADS)
Chirici, G.; Scotti, R.; Montaghi, A.; Barbati, A.; Cartisano, R.; Lopez, G.; Marchetti, M.; McRoberts, R. E.; Olsson, H.; Corona, P.
2013-12-01
This paper presents an application of Airborne Laser Scanning (ALS) data in conjunction with an IRS LISS-III image for mapping forest fuel types. For two study areas of 165 km2 and 487 km2 in Sicily (Italy), 16,761 plots of size 30-m × 30-m were distributed using a tessellation-based stratified sampling scheme. ALS metrics and spectral signatures from IRS extracted for each plot were used as predictors to classify forest fuel types observed and identified by photointerpretation and fieldwork. Following use of traditional parametric methods that produced unsatisfactory results, three non-parametric classification approaches were tested: (i) classification and regression tree (CART), (ii) the CART bagging method called Random Forests, and (iii) the CART bagging/boosting stochastic gradient boosting (SGB) approach. This contribution summarizes previous experiences using ALS data for estimating forest variables useful for fire management in general and for fuel type mapping, in particular. It summarizes characteristics of classification and regression trees, presents the pre-processing operation, the classification algorithms, and the achieved results. The results demonstrated superiority of the SGB method with overall accuracy of 84%. The most relevant ALS metric was canopy cover, defined as the percent of non-ground returns. Other relevant metrics included the spectral information from IRS and several other ALS metrics such as percentiles of the height distribution, the mean height of all returns, and the number of returns.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
Fulger, Daniel; Scalas, Enrico; Germano, Guido
2008-02-01
We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy alpha-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy alpha-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes.
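Combining the two transformation methods can be sketched as below, using a Kozubowski-style transform for Mittag-Leffler waiting times and the symmetric-case Chambers-Mallows-Stuck formula for the stable jumps. Scale parameters are set to one, and this is a sketch of the general technique rather than the paper's exact implementation.

```python
import math
import random

def mittag_leffler(beta, rng):
    """Waiting time with Mittag-Leffler survival function, 0 < beta <= 1."""
    u, v = rng.random(), rng.random()
    return -math.log(u) * (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
                           - math.cos(beta * math.pi)) ** (1.0 / beta)

def symmetric_stable(alpha, rng):
    """Symmetric Levy alpha-stable jump via the Chambers-Mallows-Stuck transform."""
    phi = (rng.random() - 0.5) * math.pi          # uniform on (-pi/2, pi/2)
    w = -math.log(rng.random())                   # exponential(1)
    return (math.sin(alpha * phi) / math.cos(phi) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * phi) / w) ** ((1.0 - alpha) / alpha))

def ctrw_path(alpha, beta, n_jumps, seed=6):
    """One realization of the uncoupled CTRW approximating
    space- and time-fractional diffusion."""
    rng = random.Random(seed)
    t, x, path = 0.0, 0.0, [(0.0, 0.0)]
    for _ in range(n_jumps):
        t += mittag_leffler(beta, rng)            # fractional-in-time waiting
        x += symmetric_stable(alpha, rng)         # fractional-in-space jump
        path.append((t, x))
    return path
```

For beta = 1 the waiting times reduce to exponentials, and for alpha = 2 the jumps would be Gaussian, recovering standard diffusion as the limiting case.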
NASA Astrophysics Data System (ADS)
Zhang, Yanxiang; Ni, Meng; Yan, Mufu; Chen, Fanglin
2015-12-01
Nanostructured electrodes are widely used for low-temperature solid oxide fuel cells due to their remarkably high activity. However, the industrial application of infiltrated electrodes is hindered by durability issues, such as microstructure stability against thermal aging. Few strategies are available to overcome this challenge because of the limited knowledge about the coarsening kinetics of infiltrated electrodes and how the potentially important factors affect their stability. In this work, the generic thermal aging kinetics of the three-dimensional microstructures of infiltrated electrodes is investigated by a kinetic Monte Carlo simulation model considering a surface diffusion mechanism. Effects of temperature, infiltration loading, wettability, and electrode configuration are studied, and the key geometric parameters are calculated, such as the infiltrate particle size, the total and percolated quantities of three-phase boundary length and infiltrate surface area, and the tortuosity factor of the infiltrate network. Through parametric study, several strategies to improve thermal aging stability are proposed.
NASA Astrophysics Data System (ADS)
Burdo, James S.
This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term once the spent fuel assemblies are more than two years old. Initially, this research focused on developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼-model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for the high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Random-Walk Monte Carlo Simulation of Intergranular Gas Bubble Nucleation in UO2 Fuel
Yongfeng Zhang; Michael R. Tonks; S. B. Biner; D.A. Andersson
2012-11-01
Using a random-walk particle algorithm, we investigate the clustering of fission gas atoms on grain boundaries in oxide fuels. The computational algorithm implemented in this work considers a planar surface representing a grain boundary on which particles appear at a rate dictated by the Booth flux, migrate two-dimensionally according to their grain boundary diffusivity, and coalesce by random encounters. Specifically, the intergranular bubble nucleation density is the key variable we investigate using a parametric study in which the temperature, grain boundary gas diffusivity, and grain boundary segregation energy are varied. The results reveal that the grain boundary bubble nucleation density can vary widely due to these three parameters, which may be an important factor in the observed variability in intergranular bubble percolation among grain boundaries in oxide fuel during fission gas release.
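The arrival-diffusion-coalescence mechanism described in this abstract can be illustrated with a toy random-walk simulation. All parameters below (patch size, arrival probability, step size, capture radius) are made-up illustrative values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 50.0          # side of the periodic grain-boundary patch (arbitrary units)
arrival_p = 0.05  # probability that a new gas atom lands on the patch per step
step_sigma = 0.5  # RMS displacement per step (sets the surface diffusivity)
r_capture = 1.0   # two clusters closer than this coalesce
n_steps = 2000

pos = np.empty((0, 2))      # cluster positions
size = np.empty(0, int)     # atoms per cluster
n_arrived = 0

for _ in range(n_steps):
    # source term: a new atom appears at a random position
    if rng.random() < arrival_p:
        pos = np.vstack([pos, rng.uniform(0, L, 2)])
        size = np.append(size, 1)
        n_arrived += 1
    # two-dimensional Brownian steps with periodic boundaries
    pos = (pos + rng.normal(0.0, step_sigma, pos.shape)) % L
    # coalesce any pair within r_capture (naive O(n^2) search; wrap-around
    # distances are ignored for simplicity)
    merged = True
    while merged:
        merged = False
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            near = np.where((d < r_capture) & (np.arange(len(pos)) != i))[0]
            if len(near):
                k = near[0]
                pos[i] = (size[i] * pos[i] + size[k] * pos[k]) / (size[i] + size[k])
                size[i] += size[k]
                pos = np.delete(pos, k, axis=0)
                size = np.delete(size, k)
                merged = True
                break

bubble_density = len(pos) / L**2   # intergranular bubble number density
```

Rerunning the sketch while varying `step_sigma` (a stand-in for the grain boundary diffusivity) shows how the surviving cluster density depends on the competition between arrival and coalescence, which is the parametric question the abstract studies.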
NASA Astrophysics Data System (ADS)
Tsinko, Y.; Johnson, E. A.; Martin, Y. E.
2014-12-01
The natural range of variability of forest fire frequency is of great interest due to the current changing climate and the apparent increase in the number of fires. The variability of the annual area burned in Canada was not stable over the 20th century. Recently, these changes have been linked to large-scale climate cycles, such as Pacific Decadal Oscillation (PDO) phases and the El Niño Southern Oscillation (ENSO). The positive phase of the PDO was associated with an increased probability of hot dry spells, leading to drier fuels and increased area burned. However, so far only one historical timeline has been used to assess correlations between the natural climate oscillations and forest fire frequency. To address this limitation, weather generators are extensively used in hydrological and agricultural modeling to extend short instrumental records and to synthesize long sequences of daily weather parameters that are different from, but statistically similar to, historical weather. In the current study, synthetic weather models were used to assess the effects of alternative weather timelines on fuel moisture in Canada, using the Canadian Forest Fire Weather Index moisture codes and potential fire frequency. The variability of the fuel moisture codes was found to increase with the length of the simulated series, indicating that the natural range of variability of forest fire frequency may be larger than that calculated from the available short records. This may be viewed as a manifestation of the Hurst effect. Since PDO phases are thought to be caused by diverse mechanisms, including overturning oceanic circulation, some of the lower-frequency signals may be attributed to the long-term memory of the oceanic system. Thus, care must be taken when assessing the natural variability of climate-dependent processes without accounting for potential long-term mechanisms.
Shedlock, Daniel; Haghighat, Alireza
2005-01-01
In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12 discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF
Monte Carlo Reliability Analysis.
1987-10-01
(3) E. Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life...
NASA Astrophysics Data System (ADS)
Sabelfeld, K. K.
2015-09-01
A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.
NASA Astrophysics Data System (ADS)
Tayarani-Yoosefabadi, Z.; Harvey, D.; Bellerive, J.; Kjeang, E.
2016-01-01
Gas diffusion layer (GDL) materials in polymer electrolyte membrane fuel cells (PEMFCs) are commonly made hydrophobic to enhance water management by avoiding liquid water blockage of the pores and facilitating reactant gas transport to the adjacent catalyst layer. In this work, a stochastic microstructural modeling approach is developed to simulate the transport properties of a commercial carbon paper based GDL under a range of PTFE loadings and liquid water saturation levels. The proposed novel stochastic method mimics the GDL manufacturing process steps and resolves all relevant phases including fiber, binder, PTFE, liquid water, and gas. After thorough validation of the general microstructure with literature and in-house data, a comprehensive set of anisotropic transport properties is simulated for the reconstructed GDL in different PTFE loadings and liquid water saturation levels and validated through a comparison with in-house ex situ experimental data and empirical formulations. In general, the results show good agreement between simulated and measured data. Decreasing trends in porosity, gas diffusivity, and permeability is obtained by increasing the PTFE loading and liquid water content, while the thermal conductivity is found to increase with liquid water saturation. Using the validated model, new correlations for saturation dependent GDL properties are proposed.
Rogers, Kristin; Seager, Thomas P
2009-03-15
Life cycle impact assessment (LCIA) involves weighing trade-offs between multiple and incommensurate criteria. Current state-of-the-art LCIA tools typically compute an overall environmental score using a linear-weighted aggregation of characterized inventory data that has been normalized relative to total industry, regional, or national emissions. However, current normalization practices risk masking impacts that may be significant within the context of the decision, albeit small relative to the reference data (e.g., total U.S. emissions). Additionally, uncertainty associated with quantification of weights is generally very high. Partly for these reasons, many LCA studies truncate impact assessment at the inventory characterization step, rather than completing normalization and weighting steps. This paper describes a novel approach called stochastic multiattribute life cycle impact assessment (SMA-LCIA) that combines an outranking approach to normalization with stochastic exploration of weight spaces-avoiding some of the drawbacks of current LCIA methods. To illustrate the new approach, SMA-LCIA is compared with a typical LCIA method for crop-based, fossil-based, and electric fuels using the Greenhouse gas Regulated Emissions and Energy Use in Transportation (GREET) model for inventory data and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) model for data characterization. In contrast to the typical LCIA case, in which results are dominated by fossil fuel depletion and global warming considerations regardless of criteria weights, the SMA-LCIA approach results in a rank ordering that is more sensitive to decisionmaker preferences. The principal advantage of the SMA-LCIA method is the ability to facilitate exploration and construction of context-specific criteria preferences by simultaneously representing multiple weights spaces and the sensitivity of the rank ordering to uncertain stakeholder values.
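The stochastic exploration of weight spaces can be sketched in a few lines: draw many weight vectors uniformly from the simplex and record how often each alternative comes out on top under a linear-weighted score. The score matrix below is hypothetical stand-in data, not GREET/TRACI output:

```python
import numpy as np

rng = np.random.default_rng(1)

# rows: alternatives (e.g. crop-, fossil-, electricity-based fuel);
# columns: characterized impact criteria -- purely illustrative numbers
scores = np.array([[0.9, 0.2, 0.5],
                   [0.4, 0.8, 0.6],
                   [0.5, 0.5, 0.9]])

# sample the weight simplex uniformly instead of fixing one weight vector
W = rng.dirichlet(np.ones(scores.shape[1]), size=10_000)
winners = (W @ scores.T).argmax(axis=1)

# rank_freq[i] = fraction of sampled weightings under which alternative i
# ranks first; a rank ordering robust to stakeholder values shows up as a
# frequency near 1 for one alternative
rank_freq = np.bincount(winners, minlength=len(scores)) / len(W)
```

If one alternative dominates for almost all sampled weights, the decision is insensitive to preference uncertainty; spread-out frequencies signal the weight-sensitivity the abstract aims to expose.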
NASA Astrophysics Data System (ADS)
Polanski, A.; Barashenkov, V.; Puzynin, I.; Rakhno, I.; Sissakian, A.
A subcritical assembly driven by the existing 660 MeV JINR proton accelerator is considered. The assembly consists of a central cylindrical lead target surrounded by mixed-oxide (MOX) fuel (PuO2 + UO2) and a beryllium reflector. The dependence of the energy gain on the proton energy, the neutron multiplication coefficient, and the neutron energy spectra have been calculated. It is shown that for a subcritical assembly with mixed-oxide (MOX) BN-600 fuel (28% PuO2 + 72% UO2) with an effective fuel material density of 9 g/cm^3, the multiplication coefficient k_eff equals 0.945, the energy gain equals 27, and the neutron flux density is 10^12 cm^-2 s^-1 for protons with an energy of 660 MeV and an accelerator beam current of 1 µA.
Stochastic Optimization of Complex Systems
Birge, John R.
2014-03-20
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
Controlled Stochastic Dynamical Systems
2007-04-18
the existence of value functions of two-player zero-sum stochastic differential games, Indiana Univ. Math. Journal, 38 (1989), pp. 293-314. [6] George ... control problems, Adv. Appl. Prob., 15 (1983), pp. 225-254. [10] Karatzas, I., Ocone, D., Wang, H. and Zervos, M., Finite fuel singular control with
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loops. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, to avoid double-looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used in the acceptance-rejection step by a single loop over all particles, while the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
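A minimal sketch of the majorant-kernel acceptance-rejection idea (not the paper's differentially-weighted GPU implementation): bound every pair rate by a cheap majorant so a candidate pair can be drawn uniformly in a single pass, then thin to the true kernel. The additive kernel and particle count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

v = np.ones(200)            # particle volumes, monodisperse start
t, t_end = 0.0, 0.5
K = lambda a, b: a + b      # additive coagulation kernel (illustrative)

v_total = v.sum()
n_accepted = 0
while t < t_end and len(v) > 1:
    n = len(v)
    k_max = 2.0 * v.max()                  # majorant: K(a,b) <= 2*v_max
    rate = 0.5 * n * (n - 1) * k_max       # total majorant pair rate
    t += rng.exponential(1.0 / rate)       # jump time from the majorant
    i, j = rng.choice(n, size=2, replace=False)   # uniform candidate pair
    if rng.random() < K(v[i], v[j]) / k_max:      # thin to the true kernel
        v[i] += v[j]                       # coagulate i and j
        v = np.delete(v, j)
        n_accepted += 1

mean_volume = v.mean()
```

The thinning step is what removes the double loop over pairs: only `v.max()` is needed to bound all pair rates, and rejected candidates merely advance the clock.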
NASA Technical Reports Server (NTRS)
Jahshan, S. N.; Singleterry, R. C.
2001-01-01
The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue,
A multilevel stochastic collocation method for SPDEs
Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton
2015-03-10
We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
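The multilevel principle this method shares with multilevel Monte Carlo can be sketched in its MC form: couple coarse and fine discretizations through the same random increments, and spend most samples only on the cheap coarse level. The SDE model and per-level sample counts below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler scheme for geometric Brownian motion dX = mu*X dt + sig*X dW;
# quantity of interest E[X_T], whose exact value is exp(mu*T)
mu, sig, X0, T = 0.05, 0.2, 1.0, 1.0

def euler(n_steps, dW):
    X, dt = X0, T / n_steps
    for k in range(n_steps):
        X += mu * X * dt + sig * X * dW[k]
    return X

def level_mean(level, n_samples):
    """Mean of the correction P_l - P_{l-1}, coupled by shared increments."""
    nf = 2 ** (level + 1)                      # fine steps at this level
    acc = 0.0
    for _ in range(n_samples):
        dWf = rng.normal(0.0, np.sqrt(T / nf), nf)
        fine = euler(nf, dWf)
        if level == 0:
            acc += fine
        else:
            # coarse path driven by the SAME Brownian increments, pairwise summed
            acc += fine - euler(nf // 2, dWf.reshape(-1, 2).sum(axis=1))
    return acc / n_samples

# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}];
# the coupled corrections have small variance, so few fine samples are needed
samples_per_level = [4000, 1000, 250]
mlmc_estimate = sum(level_mean(l, n) for l, n in enumerate(samples_per_level))
```

The stochastic collocation variant of the abstract replaces the Monte Carlo sampling in parameter space with hierarchical interpolants, but reuses exactly this telescoping-correction structure.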
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
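A rough caricature of the semistochastic idea, with a small synthetic matrix standing in for the very large ones the method targets: the matrix-vector product is applied exactly on the components carrying most of the weight and sampled stochastically on the remainder, which keeps the iteration unbiased while the deterministic core removes most of the variance. All sizes and rates here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

n, D, n_samples, n_iter = 200, 40, 400, 60
A = rng.normal(size=(n, n)) / n
A = (A + A.T) / 2 + np.diag(np.linspace(1.0, 2.0, n))  # top eigenvalue near 2

def semistochastic_matvec(v):
    det = np.argsort(-np.abs(v))[:D]           # treat D largest components exactly
    out = A[:, det] @ v[det]                   # deterministic part of A @ v
    rest = np.setdiff1d(np.arange(n), det)
    p = np.abs(v[rest]); p = p / p.sum()       # importance-sample the remainder
    idx = rng.choice(len(rest), size=n_samples, p=p)
    for r in idx:                              # unbiased stochastic correction
        out += A[:, rest[r]] * v[rest[r]] / (p[r] * n_samples)
    return out

v = rng.normal(size=n); v /= np.linalg.norm(v)
for _ in range(n_iter):                        # noisy power iteration
    w = semistochastic_matvec(v)
    v = w / np.linalg.norm(w)

lam_est = v @ (A @ v)                 # Rayleigh quotient of the final iterate
lam_true = np.linalg.eigvalsh(A)[-1]  # exact check, affordable at this size
```

Note the variance control: because sampling is proportional to `|v_i|`, each sampled term is bounded, and the D exactly-treated components contribute no noise at all.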
Stochastic models: theory and simulation.
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
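As a concrete example of generating samples of a stochastic process, the spectral-representation (Shinozuka-type) sketch below draws realizations of a zero-mean stationary Gaussian process with exponential covariance; the covariance model, frequency cutoff, and term count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Target: zero-mean stationary Gaussian process with C(tau) = exp(-|tau|/ell),
# whose one-sided spectral density is G(w) = (2*ell/pi) / (1 + (ell*w)**2)
ell = 0.5
N, w_max = 256, 40.0                   # spectral terms, frequency cutoff
w = (np.arange(N) + 0.5) * (w_max / N) # midpoint frequency grid
dw = w_max / N
G = (2.0 * ell / np.pi) / (1.0 + (ell * w) ** 2)
amp = np.sqrt(2.0 * G * dw)            # per-frequency amplitude

t = np.linspace(0.0, 5.0, 101)
M = 500                                # number of independent realizations
wt = np.outer(w, t)                    # (N, len(t)) phase table
# X(t) = sum_j amp_j * cos(w_j * t + phi_j) with i.i.d. uniform phases
X = np.array([amp @ np.cos(wt + rng.uniform(0, 2*np.pi, N)[:, None])
              for _ in range(M)])      # each row is one realization X(t)

sample_var = X[:, 0].var()   # should be near C(0) = 1, minus the cutoff loss
```

Realizations generated this way can then be fed as inputs or boundary conditions to a deterministic simulation code, which is exactly the workflow the report describes.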
Solan, Eilon; Vieille, Nicolas
2015-01-01
In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883
QB1 - Stochastic Gene Regulation
Munsky, Brian
2012-07-23
Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - random motion and competition between reactants, low copy numbers, quantization of reactants, upstream processes; (2) Fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
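The kinetic Monte Carlo (SSA) tool mentioned in point (3) can be sketched for the simplest birth-death gene-expression model: protein production at a constant rate k and first-order degradation at rate g·n, whose stationary copy-number distribution is Poisson with mean k/g. The rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

k, g = 10.0, 1.0              # production and degradation rates (illustrative)
t, t_end, burn_in = 0.0, 500.0, 20.0
n = 0                         # protein copy number
acc_t = acc_n = acc_n2 = 0.0  # time-weighted accumulators

while t < t_end:
    a = np.array([k, g * n])          # reaction propensities: birth, death
    a0 = a.sum()
    dt = rng.exponential(1.0 / a0)    # SSA step 1: time to the next reaction
    if t > burn_in:                   # time-weighted stationary statistics
        acc_t += dt; acc_n += n * dt; acc_n2 += n * n * dt
    t += dt
    if rng.random() < a[0] / a0:      # SSA step 2: pick a reaction by propensity
        n += 1
    else:
        n -= 1

mean_n = acc_n / acc_t
fano = (acc_n2 / acc_t - mean_n**2) / mean_n   # Poisson => Fano factor ~ 1
```

A Fano factor well above 1 in measured data would signal extra noise sources (bursting, upstream fluctuations) beyond this minimal model, which is the kind of inference the presentation discusses.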
Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.
2006-07-01
Verification of the SUHAM-U code has been carried out by calculating a two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons with experimental data and with calculations by the Monte Carlo code UNK using the same nuclear data library B645 for the basic isotopes have been performed. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The possibility of applying the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
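The ensemble-of-pointsets idea can be illustrated with randomly shifted Halton points (a Cranley-Patterson rotation): each random shift yields an independent QMC estimate, so the ordinary sample-variance error estimator becomes valid again. The integrand is a toy example with known value 1/4:

```python
import numpy as np

rng = np.random.default_rng(7)

def halton(n, base):
    """van der Corput sequence in the given base (one Halton coordinate)."""
    out = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        out[i] = r
    return out

f = lambda p: np.prod(p, axis=1)     # integrand x*y on [0,1]^2, exact value 1/4
pts = np.column_stack([halton(1024, 2), halton(1024, 3)])

# Random shifts mod 1 preserve the low discrepancy of the pointset but make
# the resulting estimates i.i.d., so their spread estimates the QMC error
shifts = rng.random((20, 2))
ests = np.array([f((pts + s) % 1.0).mean() for s in shifts])
qmc_estimate = ests.mean()
qmc_error = ests.std(ddof=1) / np.sqrt(len(ests))
```

Unlike the standard Monte Carlo estimator, this error bar reflects the actual discrepancy-driven convergence of the QMC points rather than assuming independent samples.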
Comparing Several Robust Tests of Stochastic Equality.
ERIC Educational Resources Information Center
Vargha, Andras; Delaney, Harold D.
In this paper, six statistical tests of stochastic equality are compared with respect to Type I error and power through a Monte Carlo simulation. In the simulation, the skewness and kurtosis levels and the extent of variance heterogeneity of the two parent distributions were varied across a wide range. The sample sizes applied were either small or…
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
Stochastic-field cavitation model
Dumond, J.; Magagnato, F.; Class, A.
2013-07-15
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Stochastic solution to quantum dynamics
NASA Technical Reports Server (NTRS)
John, Sarah; Wilson, John W.
1994-01-01
The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three- dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.
NASA Astrophysics Data System (ADS)
Ross, D. K.; Moreau, William
1995-08-01
We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics (SED), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field h_{µν}(x) composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation, h(ω), must be proportional to 1/ω². The proportionality constant is determined by the Planck condition that the energy density consist of ħω/2 per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition function h(ω) = √(16π) c²L_p/ω². As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function ⟨R_{i0j0}(0, τ − δτ/2) R_{k0l0}(0, τ + δτ/2)⟩ of the measurable geodesic deviation tensor field, R_{i0j0}, for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order in aδτ/c provided that the temperature and acceleration satisfy the relation kT = ħa/2πc.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
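Sequential sublattice (checkerboard) updating can be sketched for a 2D lattice gas: sites of one color have no same-color nearest neighbors, so an entire color can be updated concurrently while the two colors are swept sequentially. The coupling, chemical potential, and sweep count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Lattice gas E = -J * sum_<ij> n_i n_j - mu * sum_i n_i with J = 1
L, beta, mu = 32, 0.4, 0.0
occ = rng.integers(0, 2, (L, L))                       # occupation numbers
mask = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def sweep(occ, color):
    # nearest-neighbor occupation sums with periodic boundaries
    nn = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0)
          + np.roll(occ, 1, 1) + np.roll(occ, -1, 1))
    # energy change for flipping n -> 1 - n at every site
    dE = -(1 - 2 * occ) * nn - mu * (1 - 2 * occ)
    accept = rng.random((L, L)) < np.exp(-beta * dE)   # Metropolis rule
    flip = accept & (mask == color)                    # one color at a time:
    occ[flip] = 1 - occ[flip]                          # no neighbor conflicts
    return occ

for _ in range(200):
    occ = sweep(occ, True)    # sublattice A, all sites in parallel
    occ = sweep(occ, False)   # then sublattice B
density = occ.mean()
```

Each color update is embarrassingly parallel (here vectorized; on a cluster, one domain per processor), while the sequential alternation of the two colors plays the role of the sequential updating the abstract identifies as the key to efficiency.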
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
Master-equation approach to stochastic neurodynamics
NASA Astrophysics Data System (ADS)
Ohira, Toru; Cowan, Jack D.
1993-09-01
A master-equation approach to the stochastic neurodynamics proposed by Cowan [in Advances in Neural Information Processing Systems 3, edited by R. P. Lippman, J. E. Moody, and D. S. Touretzky (Morgan Kaufmann, San Mateo, 1991), p. 62] is investigated in this paper. We deal with a model neural network that is composed of two-state neurons obeying elementary stochastic transition rates. We show that such an approach yields concise expressions for multipoint moments and an equation of motion. We apply the formalism to a (1+1)-dimensional system. Exact and approximate expressions for various statistical parameters are obtained and compared with Monte Carlo simulations.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
Blaskiewicz, M.
2011-01-01
Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently, a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
Stochastic light-cone CTMRG: a new DMRG approach to stochastic models
NASA Astrophysics Data System (ADS)
Kemper, A.; Gendiar, A.; Nishino, T.; Schadschneider, A.; Zittartz, J.
2003-01-01
We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement over the old stochastic TMRG algorithm.
Improved Collision Modeling for Direct Simulation Monte Carlo Methods
2011-03-01
...number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use Navier... Limits on Mathematical Models [4] Kn=0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic... method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied atmosphere hypersonic...
Stochastic modeling of carbon oxidation
Chen, W.Y.; Kulkarni, A.; Milum, J.L.; Fan, L.T.
1999-12-01
Recent studies of carbon oxidation by scanning tunneling microscopy indicate that measured rates of carbon oxidation can be affected by randomly distributed defects in the carbon structure, which vary in size. Nevertheless, the impact of this observation on the analysis or modeling of the oxidation rate has not been critically assessed. This work focuses on the stochastic analysis of the dynamics of carbon cluster conversions during the oxidation of a carbon sheet. According to the classic model of Nagle and Strickland-Constable (NSC), two classes of carbon clusters are involved in three types of reactions: gasification of basal-carbon clusters, gasification of edge-carbon clusters, and conversion of edge-carbon clusters to basal-carbon clusters due to thermal annealing. To accommodate the dilution of basal clusters, however, the NSC model is modified for the later stage of oxidation in this work. Master equations governing the numbers of the three classes of carbon clusters, basal, edge, and gasified, are formulated from a stochastic population balance. The stochastic pathways of the three classes of carbon during oxidation, that is, their means and the fluctuations around these means, have been numerically simulated independently by the algorithm derived from the master equations, as well as by an event-driven Monte Carlo algorithm. Both algorithms have given rise to identical results.
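The event-driven Monte Carlo simulation mentioned above is, in essence, a Gillespie-type algorithm. A minimal sketch for the three NSC-style reactions follows; the rate constants are arbitrary placeholders, not fitted NSC values.

```python
import math
import random

def gillespie_carbon(n_basal, n_edge, k_b, k_e, k_a, t_end, rng):
    """Event-driven (Gillespie) simulation of three cluster reactions:
    basal gasification, edge gasification, and edge -> basal annealing.
    Returns final counts of basal, edge, and gasified carbon."""
    t, gasified = 0.0, 0
    while t < t_end and n_basal + n_edge > 0:
        rates = (k_b * n_basal, k_e * n_edge, k_a * n_edge)
        total = rates[0] + rates[1] + rates[2]
        if total == 0.0:
            break
        t += -math.log(rng.random()) / total      # exponential waiting time
        u = rng.random() * total                  # pick which event fired
        if u < rates[0]:
            n_basal -= 1; gasified += 1           # basal carbon gasified
        elif u < rates[0] + rates[1]:
            n_edge -= 1; gasified += 1            # edge carbon gasified
        else:
            n_edge -= 1; n_basal += 1             # edge annealed to basal
    return n_basal, n_edge, gasified
```

Averaging many such runs gives the means and fluctuations that the master-equation algorithm predicts directly.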
Stochastic histories of refractory interstellar dust
NASA Technical Reports Server (NTRS)
Liffman, Kurt; Clayton, Donald D.
1988-01-01
Histories of refractory interstellar dust particles (IDPs) are calculated. The profile of a particle population is assembled from a large number of stochastic, or Monte Carlo, histories of single particles; the probabilities for each of the events that may befall a given particle are specified, and the particle's history is unfolded by a sequence of random numbers. The assumptions that are made and the techniques of the calculation are described together with the results obtained. Several technical demonstrations are presented.
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
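The create/track/tally/destroy loop that such a benchmark exercises can be illustrated with a toy one-dimensional slab problem. This is a stand-alone sketch, not MCB code, and all parameters are invented.

```python
import math
import random

def slab_transport(n_particles, sigma_t, absorb_frac, thickness, rng):
    """Toy Monte Carlo particle transport through a 1D slab: particles are
    created at x = 0 moving right, fly exponentially distributed distances,
    and at each collision are absorbed or re-directed at random.
    Tallies transmitted (leaked) and absorbed particles."""
    leaked = absorbed = 0
    for _ in range(n_particles):
        x, direction = 0.0, 1.0                  # particle creation
        while True:
            x += direction * (-math.log(rng.random()) / sigma_t)
            if x >= thickness:                   # escaped through the far face
                leaked += 1
                break
            if x < 0.0:                          # escaped back through x = 0
                break
            if rng.random() < absorb_frac:
                absorbed += 1                    # particle destroyed, tallied
                break
            direction = 1.0 if rng.random() < 0.5 else -1.0  # 1D 'isotropic'
    return leaked, absorbed
```

A parallel version would split `n_particles` across ranks and combine the tallies with an MPI reduction.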
Primal and Dual Integrated Force Methods Used for Stochastic Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.
2005-01-01
At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
Collisionally induced stochastic dynamics of fast ions in solids
Burgdoerfer, J.
1989-01-01
Recent developments in the theory of excited state formation in collisions of fast highly charged ions with solids are reviewed. We discuss a classical transport theory employing Monte-Carlo sampling of solutions of a microscopic Langevin equation. Dynamical screening by the dielectric medium as well as multiple collisions are incorporated through the drift and stochastic forces in the Langevin equation. The close relationship between the extrinsically stochastic dynamics described by the Langevin and the intrinsic stochasticity in chaotic nonlinear dynamical systems is stressed. Comparison with experimental data and possible modification by quantum corrections are discussed. 49 refs., 11 figs.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
2012-08-01
AFRL-RX-WP-TP-2012-0397: Inverse Problem for Electromagnetic Propagation in a Dielectric Medium Using Markov Chain Monte Carlo Method (Preprint) ... a stochastic inverse methodology arising in electromagnetic imaging. Nondestructive testing using guided microwaves covers a wide range of
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays toward the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
A heterogeneous stochastic FEM framework for elliptic PDEs
Hou, Thomas Y.; Liu, Pengfei
2015-01-15
We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.
NASA Technical Reports Server (NTRS)
Ponomarev, Artem; Cucinotta, F.
2011-01-01
To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
Molecular Motors and Stochastic Models
NASA Astrophysics Data System (ADS)
Lipowsky, Reinhard
The behavior of single molecular motors such as kinesin or myosin V, which move on linear filaments, involves a nontrivial coupling between the biochemical motor cycle and the stochastic movement. This coupling can be studied in the framework of nonuniform ratchet models which are characterized by spatially localized transition rates between the different internal states of the motor. These models can be classified according to their functional relationships between the motor velocity and the concentration of the fuel molecules. The simplest such relationship applies to two subclasses of models for dimeric kinesin and agrees with experimental observations on this molecular motor.
Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.
2009-05-04
After the success of longitudinal stochastic cooling of a bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
Microtubules: dynamically unstable stochastic phase-switching polymers
NASA Astrophysics Data System (ADS)
Zakharov, P. N.; Arzhanik, V. K.; Ulyanov, E. V.; Gudimchuk, N. B.; Ataullakhanov, F. I.
2016-08-01
One of the simplest molecular motors, a biological microtubule, is reviewed as an example of a highly nonequilibrium molecular machine capable of stochastic transitions between slow growth and rapid disassembly phases. Basic properties of microtubules are described, and various approaches to simulating their dynamics, from statistical chemical kinetics models to molecular dynamics models using the Metropolis Monte Carlo and Brownian dynamics methods, are outlined.
Algebraic, geometric, and stochastic aspects of genetic operators
NASA Technical Reports Server (NTRS)
Foo, N. Y.; Bosworth, J. L.
1972-01-01
Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo technique for calculating the ground state energy of the hydrogen atom.
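The variational half of that pair of programs can be sketched independently. With the trial wavefunction ψ = exp(−αr), the local energy in atomic units is E_L = −α²/2 + (α − 1)/r, which is exactly −0.5 hartree for α = 1; a Metropolis walk over |ψ|² then averages E_L. This Python sketch is an independent illustration, not the original FORTRAN.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps, step, rng):
    """Variational Monte Carlo for the hydrogen atom in atomic units, with
    trial wavefunction psi = exp(-alpha * r).  A Metropolis walk samples
    |psi|^2 and averages the local energy E_L = -alpha^2/2 + (alpha - 1)/r."""
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    e_sum = 0.0
    for _ in range(n_steps):
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        r_new = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance test on |psi|^2 = exp(-2 * alpha * r)
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / n_steps
```

At α = 1 the local energy is constant, so the estimator returns −0.5 with zero variance; for other α it returns a variational upper bound on average.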
Stochastic differential equations
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations, and offshore structures.
Stochastic Collocation Method for Three-dimensional Groundwater Flow
NASA Astrophysics Data System (ADS)
Shi, L.; Zhang, D.
2008-12-01
The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probability collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible to efficiently handle complex stochastic problems.
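The defining feature, repetitive deterministic solves combined by quadrature weights, fits in a few lines. In the sketch below the "solver" is a stand-in scalar function of one Gaussian random input, evaluated on a 3-point Gauss-Hermite rule; this is an illustrative toy, not the paper's groundwater code.

```python
import math
import random

def collocation_mean(solver, nodes, weights):
    """Stochastic collocation: repetitive runs of a deterministic 'solver'
    at quadrature nodes, combined with quadrature weights."""
    return sum(w * solver(x) for x, w in zip(nodes, weights))

# 3-point Gauss-Hermite rule (probabilists' convention) for one N(0,1) input.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

# Toy stand-in for a deterministic simulator: u(xi) = exp(xi).
# The exact mean of exp(N(0,1)) is e**0.5.
sc_mean = collocation_mean(math.exp, nodes, weights)

# Plain Monte Carlo needs many more 'solver' runs for comparable accuracy.
rng = random.Random(1)
mc_mean = sum(math.exp(rng.gauss(0.0, 1.0)) for _ in range(1000)) / 1000.0
```

Three deterministic solves recover the mean to about one percent here, while the Monte Carlo estimate uses a thousand solves; this efficiency gap is the abstract's central point.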
Stochastic symmetries of Wick type stochastic ordinary differential equations
NASA Astrophysics Data System (ADS)
Ünal, Gazanfer
2015-04-01
We consider Wick type stochastic ordinary differential equations with Gaussian white noise. We define the stochastic symmetry transformations and Lie equations in the Kondratiev space (S)_{-1}^N. We derive the determining system of Wick type stochastic partial differential equations with Gaussian white noise. Stochastic symmetries for the stochastic Bernoulli, Riccati, and general stochastic linear equations in (S)_{-1}^N are obtained. A stochastic version of canonical variables is also introduced.
Dynamic Response Analysis of Fuzzy Stochastic Truss Structures under Fuzzy Stochastic Excitation
NASA Astrophysics Data System (ADS)
Ma, Juan; Chen, Jian-Jun; Gao, Wei
2006-08-01
A novel method (the fuzzy factor method) is presented, which is used in the dynamic response analysis of fuzzy stochastic truss structures under fuzzy stochastic step loads. Considering the fuzzy randomness of structural physical parameters, geometric dimensions, and the amplitudes of step loads simultaneously, the fuzzy stochastic dynamic response of the truss structures is developed using the mode superposition method and the fuzzy factor method. The fuzzy numerical characteristics of the dynamic response are then obtained by using the random variable moment method and the algebra synthesis method. The influences of the fuzzy randomness of structural physical parameters, geometric dimensions, and step load on the fuzzy randomness of the dynamic response are demonstrated via an engineering example, and the Monte Carlo method is used to simulate this example, verifying the feasibility and validity of the modeling and method given in this paper.
NASA Astrophysics Data System (ADS)
Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.
2015-12-01
The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: fine fuel moisture code (FFMC) for the moisture content of fine fuels; build-up index (BUI) for the total amount of fuel available for combustion; and energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e., the probability of an ignition source is 1). The model is run with gridded inputs, and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of the assumed-ignition and FRP constraints is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) adjusts the original fixed fire thresholds with ignition success probabilities based on fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
A Monte Carlo approach to water management
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2012-04-01
Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
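The two nested Monte Carlo loops described here, stochastic simulation of system operation inside a stochastic search over control variables, can be caricatured with a one-reservoir, one-control-variable toy. All numbers and operating rules below are invented for illustration.

```python
import random

def simulate_reservoir(release, inflows, capacity, demand):
    """One Monte Carlo evaluation of a reservoir operating rule: constant
    target release, spill above capacity.  Returns the fraction of periods
    in which delivery falls short of demand."""
    storage, failures = capacity / 2.0, 0
    for q in inflows:
        storage = min(storage + q, capacity)   # inflow, with spill
        out = min(release, storage)            # physical constraint
        storage -= out
        if out < demand:
            failures += 1
    return failures / len(inflows)

rng = random.Random(11)
# Inner loop: stochastic simulation -- 50 synthetic 100-period inflow series.
scenarios = [[max(0.0, rng.gauss(10.0, 4.0)) for _ in range(100)]
             for _ in range(50)]
# Outer loop: stochastic optimization -- here a brute-force search over the
# single control variable (a parsimonious parameterization).
best = min(range(5, 15),
           key=lambda r: sum(simulate_reservoir(r, s, 200.0, 8.0)
                             for s in scenarios))
```

A release target below the demand fails every period, so the search settles on a rule at or above demand that the simulated inflows can sustain.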
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
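A minimal likelihood-free MCMC sketch in this spirit uses a Gaussian toy model with a flat prior and a symmetric proposal, so the Metropolis ratio reduces to a tolerance check on simulated data. Everything below is illustrative, not the paper's population-genetics application.

```python
import random

def abc_mcmc(observed_mean, n_obs, n_iter, eps, rng):
    """Likelihood-free MCMC for a Gaussian toy model with unknown mean theta,
    flat prior, and symmetric random-walk proposal.  A proposal is accepted
    iff data simulated under it reproduces the observed summary statistic
    (the sample mean) to within eps -- no likelihood is ever evaluated."""
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.5)
        sim = sum(rng.gauss(prop, 1.0) for _ in range(n_obs)) / n_obs
        if abs(sim - observed_mean) < eps:   # tolerance check replaces the
            theta = prop                     # likelihood ratio (flat prior,
        chain.append(theta)                  # symmetric proposal)
    return chain
```

Shrinking `eps` tightens the approximation to the true posterior at the cost of a lower acceptance rate, the central trade-off in such methods.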
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-01-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
Tuthill, Richard S; Davis, Dustin W; Dai, Zhongtao
2015-02-03
A disclosed fuel injector provides mixing of fuel with airflow by surrounding a swirled fuel flow with first and second swirled airflows, ensuring mixing prior to or upon entering the combustion chamber. Fuel tubes produce a central fuel flow along with a central airflow through a plurality of openings to generate a high-velocity fuel/air mixture along the axis of the fuel injector, in addition to the swirled fuel/air mixture.
HTGR Unit Fuel Pebble k-infinity Results Using Chord Length Sampling
T.J. Donovan; Y. Danon
2003-06-16
There is considerable interest in transport models that will permit the simulation of neutral particle transport through stochastic mixtures. Chord length sampling techniques that simulate particle transport through binary stochastic mixtures consisting of spheres randomly arranged in a matrix have been implemented in several Monte Carlo codes [1-3]. Though the use of these methods is growing, the accuracy and efficiency of these methods have not yet been thoroughly demonstrated for an application of particular interest--a high temperature gas reactor fuel pebble element. This paper presents comparison results of k-infinity calculations performed on a LEUPRO-1 pebble cell. Results are generated using a chord length sampling method implemented in a test version of MCNP [3]. This Limited Chord Length Sampling (LCLS) method eliminates the need to model the details of the micro-heterogeneity of the pebble. Results are also computed for an explicit pebble model where the TRISO fuel particles within the pebble are randomly distributed. Finally, the heterogeneous matrix region of the pebble cell is homogenized based simply on volume fractions. These three results are compared to results reported by Johnson et al. [4], and duplicated here, using a cubic lattice representation of the TRISO fuel particles. Figures of Merit for the four k-infinity calculations are compared to judge relative efficiencies.
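The core chord length sampling idea can be illustrated in one dimension. Everything below (slab geometry, cross sections, mean chord length, sphere radius, pure absorption) is an illustrative assumption, not the LEUPRO-1 problem:

```python
import math, random

def cls_transmission(slab=10.0, sigma_m=0.1, sigma_s=1.0, mean_chord=2.0,
                     radius=0.5, n_hist=20000, seed=1):
    """Chord length sampling sketch: instead of modeling every sphere
    explicitly, the flight distance to the next sphere surface is sampled
    from an exponential with the mean matrix chord length, and a random chord
    through the sphere is then traversed. Cross sections are in 1/cm, lengths
    in cm; pure absorption is assumed, so a history either escapes or dies."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x, absorbed = 0.0, False
        while x < slab and not absorbed:
            d_coll = rng.expovariate(sigma_m)          # matrix collision site
            d_sph = rng.expovariate(1.0 / mean_chord)  # next sphere surface
            if d_coll < d_sph:
                if x + d_coll < slab:
                    absorbed = True                    # absorbed in matrix
                else:
                    x = slab                           # escapes the slab first
            else:
                x += d_sph
                if x < slab:
                    # random chord of a sphere of radius R: length 2R*sqrt(1-u)
                    chord = 2.0 * radius * math.sqrt(1.0 - rng.random())
                    if rng.expovariate(sigma_s) < chord:
                        absorbed = True                # absorbed inside sphere
                    else:
                        x += chord
        transmitted += (not absorbed) and x >= slab
    return transmitted / n_hist

print(cls_transmission())
```

The method trades geometric fidelity for speed: no sphere positions are ever stored, which is exactly what makes it attractive for pebble-scale micro-heterogeneity.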
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly identifying the design input variables whose variability most influences the response output parameters.
Stochastic robustness of linear control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ryan, Laura E.
1990-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear, time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This definition of robustness is an alternative to existing deterministic definitions that address both structured and unstructured parameter variations directly. The analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variations. Trivial extensions of the procedure allow alternate discriminants to be considered. Thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
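The procedure reduces to sampling the uncertain parameters and checking a stability discriminant for each draw. A minimal sketch, using an illustrative 2x2 system (not one from the paper) whose stability can be checked by Routh-Hurwitz conditions rather than an eigensolver:

```python
import random

def probability_of_instability(n_trials=20000, seed=3):
    """Monte Carlo estimate of the probability of instability for the linear
    system dx/dt = A x with A = [[0, 1], [k - 3, -1]] and one uncertain,
    Gaussian parameter k ~ N(2, 1) (an illustrative assumption). For this
    2x2 companion form, stability holds iff trace(A) < 0 and det(A) > 0."""
    rng = random.Random(seed)
    unstable = 0
    for _ in range(n_trials):
        k = rng.gauss(2.0, 1.0)       # sample the uncertain parameter
        trace, det = -1.0, 3.0 - k    # trace and determinant of A
        if not (trace < 0.0 and det > 0.0):
            unstable += 1
    return unstable / n_trials

print(probability_of_instability())   # here P(k > 3) = P(Z > 1) ≈ 0.16
```

Swapping the discriminant (e.g. "any eigenvalue with real part above -0.1", or "roots outside a damping cone") gives the alternate probabilities the abstract mentions at no extra cost.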
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
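One of the basic techniques such a course covers is inverse-transform sampling, e.g. of the free-flight distance in transport problems. A minimal sketch (the cross-section value is illustrative):

```python
import math, random

def free_flight_sample(sigma_t, u):
    """Inverse-transform sampling of the distance to the next collision: the
    free-flight pdf is p(s) = sigma_t * exp(-sigma_t * s), its CDF is
    F(s) = 1 - exp(-sigma_t * s), so inverting F at a uniform variate u
    gives s = -ln(1 - u) / sigma_t."""
    return -math.log(1.0 - u) / sigma_t

rng = random.Random(0)
sigma_t = 0.5  # total macroscopic cross section, 1/cm (illustrative value)
samples = [free_flight_sample(sigma_t, rng.random()) for _ in range(100000)]
print(sum(samples) / len(samples))  # sample mean approaches 1/sigma_t = 2.0
```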
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.
Stochastic longshore current dynamics
NASA Astrophysics Data System (ADS)
Restrepo, Juan M.; Venkataramani, Shankar
2016-12-01
We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, a stochastic parametrization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parametrizations are not directly tuned to match the statistics of the observations. Rather, stochastic parametrization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic but can still be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has only a very narrow range of realistic parameters leading to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that the loss of certainty inherent in the model due to its stochastic nature is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.
A Stochastic Employment Problem
ERIC Educational Resources Information Center
Wu, Teng
2013-01-01
The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes a scenario in which one assigns balls to boxes. Balls arrive sequentially, each with a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation that if X_i = 1 the ball…
Characterizing model uncertainties in the life cycle of lignocellulose-based ethanol fuels.
Spatari, Sabrina; MacLean, Heather L
2010-11-15
Renewable and low carbon fuel standards being developed at federal and state levels require an estimation of the life cycle carbon intensity (LCCI) of candidate fuels that can substitute for gasoline, such as second generation bioethanol. Estimating the LCCI of such fuels with a high degree of confidence requires the use of probabilistic methods to account for known sources of uncertainty. We construct life cycle models for the bioconversion of agricultural residue (corn stover) and energy crops (switchgrass) and explicitly examine uncertainty using Monte Carlo simulation. Using statistical methods to identify significant model variables from public data sets and Aspen Plus chemical process models, we estimate stochastic life cycle greenhouse gas (GHG) emissions for the two feedstocks combined with two promising fuel conversion technologies. The approach can be generalized to other biofuel systems. Our results show potentially high and uncertain GHG emissions for switchgrass-ethanol due to uncertain CO₂ flux from land use change and N₂O flux from N fertilizer. However, corn stover-ethanol, with its low-in-magnitude, tight-in-spread LCCI distribution, shows considerable promise for reducing life cycle GHG emissions relative to gasoline and corn-ethanol. Coproducts are important for reducing the LCCI of all ethanol fuels we examine.
Stochastic volatility of the futures prices of emission allowances: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2017-01-01
Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
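The SV model without jumps can be simulated forward in a few lines. The discrete-time specification below is a common textbook form, stated here as an assumption rather than the paper's exact parameterization; MCMC estimation then treats the latent log-volatilities h_t as quantities to be sampled:

```python
import math, random

def simulate_sv(n=1000, mu=-1.0, phi=0.95, sigma_h=0.2, seed=11):
    """Forward simulation of a basic discrete-time stochastic volatility model:
        h_t = mu + phi*(h_{t-1} - mu) + sigma_h * eta_t   (log-variance AR(1))
        y_t = exp(h_t / 2) * eps_t                        (returns)
    with eta_t, eps_t independent standard normals. phi near 1 gives the
    volatility clustering typical of financial (and emissions) futures data."""
    rng = random.Random(seed)
    h = mu
    returns, vols = [], []
    for _ in range(n):
        h = mu + phi * (h - mu) + sigma_h * rng.gauss(0.0, 1.0)
        vols.append(math.exp(h / 2.0))
        returns.append(vols[-1] * rng.gauss(0.0, 1.0))
    return returns, vols

returns, vols = simulate_sv()
print(len(returns), min(vols) > 0.0)
```

The SVJ and SVCJ variants in the abstract add a compound-Poisson jump term to the return (and, for SVCJ, a correlated jump to h_t).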
Solution of the stochastic control problem in unbounded domains.
NASA Technical Reports Server (NTRS)
Robinson, P.; Moore, J.
1973-01-01
Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of singular perturbation techniques or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method which achieves an arbitrarily good approximate solution to the stochastic control problem is given. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-05-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient.
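The tau-leaping acceleration the abstract refers to can be sketched on a toy reaction system. The reversible isomerization A <-> B below is an illustrative assumption, not the paper's electrodeposition chemistry:

```python
import math, random

def poisson_sample(rng, lam):
    """Knuth's Poisson sampler; adequate for the small leap rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(a0=100, b0=0, k1=0.1, k2=0.05, tau=0.1, t_end=40.0, seed=5):
    """Tau-leaping sketch: over each leap of length tau, each reaction channel
    fires Poisson(propensity * tau) times. Firings are clipped here to avoid
    negative populations; production codes instead shrink tau or fall back to
    exact SSA for critical reactions, as the abstract describes."""
    rng = random.Random(seed)
    a, b, t = a0, b0, 0.0
    while t < t_end:
        n1 = min(poisson_sample(rng, k1 * a * tau), a)  # A -> B firings
        n2 = min(poisson_sample(rng, k2 * b * tau), b)  # B -> A firings
        a, b = a - n1 + n2, b + n1 - n2
        t += tau
    return a, b

a, b = tau_leap()
print(a, b, a + b)  # mass conserved; near equilibrium b/a fluctuates about k1/k2 = 2
```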
A Stochastic Cratering Model for Asteroid Surfaces
NASA Technical Reports Server (NTRS)
Richardson, J. E.; Melosh, H. J.; Greenberg, R. J.
2005-01-01
The observed cratering records on asteroid surfaces (four so far: Gaspra, Ida, Mathilde, and Eros [1-4]) provide us with important clues to their past bombardment histories. Previous efforts toward interpreting these records have led to two basic modeling styles for reproducing the statistics of the observed crater populations. The first, and most direct, method is to use Monte Carlo techniques [5] to stochastically populate a matrix-model test surface with craters as a function of time [6,7]. The second method is to use a more general, parameterized approach to duplicate the statistics of the observed crater population [8,9]. In both methods, several factors must be included beyond the simple superposing of circular features: (1) crater erosion by subsequent impacts, (2) infilling of craters by impact ejecta, and (3) crater degradation and erasure due to the seismic effects of subsequent impacts. Here we present an updated Monte Carlo (stochastic) modeling approach, designed specifically with small- to medium-sized asteroids in mind.
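The first modeling style can be sketched directly: add power-law-distributed craters over time and erase older craters overwritten by new ones. The surface size, size-frequency exponent, and simple center-overlap erasure rule are illustrative assumptions standing in for the erosion, infilling, and seismic-degradation physics listed above:

```python
import random

def crater_population(t_steps=3000, d_min=1.0, d_max=100.0, alpha=2.0,
                      erase_factor=0.5, side=1000.0, seed=31):
    """Stochastic cratering sketch: craters with power-law diameters
    p(d) ~ d^-alpha are placed uniformly on a square surface, one per time
    step; any existing smaller crater whose center lies within erase_factor
    times the new diameter is erased (a crude stand-in for overprinting)."""
    rng = random.Random(seed)
    craters = []  # list of (x, y, diameter)
    for _ in range(t_steps):
        u = rng.random()
        # inverse-transform sample of the truncated power law
        d = (d_min ** (1 - alpha)
             + u * (d_max ** (1 - alpha) - d_min ** (1 - alpha))) ** (1 / (1 - alpha))
        x, y = rng.uniform(0, side), rng.uniform(0, side)
        craters = [(cx, cy, cd) for cx, cy, cd in craters
                   if cd > d
                   or (cx - x) ** 2 + (cy - y) ** 2 > (erase_factor * d) ** 2]
        craters.append((x, y, d))
    return craters

craters = crater_population()
print(len(craters))  # surviving crater count after overprinting
```

Comparing the surviving size-frequency distribution against the production population is how such models diagnose whether a surface is in crater-count equilibrium.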
Structural Vibration Modeling & Validation: Modeling Uncertainty and Stochastic Control for Structural Control
Babuška, Vít; Carter, Delano; Lane, Steven
2005-12-30
Final report AFRL-VS-PS-TR-2005-1174. …decade or so, there has been increasing interest in probabilistic, or stochastic, robust control theory. Monte Carlo simulation methods have been…
Stochastic Pseudo-Boolean Optimization
2011-07-31
…analysis of two-stage stochastic minimum s-t cut problems; (iv) an exact solution algorithm for a class of stochastic bilevel knapsack problems; (v) exact… The report also covers bilevel knapsack problems with stochastic right-hand sides and two-stage stochastic assignment problems, including programming formulations and related computational complexity issues; one section considers a specific stochastic extension of the bilevel knapsack…
Stochastic self-assembly of incommensurate clusters.
D'Orsogna, M R; Lakatos, G; Chou, T
2012-02-28
Nucleation and molecular aggregation are important processes in numerous physical and biological systems. In many applications, these processes often take place in confined spaces, involving a finite number of particles. Analogous to treatments of stochastic chemical reactions, we examine the classic problem of homogeneous nucleation and self-assembly by deriving and analyzing a fully discrete stochastic master equation. We enumerate the highest probability steady states, and derive exact analytical formulae for quenched and equilibrium mean cluster size distributions. Upon comparison with results obtained from the associated mass-action Becker-Döring equations, we find striking differences between the two corresponding equilibrium mean cluster concentrations. These differences depend primarily on the divisibility of the total available mass by the maximum allowed cluster size, and the remainder. When such mass "incommensurability" arises, a single remainder particle can "emulsify" the system by significantly broadening the equilibrium mean cluster size distribution. This discreteness-induced broadening effect is periodic in the total mass of the system but arises even when the system size is asymptotically large, provided the ratio of the total mass to the maximum cluster size is finite. Ironically, classic mass-action equations are fairly accurate in the coarsening regime, before equilibrium is reached, despite the presence of large stochastic fluctuations found via kinetic Monte-Carlo simulations. Our findings define a new scaling regime in which results from classic mass-action theories are qualitatively inaccurate, even in the limit of large total system size.
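The fully discrete, finite-mass stochastic dynamics the abstract analyzes can be sampled with a Gillespie-type kinetic Monte Carlo simulation. The rates and sizes below are illustrative assumptions; the point is the finite total mass M and the maximum cluster size N, the quantities whose divisibility drives the incommensurability effect:

```python
import random

def ssa_self_assembly(M=30, N=8, p=1.0, q=0.5, t_end=50.0, seed=9):
    """Gillespie sketch of finite-mass self-assembly: n[k] counts clusters of
    size k. Clusters grow by monomer attachment X_1 + X_k -> X_{k+1} (rate p
    per pair, capped at size N) and shrink by monomer detachment
    X_k -> X_{k-1} + X_1 (rate q per cluster). Total mass is conserved."""
    rng = random.Random(seed)
    n = [0] * (N + 1)
    n[1] = M                              # start as M free monomers
    t = 0.0
    while t < t_end:
        rates = []
        for k in range(1, N):             # attachment channels
            a = p * n[1] * (n[1] - 1) / 2.0 if k == 1 else p * n[1] * n[k]
            rates.append(('att', k, a))
        for k in range(2, N + 1):         # detachment channels
            rates.append(('det', k, q * n[k]))
        total = sum(a for _, _, a in rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)       # exponential waiting time
        r = rng.random() * total          # pick a channel by its propensity
        for kind, k, a in rates:
            r -= a
            if r <= 0.0:
                break
        if kind == 'att':
            n[1] -= 1; n[k] -= 1; n[k + 1] += 1
        else:
            n[k] -= 1; n[k - 1] += 1; n[1] += 1
    return n

n = ssa_self_assembly()
mass = sum(k * n[k] for k in range(1, len(n)))
print(n, mass)  # mass is conserved at M = 30
```

With M = 30 and N = 8, M is not divisible by N (remainder 6), which is the incommensurate regime where the discrete statistics depart from the Becker-Döring mass-action prediction.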
Research in Stochastic Processes.
1985-09-01
Publications reported include: G. Kallianpur, 'Finitely additive approach to nonlinear filtering,' Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida, ed., Springer, to appear; in preparation, T. Hsing, 'Extreme value theory for…'; and Carroll, R.J., Spiegelman, C.H., Lan, K.K.G., Bailey, K.T. and Abbott, R.D., 'Errors-in-variables for binary regression models,' Aug. 1982.
Ma Xiang; Zabaras, Nicholas
2010-05-20
A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes the high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order and proceeding to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior of many physical systems can be modeled to good accuracy by only the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistical analyses on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500, even with large input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.
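The first-order truncation of an HDMR expansion is easy to sketch. The anchored (cut-HDMR) form below is one standard variant, shown here as an assumption about the general idea rather than the paper's adaptive construction:

```python
def cut_hdmr_first_order(f, c, x):
    """First-order cut-HDMR sketch: anchor the expansion at a cut point c and
    approximate
        f(x) ≈ f(c) + sum_i [ f(c with component i set to x_i) - f(c) ].
    This is exact for additively separable f; higher-order component
    functions capture the interactions between inputs."""
    f0 = f(c)
    approx = f0
    for i in range(len(c)):
        xi = list(c)
        xi[i] = x[i]                  # vary one stochastic input at a time
        approx += f(xi) - f0
    return approx

f = lambda v: sum(vi ** 2 for vi in v)   # additively separable test function
c = [0.0] * 5                            # anchor (cut) point
x = [0.1, 0.2, 0.3, 0.4, 0.5]
print(cut_hdmr_first_order(f, c, x), f(x))  # agree exactly for separable f
```

The payoff is that each first-order component function is one-dimensional, so it can be resolved cheaply, e.g. by the adaptive sparse grid collocation the abstract pairs with HDMR.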
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification
Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou; Pu, Yunchen; Gan, Zhe; Carin, Lawrence
2016-08-10
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, ignored in traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
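The core SG-MCMC update can be shown on a one-dimensional toy posterior. Stochastic gradient Langevin dynamics (one basic member of the SG-MCMC family the paper builds on) is sketched below; the target, step size, and artificial gradient noise standing in for minibatching are all illustrative assumptions:

```python
import math, random

def sgld_1d(n_iter=20000, step=0.05, seed=13):
    """Stochastic gradient Langevin dynamics on a toy posterior N(1, 1):
        theta <- theta - (step/2) * grad_U(theta) + sqrt(step) * xi,
    where grad_U is a noisy estimate of the negative log-posterior gradient
    (mimicking minibatch gradients) and xi is standard normal injected noise.
    For a DNN, the same update runs over the full weight vector."""
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_iter):
        grad_u = (theta - 1.0) + rng.gauss(0.0, 0.1)  # noisy grad of (theta-1)^2/2
        theta += -0.5 * step * grad_u + math.sqrt(step) * rng.gauss(0.0, 1.0)
        samples.append(theta)
    return samples

samples = sgld_1d()
second_half = samples[len(samples) // 2:]
print(sum(second_half) / len(second_half))  # should approach the posterior mean, 1.0
```

The injected Gaussian noise is what turns a stochastic optimizer into a sampler of weight uncertainty, the quantity the shape-classification experiments exploit.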
On a full Monte Carlo approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the dimensions of the problem). The introduction of this stochastic technique for the kernel is twofold in its benefits: firstly, it reduces the complexity of a quantum many-body simulation from nonlinear to linear; secondly, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
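The importance sampling strategy invoked for the Wigner kernel can be illustrated on a one-dimensional integral. The target integrand and proposal below are toy assumptions; the pattern (sample from a proposal g, average f/g) is the general technique:

```python
import math, random

def importance_sampling_integral(n=100000, seed=17):
    """Importance sampling sketch: estimate I = integral of f(x) dx over the
    real line by drawing x ~ g and averaging the weights f(x)/g(x). Toy
    target (an assumption): f(x) = exp(-x^2), whose exact integral is
    sqrt(pi); proposal g = standard normal density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        f = math.exp(-x * x)
        g = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        total += f / g                      # importance weight
    return total / n

print(importance_sampling_integral())  # should approach sqrt(pi) ≈ 1.7725
```

The key property the abstract relies on is that the estimator's error scales with the sample count, not the dimension of the integral, which is what tames the exponential cost of the Wigner kernel.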
Stochastic calculus for uncoupled continuous-time random walks.
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L
2009-06-01
The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.
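Monte Carlo simulation of an uncoupled CTRW and its quadratic variation is straightforward. As a simplifying assumption, the sketch below uses exponential waiting times and Gaussian zero-mean jumps in place of the paper's Mittag-Leffler waits and Lévy α-stable jumps; with zero-mean jumps the walk is a martingale and the quadratic variation is the running sum of squared jumps:

```python
import random

def simulate_ctrw(t_end=100.0, rate=1.0, jump_sd=1.0, seed=21):
    """One uncoupled CTRW path: draw a waiting time, then an independent jump,
    and accumulate position and quadratic variation until t_end is reached.
    Exponential waits / Gaussian jumps are an illustrative simplification."""
    rng = random.Random(seed)
    t, x, qv = 0.0, 0.0, 0.0
    times, positions = [0.0], [0.0]
    while True:
        t += rng.expovariate(rate)        # waiting time between jumps
        if t > t_end:
            break
        jump = rng.gauss(0.0, jump_sd)
        x += jump
        qv += jump * jump                 # quadratic variation: sum of jump^2
        times.append(t)
        positions.append(x)
    return times, positions, qv

times, positions, qv = simulate_ctrw()
print(len(times), qv)  # E[qv] = rate * t_end * jump_sd^2 = 100 for these parameters
```

Replacing the two samplers with Mittag-Leffler waiting times and α-stable jumps reproduces the heavy-tailed regime studied in the paper, where the quadratic variation is unbounded.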
Classical Perturbation Theory for Monte Carlo Studies of System Reliability
Lewins, Jeffrey D.
2001-03-15
A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
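The multilevel idea applied to Monte Carlo can be sketched on a scalar toy problem. The model below (Euler-discretized geometric Brownian motion, with the coarse path reusing the fine path's Brownian increments so the level corrections have small variance) is an illustrative assumption, not an SPDE:

```python
import math, random

def mlmc_gbm_mean(L=4, n0=20000, seed=29):
    """Multilevel Monte Carlo sketch: write E[P_L] as
        E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
    estimating each term independently with geometrically fewer samples at
    the finer (more expensive) levels. Toy quantity of interest: E[S_T] for
    dS = r S dt + sigma S dW with S0=1, r=0.05, sigma=0.2, T=1; the exact
    answer is exp(0.05) ≈ 1.0513. Level l uses 2^l Euler steps."""
    rng = random.Random(seed)
    r, sigma, T = 0.05, 0.2, 1.0
    estimate = 0.0
    for level in range(L + 1):
        n = max(n0 // 4 ** level, 100)     # geometrically fewer fine samples
        nf = 2 ** level                    # fine steps at this level
        hf = T / nf
        acc = 0.0
        for _ in range(n):
            dws = [rng.gauss(0.0, math.sqrt(hf)) for _ in range(nf)]
            sf = 1.0
            for dw in dws:                 # fine Euler path
                sf += r * sf * hf + sigma * sf * dw
            if level == 0:
                acc += sf
            else:                          # coupled coarse path, same noise
                sc, hc = 1.0, 2.0 * hf
                for i in range(0, nf, 2):
                    sc += r * sc * hc + sigma * sc * (dws[i] + dws[i + 1])
                acc += sf - sc             # level correction P_l - P_{l-1}
        estimate += acc / n
    return estimate

print(mlmc_gbm_mean())  # close to exp(0.05) ≈ 1.0513
```

The same telescoping-sum construction is what the article extends from MC to sparse grid stochastic collocation, with spatial discretization level playing the role of the timestep here.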
Stochastic cooling at Fermilab
Marriner, J.
1986-08-01
The topics discussed are the stochastic cooling systems in use at Fermilab and some of the techniques that have been employed to meet the particular requirements of the anti-proton source. Stochastic cooling at Fermilab became of paramount importance about 5 years ago when the anti-proton source group at Fermilab abandoned the electron cooling ring in favor of a high flux anti-proton source which relied solely on stochastic cooling to achieve the phase space densities necessary for colliding proton and anti-proton beams. The Fermilab systems have constituted a substantial advance in the techniques of cooling including: large pickup arrays operating at microwave frequencies, extensive use of cryogenic techniques to reduce thermal noise, super-conducting notch filters, and the development of tools for controlling and for accurately phasing the system.
STOCHASTIC COOLING FOR BUNCHED BEAMS.
BLASKIEWICZ, M.
2005-05-16
Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.
Kinetic Monte Carlo models for the study of chemical reactions in the Earth's upper atmosphere
NASA Astrophysics Data System (ADS)
Turchak, L. I.; Shematovich, V. I.
2016-06-01
A stochastic approach to study the non-equilibrium chemistry in the Earth's upper atmosphere is presented, which has been developed over a number of years. Kinetic Monte Carlo models based on this approach are an effective tool for investigating the role of suprathermal particles both in local variations of the atmospheric chemical composition and in the formation of the hot planetary corona.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the (222)Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the (222)Rn decay products concentrations in the air are realistically evaluated.
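The propagation step described above, pushing the distributions of the sampling and detection factors through the measurement model by Monte Carlo, can be sketched as follows. All distributions and numbers are hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Input distributions for the measurement factors (hypothetical values)
counts     = rng.poisson(lam=5000, size=N)       # net detector counts
efficiency = rng.normal(0.25, 0.010, size=N)     # detection efficiency
flow_rate  = rng.normal(1.2, 0.05, size=N)       # air sampling flow rate [L/min]
t_sample   = 10.0                                # sampling time [min]

# Propagate all input distributions through the measurement model at once
volume = flow_rate * t_sample                    # sampled air volume [L]
concentration = counts / (efficiency * volume)   # activity concentration per L

mean_c = float(concentration.mean())
std_c  = float(concentration.std())              # combined (realistic) uncertainty
```

The empirical spread of `concentration` is the "realistically evaluated" uncertainty: it absorbs the Poisson counting noise and the factor uncertainties jointly, without linearization.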
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
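The core of diffusion QMC, random walkers whose diffusion plus branching projects out the ground state, can be illustrated on the 1-D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. This is a bare-bones sketch without importance sampling; walker counts, time step, and the feedback gain are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01):
    """Plain diffusion Monte Carlo for the 1-D harmonic oscillator V(x) = x^2/2.
    Walkers diffuse freely; branching weights exp(-(V - E_ref) dt) copy or kill
    them, and population-control feedback drives E_ref toward E_0."""
    x = rng.normal(0.0, 1.0, n_walkers)
    e_ref = 0.5                      # running estimate of the ground-state energy
    history = []
    for _ in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), x.size)     # free diffusion step
        w = np.exp(-(0.5 * x**2 - e_ref) * dt)           # branching weight
        copies = (w + rng.random(x.size)).astype(int)    # stochastic rounding of w
        x = np.repeat(x, copies)
        e_ref -= 0.1 * np.log(x.size / n_walkers)        # population control
        history.append(e_ref)
    return float(np.mean(history[n_steps // 2:]))        # average after burn-in

e0 = dmc_harmonic()   # exact answer is 0.5 (hbar = m = omega = 1)
```

Production codes add a trial wave function for importance sampling, which is exactly what shrinks the statistical uncertainty the abstract highlights as the bottleneck.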
Noncovalent Interactions by Quantum Monte Carlo.
Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr
2016-05-11
Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark quality QMC energy differences are described in detail. Comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts together with further considerations about QMC developments, limitations, and unsolved challenges are discussed as well.
Markov stochasticity coordinates
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method-termed Markov Stochasticity Coordinates-is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Stochastic demographic forecasting.
Lee, R D
1992-11-01
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary.... Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."
Criticality of spent reactor fuel
Harris, D.R.
1987-01-01
The storage capacity of spent reactor fuel pools can be greatly increased by consolidation. In this process, the fuel rods are removed from reactor fuel assemblies and are stored in close-packed arrays in a canister or skeleton. An earlier study examined criticality consideration for consolidation of Westinghouse fuel, assumed to be fresh, in canisters at the Millstone-2 spent-fuel pool and in the General Electric IF-300 shipping cask. The conclusions were that the fuel rods in the canister are so deficient in water that they are adequately subcritical, both in normal and in off-normal conditions. One potential accident, the water spill event, remained unresolved in the earlier study. A methodology is developed here for spent-fuel criticality and is applied to the water spill event. The methodology utilizes LEOPARD to compute few-group cross sections for the diffusion code PDQ7, which then is used to compute reactivity. These codes give results for fresh fuel that are in good agreement with KENO IV-NITAWL Monte Carlo results, which themselves are in good agreement with continuous energy Monte Carlo calculations. These methodologies are in reasonable agreement with critical measurements for undepleted fuel.
Stochastic analysis of transport in tubes with rough walls
Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)
2006-09-01
Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.
Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa
NASA Astrophysics Data System (ADS)
Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong
2016-10-01
The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder hooked up to a promoter on a clock controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (or GPGPUs) we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity in cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface ICC is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype.
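The intraclass correlation used above as the synchronization measure can be computed from a one-way random-effects ANOVA decomposition: ICC = (MSB - MSW) / (MSB + (n - 1) MSW). A sketch on synthetic per-droplet data; group sizes and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def icc_oneway(data):
    """One-way random-effects intraclass correlation ICC(1) for a k x n array:
    k groups (droplets) of n members (cells) each."""
    data = np.asarray(data, dtype=float)
    k, n = data.shape
    grand = data.mean()
    group_means = data.mean(axis=1)
    msb = n * ((group_means - grand) ** 2).sum() / (k - 1)            # between-group MS
    msw = ((data - group_means[:, None]) ** 2).sum() / (k * (n - 1))  # within-group MS
    return (msb - msw) / (msb + (n - 1) * msw)

# Strong shared per-droplet signal -> trajectories agree -> high ICC
shared = rng.normal(0.0, 2.0, size=(50, 1))            # droplet-level effect
cells  = shared + rng.normal(0.0, 0.5, size=(50, 8))   # 8 cells per droplet
high = icc_oneway(cells)

# No shared signal -> ICC near zero
low = icc_oneway(rng.normal(0.0, 1.0, size=(50, 8)))
```

Evaluating this statistic per droplet size and per time point yields the "synchronization surface" the abstract describes.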
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
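The batch-parallel pattern described, many independent stochastic realizations farmed out to a worker pool and then aggregated, can be sketched in Python. A thread pool stands in for the paper's Java Parallel Processing Framework, and the toy realization below is a placeholder, not a MODFLOW run:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    """One stochastic realization: a toy stand-in for a single groundwater-model
    run on a randomly generated conductivity field."""
    rng = np.random.default_rng(seed)
    log_k = rng.normal(0.0, 1.0, size=100)   # random log-conductivity field
    return float(np.exp(log_k).mean())       # scalar summary of this realization

seeds = range(500)  # 500 Monte Carlo realizations, each independently seeded
with ThreadPoolExecutor(max_workers=8) as pool:   # the pool plays the cluster's role
    results = list(pool.map(run_realization, seeds))

ensemble_mean = float(np.mean(results))
ensemble_std  = float(np.std(results))
```

Because realizations share no state, the speedup scales with worker count until I/O or model setup dominates, which is the regime the paper's 50-thread, 10-node deployment exploits.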
A Stochastic Multi-Attribute Assessment of Energy Options for Fairbanks, Alaska
NASA Astrophysics Data System (ADS)
Read, L.; Madani, K.; Mokhtari, S.; Hanks, C. L.; Sheets, B.
2012-12-01
Many competing projects have been proposed to address Interior Alaska's high cost of energy—both for electricity production and for heating. Public and private stakeholders are considering the costs associated with these competing projects which vary in fuel source, subsidy requirements, proximity, and other factors. As a result, the current projects under consideration involve a complex cost structure of potential subsidies and reliance on present and future market prices, introducing a significant amount of uncertainty associated with each selection. Multi-criteria multi-decision making (MCMDM) problems of this nature can benefit from game theory and systems engineering methods, which account for behavior and preferences of stakeholders in the analysis to produce feasible and relevant solutions. This work uses a stochastic MCMDM framework to evaluate the trade-offs of each proposed project based on a complete cost analysis, environmental impact, and long-term sustainability. Uncertainty in the model is quantified via a Monte Carlo analysis, which helps characterize the sensitivity and risk associated with each project. Based on performance measures and criteria outlined by the stakeholders, a decision matrix will inform policy on selecting a project that is both efficient and preferred by the constituents.
An Overview of the Monte Carlo Application ToolKit (MCATK)
Trahan, Travis John
2016-01-07
MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies under the motivation to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, a time-dependent algorithm, and fission chain algorithms; MCATK geometry – mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.
Analysis of bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Martin, D. N.; Marcus, S. I.
1975-01-01
Analysis of stochastic dynamical systems that involve multiplicative (bilinear) noise processes. After defining the systems of interest, consideration is given to the evolution of the moments of such systems, the question of stochastic stability, and estimation for bilinear stochastic systems. Both exact and approximate methods of analysis are introduced, and, in particular, the uses of Lie-theoretic concepts and harmonic analysis are discussed.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations
Sensitivity of footbridge vibrations to stochastic walking parameters
NASA Astrophysics Data System (ADS)
Pedersen, Lars; Frier, Christian
2010-06-01
Some footbridges are so slender that pedestrian traffic can cause excessive vibrations and serviceability problems. Design guidelines outline procedures for vibration serviceability checks, but it is noticeable that they rely on the assumption that the action is deterministic, although in fact it is stochastic as different pedestrians generate different dynamic forces. For serviceability checks of footbridge designs it would seem reasonable to consider modelling the stochastic nature of the main parameters describing the excitation, such as for instance the load amplitude and the step frequency of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some of the walking parameters, the significance of modelling the various walking parameters stochastically rather than deterministically is also investigated providing insight into which modelling efforts need to be made for arriving at reliable estimates of statistical distributions of footbridge vibrations. The studies for the paper are based on numerical simulations of footbridge responses and on the use of Monte Carlo simulations for modelling the stochastic nature of
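One way to read the approach above: draw the walking parameters from assumed distributions, push each draw through a deterministic bridge response model, and estimate exceedance probabilities from the ensemble. A sketch with a single pedestrian and a single-degree-of-freedom bridge; every distribution and structural value is an illustrative assumption, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000  # simulated single-pedestrian crossings

# Stochastic walking parameters (illustrative stand-ins for the reviewed models)
f_step = rng.normal(1.87, 0.186, N)     # step frequency [Hz]
alpha  = rng.normal(0.40, 0.10, N)      # dynamic load factor, first harmonic
W      = rng.normal(750.0, 50.0, N)     # pedestrian weight [N]

# Deterministic SDOF footbridge properties (hypothetical design values)
f_n, zeta, m = 2.0, 0.005, 40_000.0     # natural frequency [Hz], damping, modal mass [kg]

# Steady-state acceleration amplitude of a damped SDOF under harmonic loading
r = f_step / f_n
H = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)  # amplification
accel = (alpha * W / m) * r**2 * H      # peak acceleration [m/s^2]

# Probability of exceeding a serviceability limit of 0.5 m/s^2
p_exceed = float(np.mean(accel > 0.5))
```

Re-running this with different distributional assumptions for `f_step` or `alpha` and comparing the resulting exceedance curves is exactly the sensitivity question the paper investigates.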
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
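The Robbins-Monro process referred to here iterates toward the root of a regression equation using only noisy observations, with gains a_n chosen so that sum a_n diverges while sum a_n^2 converges (the classic sufficient condition). A sketch on a linear regression function with Gaussian noise; the example is mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)

def robbins_monro(noisy_m, target, x0=0.0, n_iter=20_000):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n (target - Y_n), where Y_n is
    a noisy observation of the regression function M at x_n. Gains a_n = 1/n
    satisfy sum a_n = inf and sum a_n^2 < inf."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n
        x = x + a_n * (target - noisy_m(x))
    return x

# M(x) = 2x + 1 observed with unit Gaussian noise; the root of M(x) = 5 is x = 2
noisy = lambda x: 2.0 * x + 1.0 + rng.normal()
root = robbins_monro(noisy, target=5.0)
```

The decaying gains average out the observation noise while still moving far enough to reach the root, which is precisely the trade-off the convergence condition on the iteration coefficients formalizes.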
Stochastic decentralized systems
NASA Astrophysics Data System (ADS)
Barfoot, Timothy David
Fundamental aspects of decentralized systems are considered from a control perspective. The stochastic framework afforded by Markov systems is presented as a formal setting in which to study decentralized systems. A stochastic algebra is introduced which allows Markov systems to be considered in matrix format but also strikes an important connection to the classic linear system originally studied by Kalman [1960]. The process of decentralization is shown to impose constraints on observability and controllability of a system. However, it is argued that communicating decentralized controllers can implement any control law possible with a centralized controller. Communication is shown to serve a dual role, both enabling sensor data to be shared and actions to be coordinated. The viabilities of these two types of communication are tested on a real network of mobile robots where they are found to be successful at a variety of tasks. Action coordination is reframed as a decentralized decision making process whereupon stochastic cellular automata (SCA) are introduced as a model. Through studies of SCA it is found that coordination in a group of arbitrarily and sparsely connected agents is possible using simple rules. The resulting stochastic mechanism may be immediately used as a practical decentralized decision making tool (it is tested on a group of mobile robots), but it furthermore provides insight into the general features of self-organizing systems.
Tollestrup, A.V.; Dugan, G.
1983-12-01
Major headings in this review include: proton sources; antiproton production; antiproton sources and Liouville, the role of the Debuncher; transverse stochastic cooling, time domain; the accumulator; frequency domain; pickups and kickers; Fokker-Planck equation; calculation of constants in the Fokker-Planck equation; and beam feedback. (GHT)
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by generalized hybrid Monte Carlo improve the stability of MTS and allow for larger step sizes in the simulation of complex systems.
Stochastic computing with biomolecular automata.
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-07-06
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.
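The programming principle described, choice probabilities set by the relative concentrations of competing software molecules, can be mimicked in ordinary software. Rule names and concentrations below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)

def run_automata(concentrations, n_automata=100_000):
    """Each automaton makes one stochastic choice between competing transition
    rules; the choice probability of each rule is its molar concentration
    divided by the total, mirroring the biochemical competition in vitro."""
    rules = list(concentrations)
    total = sum(concentrations.values())
    probs = [concentrations[r] / total for r in rules]
    choices = rng.choice(rules, size=n_automata, p=probs)
    return {r: float(np.mean(choices == r)) for r in rules}

# Mixing the two software molecules 3:1 programs a 3:1 choice bias
freqs = run_automata({"positive": 3.0, "negative": 1.0})
```

The point of the molecular implementation is that this normalization happens physically, via competition for the automaton, rather than by an explicit division as above.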
Chemical application of diffusion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Reynolds, P. J.; Lester, W. A., Jr.
1983-10-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
Phylogenetic Stochastic Mapping Without Matrix Exponentiation
Irvahn, Jan; Minin, Vladimir N.
2014-01-01
Phylogenetic stochastic mapping is a method for reconstructing the history of trait changes on a phylogenetic tree relating species/organisms carrying the trait. State-of-the-art methods assume that the trait evolves according to a continuous-time Markov chain (CTMC) and work well for small state spaces. The computations slow down considerably for larger state spaces (e.g., the space of codons), because current methodology relies on exponentiating CTMC infinitesimal rate matrices—an operation whose computational complexity grows as the size of the CTMC state space cubed. In this work, we introduce a new approach, based on a CTMC technique called uniformization, that does not use matrix exponentiation for phylogenetic stochastic mapping. Our method is based on a new Markov chain Monte Carlo (MCMC) algorithm that targets the distribution of trait histories conditional on the trait data observed at the tips of the tree. The computational complexity of our MCMC method grows as the size of the CTMC state space squared. Moreover, in contrast to competing matrix exponentiation methods, if the rate matrix is sparse, we can leverage this sparsity and increase the computational efficiency of our algorithm further. Using simulated data, we illustrate advantages of our MCMC algorithm and investigate how large the state space needs to be for our method to outperform matrix exponentiation approaches. We show that even on the moderately large state space of codons our MCMC method can be significantly faster than currently used matrix exponentiation methods. PMID:24918812
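The uniformization idea, replacing matrix exponentiation by a Poisson number of jumps of a discretized chain, can be checked on a two-state CTMC where a closed-form answer exists. This is a generic sketch of uniformization, not the authors' endpoint-conditioned MCMC algorithm:

```python
import math, random

random.seed(7)

# Two-state CTMC with rates a (0 -> 1) and b (1 -> 0).
a, b, t = 1.5, 0.7, 2.0

def p_stay0_exact(a, b, t):
    """Closed-form P(X_t = 0 | X_0 = 0) for the two-state chain."""
    s = a + b
    return b / s + (a / s) * math.exp(-s * t)

def p_stay0_uniformized(a, b, t, n_samples=100_000):
    """Same probability via uniformization: a Poisson(mu*t) number of
    candidate jumps of the discrete chain P = I + Q/mu replaces any
    matrix exponentiation."""
    mu = a + b                      # uniformization rate >= every exit rate
    hits = 0
    for _ in range(n_samples):
        state = 0
        clock = -math.log(random.random()) / mu   # exponential gaps ...
        while clock < t:                          # ... give Poisson jump counts
            if state == 0 and random.random() < a / mu:
                state = 1
            elif state == 1 and random.random() < b / mu:
                state = 0
            clock += -math.log(random.random()) / mu
        hits += (state == 0)
    return hits / n_samples

exact = p_stay0_exact(a, b, t)
approx = p_stay0_uniformized(a, b, t)
```

The simulated probability agrees with the closed form to Monte Carlo accuracy, with no `expm` call anywhere.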
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
Stochastic lag time in nucleated linear self-assembly
NASA Astrophysics Data System (ADS)
Tiwari, Nitin S.; van der Schoot, Paul
2016-06-01
Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.
Stochastic multiscale modeling of polycrystalline materials
NASA Astrophysics Data System (ADS)
Wen, Bin
Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and
Wollaber, Allan Benton
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
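The two introductory topics in the outline, estimating π and inverse transform sampling, fit in a few lines. This is a generic illustration of those standard techniques, not material from the lecture itself:

```python
import math, random

random.seed(0)

# Estimating pi: 4 times the fraction of uniform points in the unit
# square that land inside the quarter circle.
n = 500_000
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
             for _ in range(n))
pi_hat = 4.0 * inside / n

# Inverse transform sampling: if U ~ Uniform(0, 1), then -ln(U)/lam is
# Exponential(lam), with mean 1/lam.
lam = 2.0
draws = [-math.log(random.random()) / lam for _ in range(100_000)]
mean_hat = sum(draws) / len(draws)
```

By the Law of Large Numbers both estimates converge, and the Central Limit Theorem gives their 1/sqrt(n) error bars.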
A User’s Manual for MASH 1.0 - A Monte Carlo Adjoint Shielding Code System
1992-03-01
INTRODUCTION TO MORSE: The Multigroup Oak Ridge Stochastic Experiment code (MORSE) is a multipurpose neutron and gamma-ray transport Monte Carlo code ... in the energy transfer process. Thus, these multigroup cross sections have the same format for both neutrons and gamma rays. In addition, the ... multigroup cross sections in a Monte Carlo code means that the effort required to produce cross-section libraries is reduced. Coupled neutron gamma-ray cross
1998-03-01
Fossil fuels -- coal, oil, and natural gas -- built America's historic economic strength. Today, coal supplies more than 55% of the electricity, oil more than 97% of the transportation needs, and natural gas 24% of the primary energy used in the US. Even taking into account increased use of renewable fuels and vastly improved powerplant efficiencies, 90% of national energy needs will still be met by fossil fuels in 2020. If advanced technologies that boost efficiency and environmental performance can be successfully developed and deployed, the US can continue to depend upon its rich resources of fossil fuels.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.
Stochastic ice stream dynamics
NASA Astrophysics Data System (ADS)
Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca
2016-08-01
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.
Stochastic ice stream dynamics
Bertagni, Matteo Bernard; Ridolfi, Luca
2016-01-01
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution. PMID:27457960
Blaskiewicz, M.; Brennan, J. M.; Cameron, P.; Wei, J.
2003-05-12
Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.
Stochastic Modelling of Shallow Water Flows
NASA Astrophysics Data System (ADS)
Horritt, M. S.
2002-05-01
The application of computational fluid dynamics approaches to modelling shallow water flows in the environment is hindered by the uncertainty inherent to natural landforms, vegetation and processes. A stochastic approach to modelling is therefore required, but this has previously only been attempted through computationally intensive Monte Carlo methods. An efficient second order perturbation method is outlined in this presentation, whereby the governing equations are first discretised to form a non-linear system mapping model parameters to predictions. This system is then approximated using Taylor expansions to derive tractable expressions for the model prediction statistics. The approach is tested on a simple 1-D model of shallow water flow over uncertain topography, verified against ensembles of Monte Carlo simulations and approximate solutions derived by Fourier methods. Criteria for the applicability of increasing orders of Taylor expansions are derived as a function of flow depth and topographic variability. The results show that non-linear effects are important for even small topographic perturbations, and the second order perturbation method is required to derive model prediction statistics. This approximation holds well even as the flow depth tends towards the topographic roughness. The model predicted statistics are also well described by a Gaussian approximation, so only first and second moments need be calculated, even if these are significantly different to values predicted by a linear approximation. The implications for more sophisticated (2-D, advective etc.) models are discussed.
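The gap between first- and second-order perturbation estimates of prediction statistics can be seen on a scalar toy map. The sketch below uses a hypothetical nonlinear function y = h², not the shallow-water discretization itself:

```python
import random

random.seed(3)

# Toy nonlinear "model": y = f(h) = h**2 with uncertain input
# h ~ N(mu, sigma^2).  (Hypothetical stand-in for a discretized flow model.)
mu, sigma = 1.0, 0.3

mean_first = mu ** 2                    # first-order (linear) estimate
mean_second = mu ** 2 + sigma ** 2      # adds 0.5 * f''(mu) * sigma^2, f'' = 2

# Monte Carlo reference for the exact mean E[h^2] = mu^2 + sigma^2.
n = 200_000
mean_mc = sum(random.gauss(mu, sigma) ** 2 for _ in range(n)) / n
```

The linear estimate misses the curvature term entirely, while the second-order Taylor expansion matches the Monte Carlo ensemble, mirroring the presentation's finding that non-linear effects matter even for small perturbations.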
Methodology for Stochastic Modeling.
1985-01-01
AD-A155 851: Methodology for Stochastic Modeling (U). Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, MD; H. E. Cohen; AMSAA-TR-41. Keywords: autoregression models, moving average models, ARMA, adaptive modeling, covariance methods, singular value decomposition, order determination, rational ...
Dorogovtsev, Andrei A
2010-06-29
For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.
Stochastic Thermodynamics of Learning
NASA Astrophysics Data System (ADS)
Goldt, Sebastian; Seifert, Udo
2017-01-01
Virtually every organism gathers information about its noisy environment and builds models from those data, mostly using neural networks. Here, we use stochastic thermodynamics to analyze the learning of a classification rule by a neural network. We show that the information acquired by the network is bounded by the thermodynamic cost of learning and introduce a learning efficiency η ≤ 1. We discuss the conditions for optimal learning and analyze Hebbian learning in the thermodynamic limit.
NASA Astrophysics Data System (ADS)
Holmes-Cerfon, Miranda
2016-11-01
We study a model of rolling particles subject to stochastic fluctuations, which may be relevant in systems of nano- or microscale particles where rolling is an approximation for strong static friction. We consider the simplest possible nontrivial system: a linear polymer of three disks constrained to remain in contact and immersed in an equilibrium heat bath so the internal angle of the polymer changes due to stochastic fluctuations. We compare two cases: one where the disks can slide relative to each other and the other where they are constrained to roll, like gears. Starting from the Langevin equations with arbitrary linear velocity constraints, we use formal homogenization theory to derive the overdamped equations that describe the process in configuration space only. The resulting dynamics have the formal structure of a Brownian motion on a Riemannian or sub-Riemannian manifold, depending on if the velocity constraints are holonomic or nonholonomic. We use this to compute the trimer's equilibrium distribution with and without the rolling constraints. Surprisingly, the two distributions are different. We suggest two possible interpretations of this result: either (i) dry friction (or other dissipative, nonequilibrium forces) changes basic thermodynamic quantities like the free energy of a system, a statement that could be tested experimentally, or (ii) as a lesson in modeling rolling or friction more generally as a velocity constraint when stochastic fluctuations are present. In the latter case, we speculate there could be a "roughness" entropy whose inclusion as an effective force could compensate the constraint and preserve classical Boltzmann statistics. Regardless of the interpretation, our calculation shows the word "rolling" must be used with care when stochastic fluctuations are present.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Stochastic Quantization of Instantons
NASA Astrophysics Data System (ADS)
Grandati, Y.; Bérard, A.; Grangé, P.
1996-03-01
The method of Parisi and Wu to quantize classical fields is applied to instanton solutions φ_I of Euclidean non-linear theory in one dimension. The solution φ_ε of the corresponding Langevin equation is built through a singular perturbative expansion in ε = ℏ^(1/2) in the frame of the center of mass of the instanton, where the difference φ_ε - φ_I carries only fluctuations of the instanton form. The relevance of the method is shown for the stochastic KdV equation with uniform noise in space: the exact solution usually obtained by the inverse scattering method is retrieved easily by the singular expansion. A general diagrammatic representation of the solution is then established which makes a thorough use of regrouping properties of stochastic diagrams derived in scalar field theory. Averaging over the noise and in the limit of infinite stochastic time, we obtain explicit expressions for the first two orders in ε of the perturbed instanton and of its Green function. Specializing to the sine-Gordon and φ⁴ models, the first anharmonic correction is obtained analytically. The calculation is carried to second order for the φ⁴ model, showing good convergence.
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
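The importance-sampling component can be sketched in isolation on a rare-event toy problem (a Gaussian tail probability with a shifted proposal). This illustrates the reweighting idea only, not the Benders-decomposition machinery:

```python
import math, random

random.seed(5)

# Rare-event estimate: p = P(Z > 4) for Z ~ N(0, 1).  Naive sampling
# almost never sees the event; a proposal N(4, 1) centred on the rare
# region does, and the likelihood ratio corrects the estimate.
truth = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.17e-5

n = 100_000
total = 0.0
for _ in range(n):
    x = random.gauss(4.0, 1.0)                  # draw from the proposal
    if x > 4.0:
        # likelihood ratio phi(x) / phi(x - 4) = exp(8 - 4x)
        total += math.exp(8.0 - 4.0 * x)
p_hat = total / n
```

A naive estimator with the same budget would typically see a handful of exceedances at best; the reweighted estimator reaches sub-percent relative error.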
Markov chain Monte Carlo method without detailed balance.
Suwa, Hidemaro; Todo, Synge
2010-09-17
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts the convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that obtained with the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
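That the balance condition alone, without detailed balance, suffices for a correct stationary distribution can be verified on a minimal example. The three-state chain below, with a built-in net rotational flow, is an illustration of the principle, not the Suwa-Todo algorithm itself:

```python
import random

random.seed(2)

# Three-state chain with a clockwise net flow 0 -> 1 -> 2 -> 0.
# Detailed balance fails (pi_0 * P[0][1] != pi_1 * P[1][0]), yet global
# balance holds (each row and column sums to 1), so the uniform
# distribution is still stationary.
P = [[0.2, 0.6, 0.2],
     [0.2, 0.2, 0.6],
     [0.6, 0.2, 0.2]]

def step(s):
    r, acc = random.random(), 0.0
    for nxt, p in enumerate(P[s]):
        acc += p
        if r < acc:
            return nxt
    return s

counts = [0, 0, 0]
s = 0
n = 300_000
for _ in range(n):
    s = step(s)
    counts[s] += 1
freqs = [c / n for c in counts]   # each should be close to 1/3
```

The empirical occupation frequencies converge to the uniform distribution even though probability circulates persistently around the cycle.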
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with
Stochastic techno-economic evaluation of cellulosic biofuel pathways.
Zhao, Xin; Brown, Tristan R; Tyner, Wallace E
2015-12-01
This study evaluates the economic feasibility and stochastic dominance rank of eight cellulosic biofuel production pathways (including gasification, pyrolysis, liquefaction, and fermentation) under technological and economic uncertainty. A techno-economic assessment based financial analysis is employed to derive net present values and breakeven prices for each pathway. Uncertainty is investigated and incorporated into fuel prices and techno-economic variables: capital cost, conversion technology yield, hydrogen cost, natural gas price, and feedstock cost, using @Risk, a Palisade Corporation software. The results indicate that none of the eight pathways would be profitable at expected values under projected energy prices. Fast pyrolysis and hydroprocessing (FPH) has the lowest breakeven fuel price at 3.11 $/gallon of gasoline equivalent (0.82 $/liter of gasoline equivalent). At the projected energy prices, FPH investors could expect a 59% probability of loss. The stochastic dominance ranking is based on return on investment. Most risk-averse decision makers would prefer FPH to the other pathways.
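The stochastic-dominance ranking step can be sketched generically: first-degree dominance amounts to one empirical CDF lying at or below another everywhere. The return-on-investment draws below are hypothetical, not the study's pathway data:

```python
import bisect, random

random.seed(11)

def dominates_fsd(a, b):
    """First-degree stochastic dominance: the empirical CDF of `a`
    lies at or below that of `b` at every point."""
    a, b = sorted(a), sorted(b)
    def cdf(xs, x):
        # fraction of samples <= x (xs is sorted)
        return bisect.bisect_right(xs, x) / len(xs)
    return all(cdf(a, x) <= cdf(b, x) for x in sorted(set(a + b)))

# Hypothetical return-on-investment draws for two pathways; pathway A is
# pathway B shifted up by two points, so A dominates B but not vice versa.
roi_b = [random.gauss(5.0, 3.0) for _ in range(2000)]
roi_a = [r + 2.0 for r in roi_b]
```

Any risk-averse (indeed, any return-preferring) decision maker prefers a first-degree dominant pathway, which is what makes the ranking useful.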
Stochastic image reconstruction for a dual-particle imaging system
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.
2016-02-01
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
A retrodictive stochastic simulation algorithm
Vaughan, T. G.; Drummond, P. D.; Drummond, A. J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Key parameter optimization and analysis of stochastic seismic inversion
NASA Astrophysics Data System (ADS)
Huang, Zhe-Yuan; Gan, Li-Deng; Dai, Xiao-Feng; Li, Ling-Gao; Wang, Jun
2012-03-01
Stochastic seismic inversion is the combination of geostatistics and seismic inversion technology which integrates information from seismic records, well logs, and geostatistics into a posterior probability density function (PDF) of subsurface models. The Markov chain Monte Carlo (MCMC) method is used to sample the posterior PDF and the subsurface model characteristics can be inferred by analyzing a set of the posterior PDF samples. In this paper, we first introduce the stochastic seismic inversion theory, discuss and analyze the four key parameters: seismic data signal-to-noise ratio (S/N), variogram, the posterior PDF sample number, and well density, and propose the optimum selection of these parameters. The analysis results show that seismic data S/N adjusts the compromise between the influence of the seismic data and geostatistics on the inversion results, the variogram controls the smoothness of the inversion results, the posterior PDF sample number determines the reliability of the statistical characteristics derived from the samples, and well density influences the inversion uncertainty. Finally, the comparison between the stochastic seismic inversion and the deterministic model-based seismic inversion indicates that the stochastic seismic inversion can provide more reliable information of the subsurface character.
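The MCMC sampling of a posterior PDF at the heart of this method can be illustrated on a one-parameter toy problem. The sketch below is a random-walk Metropolis sampler with made-up data, not the seismic-inversion posterior itself:

```python
import math, random

random.seed(4)

# Toy posterior: observations y_i ~ N(m, 1) with a flat prior on m, so
# the posterior over m is N(mean(y), 1/len(y)).  Made-up "data":
data = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7]

def log_post(m):
    return -0.5 * sum((y - m) ** 2 for y in data)

m, chain = 0.0, []
for i in range(50_000):
    prop = m + random.gauss(0.0, 0.5)            # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(m):
        m = prop                                  # Metropolis accept
    if i >= 5_000:                                # discard burn-in
        chain.append(m)

post_mean = sum(chain) / len(chain)               # approaches mean(data)
```

Statistics of the retained samples (mean, spread, quantiles) characterize the posterior, just as ensembles of posterior subsurface models do in the paper.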
Fast and efficient stochastic optimization for analytic continuation
NASA Astrophysics Data System (ADS)
Bao, F.; Tang, Y.; Summers, M.; Zhang, G.; Webster, C.; Scarola, V.; Maier, T. A.
2016-09-01
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. We generally find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; Tang, Yanfei; Scarola, Vito; Summers, Michael Stuart; Maier, Thomas A
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
A rigorous framework for multiscale simulation of stochastic cellular networks
Chevalier, Michael W.; El-Samad, Hana
2009-01-01
Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous time discrete state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is solvable for only the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used algorithm of this type is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties, especially when there is a vast disparity in the timescales of the reactions or in the number of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
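The SSA loop referred to in this abstract can be illustrated with a minimal sketch for a birth-death network; the reaction network and all rate values here are illustrative, not taken from the paper:

```python
import math
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Gillespie SSA for the birth-death network 0 -> X (rate k_birth),
    X -> 0 (rate k_death * x): draw an exponential waiting time from the
    total propensity, then pick one reaction with probability proportional
    to its propensity."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_birth = k_birth
        a_death = k_death * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            break                                      # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a_total   # exponential waiting time
        x += 1 if rng.random() * a_total < a_birth else -1
        trajectory.append((t, x))
    return trajectory
```

The exact tracking of every reaction event is what makes the SSA costly when propensities differ by orders of magnitude, which is the motivation for the approximate hierarchies the paper derives.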
Stochastic optimization of multireservoir systems via reinforcement learning
NASA Astrophysics Data System (ADS)
Lee, Jin-Hee; Labadie, John W.
2007-11-01
Although several variants of stochastic dynamic programming have been applied to optimal operation of multireservoir systems, they have been plagued by a high-dimensional state space and the inability to accurately incorporate the stochastic environment as characterized by temporally and spatially correlated hydrologic inflows. Reinforcement learning has emerged as an effective approach to solving sequential decision problems by combining concepts from artificial intelligence, cognitive science, and operations research. A reinforcement learning system has a mathematical foundation similar to dynamic programming and Markov decision processes, with the goal of maximizing the long-term reward or returns as conditioned on the state of the system environment and the immediate reward obtained from operational decisions. Reinforcement learning can include Monte Carlo simulation where transition probabilities and rewards are not explicitly known a priori. The Q-Learning method in reinforcement learning is demonstrated on the two-reservoir Geum River system, South Korea, and is shown to outperform implicit stochastic dynamic programming and sampling stochastic dynamic programming methods.
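The Q-learning update at the heart of the approach described above can be sketched on a toy chain environment; the environment, rewards, and hyperparameters are hypothetical stand-ins, not the Geum River system:

```python
import random

def q_learning(step, n_states, n_actions, episodes=300, alpha=0.2, gamma=0.95,
               eps=0.5, max_steps=200, seed=0):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Transition probabilities and rewards are sampled, never enumerated."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if rng.random() < eps:                       # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q

def chain_step(s, a, n=4):
    """Toy 'release decision' chain: action 1 moves toward the rewarding state."""
    s2 = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n - 1 else 0.0), s2 == n - 1
```

Because the update uses only sampled transitions, the same loop can be driven by a Monte Carlo simulation of correlated hydrologic inflows, which is what lets the method sidestep explicit transition probabilities.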
Multi-scenario modelling of uncertainty in stochastic chemical systems
Evans, R. David; Ricardez-Sandoval, Luis A.
2014-09-15
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation.
Highlights:
• A method to model uncertainty on stochastic systems was developed.
• The method is based on the Chemical Master Equation.
• Uncertainty in an isomerization reaction and a gene regulation network was modelled.
• Effects were significant and dependent on the uncertain input and reaction system.
• The model was computationally more efficient than Kinetic Monte Carlo.
Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources
Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta
2015-07-03
This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework, to determine the optimal operational schedules of residential appliances operating in the presence of renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) for representing uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The proposed model is solved using a mixed integer linear programming (MILP) solver and numerical results show the validity of the model. Case studies show the benefit of using the proposed optimization model.
Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies
Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.
2011-01-01
In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170
System Design Support by Optimization Method Using Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
We propose a new optimization method based on a stochastic process. Its characteristic feature is that the approximate optimum is obtained as an expected value. In the numerical calculation, a kind of Monte Carlo method is used to obtain the solution because of the stochastic process. The method also yields the probability distribution of the design variables, because they are generated with probability proportional to the evaluation function value. This probability distribution shows the influence of the design variables on the evaluation function value and is therefore very useful information for system design. In this paper, we show that the proposed method is useful not only for optimization but also for system design. The flight trajectory optimization problem for a hang-glider is given as an example of the numerical calculation.
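The core idea above, generating design variables with probability proportional to the evaluation function value and reading off the optimum as an expected value, can be sketched with rejection sampling. The Boltzmann-type exponential weighting, the temperature, and the test function are assumptions for illustration; the abstract does not specify the exact sampling rule:

```python
import math
import random

def sample_proportional(f, bounds, n_samples=5000, temperature=0.05, seed=0):
    """Draw design variables x with acceptance probability proportional to
    exp(f(x) / T), a Boltzmann-type weighting (an illustrative assumption).
    The sample mean approximates the optimizer, and the sample variance
    measures how strongly the variable influences the evaluation function."""
    rng = random.Random(seed)
    lo, hi = bounds
    # crude upper bound on f for rejection sampling, from a coarse grid
    f_max = max(f(lo + (hi - lo) * i / 1000.0) for i in range(1001))
    accepted = []
    while len(accepted) < n_samples:
        x = rng.uniform(lo, hi)
        if rng.random() < math.exp((f(x) - f_max) / temperature):
            accepted.append(x)
    mean = sum(accepted) / len(accepted)
    var = sum((x - mean) ** 2 for x in accepted) / len(accepted)
    return mean, var

# Example: f(x) = -(x - 1)^2 has its optimum at x = 1.
```

A small variance signals a design variable that tightly constrains the objective, which is the kind of design-support information the paper emphasizes.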
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
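One classical variance reduction technique of the kind surveyed above is antithetic variates; a minimal sketch on a scalar integrand (the integrand is illustrative, not a corrector problem):

```python
import math
import random

def mc_plain(f, n, rng):
    """Plain Monte Carlo samples of f(U), U ~ Uniform(0, 1)."""
    return [f(rng.random()) for _ in range(n)]

def mc_antithetic(f, n, rng):
    """Antithetic variates: average each draw f(u) with f(1 - u); for a
    monotone f the pair is negatively correlated, which cuts the variance
    of the empirical average at the same sampling cost."""
    out = []
    for _ in range(n // 2):
        u = rng.random()
        out.append(0.5 * (f(u) + f(1.0 - u)))
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```

Estimating E[exp(U)] = e - 1 with both estimators at equal cost shows the variance drop; in stochastic homogenization the same pairing idea is applied to correlated configurations of the random medium.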
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.
Stochastic analysis of complex reaction networks using binomial moment equations.
Barzel, Baruch; Biham, Ofer
2012-09-01
The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.
Stochastic Physicochemical Dynamics
NASA Astrophysics Data System (ADS)
Tsekov, R.
2001-02-01
Thermodynamic Relaxation in Quantum Systems: A new approach to quantum Markov processes is developed and the corresponding Fokker-Planck equation is derived. The latter is examined to reproduce known results from classical and quantum physics. It was also applied to the phase-space description of a mechanical system, thus leading to a new treatment of this problem different from the Wigner presentation. The equilibrium probability density obtained in the mixed coordinate-momentum space is a reasonable extension of the Gibbs canonical distribution. The validity of the Einstein fluctuation-dissipation relation is discussed with respect to the type of relaxation in an isothermal system. The first model, presuming isothermal fluctuations, leads to the Einstein formula. The second model supposes adiabatic fluctuations and yields another relation between the diffusion coefficient and mobility of a Brownian particle. A new approach to relaxations in quantum systems is also proposed that demonstrates applicability only of the adiabatic model for description of the quantum Brownian dynamics. Stochastic Dynamics of Gas Molecules: A stochastic Langevin equation is derived, describing the thermal motion of a molecule immersed in a fluid of identical molecules at rest. The fluctuation-dissipation theorem is proved and a number of correlation characteristics of the molecular Brownian motion are obtained. A short review of the classical theory of Brownian motion is presented. A new method is proposed for derivation of the Fokker-Planck equations, describing the probability density evolution, from stochastic differential equations. It is also proven via the central limit theorem that the white noise is only Gaussian. The applicability of stochastic differential equations to thermodynamics is considered and a new form, different from the classical Ito and Stratonovich forms, is introduced. It is shown that the new presentation is more appropriate for the description of thermodynamic
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Renormalization group and perfect operators for stochastic differential equations.
Hou, Q; Goldenfeld, N; McKane, A
2001-03-01
We develop renormalization group (RG) methods for solving partial and stochastic differential equations on coarse meshes. RG transformations are used to calculate the precise effect of small-scale dynamics on the dynamics at the mesh size. The fixed point of these transformations yields a perfect operator: an exact representation of physical observables on the mesh scale with minimal lattice artifacts. We apply the formalism to simple nonlinear models of critical dynamics, and show how the method leads to an improvement in the computational performance of Monte Carlo methods.
Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems
NASA Astrophysics Data System (ADS)
Endo, Eishin; Toga, Yuta; Sasaki, Munetaka
2015-07-01
We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 102 times by parallel computation with 288 processors.
Stochastic FDTD accuracy improvement through correlation coefficient estimation
NASA Astrophysics Data System (ADS)
Masumnia Bisheh, Khadijeh; Zakeri Gatabi, Bijan; Andargoli, Seyed Mehdi Hosseini
2015-04-01
This paper introduces a new scheme to improve the accuracy of the stochastic finite difference time domain (S-FDTD) method. S-FDTD, reported recently by Smith and Furse, calculates the variations in the electromagnetic fields caused by variability or uncertainty in the electrical properties of the materials in the model. The accuracy of the S-FDTD method is controlled by the approximations for correlation coefficients between the electrical properties of the materials in the model and the fields propagating in them. In this paper, new approximations for these correlation coefficients are obtained using a Monte Carlo method with a small number of runs; we term these the Monte Carlo correlation coefficients (MC-CC). Numerical results for two bioelectromagnetic simulation examples demonstrate that MC-CC can improve the accuracy of the S-FDTD method and yield more accurate results than previous approximations.
Stochastic thermodynamics of resetting
NASA Astrophysics Data System (ADS)
Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo
2016-03-01
Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
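The non-equilibrium steady state referred to above can be probed with a minimal simulation of free diffusion with Poissonian resetting to the origin; the diffusion constant, resetting rate, and the check against the known Laplace-shaped steady state (second moment 2D/r) are illustrative, not quantities from this paper:

```python
import math
import random

def resetting_second_moment(D=0.5, r=1.0, dt=1e-3, n_steps=1_000_000, seed=2):
    """Euler scheme for free diffusion with stochastic resetting: in each
    step the particle resets to the origin with probability r * dt,
    otherwise it takes a Gaussian diffusive step.  Returns the time-averaged
    second moment <x^2>, which for the Laplace steady state equals 2 * D / r."""
    rng = random.Random(seed)
    x, second_moment = 0.0, 0.0
    for _ in range(n_steps):
        if rng.random() < r * dt:
            x = 0.0                                    # reset event
        else:
            x += math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        second_moment += x * x
    return second_moment / n_steps
```

Each reset discards the particle's accumulated displacement, which is the information erasure whose thermodynamic cost the paper bounds via Landauer's principle.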
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.
Stochastic processes in cosmology
NASA Astrophysics Data System (ADS)
Cáceres, Manuel O.; Diaz, Mario C.; Pullin, Jorge A.
1987-08-01
The behavior of a radiation filled de Sitter universe in which the equation of state is perturbed by a stochastic term is studied. The corresponding two-dimensional Fokker-Planck equation is solved. The finiteness of the cosmological constant appears to be a necessary condition for the stability of the model which undergoes an exponentially expanding state. Present address: Facultad de Matemática Astronomía y Física, Universidad Nacional de Córdoba, Laprida 854, 5000 Córdoba, Argentina.
NASA Astrophysics Data System (ADS)
Hairer, Martin
2006-03-01
We consider a class of parabolic stochastic PDEs driven by white noise in time, and we are interested in showing ergodicity for some cases where the noise is degenerate, i.e., acts only on part of the equation. In some cases where the standard Strong Feller / Irreducibility argument fails, one can nevertheless implement a coupling construction that ensures uniqueness of the invariant measure. We focus on the example of the complex Ginzburg-Landau equation driven by real space-time white noise.
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Stochastic simulation of transport phenomena
Wedgewood, L.E.; Geurts, K.R.
1995-10-01
In this paper, four examples are given to demonstrate how stochastic simulations can be used as a method to obtain numerical solutions to transport problems. The problems considered are two-dimensional heat conduction, mass diffusion with reaction, the start-up of Poiseuille flow, and Couette flow of a suspension of Hookean dumbbells. The first three examples are standard problems with well-known analytic solutions which can be used to verify the results of the stochastic simulation. The fourth example combines a Brownian dynamics simulation for Hookean dumbbells, a crude model of a dilute polymer suspension, and a stochastic simulation for the suspending, Newtonian fluid. These examples illustrate appropriate methods for handling source/sink terms and initial and boundary conditions. The stochastic simulation results compare well with the analytic solutions and other numerical solutions. The goal of this paper is to demonstrate the wide applicability of stochastic simulation as a numerical method for transport problems.
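The first example above, steady heat conduction, admits a classic stochastic treatment that can be sketched in a few lines: a random walk started at an interior point scores the temperature of the boundary it first reaches. This is a generic sketch of the idea, not the paper's specific scheme; the grid, boundary values, and symmetry check are illustrative:

```python
import random

def centre_temperature(n=10, trials=20000, seed=3):
    """Monte Carlo solution of the steady heat equation on a square plate:
    random walkers started at the centre of an n x n grid wander until they
    hit a boundary and score the temperature there (T = 1 on the top edge,
    T = 0 on the other three).  By symmetry the centre temperature is 0.25."""
    rng = random.Random(seed)
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    total = 0.0
    for _ in range(trials):
        i = j = n // 2
        while 0 < i < n and 0 < j < n:
            di, dj = rng.choice(moves)
            i, j = i + di, j + dj
        if i == n:                  # absorbed on the hot (top) edge
            total += 1.0
    return total / trials
```

The boundary-absorption bookkeeping here is exactly the kind of boundary-condition handling the paper discusses for its transport examples.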
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
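The reformulation above, with the dynamics driven by one independent Poisson stream per reaction channel, can be sketched for a birth-death model; the tau-leap integration and all parameter values are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def birth_death_channels(k_birth=10.0, k_death=1.0, tau=0.01, n_steps=50_000):
    """Birth-death dynamics driven by an independent Poisson stream per
    reaction channel, the separation that underlies the variance
    decomposition, integrated approximately by tau-leaping.  Returns the
    time-averaged population; the stationary mean is k_birth / k_death."""
    birth_rng = random.Random(10)    # noise source for the birth channel
    death_rng = random.Random(11)    # noise source for the death channel
    x, total = 0, 0
    for _ in range(n_steps):
        x += poisson_sample(birth_rng, k_birth * tau)
        x -= poisson_sample(death_rng, k_death * x * tau)
        x = max(x, 0)
        total += x
    return total / n_steps
```

Because each channel owns its own noise stream, one can freeze or resample streams individually, which is what makes the Sobol-Hoeffding variance attribution to channels and channel interactions possible.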
Alternative fuels include gaseous fuels such as hydrogen, natural gas, and propane; alcohols such as ethanol, methanol, and butanol; vegetable and waste-derived oils; and electricity.
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-387, 10 June 2003
This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.
Christiansen, David W.; Karnesky, Richard A.; Leggett, Robert D.; Baker, Ronald B.
1989-01-01
A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.
Stochastic Evaluation of Riparian Vegetation Dynamics in River Channels
NASA Astrophysics Data System (ADS)
Miyamoto, H.; Kimura, R.; Toshimori, N.
2013-12-01
Vegetation overgrowth in sand bars and floodplains has been a serious problem for river management in Japan. From the viewpoints of flood control and ecological conservation, it would be necessary to accurately predict the vegetation dynamics for a long period of time. In this study, we have developed a stochastic model for predicting the dynamics of trees in floodplains with emphasis on the interaction with flood impacts. The model consists of the following four processes in coupling ecohydrology with biogeomorphology: (i) stochastic behavior of flow discharge, (ii) hydrodynamics in a channel with vegetation, (iii) variation of riverbed topography and (iv) vegetation dynamics on the floodplain. In the model, the flood discharge is stochastically simulated using a Poisson process, one of the conventional approaches in hydrological time-series generation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. To determine the model parameters, vegetation conditions have been observed mainly before and after flood impacts since 2008 at a field site located between 23.2-24.0 km from the river mouth in Kako River, Japan. This site is one of the vegetation overgrowth locations in Kako River floodplains, where the predominant tree species are willows and bamboos. In this presentation, sensitivity of the vegetation overgrowth tendency is investigated in Kako River channels. Through the Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of the changes of discharge magnitude and channel geomorphology. The expectation and standard deviation of vegetation areal ratio are compared in the different channel cross sections for different river discharges and relative floodplain heights. The result shows that the vegetation status changes sensitively in the channels with larger discharge and insensitive in the lower floodplain
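The Poisson-process discharge generation step described above can be sketched as a marked point process; the flood rate and the exponential peak-discharge marks are illustrative assumptions, not values fitted to the Kako River:

```python
import random

def flood_series(rate_per_year=2.0, years=100.0, mean_peak=500.0, seed=4):
    """Flood arrivals as a homogeneous Poisson process (exponential
    inter-arrival times) with exponentially distributed peak discharges.
    Returns a list of (time_in_years, peak_discharge) events suitable for
    driving a vegetation mortality/regrowth model between floods."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_year)   # waiting time to the next flood
        if t > years:
            break
        events.append((t, rng.expovariate(1.0 / mean_peak)))
    return events
```

Feeding many such synthetic series through the hydrodynamic and vegetation submodels is what yields the Monte Carlo expectations and standard deviations of the vegetation areal ratio discussed above.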
Bunched beam stochastic cooling
Wei, Jie.
1992-01-01
The scaling laws for bunched-beam stochastic cooling have been derived in terms of the optimum cooling rate and the mixing condition. In the case that particles occupy the entire sinusoidal rf bucket, the optimum cooling rate of the bunched beam is shown to be similar to that predicted by the coasting-beam theory using a beam of the same average density and mixing factor. However, in the case that particles occupy only the center of the bucket, the optimum rate decreases in proportion to the ratio of the bunch area to the bucket area. The cooling efficiency can be significantly improved if the synchrotron side-band spectrum is effectively broadened, e.g., by the transverse tune spread or by using a double rf system.
Investigation of stochastic radiation transport methods in random heterogeneous mixtures
NASA Astrophysics Data System (ADS)
Reinert, Dustin Ray
Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
Bellis, P.D.; Nesselrode, F.
1991-04-16
This patent describes a fuel pump. It includes: a fuel reservoir member, the fuel reservoir member being formed with fuel chambers, the chambers comprising an inlet chamber and an outlet chamber, means to supply fuel to the inlet chamber, means to deliver fuel from the outlet chamber to a point of use, the fuel reservoir member chambers also including a bypass chamber, means interconnecting the bypass chamber with the outlet chamber; the fuel pump also comprising pump means interconnecting the inlet chamber and the outlet chamber and adapted to suck fuel from the fuel supply means into the inlet chamber, through the pump means, out the outlet chamber, and to the fuel delivery means; the bypass chamber and the pump means providing two substantially separate paths of fuel flow in the fuel reservoir member, bypass plunger means normally closing off the flow of fuel through the bypass chamber one of the substantially separate paths including the fuel supply means and the fuel delivery means when the bypass plunger means is closed, the second of the substantially separate paths including the bypass chamber when the bypass plunger means is open, and all of the chambers and the interconnecting means therebetween being configured so as to create turbulence in the flow of any fuel supplied to the outlet chamber by the pump means and bypassed through the bypass chamber and the interconnecting means.
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and with how the total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia [2], where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations [2] or purely mathematically [3]. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
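The rice-on-an-arc activity is the physical version of the standard dartboard estimator; a minimal computational sketch:

```python
import random

def estimate_pi(n_samples, seed=42):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter-circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area ratio: quarter-circle / square = pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159
```

The statistical error of the estimate shrinks like 1/sqrt(n), which is also why 100 detected antineutrinos per day suffice to constrain the much larger total flux.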
Stochastic reinforcement benefits skill acquisition.
Dayan, Eran; Averbeck, Bruno B; Richmond, Barry J; Cohen, Leonardo G
2014-02-14
Learning complex skills is driven by reinforcement, which facilitates both online within-session gains and retention of the acquired skills. Yet, in ecologically relevant situations, skills are often acquired when mapping between actions and rewarding outcomes is unknown to the learning agent, resulting in reinforcement schedules of a stochastic nature. Here we trained subjects on a visuomotor learning task, comparing reinforcement schedules with higher, lower, or no stochasticity. Training under higher levels of stochastic reinforcement benefited skill acquisition, enhancing both online gains and long-term retention. These findings indicate that the enhancing effects of reinforcement on skill acquisition depend on reinforcement schedules.
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo (QMC) methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
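As context for the Monte Carlo Power Method, the deterministic skeleton it randomizes (power iteration, plus a deflation step to expose the second eigenvalue of a transition matrix) can be sketched as follows; the 2-state Markov chain is a toy example of mine, not one from the thesis:

```python
import numpy as np

def power_method(A, n_iter=2000, seed=0):
    """Dominant eigenpair of A by repeated multiplication; the Monte Carlo
    Power Method replaces the exact products A @ v with stochastic samples."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v  # Rayleigh quotient and (approximate) eigenvector

# Row-stochastic 2-state Markov chain with eigenvalues 1 and 0.7.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lam1, v1 = power_method(P.T)              # dominant eigenvalue is 1
A_defl = P.T - lam1 * np.outer(v1, v1)    # deflate the dominant pair
lam2, _ = power_method(A_defl)            # second eigenvalue
print(lam1, lam2)
```

The second eigenvalue bounds the chain's convergence rate to its stationary distribution, which is exactly the diagnostic described above.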
Noise-induced instability in self-consistent Monte Carlo calculations
Lemons, D.S.; Lackman, J.; Jones, M.E.; Winske, D.
1995-12-01
We identify, analyze, and propose remedies for a numerical instability responsible for the growth or decay of sums that should be conserved in Monte Carlo simulations of stochastically interacting particles. "Noisy" sums with fluctuations proportional to 1/√n, where n is the number of particles in the simulation, provide feedback that drives the instability. Numerical illustrations of an energy loss or "cooling" instability in an Ornstein-Uhlenbeck process support our analysis. (c) 1995 The American Physical Society
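The 1/√n fluctuation scaling of such "noisy" sums can be illustrated directly; a sketch using unit-temperature Gaussian velocities (an illustrative assumption of mine, not the paper's setup):

```python
import random
import statistics

def mean_energy_std(n, trials=200, seed=1):
    """Spread (std across trials) of the per-particle mean energy of n
    unit-temperature particles: this is the 'noisy sum' whose 1/sqrt(n)
    fluctuations feed back into self-consistent Monte Carlo updates."""
    rng = random.Random(seed)
    means = [
        sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

s_small, s_large = mean_energy_std(100), mean_energy_std(10_000)
print(s_small / s_large)  # roughly sqrt(10000/100) = 10
```

When such a fluctuating estimate is fed back into the drag or diffusion coefficients of the simulation, the noise can be rectified into a systematic drift, the "cooling" instability analyzed in the paper.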
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice within lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ray, Laura R.
1991-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluations of the system's eigenvalues allow the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
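A minimal sketch of the procedure: sample the uncertain parameters, evaluate stability for each draw, and report the fraction unstable. The scalar system below is a hypothetical example of mine whose closed-loop eigenvalue is the sampled gain itself:

```python
import random

def probability_unstable(n_samples=20_000, seed=7):
    """Monte Carlo estimate of the probability of instability for the
    scalar system xdot = a*x with uncertain pole a ~ N(-1, 1); the system
    is unstable exactly when the sampled eigenvalue a is positive."""
    rng = random.Random(seed)
    unstable = sum(1 for _ in range(n_samples) if rng.gauss(-1.0, 1.0) > 0.0)
    return unstable / n_samples

p = probability_unstable()
print(p)  # analytically P(a > 0) = 1 - Phi(1), about 0.159
```

For a matrix system the same loop forms the closed-loop matrix for each draw and checks whether the largest real part of its eigenvalues exceeds zero; binomial confidence intervals on the count give the intervals mentioned in the abstract.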
A probabilistic graphical model approach to stochastic multiscale partial differential equations
Wan, Jiang; Zabaras, Nicholas
2013-10-01
We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em
2013-03-01
We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a ‘deterministic part’ and a ‘stochastic part’. Numerical results verify the Stratonovich–Euler and Ito–Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
Essays on the Bayesian estimation of stochastic cost frontier
NASA Astrophysics Data System (ADS)
Zhao, Xia
This dissertation consists of three essays that focus on a Bayesian estimation of stochastic cost frontiers for electric generation plants. This research gives insight into the changing development of the electric generation market and could serve to inform both private investment and public policy decisions. The main contributions to the growing literature on stochastic cost frontier analysis are to (1) Empirically estimate the possible efficiency gain of power plants due to deregulation. (2) Estimate the cost of electric power generating plants using coal as a fuel, taking into account both regularity restrictions and sulfur dioxide emissions. (3) Compare costs of plants using coal to those that use natural gas. (4) Apply the Bayesian stochastic frontier model to estimate a single cost frontier and allow firm type to vary across regulated and deregulated plants. The average group efficiency for two different types of plants is estimated. (5) Use fixed effects and random effects models on an unbalanced panel to estimate group efficiency for regulated and deregulated plants. The first essay focuses on the possible efficiency gain of 136 U.S. electric power generation coal-fired plants in 1996. Results favor the constrained model over the unconstrained model. SO2 is also included in the model to provide more accurate estimates of plant efficiency and returns to scale. The second essay focuses on the predicted costs and returns to scale of coal generation relative to natural gas generation at plants where the cost of both fuels could be obtained. It is found that, for power plants switching fuel from natural gas to coal in 1996, on average, the expected fuel cost would fall and returns to scale would increase. The third essay first uses pooled unbalanced panel data to analyze the differences in plant efficiency across plant types---regulated and deregulated. The application of a Bayesian stochastic frontier model enables us to apply different mean plant inefficiency terms by
Statistical validation of stochastic models
Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.
1996-12-31
It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.
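A percentile bootstrap, the resampling technique cited above for non-Gaussian data, can be sketched as follows; the exponential sample and the mean statistic are illustrative choices, not the paper's:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=3):
    """Percentile bootstrap confidence interval for a statistic; no
    Gaussian assumption is made about the underlying data."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2))]

rng = random.Random(0)
sample = [rng.expovariate(1.0) for _ in range(200)]  # skewed, non-Gaussian
lo, hi = bootstrap_ci(sample)
print(lo, hi)
```

Model validation then amounts to checking whether the model's predicted measure of system behavior falls inside the bootstrap interval obtained from the operational data.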
Adaptive and Optimal Control of Stochastic Dynamical Systems
2015-09-14
Explicit results have been obtained for problems of stochastic control and stochastic differential games. Stochastic linear-quadratic, continuous-time stochastic control problems are solved for systems with noise, including control problems for systems with arbitrarily correlated noise. Subject terms: adaptive control, optimal control, stochastic differential games.
The Stochastic Gradient Approximation: An application to lithium nanoclusters
NASA Astrophysics Data System (ADS)
Nissenbaum, Daniel
The Stochastic Gradient Approximation (SGA) is the natural extension of Quantum Monte Carlo (QMC) methods to the variational optimization of quantum wave function parameters. While many deterministic applications impose stochasticity, the SGA fruitfully takes advantage of the natural stochasticity already present in QMC in order to utilize a small number of QMC samples and approach the minimum more quickly by averaging out the random noise in the samples. The increasing efficiency of the method for systems with larger numbers of particles, and its nearly ideal scaling when running on parallelized processors, is evidence that the SGA is well suited for the study of nanoclusters. In this thesis, I discuss the SGA algorithm in detail. I also describe its application to both quantum dots, and to the Resonating Valence Bond wave function (RVB). The RVB is a sophisticated model of electronic systems that captures electronic correlation effects directly and that improves the nodal structure of quantum wave functions. The RVB is receiving renewed attention in the study of nanoclusters due to the fact that calculations of RVB wave functions have become feasible with recent advances in computer hardware and software.
A stochastic model for solute transport in macroporous soils
Bruggeman, A.C.; Mostaghimi, S.; Brannan, K.M.
1999-12-01
A stochastic, physically based, finite element model for simulating flow and solute transport in soils with macropores (MICMAC) was developed. The MICMAC model simulates preferential movement of water and solutes using a cylindrical macropore located in the center of a soil column. MICMAC uses Monte Carlo simulation to represent the stochastic processes inherent to the soil-water system. The model simulates a field as a collection of non-interacting soil columns. The random soil properties are assumed to be stationary in the horizontal direction and ergodic over the field. A routine for the generation of correlated, non-normal random variates was developed for MICMAC's stochastic component. The model was applied to fields located in the Nomini Creek Watershed, Virginia. Extensive field data were collected in fields under either conventional tillage or no-tillage for the evaluation of the MICMAC model. The field application suggested that the model underestimated the fast leaching of water and solutes from the root zone. However, the computed results were substantially better than the results obtained when no preferential flow component was included in the model.
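Correlated, non-normal variates of the kind mentioned can be generated NORTA-style: correlate standard normals, then push each through the normal CDF and a target inverse CDF. The exponential marginals and ρ = 0.8 below are illustrative choices of mine, not the MICMAC settings:

```python
import math
import random

def to_uniform(z):
    """Standard normal CDF, clamped away from 1 to keep -log finite."""
    return min(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))), 1.0 - 1e-12)

def correlated_exponentials(n, rho, seed=11):
    """Pairs of correlated Exp(1) variates: a 2x2 Cholesky factor induces
    correlation on standard normals, then each margin is transformed."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((-math.log(1.0 - to_uniform(z1)),
                      -math.log(1.0 - to_uniform(z2))))
    return pairs

pairs = correlated_exponentials(50_000, rho=0.8)
```

Note the monotone marginal transform preserves rank correlation but shifts the Pearson correlation slightly below the normal-scale ρ, a standard caveat of this construction.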
Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion
Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong; Lin, Guang
2014-05-30
The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed solutions; therefore, stochastic Monte Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie's theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.
A stochastic transcriptional switch model for single cell imaging data.
Hey, Kirsty L; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R E; White, Michael R H; Rand, David A; Finkenstädt, Bärbel
2015-10-01
Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth-death approximation and study its use for inference in comparison with the linear noise approximation, where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells.
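The simplest SRN of this kind, a random telegraph gene, can be simulated exactly with the Gillespie algorithm; the rates below are illustrative, not estimates from the prolactin data:

```python
import random

def gillespie_telegraph(t_end, k_on=0.1, k_off=0.1, k_tx=5.0, k_deg=1.0, seed=5):
    """Exact stochastic simulation (Gillespie) of a random telegraph gene:
    the promoter switches on/off, and mRNA is transcribed only while on.
    Returns the mRNA copy number at the end of the run."""
    rng = random.Random(seed)
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = [k_off * on, k_on * (1 - on), k_tx * on, k_deg * m]
        t += rng.expovariate(sum(rates))   # time to next reaction
        r = rng.random() * sum(rates)      # pick which reaction fired
        if r < rates[0]:
            on = 0                         # promoter switches off
        elif r < rates[0] + rates[1]:
            on = 1                         # promoter switches on
        elif r < rates[0] + rates[1] + rates[2]:
            m += 1                         # transcription event
        else:
            m -= 1                         # mRNA degradation
    return m

print(gillespie_telegraph(1000.0))
```

At stationarity the mean copy number is (k_tx/k_deg) * k_on/(k_on + k_off) = 2.5 for these rates, but the on/off switching makes expression bursty, with variance well above Poisson; it is exactly this switching signal that the reversible jump MCMC scheme infers.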
Zhang, Fan; Gao, Yan; Luo, Yazhi; Chen, Zhangyuan; Xu, Anshi
2010-04-26
We propose a stochastic bit error ratio estimation approach based on a statistical analysis of the retrieved signal phase for coherent optical QPSK systems with digital carrier phase recovery. A family of generalized exponential functions is applied to fit the probability density function of the signal samples. The method provides reasonable performance estimation in the presence of both linear and nonlinear transmission impairments, while greatly reducing the computational cost compared to Monte Carlo simulation.
Stochastic Models of Polymer Systems
2016-01-01
…algorithms for big data applications. (2) We studied stochastic dynamics of polymer systems in the mean field limit. (3) We studied noisy Hegselmann-Krause models…
Stochastic roots of growth phenomena
NASA Astrophysics Data System (ADS)
De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.
2014-05-01
We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. We therefore infer that the process itself generates the growth. This result further allows us to exploit a stochastic variational principle to take account of the self-regulation of growth through feedback of relative density variations. The conceptually well-defined framework so introduced shows its usefulness by suggesting a form of control of growth by exploiting external actions.
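A sketch of the median claim in the simplest setting, a geometric process driven by Brownian motion with time-dependent drift μ(t); the notation is mine, not the paper's:

```latex
dX_t = \mu(t)\,X_t\,dt + \sigma X_t\,dW_t
\;\Longrightarrow\;
X_t = X_0 \exp\!\Big(\textstyle\int_0^t \big(\mu(s)-\tfrac{\sigma^2}{2}\big)\,ds + \sigma W_t\Big).

% Since the median of W_t is 0 and exp is monotone,
m(t) = \operatorname{med}(X_t)
     = X_0 \exp\!\Big(\textstyle\int_0^t \big(\mu(s)-\tfrac{\sigma^2}{2}\big)\,ds\Big).

% Choosing \mu(t) - \sigma^2/2 = c\,b\,e^{-bt} (with X_0 = K e^{-c}) gives
m(t) = K \exp\!\big(-c\,e^{-bt}\big),
\qquad
\dot m = b\,m \ln\!\big(K/m\big),
% i.e. the median satisfies the Gompertz equation.
```

The check is one line: ln m(t) = ln K − c e^{−bt}, so ṁ/m = c b e^{−bt} = b(ln K − ln m) = b ln(K/m).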
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by themselves. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Stochastic models for cell motion and taxis.
Ionides, Edward L; Fang, Kathy S; Isseroff, R Rivkah; Oster, George F
2004-01-01
Certain biological experiments investigating cell motion result in time lapse video microscopy data which may be modeled using stochastic differential equations. These models suggest statistics for quantifying experimental results and testing relevant hypotheses, and carry implications for the qualitative behavior of cells and for underlying biophysical mechanisms. Directional cell motion in response to a stimulus, termed taxis, has previously been modeled at a phenomenological level using the Keller-Segel diffusion equation. The Keller-Segel model cannot distinguish certain modes of taxis, and this motivates the introduction of a richer class of models which is nevertheless still amenable to statistical analysis. A state space model formulation is used to link models proposed for cell velocity to observed data. Sequential Monte Carlo methods enable parameter estimation via maximum likelihood for a range of applicable models. One particular experimental situation, involving the effect of an electric field on cell behavior, is considered in detail. In this case, an Ornstein-Uhlenbeck model for cell velocity is found to compare favorably with a nonlinear diffusion model.
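A bootstrap particle filter, the simplest of the sequential Monte Carlo methods used for such state-space models, can be sketched for a discretized Ornstein-Uhlenbeck velocity; the parameter values are illustrative, not fitted to any cell data:

```python
import math
import random

def particle_filter(ys, a=0.8, sig_v=0.5, sig_y=0.5, n_part=500, seed=9):
    """Bootstrap particle filter for a discretized Ornstein-Uhlenbeck
    velocity model v[t] = a*v[t-1] + N(0, sig_v^2), observed with Gaussian
    noise of std sig_y. Returns filtered means and the log-likelihood
    used for maximum-likelihood parameter estimation."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    loglik, means = 0.0, []
    for y in ys:
        parts = [a * p + rng.gauss(0.0, sig_v) for p in parts]   # propagate
        ws = [math.exp(-0.5 * ((y - p) / sig_y) ** 2) for p in parts]
        tot = sum(ws)
        loglik += math.log(tot / (n_part * sig_y * math.sqrt(2 * math.pi)))
        means.append(sum(w * p for w, p in zip(ws, parts)) / tot)
        parts = rng.choices(parts, weights=ws, k=n_part)         # resample
    return means, loglik

# Synthetic observations from the same model, then filtered.
rng = random.Random(1)
v, ys = 0.0, []
for _ in range(100):
    v = 0.8 * v + rng.gauss(0.0, 0.5)
    ys.append(v + rng.gauss(0.0, 0.5))
means, loglik = particle_filter(ys)
```

Maximizing the returned log-likelihood over (a, sig_v, sig_y) yields the maximum-likelihood estimates; richer velocity models only change the propagation line.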
Parallel stochastic systems biology in the cloud.
Aldinucci, Marco; Torquati, Massimo; Spampinato, Concetto; Drocco, Maurizio; Misale, Claudia; Calcagno, Cristina; Coppo, Mario
2014-09-01
The stochastic modelling of biological systems, coupled with Monte Carlo simulation of the models, is an increasingly popular technique in bioinformatics. The simulation-analysis workflow can, however, be computationally expensive, reducing the interactivity required for model tuning. In this work, we advocate high-level software design as a vehicle for building efficient and portable parallel simulators for the cloud. In particular, the Calculus of Wrapped Compartments (CWC) simulator for systems biology, which is designed according to the FastFlow pattern-based approach, is presented and discussed. Thanks to the FastFlow framework, the CWC simulator is designed as a high-level workflow that can simulate CWC models, merge simulation results and statistically analyse them in a single parallel workflow in the cloud. To improve interactivity, successive phases are pipelined in such a way that the workflow begins to output a stream of analysis results immediately after the simulation is started. The performance and effectiveness of the CWC simulator are validated on the Amazon Elastic Compute Cloud.
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME) method, has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln KS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln KS. Next, the head h is decomposed as a perturbation expansion series h = Σ_m h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln KS. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on these deterministic coefficients. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique. Copyright 2006 by the American Geophysical Union.
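The Karhunen-Loeve expansion at the core of the KLME approach can be illustrated numerically. The sketch below assumes a 1-D exponential covariance with hypothetical parameters and generates one realization of a zero-mean Gaussian ln K field from the eigenpairs of the discretized covariance matrix; the coupling with the perturbation expansion and MODFLOW-2000 is not reproduced here:

```python
import numpy as np

def kl_expansion(n=100, length=1.0, corr_len=0.2, variance=1.0, n_terms=10, seed=0):
    """One realization of a zero-mean Gaussian random field (e.g. ln K) from a
    truncated Karhunen-Loeve expansion. The covariance is exponential,
    C(x, y) = variance * exp(-|x - y| / corr_len), discretized on n points;
    eigenpairs of the covariance matrix play the role of the KL eigenfunctions."""
    x = np.linspace(0.0, length, n)
    cov = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_terms]        # keep the largest modes
    lam, phi = eigvals[idx], eigvecs[:, idx]
    xi = np.random.default_rng(seed).standard_normal(n_terms)
    return phi @ (np.sqrt(lam) * xi)                 # sum_k sqrt(lam_k) xi_k phi_k
```

Truncating at n_terms keeps only the leading modes, which is what makes the subsequent moment equations tractable compared to full Monte Carlo sampling.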
Brennan J. M.; Blaskiewicz, M.; Mernick, K.
2012-05-20
The full 6-dimensional [x,x'; y,y'; z,z'] stochastic cooling system for RHIC was completed and operational for the FY12 Uranium-Uranium collider run. Cooling enhances the integrated luminosity of the Uranium collisions by a factor of 5, primarily by reducing the transverse emittances but also by cooling in the longitudinal plane to preserve the bunch length. The components have been deployed incrementally over the past several runs, beginning with longitudinal cooling, then cooling in the vertical planes but multiplexed between the Yellow and Blue rings, next cooling both rings simultaneously in vertical (the horizontal plane was cooled by betatron coupling), and now simultaneous horizontal cooling has been commissioned. The system operates between 5 and 9 GHz and, with 3 x 10{sup 8} Uranium ions per bunch, produces a cooling half-time of approximately 20 minutes. The ultimate emittance is determined by the balance between cooling and emittance growth from Intra-Beam Scattering. Specific details of the apparatus and mathematical techniques for calculating its performance have been published elsewhere. Here we report on the method of operation, results with beam, and comparison of results to simulations.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
NASA Astrophysics Data System (ADS)
Chushak, Yaroslav; Foy, Brent; Frazier, John
2008-03-01
At the functional level, all biological processes in cells can be represented as a series of biochemical reactions that are stochastic in nature. We have developed a software package called Biomolecular Network Simulator (BNS) that uses a stochastic approach to model and simulate complex biomolecular reaction networks. Two simulation algorithms - the exact Gillespie stochastic simulation algorithm and the approximate adaptive tau-leaping algorithm - are implemented for generating Monte Carlo trajectories that describe the evolution of a system of biochemical reactions. The software uses a combination of MATLAB and C-coded functions and is parallelized with the Message Passing Interface (MPI) library to run on multiprocessor architectures. We will present a brief description of the Biomolecular Network Simulator software along with some examples.
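The exact Gillespie direct method named above can be sketched in a few lines. This is a generic illustration on a hypothetical first-order decay reaction, not the BNS implementation:

```python
import random

def gillespie(stoich, propensity, x0, t_max, seed=0):
    """Exact Gillespie stochastic simulation algorithm (direct method).
    stoich: list of state-change vectors, one per reaction channel;
    propensity(x) -> list of reaction rates for state x."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while t < t_max:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:                      # no reaction can fire; stop
            break
        t += rng.expovariate(a0)           # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):         # choose channel j with prob a_j / a0
            acc += aj
            if r < acc:
                x = [xi + s for xi, s in zip(x, stoich[j])]
                break
        history.append((t, tuple(x)))
    return history

# Toy example with a hypothetical rate constant: decay A -> 0 at rate 0.5 * A
hist = gillespie(stoich=[(-1,)], propensity=lambda x: [0.5 * x[0]], x0=(100,), t_max=50.0)
```

Each returned trajectory is one Monte Carlo realization; ensemble statistics follow from repeated runs with different seeds, which is the part BNS parallelizes with MPI.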
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
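The two reliability estimators named above can be written down directly. The sketch below assumes hypothetical moment values in place of those produced by the SHS moment equations:

```python
import math

def fosm_reliability(mean, var, threshold):
    """First Order Second Moment estimate: failure occurs when degradation X
    exceeds the threshold; X is approximated as normal with the given moments."""
    beta = (threshold - mean) / math.sqrt(var)          # reliability index
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2)))  # Phi(beta)

def markov_lower_bound(mean, threshold):
    """Markov inequality: for nonnegative X, P(X >= L) <= E[X] / L,
    hence reliability R = P(X < L) >= 1 - E[X] / L."""
    return max(0.0, 1.0 - mean / threshold)

# Hypothetical conditional moments standing in for the SHS moment-equation output
r_fosm = fosm_reliability(mean=2.0, var=0.25, threshold=3.0)
r_low = markov_lower_bound(mean=2.0, threshold=3.0)
```

The Markov bound needs only the first moment and so is looser; FOSM uses both moments but adds a distributional assumption.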
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; the logarithmic schedule, however, is so slow that the required CPU time is rarely affordable. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
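A stripped-down illustration of annealing under a square-root cooling schedule is given below. Note that this is plain simulated annealing with T_k = t0 / sqrt(k + 1) on a hypothetical double-well objective; it omits the stochastic-approximation weight adaptation that distinguishes the full algorithm and underpins its convergence guarantee:

```python
import math
import random

def anneal_sqrt(f, x0, t0=1.0, step=0.5, n_iter=20_000, seed=0):
    """Simulated annealing with a square-root cooling schedule
    T_k = t0 / sqrt(k + 1) and Metropolis acceptance on proposals x + N(0, step)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(n_iter):
        t = t0 / math.sqrt(k + 1.0)
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy double-well objective; the global minimum lies near x = -1
best, fbest = anneal_sqrt(lambda x: (x * x - 1.0) ** 2 + 0.1 * x, x0=1.0)
```

With the fast schedule the temperature is already small after a few hundred iterations, which is exactly the regime where plain annealing can get trapped and the stochastic-approximation correction of the full algorithm matters.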
Numerical treatment of stochastic river quality models driven by colored noise
NASA Astrophysics Data System (ADS)
Stijnen, J. W.; Heemink, A. W.; Ponnambalam, K.
2003-03-01
Monte Carlo simulation is a popular method of risk and uncertainty analysis in oceanographic, atmospheric, and environmental applications. It is common practice to introduce a stochastic part to an already existing deterministic model and, after many simulations, to provide the user with statistics of the model outcome. The underlying deterministic model is often a discretization of a set of partial differential equations describing physical processes such as transport, turbulence, buoyancy effects, and continuity. Much effort is also put into deriving numerically efficient schemes for the time integration. The resulting model is often quite large and complex. In sharp contrast the stochastic extension used for Monte Carlo experiments is usually achieved by adding white noise. Unfortunately, the order of time integration in the stochastic model is reduced compared to the deterministic model because white noise is not a smooth process. Instead of completely replacing the old numerical scheme and implementing a higher-order scheme for stochastic differential equations, we suggest a different approach that is able to use existing numerical schemes. The method uses a smooth colored noise process as the driving force, resulting in a higher order of convergence. We show promising results from numerical experiments, including parametric uncertainty.
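The idea of driving an existing deterministic scheme with smooth colored noise can be sketched as follows. The example uses an exactly discretized Ornstein-Uhlenbeck process as the colored forcing of a linear test equation, advanced with a deterministic Heun step; the parameters are hypothetical:

```python
import math
import random

def integrate_with_colored_noise(t_end=10.0, dt=0.01, tau=0.5, sigma=1.0, seed=0):
    """Integrate dx/dt = -x + eta(t), where eta is Ornstein-Uhlenbeck colored
    noise with correlation time tau. eta is advanced by its exact one-step
    discretization, while x uses the deterministic second-order Heun scheme,
    which retains its order because eta is smooth compared to white noise."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    s = sigma * math.sqrt(1.0 - a * a)     # exact OU update: eta' = a*eta + s*N(0,1)
    x, eta = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        eta_new = a * eta + s * rng.gauss(0.0, 1.0)
        f0 = -x + eta                      # Heun predictor
        xp = x + dt * f0
        f1 = -xp + eta_new                 # Heun corrector
        x += 0.5 * dt * (f0 + f1)
        eta = eta_new
    return x
```

The key point matches the abstract: the existing deterministic time integrator is reused unchanged, and only the forcing term is swapped for a smooth stochastic process.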
Stochastic generation of hourly rainstorm events in Johor
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-03
Engineers and researchers in water-related studies often face the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. This paper therefore presents a Monte Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal appeared to be the best-fitting rainfall distribution, so Monte Carlo simulation based on the lognormal distribution was used. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of observed rainstorm events from 10 years and simulated rainstorm events from 30 years of rainfall records with those from the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
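The fit-then-simulate procedure can be sketched as follows. The storm depths below are hypothetical placeholders, not the Johor data, and only the lognormal depth component is illustrated:

```python
import math
import random

def fit_lognormal(data):
    """MLE for a lognormal distribution: mean and std of the log-data."""
    logs = [math.log(v) for v in data]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return mu, math.sqrt(var)

def generate_storm_depths(mu, sigma, n, seed=0):
    """Monte Carlo generation of synthetic storm depths from the fitted lognormal."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# Hypothetical observed storm depths (mm), standing in for a real record
obs = [5.2, 12.1, 3.4, 20.5, 8.8, 15.0, 6.7, 9.9]
mu, sigma = fit_lognormal(obs)
synthetic = generate_storm_depths(mu, sigma, n=1000)
```

In the paper the same idea is applied jointly to depth, duration and inter-event time, and the product-moments of the synthetic record are then checked against the full observed record.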
Segmentation of stochastic images with a stochastic random walker method.
Pätz, Torben; Preusser, Tobias
2012-05-01
We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or more general from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty onto the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining the recent developments in numerical methods for the discretization of stochastic PDEs and an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows determining the probability density function of the segmented object volume.
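The classical (deterministic-gray-value) random walker computation that this work extends can be sketched on a 1-D intensity profile; the generalized-polynomial-chaos extension to stochastic images is not reproduced here:

```python
import numpy as np

def random_walker_probs(intensity, beta=50.0):
    """Classical random walker on a 1-D image: pixel 0 and pixel n-1 are the
    two seeds. Edge weights are w = exp(-beta * grad^2); the returned vector
    gives, per pixel, the probability that a random walk started there reaches
    seed 0 before seed n-1, via the graph-Laplacian Dirichlet problem."""
    g = np.asarray(intensity, dtype=float)
    n = g.size
    w = np.exp(-beta * np.diff(g) ** 2)        # weight of edge (i, i+1)
    # Laplacian restricted to the unseeded interior pixels 1..n-2
    L = np.zeros((n - 2, n - 2))
    for i in range(1, n - 1):
        L[i - 1, i - 1] = w[i - 1] + w[i]
        if i > 1:
            L[i - 1, i - 2] = -w[i - 1]
        if i < n - 2:
            L[i - 1, i] = -w[i]
    rhs = np.zeros(n - 2)
    rhs[0] = w[0]                              # seed 0 carries probability 1
    u = np.linalg.solve(L, rhs)
    return np.concatenate(([1.0], u, [0.0]))

# Step edge: dark left half, bright right half (hypothetical gray values)
probs = random_walker_probs([0.1] * 5 + [0.9] * 5)
```

The strong gradient at the step makes the crossing edge weight tiny, so the probabilities split sharply at the edge; in the stochastic extension each weight becomes a random variable and the same linear system is solved in the polynomial chaos basis.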
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, i.e., at length and time scales between the atomic and continuum regimes. We have completed a 3-year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
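A rejection-free kinetic Monte Carlo step of the kind such codes build on can be sketched as follows. The vacancy-hopping model and its Arrhenius parameters are hypothetical, and this is not the SPPARKS API:

```python
import math
import random

def kmc_vacancy_walk(n_steps=1000, nu=1e13, e_left=0.6, e_right=0.5, kT=0.025, seed=0):
    """Rejection-free (n-fold way) kinetic Monte Carlo for a single vacancy
    hopping on a 1-D lattice with Arrhenius rates r = nu * exp(-E / kT).
    The barriers are asymmetric, so the walk drifts. Returns (site, time)."""
    rng = random.Random(seed)
    r_l = nu * math.exp(-e_left / kT)
    r_r = nu * math.exp(-e_right / kT)
    total = r_l + r_r
    site, t = 0, 0.0
    for _ in range(n_steps):
        t += rng.expovariate(total)            # physical time to the next event
        site += 1 if rng.random() < r_r / total else -1
    return site, t

site, t = kmc_vacancy_walk()
```

Every step fires an event, and simulated time advances by exponential increments set by the total rate; this is what lets KMC reach mesoscale times that molecular dynamics cannot.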
Modeling bacterial population growth from stochastic single-cell dynamics.
Alonso, Antonio A; Molina, Ignacio; Theodoropoulos, Constantinos
2014-09-01
A few bacterial cells may be sufficient to produce a food-borne illness outbreak, provided that they are capable of adapting and proliferating on a food matrix. This is why any quantitative health risk assessment policy must incorporate methods to accurately predict the growth of bacterial populations from a small number of pathogens. To this end, mathematical models have become a powerful tool. Unfortunately, at low cell concentrations, standard deterministic models fail to predict the fate of the population, essentially because the heterogeneity between individuals becomes relevant. In this work, a stochastic differential equation (SDE) model is proposed to describe variability within single-cell growth and division and to simulate population growth from a given initial number of individuals. We provide evidence of the model's ability to explain the observed distributions of times to division, including the lag time produced by the adaptation to the environment, by comparing model predictions with experiments from the literature for Escherichia coli, Listeria innocua, and Salmonella enterica. The model is shown to accurately predict experimental growth population dynamics for both small and large microbial populations. The use of stochastic models for the estimation of parameters to successfully fit experimental data is a particularly challenging problem. For instance, if Monte Carlo methods are employed to model the required distributions of times to division, the parameter estimation problem can become numerically intractable. We overcame this limitation by converting the stochastic description to a partial differential equation (backward Kolmogorov) instead, which relates to the distribution of division times. Contrary to previous stochastic formulations based on random parameters, the present model is capable of explaining the variability observed in populations that result from the growth of a small number of initial cells as well as the lack of it compared to
Shipping Cask Studies with MOX Fuel
Pavlovichev, A.M.
2001-05-17
Tasks of nuclear safety assurance for the storage and transport of fresh mixed uranium-plutonium fuel of the VVER-1000 reactor are considered in view of the introduction of 3 MOX lead test assemblies (LTAs) into the core. The precision code MCU, which implements the Monte Carlo method, is used for the calculations.
Stacking with stochastic cooling
NASA Astrophysics Data System (ADS)
Caspers, Fritz; Möhl, Dieter
2004-10-01
Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant intensity beam. Basically the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa the stack has to be efficiently 'shielded' against the high gain cooling system for the injected beam. In the antiproton accumulators with stacking ratios up to 10{sup 5} the problem is solved by radial separation of the injection and the stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where the antiproton collection and stacking was done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high gain cooling of the injected beam and the low gain stack cooling. RF-gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some considerations to the 'azimuthal' schemes.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method allows those summations to be collapsed into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and, as a numerical example, provides the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
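The collapse to a one-dimensional summation can be illustrated with Gauss-Hermite collocation for a single Gaussian uncertain parameter; the quadratic model below is a toy stand-in for the deterministic solver:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_moments(model, mu, sigma, n_nodes=12):
    """Stochastic collocation with probabilists' Gauss-Hermite quadrature:
    evaluate the deterministic model at n_nodes values of the Gaussian
    parameter xi ~ N(mu, sigma^2) and form moments as 1-D weighted sums."""
    z, w = hermegauss(n_nodes)            # nodes/weights for weight exp(-z^2/2)
    w = w / np.sqrt(2.0 * np.pi)          # normalize to a probability measure
    y = np.array([model(mu + sigma * zi) for zi in z])
    mean = np.sum(w * y)
    var = np.sum(w * (y - mean) ** 2)
    return mean, var

# Quadratic model: the exact mean of x^2 for x ~ N(1, 0.5^2) is mu^2 + sigma^2 = 1.25
m, v = collocation_moments(lambda x: x * x, mu=1.0, sigma=0.5)
```

Each collocation point is an independent deterministic solve, which is what makes the approach non-intrusive compared to the Galerkin formulation.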
Stamatakis, Michail; Vlachos, Dionisios G
2011-12-14
Well-mixed and lattice-based descriptions of stochastic chemical kinetics have been extensively used in the literature. Realizations of the corresponding stochastic processes are obtained by the Gillespie stochastic simulation algorithm and lattice kinetic Monte Carlo algorithms, respectively. However, the two frameworks have remained disconnected. We show the equivalence of these frameworks whereby the stochastic lattice kinetics reduces to effective well-mixed kinetics in the limit of fast diffusion. In the latter, the lattice structure appears implicitly, as the lumped rate of bimolecular reactions depends on the number of neighbors of a site on the lattice. Moreover, we propose a mapping between the stochastic propensities and the deterministic rates of the well-mixed vessel and lattice dynamics that illustrates the hierarchy of models and the key parameters that enable model reduction.
Distributional Monte Carlo methods for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Schrock, Christopher R.
Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L^∞(R^3) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L^∞(R^3) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.
Idaho National Laboratory - Steve Herring, Jim O'Brien, Carl Stoots
2016-07-12
Two global energy priorities today are finding environmentally friendly alternatives to fossil fuels and reducing greenhouse gases.
Idaho National Laboratory - Steve Herring, Jim O'Brien, Carl Stoots
2008-03-26
Two global energy priorities today are finding environmentally friendly alternatives to fossil fuels and reducing greenhouse gases.
NASA Astrophysics Data System (ADS)
1984-12-01
The US Department of Energy (DOE), Office of Fossil Energy, has supported and managed a fuel cell research and development (R and D) program since 1976. Responsibility for implementing DOE's fuel cell program, which includes activities related to both fuel cells and fuel cell systems, has been assigned to the Morgantown Energy Technology Center (METC) in Morgantown, West Virginia. The total United States effort of the private and public sectors in developing fuel cell technology is referred to as the National Fuel Cell Program (NFCP). The goal of the NFCP is to develop fuel cell power plants for base-load and dispersed electric utility systems, industrial cogeneration, and on-site applications. To achieve this goal, the fuel cell developers, electric and gas utilities, research institutes, and Government agencies are working together. Four organized groups are coordinating the diversified activities of the NFCP. The status of the overall program is reviewed in detail.
Stochastic simulation in systems biology
Székely, Tamás; Burrage, Kevin
2014-01-01
Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest. PMID:25505503
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method, and concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
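The piecewise-linear approximation idea can be illustrated as a control variate for estimating the mean of an expensive function. This is a simplified sketch with a hypothetical integrand and uniform sampling, rather than the importance-sampling recourse-function setting of the dissertation:

```python
import math
import random

def control_variate_mean(f, g, g_mean, xs):
    """Estimate E[f(X)] using a cheap approximation g as a control variate:
    E[f] = E[f - g] + E[g]; when g tracks f, the residual f - g has small
    variance, so few samples of the expensive f are needed."""
    n = len(xs)
    return sum(f(x) - g(x) for x in xs) / n + g_mean

# "Expensive" function and its piecewise-linear surrogate on [0, 4] (hypothetical)
f = lambda x: math.exp(-x) * math.sin(x) + x
knots = [0.0, 1.0, 2.0, 3.0, 4.0]
vals = [f(k) for k in knots]

def g(x):
    i = min(int(x), 3)                     # linear interpolation between knots
    frac = x - i
    return vals[i] * (1 - frac) + vals[i + 1] * frac

rng = random.Random(0)
xs = [rng.uniform(0.0, 4.0) for _ in range(200)]
# mean of g under U(0, 4), computed cheaply in closed form (trapezoid areas / 4)
g_mean = sum(0.5 * (vals[i] + vals[i + 1]) for i in range(4)) / 4.0
est = control_variate_mean(f, g, g_mean, xs)
```

The estimator remains unbiased because the exact mean of the surrogate is added back; only the small residual is sampled, which is the source of the variance reduction.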
Application of stochastic radiative transfer to remote sensing of vegetation
NASA Astrophysics Data System (ADS)
Shabanov, Nikolay V.
2002-01-01
The availability of high quality remote sensing data during the past decade provides an impetus for the development of methods that facilitate accurate retrieval of structural and optical properties of vegetation required for the study of global vegetation dynamics. Empirical and statistical methods have proven to be quite useful in many applications, but they often do not shed light on the underlying physical processes. Approaches based on radiative transfer and the physics of matter-energy interaction are therefore required to gain insight into the mechanisms responsible for signal generation. The goal of this dissertation is the development of advanced methods based on radiative transfer for the retrieval of biophysical information from satellite data. Classical radiative transfer theory is applicable to homogeneous vegetation and is generally inaccurate in characterizing the radiation regime in natural vegetation communities, such as forests or woodlands. A stochastic approach to radiative transfer was introduced in this dissertation to describe the radiation regime in discontinuous vegetation canopies. The resulting stochastic model was implemented and tested with field data and Monte Carlo simulations. The effect of gaps on radiation fluxes in vegetation canopies was quantified analytically and compared to classical representations. Next, the stochastic theory was applied to vegetation remote sensing in two case studies. First, the radiative transfer principles underlying an algorithm for leaf area index (LAI) retrieval were studied with data from Harvard Forest. The classical expression for uncollided radiation was modified according to stochastic principles to explain radiometric measurements and vegetation structure. In the second case study, vegetation dynamics in the northern latitudes inferred from the Pathfinder Advanced Very High-Resolution Radiometer Land data were investigated. The signatures of interannual and seasonal variation recorded in the
2005-10-04
[Extraction residue from briefing slides; recoverable topics: tactical ground mobility and operational reach (not aircraft, ships, or troops), technologies for reducing fuel consumption, hybrid electric vehicles, fuel management during combat operations, and energy fundamentals such as energy density, petroleum use, and tactical wheeled vehicle (TWV) fuel usage and operating tempo.]
ERIC Educational Resources Information Center
Crank, Ron
This instructional unit is one of 10 developed by students on various energy-related areas; this one deals specifically with fossil fuels. Some topics covered are historic facts, development of fuels, history of oil production, current and future trends of the oil industry, refining fossil fuels, and environmental problems. Material in each unit may…
Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An
2014-11-01
A stochastic process was developed to simulate the stepwise debromination pathways of polybrominated diphenyl ethers (PBDEs). The process uses an analogue Markov chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles, with the acceptance or rejection of randomly drawn stepwise debromination reactions determined by a maximum likelihood function. Experimental observations at certain time points serve as target profiles; the stochastic process is therefore capable of representing the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated using the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Model simulations suggested inferences that were not obvious from the experimental data. For example, BDE206 accumulates to a much higher level than BDE207 during the first 30 min of sunlight exposure, yet the simulations suggest that BDE206 and BDE207 have comparable yields from BDE209; the higher BDE206 level arises because BDE207 is depleted most rapidly in producing octa-BDE products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was found to be more efficient and robust. Because it requires only experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or their joint effects in natural environments.
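The accept/reject logic described above can be sketched generically: candidate reaction parameters are drawn at random and kept according to a likelihood against target profiles. Everything below (the Gaussian likelihood, the `simulate` and `propose` callables, all parameter values) is an illustrative stand-in, not the authors' AMCMC model:

```python
import math
import random

def metropolis_fit(target, simulate, propose, theta0, n_iter=2000, sigma=0.05, seed=0):
    """Generic Metropolis-style accept/reject loop in the spirit of the
    AMCMC pathway search: candidate parameters are drawn at random and kept
    when they (plausibly) improve the likelihood of matching the target
    profile.  The Gaussian likelihood is an assumed stand-in."""
    def log_like(theta):
        pred = simulate(theta)
        return -sum((p - t) ** 2 for p, t in zip(pred, target)) / (2 * sigma ** 2)

    rng = random.Random(seed)
    theta, ll = theta0, log_like(theta0)
    for _ in range(n_iter):
        cand = propose(theta, rng)
        ll_cand = log_like(cand)
        # accept with probability min(1, exp(ll_cand - ll))
        if math.log(max(rng.random(), 1e-300)) < ll_cand - ll:
            theta, ll = cand, ll_cand
    return theta
```

For instance, with `simulate` returning a linear profile in one rate parameter, the loop recovers the rate that generated the target profile.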
Monte Carlo Methods in ICF (LIRPP Vol. 13)
NASA Astrophysics Data System (ADS)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, and its efficiency can be improved significantly by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
Intrinsic optimization using stochastic nanomagnets
NASA Astrophysics Data System (ADS)
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-03-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets.
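In software, the role played by the nanomagnet network can be imitated by Metropolis annealing of ±1 spins on a generalized Ising energy. A toy sketch (the coupling matrix, cooling schedule, and final greedy quench are illustrative; the hardware described above lets device physics perform this search at GHz rates):

```python
import math
import random

def anneal_ising(J, n_sweeps=500, t_hot=5.0, t_cold=0.05, seed=3):
    """Metropolis annealing of +/-1 'nanomagnet' states for an Ising energy
    E(s) = sum_{i<j} J[i][j] * s_i * s_j, with J symmetric.  A software
    stand-in for the hardware annealer discussed in the abstract."""
    n = len(J)
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def delta_e(i):
        # energy change from flipping spin i
        return -2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)

    for sweep in range(n_sweeps):
        # geometric cooling from t_hot down to t_cold
        t = t_hot * (t_cold / t_hot) ** (sweep / (n_sweeps - 1))
        for i in range(n):
            de = delta_e(i)
            if de <= 0 or rng.random() < math.exp(-de / t):
                s[i] = -s[i]
    # final greedy quench: keep only strictly improving flips
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if delta_e(i) < 0:
                s[i] = -s[i]
                improved = True
    return s
```

On a frustrated antiferromagnetic triangle the routine settles into one of the degenerate ground states with one unsatisfied bond.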
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Stochastic excitation of stellar oscillations
NASA Astrophysics Data System (ADS)
Samadi, Reza
2001-05-01
For more than thirty years, solar oscillations have been thought to be excited stochastically by the turbulent motions in the solar convective zone. It is currently believed that oscillations of stars of less than 2 solar masses - which possess an outer convective zone - are excited stochastically by turbulent convection in their outer layers. Provided that accurate measurements of the oscillation amplitudes and damping rates are available, it is possible to evaluate the power injected into the modes and thus - by comparison with the observations - to constrain current theories. A recent theoretical work (Samadi & Goupil, 2001; Samadi et al., 2001) supplements and reinforces the theory of stochastic excitation of stellar oscillations, generalizing the process to a global description of the turbulent state of the convective zone. The comparison between observation and theory, thus generalized, will allow better knowledge of the turbulent spectra of stars, in particular thanks to the COROT mission.
Principal axes for stochastic dynamics
NASA Astrophysics Data System (ADS)
Vasconcelos, V. V.; Raischel, F.; Haase, M.; Peinke, J.; Wächter, M.; Lind, P. G.; Kleinhans, D.
2011-09-01
We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf bifurcation. We argue that computing the eigenvectors associated with the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated with the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.
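The core computation, diagonalizing a local diffusion matrix and reading off the eigendirection of minimum stochastic forcing, can be sketched as follows (estimating D from data at each mesh point is a separate step not shown; the example matrix is illustrative):

```python
import numpy as np

def stochastic_eigendirections(D):
    """Eigen-decomposition of a local diffusion matrix D (symmetric,
    positive semi-definite).  Returns eigenvalues in ascending order and the
    matching orthonormal eigenvectors as columns; the first column is the
    direction of minimum stochastic forcing at that mesh point."""
    w, v = np.linalg.eigh(D)   # eigh guarantees ascending eigenvalue order
    return w, v
```

Repeating this at every mesh point of the phase space yields the vector field of stochastic eigendirections described in the abstract.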
Stochastic determination of matrix determinants.
Dorn, Sebastian; Ensslin, Torsten A
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, the linear operations (matrices) acting on the data are often not accessible directly but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there has been no analogous stochastic estimate for the determinant. We introduce a probing method for the logarithm of the determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
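The probing idea can be illustrated with Rademacher vectors and a series expansion of the logarithm via log det(A) = tr(log A). Note the paper itself uses an integral representation; this is a simplified stand-in that assumes a symmetric positive-definite operator whose eigenvalues, after scaling by an assumed constant c, lie in (0, 2):

```python
import numpy as np

def hutchinson_logdet(matvec, n, c, n_probe=200, n_terms=60, seed=0):
    """Stochastic estimate of log det(A) = tr(log A) for a symmetric
    positive-definite operator available only through matrix-vector
    products.  Uses Rademacher probing (Hutchinson) with the series
    log(A) = log(c) I + sum_k (-1)^(k+1)/k * (A/c - I)^k, valid when the
    eigenvalues of A/c lie in (0, 2)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        v = z.copy()
        acc = 0.0
        for k in range(1, n_terms + 1):
            v = matvec(v) / c - v             # v <- (A/c - I) v
            acc += (-1) ** (k + 1) / k * (z @ v)
        # z^T log(A) z = n log(c) + series; average over probes
        total += (n * np.log(c) + acc) / n_probe
    return total
```

Only matvec calls touch A, which is the point: the operator may exist solely as a computer routine.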
Fingerprints of determinism in an apparently stochastic corrosion process.
Rivera, M; Uruchurtu-Chavarín, J; Parmananda, P
2003-05-02
We detect hints of determinism in an apparently stochastic corrosion problem. This experimental system has industrial relevance, as it mimics the corrosion processes of pipelines transporting water, hydrocarbons, or other fuels to remote destinations. We subject this autonomous system to external periodic perturbations; keeping the amplitude of the superimposed perturbations constant and varying the frequency, we analyze the system's response. It reveals the presence of an optimal forcing frequency for which maximal response is achieved. These results are consistent with those for a deterministic system and indicate a classical resonance between the forcing signal and the autonomous dynamics. Numerical studies using a generic corrosion model are carried out to complement the experimental findings.
NASA Technical Reports Server (NTRS)
Lacksonen, Thomas A.
1994-01-01
Small space flight project design at NASA Langley Research Center goes through a multi-phase process from preliminary analysis to flight operations. The process ensures that each system achieves its technical objectives with demonstrated quality and within planned budgets and schedules. A key technical component of early phases is decision analysis, which is a structured procedure for determining the best of a number of feasible concepts based upon project objectives. Feasible system concepts are generated by the designers and analyzed for schedule, cost, risk, and technical measures. Each performance measure value is normalized between the best and worst values, and a weighted average score of all measures is calculated for each concept. The concept(s) with the highest scores are retained, while others are eliminated from further analysis. This project automated and enhanced the decision analysis process. Automation of the decision analysis process was done by creating a user-friendly, menu-driven, spreadsheet-macro-based decision analysis software program. The program contains data entry dialog boxes, automated data and output report generation, and automated output chart generation. The enhancements to the decision analysis process permit stochastic data entry and analysis. Rather than entering single measure values, the designers enter the range and most likely value for each measure and concept. The data can be entered at the system or subsystem level. System level data can be calculated as either sum, maximum, or product functions of the subsystem data. For each concept, the probability distributions are approximated for each measure and for the total score as either constant, triangular, normal, or log-normal distributions. Based on these distributions, formulas are derived for the probability that the concept meets any given constraint, the probability that the concept meets all constraints, and the probability that the concept is within a given
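The deterministic normalization-and-weighting step described above can be sketched as follows, assuming all measures are oriented so that larger is better; the measure names, weights, and values are illustrative:

```python
def decision_scores(measures, weights):
    """Normalize each performance measure between the worst and best values
    across concepts, then form a weighted average score per concept.
    `measures` maps measure name -> list of values (one per concept),
    assumed oriented so that higher is better; `weights` maps measure
    name -> weight."""
    names = list(measures)
    n_concepts = len(measures[names[0]])
    scores = [0.0] * n_concepts
    for name in names:
        vals = measures[name]
        lo, hi = min(vals), max(vals)
        span = hi - lo or 1.0          # degenerate measure contributes zero
        for i, v in enumerate(vals):
            scores[i] += weights[name] * (v - lo) / span
    return scores
```

The stochastic enhancement replaces each value with a (constant, triangular, normal, or log-normal) distribution and propagates it through the same score.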
NASA Astrophysics Data System (ADS)
Miyamoto, Hitoshi; Kimura, Ryo
2016-09-01
This paper proposes a stochastic evaluation method for examining tree population states in a river cross section using an integrated model with Monte Carlo simulation. The integrated model consists of four processes as submodels, i.e., tree population dynamics, flow discharge stochasticity, stream hydraulics, and channel geomorphology. A floodplain of the Kako River in Japan was examined as a test site; it is currently well vegetated and features many willows that have been growing in both individual size and overall population over the last several decades. The model was used to stochastically evaluate the effects of hydrologic and geomorphologic changes on tree population dynamics through the Monte Carlo simulation. The examined effects, including the magnitude of flood impacts and the relative change in the floodplain level, are explored using very simple scenarios for flow regulation, climate change, and channel form changes. The stochastic evaluation method revealed a tradeoff point in floodplain levels, at which the tendency toward a fully vegetated state switches to that toward a bare floodplain under small flood impacts. It is concluded from these results that the states of tree population in a floodplain can be determined by the mutual interactions among flood impacts, seedling recruitment, tree growth, and channel geomorphology. These interactions make it difficult to obtain a basic understanding of tree population dynamics from a field study of a specific floodplain. The stochastic approach used in this paper could constitute an effective method for evaluating fundamental channel characteristics for a vegetated floodplain.
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
Partial ASL extensions for stochastic programming.
Gay, David
2010-03-31
Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.
NASA Technical Reports Server (NTRS)
Grobman, J. S.; Butze, H. F.; Friedman, R.; Antoine, A. C.; Reynolds, T. W.
1977-01-01
Potential problems related to the use of alternative aviation turbine fuels are discussed and both ongoing and required research into these fuels is described. This discussion is limited to aviation turbine fuels composed of liquid hydrocarbons. The advantages and disadvantages of the various solutions to the problems are summarized. The first solution is to continue to develop the necessary technology at the refinery to produce specification jet fuels regardless of the crude source. The second solution is to minimize energy consumption at the refinery and keep fuel costs down by relaxing specifications.
Bar shapes and orbital stochasticity
Athanassoula, E.
1990-06-01
Several independent lines of evidence suggest that the isophotes or isodensities of bars in barred galaxies are not really elliptical in shape but more rectangular. The effect this might have on the orbits in two different types of bar potentials is studied, and it is found that in both cases the percentage of stochastic orbits is much larger when the shapes are more rectangular-like or, equivalently, when the m = 4 components are more important. This can be understood with the help of the Chirikov criterion, which can predict the limit for the onset of global stochasticity. 9 refs.
Stochastic Kinetics of Nascent RNA
NASA Astrophysics Data System (ADS)
Xu, Heng; Skinner, Samuel O.; Sokac, Anna Marie; Golding, Ido
2016-09-01
The stochastic kinetics of transcription is typically inferred from the distribution of RNA numbers in individual cells. However, cellular RNA reflects additional processes downstream of transcription, hampering this analysis. In contrast, nascent (actively transcribed) RNA closely reflects the kinetics of transcription. We present a theoretical model for the stochastic kinetics of nascent RNA, which we solve to obtain the probability distribution of nascent RNA per gene. The model allows us to evaluate the kinetic parameters of transcription from single-cell measurements of nascent RNA. The model also predicts surprising discontinuities in the distribution of nascent RNA, a feature which we verify experimentally.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential application of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate, and the distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, one can always settle for an efficient seed generator feeding a faster algorithmic random number generator, or create a buffer.
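A variational Monte Carlo test of the kind mentioned, for the 1D harmonic oscillator, can be sketched as follows (units hbar = m = omega = 1; the step size and sample count are illustrative, and an ordinary seeded generator stands in for the neutron-monitor random numbers):

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=7):
    """Variational Monte Carlo for the 1D harmonic oscillator
    H = -1/2 d^2/dx^2 + 1/2 x^2 with trial wavefunction
    psi(x) = exp(-alpha x^2).  Metropolis sampling of |psi|^2; the local
    energy is E_L(x) = alpha + x^2 (1/2 - 2 alpha^2).  At alpha = 1/2 the
    trial function is exact, giving E = 1/2 with zero variance."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + step * (2.0 * rng.random() - 1.0)
        # Metropolis acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps
```

The zero-variance property at the exact alpha makes this a sensitive probe: poor-quality random numbers show up as bias or excess variance at nearby alpha values.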
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is also typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
Kinetic Monte Carlo simulation of the classical nucleation process
NASA Astrophysics Data System (ADS)
Filipponi, A.; Giammatteo, P.
2016-12-01
We implemented a kinetic Monte Carlo computer simulation of the nucleation process in the framework of the coarse-grained scenario of Classical Nucleation Theory (CNT). The computational approach is efficient for a wide range of temperatures and sample sizes and provides a reliable simulation of the stochastic process. The results for the nucleation rate are in agreement with the CNT predictions based on the stationary solution of the set of differential equations for the continuous variables representing the average population distribution of nuclei size. Time-dependent nucleation behavior can also be simulated, with results in agreement with previous approaches. The method, here established for the case in which the excess free energy of a crystalline nucleus is a smooth function of the size, can be particularly useful when more complex descriptions are required.
Applying diffusion-based Markov chain Monte Carlo
Paul, Rajib; Berliner, L. Mark
2017-01-01
We examine the performance of a strategy for Markov chain Monte Carlo (MCMC) developed by simulating a discrete approximation to a stochastic differential equation (SDE). We refer to the approach as diffusion MCMC. A variety of motivations for the approach are reviewed in the context of Bayesian analysis. In particular, implementation of diffusion MCMC is very simple to set up, even in the presence of nonlinear models and non-conjugate priors. Also, it requires comparatively little problem-specific tuning. We implement the algorithm and assess its performance for both a test case and a glaciological application. Our results demonstrate that in some settings, diffusion MCMC is a faster alternative to a general Metropolis-Hastings algorithm. PMID:28301529
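The idea of building MCMC from a discretized SDE can be illustrated with the unadjusted Langevin algorithm; the paper's scheme and tuning differ in detail, and this sketch omits the Metropolis correction that would remove the discretization bias:

```python
import math
import random

def langevin_mcmc(grad_logpi, x0, eps, n_steps, seed=0):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of the
    diffusion dX = 0.5 * grad log pi(X) dt + dW, whose continuous-time
    limit leaves the target density pi invariant.  A minimal sketch of
    'diffusion MCMC' for a scalar state."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # drift step toward high density plus Gaussian diffusion step
        x = x + 0.5 * eps * grad_logpi(x) + math.sqrt(eps) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples
```

For a standard normal target, grad log pi(x) = -x and the chain equilibrates near zero mean and unit variance (up to O(eps) discretization bias).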
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
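The MLMC idea, paired coarse/fine paths sharing the same Brownian increments, can be sketched for a scalar SDE; geometric Brownian motion and all sample counts below are illustrative, not the PIC setting discussed above:

```python
import math
import random

def mlmc_gbm(n_levels=4, n0=4000, mu=0.05, sigma=0.2, x0=1.0, t_end=1.0, seed=0):
    """Multilevel Monte Carlo estimate of E[X(T)] for geometric Brownian
    motion dX = mu X dt + sigma X dW via Euler-Maruyama.  Level l uses 2^l
    time steps; coarse and fine paths on each level share Brownian
    increments, so the level corrections have small variance and coarse
    levels can carry most of the samples."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(n_levels):
        n_samp = max(n0 // 2 ** level, 100)   # fewer samples on finer levels
        n_fine = 2 ** level
        dt = t_end / n_fine
        acc = 0.0
        for _ in range(n_samp):
            dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_fine)]
            xf = x0
            for w in dw:                       # fine path
                xf += mu * xf * dt + sigma * xf * w
            if level == 0:
                acc += xf                      # base level: plain estimate
            else:
                xc, dtc = x0, 2 * dt           # coarse path, paired increments
                for i in range(0, n_fine, 2):
                    xc += mu * xc * dtc + sigma * xc * (dw[i] + dw[i + 1])
                acc += xf - xc                 # telescoping level correction
        est += acc / n_samp
    return est
```

The telescoping sum reproduces the finest-level expectation, E[X(T)] = x0 * exp(mu * T), at a fraction of the cost of running all samples at the finest step.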
Quantum Monte Carlo with directed loops.
Syljuåsen, Olav F; Sandvik, Anders W
2002-10-01
We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.
NASA Astrophysics Data System (ADS)
Michta, Mariusz
2017-02-01
In the paper we study properties of solutions to stochastic differential inclusions and set-valued stochastic differential equations with respect to semimartingale integrators. We present new connections between their solutions. In particular, we show that attainable sets of solutions to stochastic inclusions are subsets of values of multivalued solutions of certain set-valued stochastic equations. We also show that every solution to stochastic inclusion is a continuous selection of a multivalued solution of an associated set-valued stochastic equation. The results obtained in the paper generalize results dealing with this topic known both in deterministic and stochastic cases.
Buckling analysis of imperfect I-section beam-columns with stochastic shell finite elements
NASA Astrophysics Data System (ADS)
Schillinger, Dominik; Papadopoulos, Vissarion; Bischoff, Manfred; Papadrakakis, Manolis
2010-08-01
Buckling loads of thin-walled I-section beam-columns exhibit a wide stochastic scattering due to the uncertainty of imperfections. The present paper proposes a finite element based methodology for the stochastic buckling simulation of I-sections, which uses random fields to accurately describe the fluctuating size and spatial correlation of imperfections. The stochastic buckling behaviour is evaluated by crude Monte Carlo simulation, based on a large number of I-section samples, which are generated by spectral representation and subsequently analyzed by non-linear shell finite elements. The application to an example I-section beam-column demonstrates that the simulated buckling response is in good agreement with experiments and follows key concepts of imperfection triggered buckling. The derivation of the buckling load variability and the stochastic interaction curve for combined compression and major axis bending as well as stochastic sensitivity studies for thickness and geometric imperfections illustrate potential benefits of the proposed methodology in buckling related research and applications.
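The spectral-representation step for generating imperfection samples can be sketched in one dimension; the squared-exponential spectrum and all parameters below are assumed stand-ins for measured imperfection statistics:

```python
import math
import random

def spectral_random_field(n_points, length, corr_len, std, n_modes=64, seed=0):
    """One sample of a zero-mean, stationary Gaussian random field on
    [0, length] by spectral representation (Shinozuka-type): a sum of
    cosines with random phases, amplitudes set by a one-sided
    squared-exponential power spectrum with correlation length corr_len
    and standard deviation std."""
    rng = random.Random(seed)
    dk = 4.0 / corr_len / n_modes            # wavenumber cutoff ~ 4 / corr_len
    x_vals = [length * i / (n_points - 1) for i in range(n_points)]
    field = [0.0] * n_points
    for m in range(n_modes):
        k = (m + 0.5) * dk                   # midpoint wavenumber
        # one-sided spectrum of C(tau) = std^2 exp(-(tau/corr_len)^2)
        s_k = std**2 * corr_len / math.sqrt(math.pi) * math.exp(-(k * corr_len) ** 2 / 4.0)
        amp = math.sqrt(2.0 * s_k * dk)
        phi = 2.0 * math.pi * rng.random()   # random phase
        for i, x in enumerate(x_vals):
            field[i] += amp * math.cos(k * x + phi)
    return field
```

Each such sample would serve as one imperfection realization fed into a non-linear shell analysis; repeating over many seeds yields the Monte Carlo population.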
A method for solving stochastic equations by reduced order models and local approximations
Grigoriu, M.
2012-08-01
A method is proposed for solving equations with random entries, referred to as stochastic equations (SEs). The method is based on two recent developments. The first approximates the response surface giving the solution of a stochastic equation as a function of its random parameters by a finite set of hyperplanes tangent to it at expansion points selected by geometrical arguments. The second approximates the vector of random parameters in the definition of a stochastic equation by a simple random vector, referred to as stochastic reduced order model (SROM), and uses it to construct a SROM for the solution of this equation. The proposed method is a direct extension of these two methods. It uses SROMs to select expansion points, rather than selecting these points by geometrical considerations, and represents the solution by linear and/or higher order local approximations. The implementation and the performance of the method are illustrated by numerical examples involving random eigenvalue problems and stochastic algebraic/differential equations. The method is conceptually simple, non-intrusive, efficient relative to classical Monte Carlo simulation, accurate, and guaranteed to converge to the exact solution.
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article, we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead beyond the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems, including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
NASA Astrophysics Data System (ADS)
Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
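The last sentence describes a standard consistency check. The sketch below prices a plain vanilla European call both with the closed-form Black-Scholes formula (constant volatility and interest rate, i.e. a simpler model than the paper's stochastic-volatility setting) and with plain Monte Carlo sampling of the terminal price; the parameter values are illustrative.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=42):
    """Monte Carlo price: sample terminal prices of geometric Brownian motion."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return disc * total / n

closed = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
mc = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(f"closed form: {closed:.4f}, Monte Carlo: {mc:.4f}")
```

The two numbers agree to within Monte Carlo sampling error, which is the kind of test the abstract's analytical formulas are subjected to.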
Variational principles for stochastic soliton dynamics
Holm, Darryl D.; Tyranowski, Tomasz M.
2016-01-01
We develop a variational method of deriving stochastic partial differential equations whose solutions follow the flow of a stochastic vector field. As an example in one spatial dimension, we numerically simulate singular solutions (peakons) of the stochastically perturbed Camassa–Holm (CH) equation derived using this method. These numerical simulations show that peakon soliton solutions of the stochastically perturbed CH equation persist and provide an interesting laboratory for investigating the sensitivity and accuracy of adding stochasticity to finite dimensional solutions of stochastic partial differential equations. In particular, some choices of stochastic perturbations of the peakon dynamics by Wiener noise (canonical Hamiltonian stochastic deformations, CH-SD) allow peakons to interpenetrate and exchange order on the real line in overtaking collisions, although this behaviour does not occur for other choices of stochastic perturbations which preserve the Euler–Poincaré structure of the CH equation (parametric stochastic deformations, P-SD), and it also does not occur for peakon solutions of the unperturbed deterministic CH equation. The discussion raises issues about the science of stochastic deformations of finite-dimensional approximations of evolutionary partial differential equations and the sensitivity of the resulting solutions to the choices made in stochastic modelling.
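The notion of perturbing finite-dimensional soliton dynamics by Wiener noise can be illustrated with a deliberately minimal sketch: a single CH peakon travels at constant speed equal to its amplitude, and we perturb its position with additive noise via Euler–Maruyama. This illustrates stochastic deformation in general, not the paper's variational discretisation; all parameter values are arbitrary.

```python
import math
import random

def peakon_paths(c=1.0, sigma=0.3, T=5.0, dt=0.01, n_paths=2000, seed=7):
    """Euler-Maruyama for dq = c dt + sigma dW: the position q of a single
    CH peakon u(x,t) = c*exp(-|x - q(t)|) perturbed by Wiener noise
    (illustrative only, not the paper's variational scheme)."""
    rng = random.Random(seed)
    n_steps = int(T / dt)
    finals = []
    for _ in range(n_paths):
        q = 0.0
        for _ in range(n_steps):
            q += c * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(q)
    return finals

finals = peakon_paths()
mean_q = sum(finals) / len(finals)
print(f"mean final position: {mean_q:.3f} (deterministic value: 5.0)")
```

On average the perturbed peakon still travels at the deterministic speed; individual realisations scatter around it with standard deviation sigma*sqrt(T).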
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems using Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary: Program title: MontePython. Catalogue identifier: ADZP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 49 519. No. of bytes in distributed program, including test data, etc.: 114 484. Distribution format: tar.gz. Programming language: C++, Python. Computer: PC, IBM RS6000/320, HP, ALPHA. Operating system: LINUX. Has the code been vectorised or parallelized?: Yes, parallelized with MPI. Number of processors used: 1-96. RAM: depends on physical system to be simulated. Classification: 7.6; 16.1. Nature of problem: investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb. Solution method: Quantum Monte Carlo. Running time: 225 min with 20 particles (4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
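A minimal pure-Python sketch of the variational Monte Carlo algorithm mentioned in the abstract, applied to the 1-D harmonic oscillator with a Gaussian trial wavefunction (a textbook test case, not MontePython's actual Bose-Einstein condensate system):

```python
import math
import random

def vmc_energy(alpha, n_samples=100_000, step=1.0, seed=3):
    """Variational Monte Carlo for the 1-D harmonic oscillator
    (H = -1/2 d^2/dx^2 + 1/2 x^2, hbar = m = omega = 1) with trial
    wavefunction psi(x) = exp(-alpha * x^2), sampled by Metropolis."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # acceptance ratio |psi(x_new) / psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2)
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha**2)
    return e_sum / n_samples

e_exact = vmc_energy(0.5)   # alpha = 0.5 is the exact ground state
e_off = vmc_energy(0.3)     # any other alpha gives a higher variational energy
print(f"E(alpha=0.5) = {e_exact:.6f}, E(alpha=0.3) = {e_off:.4f}")
```

At the exact ground-state parameter the local energy is constant (zero variance), which is the usual sanity check for a VMC implementation.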
Stochastic Energy Deployment System
2011-11-30
SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 in one-year time steps and are generally made at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that comes from modeling excessively low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.
Stochastically forced zonal flows
NASA Astrophysics Data System (ADS)
Srinivasan, Kaushik
an approximate equation for the vorticity correlation function that is then solved perturbatively. The Reynolds stress of the perturbative solution can then be expressed as a function of the mean flow and its y-derivatives. In particular, it is shown that as long as the forcing breaks mirror symmetry, the Reynolds stress has a wave-like term, as a result of which the mean flow is governed by a dispersive wave equation. In a separate study, the Reynolds stress induced by an anisotropically forced unbounded Couette flow with uniform shear gamma, on a beta-plane, is calculated in conjunction with the eddy diffusivity of a co-evolving passive tracer. The flow is damped by linear drag on a time scale 1/mu. The stochastic forcing is controlled by a parameter alpha, which characterizes whether eddies are elongated along the zonal direction (alpha < 0), the meridional direction (alpha > 0) or are isotropic (alpha = 0). The Reynolds stress varies linearly with alpha and non-linearly and non-monotonically with gamma, but the Reynolds stress is independent of beta. For positive values of alpha, the Reynolds stress displays an "anti-frictional" effect (energy is transferred from the eddies to the mean flow) and a frictional effect for negative values of alpha. With gamma = beta = 0, the meridional tracer eddy diffusivity is v'^2/(2 mu), where v' is the meridional eddy velocity. In general, beta and gamma suppress the diffusivity below v'^2/(2 mu).
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
Perfetti, Christopher M.; Rearden, Bradley T.
2015-01-01
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted in two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
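The simplest caricature of stochastic exponential growth is a one-species autocatalytic birth process simulated exactly with the Gillespie algorithm; the SHC couples several such species in a cycle, which this sketch does not attempt to reproduce, and all parameter values are invented.

```python
import math
import random

def gillespie_birth(n0=50, k=1.0, t_end=1.0, rng=None):
    """Exact stochastic simulation of the pure autocatalytic birth process
    X -> 2X with per-molecule rate k: a one-species caricature of
    stochastic exponential growth."""
    rng = rng or random.Random()
    n, t = n0, 0.0
    while True:
        t += rng.expovariate(k * n)  # exponential waiting time to next birth
        if t > t_end:
            return n
        n += 1

rng = random.Random(11)
runs = [gillespie_birth(rng=rng) for _ in range(2000)]
mean_n = sum(runs) / len(runs)
print(f"mean population: {mean_n:.1f}, n0*exp(k*t) = {50 * math.e:.1f}")
```

The ensemble mean grows as n0*exp(k*t), matching the exponential mean growth the imaging data show; the fluctuations around it are what the SHC theory characterises.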
Stochastic cooling: recent theoretical directions
Bisognano, J.
1983-03-01
A kinetic-equation derivation of the stochastic-cooling Fokker-Planck equation is introduced to describe both the Schottky spectrum and signal suppression. Generalizations to nonlinear gain and coupling between degrees of freedom are presented. An analysis of bunched-beam cooling is included.
Brownian motors and stochastic resonance.
Mateos, José L; Alatriste, Fernando R
2011-12-01
We study the transport properties for a walker on a ratchet potential. The walker consists of two particles coupled by a bistable potential that allows the interchange of the order of the particles while moving through a one-dimensional asymmetric periodic ratchet potential. We consider the stochastic dynamics of the walker on a ratchet with an external periodic forcing, in the overdamped case. The coupling of the two particles corresponds to a single effective particle, describing the internal degree of freedom, in a bistable potential. This double-well potential is subjected to both a periodic forcing and noise and therefore is able to provide a realization of the phenomenon of stochastic resonance. The main result is that there is an optimal amount of noise where the amplitude of the periodic response of the system is maximum, a signal of stochastic resonance, and that precisely for this optimal noise, the average velocity of the walker is maximal, implying a strong link between stochastic resonance and the ratchet effect.
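The signature of stochastic resonance described above can be sketched with an overdamped Langevin simulation of a periodically forced double-well potential (a generic bistable model, not the coupled-walker system of the paper; parameter values are illustrative guesses):

```python
import math
import random

def response_amplitude(D, A=0.3, omega=0.1, dt=0.02, periods=20, seed=0):
    """Overdamped Langevin dynamics in V(x) = x^4/4 - x^2/2 with subthreshold
    periodic forcing A*cos(omega*t) and noise strength D. Returns the
    amplitude of the Fourier component of x(t) at the drive frequency."""
    rng = random.Random(seed)
    n = round(periods * 2.0 * math.pi / (omega * dt))
    x, t = 1.0, 0.0
    c_sum = s_sum = 0.0
    for _ in range(n):
        x += (x - x**3 + A * math.cos(omega * t)) * dt \
             + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        t += dt
        c_sum += x * math.cos(omega * t)
        s_sum += x * math.sin(omega * t)
    return 2.0 * math.hypot(c_sum, s_sum) / n

def mean_amplitude(D, n_runs=4):
    return sum(response_amplitude(D, seed=s) for s in range(n_runs)) / n_runs

amp_low = mean_amplitude(0.02)  # noise too weak: interwell hops are rare
amp_opt = mean_amplitude(0.17)  # near the resonance: hops lock to the drive
print(f"response amplitude: D=0.02 -> {amp_low:.3f}, D=0.17 -> {amp_opt:.3f}")
```

At weak noise the particle only jiggles inside one well; near the matching noise level the interwell hops synchronise with the drive and the periodic response is much larger, which is the stochastic-resonance effect.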
Stochastic resonance on a circle
Wiesenfeld, K.; Pierson, D.; Pantazelou, E.; Dames, C.; Moss, F.
1994-04-04
We describe a new realization of stochastic resonance, applicable to a broad class of systems, based on an underlying excitable dynamics with deterministic reinjection. A simple but general theory of such "single-trigger" systems is compared with analog simulations of the FitzHugh-Nagumo model, as well as experimental data obtained from stimulated sensory neurons in the crayfish.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step size h_L at the finest level. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
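The telescoping identity at the heart of MLMC can be sketched for a toy expectation, E[S_T] under geometric Brownian motion discretised by the Euler scheme, with coupled fine/coarse paths estimating each correction term (plain MLMC, without the SMC extension the paper introduces; sample sizes are illustrative):

```python
import math
import random

def euler_terminal(rng, S0, r, sigma, T, n_steps):
    """Euler scheme for dS = r*S dt + sigma*S dW; returns S_T."""
    dt = T / n_steps
    s = S0
    for _ in range(n_steps):
        s += r * s * dt + sigma * s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return s

def coupled_terminal(rng, S0, r, sigma, T, n_fine):
    """Fine (n_fine steps) and coarse (n_fine//2 steps) Euler paths driven
    by the same Brownian increments, for the correction E[P_l - P_{l-1}]."""
    dt = T / n_fine
    s_f = s_c = S0
    for _ in range(n_fine // 2):
        dw1 = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        dw2 = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        s_f += r * s_f * dt + sigma * s_f * dw1
        s_f += r * s_f * dt + sigma * s_f * dw2
        s_c += r * s_c * (2.0 * dt) + sigma * s_c * (dw1 + dw2)
    return s_f, s_c

def mlmc_mean(S0=1.0, r=0.05, sigma=0.2, T=1.0, seed=9):
    """Telescoping MLMC estimate of E[S_T] with h_l = (T/4) * 2^{-l}."""
    rng = random.Random(seed)
    n_samples = [40_000, 10_000, 2_500]  # many cheap coarse, few costly fine
    # level 0: plain Monte Carlo on the coarsest grid (4 Euler steps)
    est = sum(euler_terminal(rng, S0, r, sigma, T, 4)
              for _ in range(n_samples[0])) / n_samples[0]
    # levels 1, 2: corrections estimated from coupled fine/coarse paths
    for level in (1, 2):
        n_fine = 4 * 2**level
        acc = 0.0
        for _ in range(n_samples[level]):
            s_f, s_c = coupled_terminal(rng, S0, r, sigma, T, n_fine)
            acc += s_f - s_c
        est += acc / n_samples[level]
    return est

est = mlmc_mean()
exact = math.exp(0.05)  # E[S_T] = S0 * exp(r*T) for geometric Brownian motion
print(f"MLMC estimate: {est:.4f}, exact: {exact:.4f}")
```

Because the coupled differences have small variance, the fine levels need far fewer samples than a single-level estimator of the same accuracy, which is the computational saving the abstract refers to.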
Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element
Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad
2016-01-22
In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and non-proliferation issues. On the fuel cycle aspect, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 ({sup 233}U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small research reactor core like TRIGA, fueled with a Th-filled fuel element matrix, using the general-purpose Monte Carlo N-Particle (MCNP) code.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
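One classical way to inject Monte Carlo into a linear solve, sketched here as a toy stand-in for the hybrid schemes discussed above (not the authors' algorithm), is the random-walk estimator of the Neumann series for x = Hx + f:

```python
import random

# Solve x = H x + f, i.e. (I - H) x = f, by estimating the Neumann series
# sum_k (H^k f)_i with random walks. H here has non-negative entries and
# row sums < 1, so a walk at state i steps to j with probability H[i][j]
# and dies otherwise; the surviving-step probabilities absorb the weights.
H = [[0.2, 0.2, 0.2],
     [0.2, 0.2, 0.2],
     [0.2, 0.2, 0.2]]
f = [1.0, 2.0, 3.0]  # by symmetry the exact solution is x = [4, 5, 6]

def solve_mc(H, f, n_walks=20_000, seed=13):
    rng = random.Random(seed)
    row_sums = [sum(row) for row in H]
    x = []
    for i in range(len(f)):
        total = 0.0
        for _ in range(n_walks):
            state, score = i, f[i]
            while rng.random() < row_sums[state]:   # walk survives this step
                u = rng.random() * row_sums[state]  # pick next state ~ H[state]
                for j, h in enumerate(H[state]):
                    if u <= h:
                        break
                    u -= h
                state = j
                score += f[state]                   # adds the (H^k f)_i term
            total += score
        x.append(total / n_walks)
    return x

x = solve_mc(H, f)
print([round(v, 2) for v in x])  # should be close to [4, 5, 6]
```

Each walk is independent, so the estimator is embarrassingly parallel and tolerant of lost samples, which hints at the resiliency argument made in the abstract.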
Simulating rare events in equilibrium or nonequilibrium stochastic systems.
Allen, Rosalind J; Frenkel, Daan; ten Wolde, Pieter Rein
2006-01-14
We present three algorithms for calculating rate constants and sampling transition paths for rare events in simulations with stochastic dynamics. The methods do not require a priori knowledge of the phase-space density and are suitable for equilibrium or nonequilibrium systems in a stationary state. All the methods use a series of interfaces in phase space, between the initial and final states, to generate transition paths as chains of connected partial paths, in a ratchetlike manner. No assumptions are made about the distribution of paths at the interfaces. The three methods differ in the way that the transition path ensemble is generated. We apply the algorithms to kinetic Monte Carlo simulations of a genetic switch and to Langevin dynamics simulations of intermittently driven polymer translocation through a pore. We find that the three methods are all of comparable efficiency, and that all the methods are much more efficient than brute-force simulation.
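The interface idea can be sketched with a forward-flux-style calculation for barrier hopping in a 1-D double well (a deliberately simple setting where the state at an interface is just x; the details below are illustrative, not the three specific path ensembles of the paper):

```python
import math
import random

# Overdamped Langevin dynamics in V(x) = x^4/4 - x^2/2 with noise strength D.
# Rate of leaving the left well = (flux through lambda_0) * prod_i P(i -> i+1).
D, DT = 0.08, 0.01
LAMBDA_A = -0.7                      # boundary of the initial basin A
INTERFACES = [-0.6, -0.2, 0.2, 0.6]  # lambda_0 .. lambda_3 (= target region B)

def step(x, rng):
    return x + (x - x**3) * DT + math.sqrt(2.0 * D * DT) * rng.gauss(0.0, 1.0)

def initial_flux(rng, n_steps=50_000):
    """Long run in basin A: effective positive flux through lambda_0,
    plus the crossing configurations it generates."""
    x, came_from_A, crossings = -1.0, True, []
    for _ in range(n_steps):
        x = step(x, rng)
        if x < LAMBDA_A:
            came_from_A = True
        elif x >= INTERFACES[0] and came_from_A:
            crossings.append(x)
            came_from_A = False
    return len(crossings) / (n_steps * DT), crossings

def stage_probability(starts, lam_next, rng, n_trials=300, max_steps=100_000):
    """P(reach lambda_{i+1} before falling back into A), plus new configs."""
    successes = []
    for _ in range(n_trials):
        x = rng.choice(starts)
        for _ in range(max_steps):
            x = step(x, rng)
            if x >= lam_next:
                successes.append(x)
                break
            if x < LAMBDA_A:
                break
    return len(successes) / n_trials, successes

rng = random.Random(17)
flux, configs = initial_flux(rng)
rate, probs = flux, []
for lam_next in INTERFACES[1:]:
    p, configs = stage_probability(configs, lam_next, rng)
    probs.append(p)
    rate *= p
print(f"flux = {flux:.4f}, stage probabilities = {probs}, rate = {rate:.2e}")
```

Each stage only needs to resolve a modest conditional probability, so the product reaches rates far too small for brute-force simulation to measure directly.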
Stochastic Simulations of Pattern Formation in Excitable Media
Vigelius, Matthias; Meyer, Bernd
2012-01-01
We present a method for mesoscopic, dynamic Monte Carlo simulations of pattern formation in excitable reaction–diffusion systems. Using a two-level parallelization approach, our simulations cover the whole range of the parameter space, from the noise-dominated low-particle-number regime to the quasi-deterministic high-particle-number limit. Three qualitatively different case studies are performed that are exemplary of the wide variety of excitable systems. We present mesoscopic stochastic simulations of the Gray-Scott model, of a simplified model for intracellular Ca oscillations and, for the first time, of the Oregonator model. We achieve simulations with up to particles. The software and the model files are freely available and researchers can use the models to reproduce our results or adapt and refine them for further exploration.
Stochastic Particle Real Time Analyzer (SPARTA) Validation and Verification Suite
Gallis, Michael A.; Koehler, Timothy P.; Plimpton, Steven J.
2014-10-01
This report presents the test cases used to verify, validate and demonstrate the features and capabilities of the first release of the 3D Direct Simulation Monte Carlo (DSMC) code SPARTA (Stochastic Real Time Particle Analyzer). The test cases included in this report exercise the most critical capabilities of the code like the accurate representation of physical phenomena (molecular advection and collisions, energy conservation, etc.) and implementation of numerical methods (grid adaptation, load balancing, etc.). Several test cases of simple flow examples are shown to demonstrate that the code can reproduce phenomena predicted by analytical solutions and theory. A number of additional test cases are presented to illustrate the ability of SPARTA to model flow around complicated shapes. In these cases, the results are compared to other well-established codes or theoretical predictions. This compilation of test cases is not exhaustive, and it is anticipated that more cases will be added in the future.
Stochastic spatial structured model for vertically and horizontally transmitted infection
NASA Astrophysics Data System (ADS)
Silva, Ana T. C.; Assis, Vladimir R. V.; Pinho, Suani T. R.; Tomé, Tânia; de Oliveira, Mário J.
2017-02-01
We study a spatially structured stochastic model for vertically and horizontally transmitted infection. By means of simple and pair mean-field approximations as well as Monte Carlo simulations, we construct the phase diagram, which displays four states: healthy (H), infected (I), extinct (E), and coexistent (C). In state H only healthy hosts are present, whereas in state I only infected hosts are present. The state E is characterized by the extinction of the hosts, whereas in state C infected and healthy hosts coexist. In addition to the usual scenario with continuous transitions between the I, C and H phases, we found a different scenario with the suppression of the C phase and a discontinuous phase transition between the I and H phases.
A stochastic model for the analysis of maximum daily temperature
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.
2016-08-01
In this paper, a stochastic model for the analysis of the daily maximum temperature is proposed. First, a deseasonalization procedure based on the truncated Fourier expansion is adopted. Then, Johnson transformation functions are applied for data normalization. Finally, the fractionally autoregressive integrated moving average model is used to reproduce both short- and long-memory behavior of the temperature series. The model is applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10^5 years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied for the estimation of the return periods of long sequences of days with maximum temperature above prefixed thresholds.
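The deseasonalization step can be sketched on synthetic data: fit the annual harmonic of a truncated Fourier expansion by least squares and subtract it (the Johnson transformation and FARIMA stages of the paper's model are not reproduced here, and all parameter values are invented):

```python
import math
import random

# Synthetic daily maximum temperatures: mean 20 C, annual harmonic of
# amplitude 8 C, plus Gaussian noise.
random.seed(21)
P = 365.25                # period in days
days = range(10 * 365)    # ten years of synthetic data
temps = [20.0 + 8.0 * math.sin(2.0 * math.pi * d / P + 0.6)
         + random.gauss(0.0, 2.0) for d in days]

n = len(temps)
mean = sum(temps) / n
# Least-squares Fourier coefficients of the first (annual) harmonic
a = 2.0 / n * sum(t * math.cos(2.0 * math.pi * d / P)
                  for d, t in zip(days, temps))
b = 2.0 / n * sum(t * math.sin(2.0 * math.pi * d / P)
                  for d, t in zip(days, temps))
amplitude = math.hypot(a, b)
# Subtract mean and seasonal cycle to obtain the deseasonalized series
deseasonalized = [t - mean - a * math.cos(2.0 * math.pi * d / P)
                  - b * math.sin(2.0 * math.pi * d / P)
                  for d, t in zip(days, temps)]
resid_mean = sum(deseasonalized) / n
print(f"fitted amplitude: {amplitude:.2f} (true 8.0), "
      f"residual mean: {resid_mean:.4f}")
```

The residual series is what the subsequent normalization and FARIMA stages of such a model would operate on.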
Nonequilibrium Steady States of a Stochastic Model System.
NASA Astrophysics Data System (ADS)
Zhang, Qiwei
We study the nonequilibrium steady state of a stochastic lattice gas model, originally proposed by Katz, Lebowitz and Spohn (Phys. Rev. B 28: 1655 (1983)). Firstly, we solve the model on some small lattices exactly in order to see the general dependence of the steady state upon different parameters of the model. Next, we derive some analytical results for infinite lattice systems by taking some suitable limits. We then present some renormalization group results for the continuum version of the model via field theoretical techniques; the supersymmetry of the critical dynamics in zero field is also explored. Finally, we report some very recent 3-D Monte Carlo simulation results, which have been obtained by applying Multi-Spin-Coding techniques on a CDC vector supercomputer - Cyber 205 at John von Neumann Center.
Stochastic Functional Data Analysis: A Diffusion Model-based Approach
Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.
2011-01-01
This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate-specific antigen data, where we also show how the models can be used for forecasting.
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
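In the same classroom spirit, here is a Python analogue of such a demonstration, estimating a definite integral by the sample-mean method and pi by hit-or-miss sampling (the original program was in Quick BASIC; this is an independent sketch):

```python
import random

random.seed(2)
n = 100_000

# Sample-mean method: integral of f over [a, b] ~ (b - a) * mean of f
# at uniform random points. Here f(x) = x^2 on [0, 1], exact value 1/3.
avg = sum(random.random() ** 2 for _ in range(n)) / n

# Hit-or-miss method: the fraction of random points in the unit square
# falling under the quarter circle x^2 + y^2 <= 1 estimates pi/4.
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_est = 4.0 * hits / n

print(f"integral of x^2 on [0,1]: {avg:.4f} (exact 0.3333)")
print(f"pi estimate: {pi_est:.4f}")
```

Both estimates converge like 1/sqrt(n), which is the key point such classroom demonstrations usually make.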
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
Axial grading of inert matrix fuels
Recktenwald, G. D.; Deinert, M. R.
2012-07-01
Burning actinides in an inert matrix fuel to 750 MWd/kg IHM results in a significant reduction in transuranic isotopes. However, achieving this level of burnup in a standard light water reactor would require residence times that are twice that of uranium dioxide fuels. The reactivity of an inert matrix assembly at the end of life is less than 1/3 of its beginning-of-life reactivity, leading to undesirable radial and axial power peaking in the reactor core. Here we show that axial grading of the inert matrix fuel rods can reduce peaking significantly. Monte Carlo simulations are used to model the assembly-level power distributions in both ungraded and graded fuel rods. The results show that an axial grading of uranium dioxide and inert matrix fuels with erbium can reduce power peaking by more than 50% in the axial direction. The reduction in power peaking enables the core to operate at significantly higher power. (authors)
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
Zaweski, E.F.; Niebylski, L.M.
1986-08-05
This patent describes a distillate fuel for indirect injection compression ignition engines containing, in an amount sufficient to minimize coking (especially throttling nozzle coking in the prechambers or swirl chambers of indirect injection compression ignition engines operated on such fuel), at least the combination of (i) an organic nitrate ignition accelerator and (ii) an esterified cyclic dehydration product of sorbitol which, when added to the fuel in combination with the organic nitrate ignition accelerator, minimizes the coking.
Stochastic many-body perturbation theory for anharmonic molecular vibrations
Hermes, Matthew R.; Hirata, So
2014-08-28
A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES), where m is the number of vibrational degrees of freedom. Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function, evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids the cumbersome evaluation and storage of high-order force constants necessary in the original formulation of vibrational perturbation theory; it furthermore allows even higher-order force constants, essentially up to infinite order, to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions of the Dyson equation to be obtained. This method can therefore compute directly and stochastically the transition frequencies of fundamentals and overtones, as well as their relative intensities as pole strengths, without the fixed-node errors that plague some QMC methods. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and the pole strengths within a few thousandths. With the values of the PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
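The core numerical trick, evaluating an integral as the average of integrand-over-weight at Metropolis-sampled points, can be illustrated in one dimension. In this sketch the integrand and weight function are hypothetical stand-ins for the wave-function/PES integrands of the paper:

```python
import numpy as np

def metropolis_samples(log_w, n, step=1.0, x0=0.0, burn=1000, rng=None):
    """Metropolis chain targeting the (normalized) weight density w."""
    rng = rng or np.random.default_rng(42)
    x, lw = x0, log_w(x0)
    out = np.empty(n)
    for i in range(-burn, n):
        xp = x + rng.normal(0.0, step)       # symmetric random-walk proposal
        lwp = log_w(xp)
        if np.log(rng.random()) < lwp - lw:  # accept/reject
            x, lw = xp, lwp
        if i >= 0:
            out[i] = x
    return out

# Weight: standard normal density (normalized), chosen to overlap the integrand.
log_w = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
w = lambda x: np.exp(log_w(x))

# Integrand: f(x) = x^2 exp(-x^2); its exact integral is sqrt(pi)/2.
f = lambda x: x**2 * np.exp(-x**2)

xs = metropolis_samples(log_w, n=200_000)
integral = np.mean(f(xs) / w(xs))   # since ∫ f dx = E_w[f / w]
```

The same estimator generalizes directly to the m-dimensional case: only the proposal and the weight function change.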
Hirschenhofer, J.H.
1999-07-01
This paper discusses the various types of fuel cells, the importance of cell voltage, fuel processing for natural gas, cell stacking, fuel cell plant description, advantages and disadvantages of the types of fuel cells, and applications. The types covered include: polymer electrolyte fuel cell, alkaline fuel cell, phosphoric acid fuel cell; molten carbonate fuel cell, and solid oxide fuel cell.
Lyons, W.R.
1986-03-01
Hazy fuels can be caused by the emulsification of water into the fuel during refining, blending, or transportation operations. Detergent additive packages used in gasoline tend to emulsify water into the fuel. Fuels containing water haze can cause corrosion and contamination and can support microbiological growth, all of which create operational problems. As a result, refiners, marketers, and product pipeline companies customarily impose haze specifications. The haze specification may be a specific maximum water content or simply "bright and clear" at a specified temperature.
Burns, L.D.
1982-07-13
Liquid hydrocarbon fuel compositions are provided containing antiknock quantities of ashless antiknock agents comprising selected furyl compounds including furfuryl alcohol, furfuryl amine, furfuryl esters, and alkyl furoates.
Not Available
1991-07-01
This paper presents the preliminary results of a review of the experiences of Brazil, Canada, and New Zealand, which have implemented programs to encourage the use of alternative motor fuels. It also discusses the results of a separate, completed review of the Department of Energy's (DOE) progress in implementing the Alternative Motor Fuels Act of 1988. The act calls for, among other things, the federal government to use alternative-fueled vehicles in its fleet. The Persian Gulf War, environmental concerns, and the administration's National Energy Strategy have greatly heightened interest in the use of alternative fuels in this country.
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
The report describes computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer in plumes, including: (a) a subroutine for calculating the spectral (group) phase function from the Henyey–Greenstein scattering indicatrix; and (b) the computing code SRT-RTMC-NSM, intended for narrow-band Spectral Radiation Transfer simulation by Ray Tracing with the Monte Carlo method.
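For reference, sampling scattering cosines from the Henyey–Greenstein indicatrix is a standard inverse-CDF Monte Carlo step; a minimal sketch (not the report's subroutine) is:

```python
import numpy as np

def sample_hg_cosines(g, n, rng=None):
    """Sample scattering-angle cosines from the Henyey-Greenstein
    phase function by inverting its cumulative distribution."""
    rng = rng or np.random.default_rng(7)
    xi = rng.random(n)
    if abs(g) < 1e-8:            # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

mu = sample_hg_cosines(g=0.6, n=500_000)
# The mean cosine of HG scattering equals the asymmetry parameter g.
mean_cosine = mu.mean()
```

The inverse-CDF form maps ξ = 0 and ξ = 1 exactly to cosθ = −1 and +1, so no clipping is needed.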
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics that are mathematically intractable can be meaningfully investigated through Monte Carlo methods, which analyze random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
Quantum Monte Carlo Calculations of Transition Metal Oxides
NASA Astrophysics Data System (ADS)
Wagner, Lucas
2006-03-01
Quantum Monte Carlo is a powerful computational tool to study correlated systems, allowing us to explicitly treat many-body interactions with favorable scaling in the number of particles. It has been regarded as a benchmark tool for first- and second-row condensed matter systems, although its accuracy has not been thoroughly investigated in strongly correlated transition metal oxides. QMC has also historically suffered from the mixed estimator error in operators that do not commute with the Hamiltonian and from stochastic uncertainty, which make small energy differences unattainable. Using the Reptation Monte Carlo algorithm of Moroni and Baroni (along with contributions from others), we have developed a QMC framework that makes these previously unavailable quantities computationally feasible for systems of hundreds of electrons in a controlled and consistent way, and we apply this framework to transition metal oxides. We compare these results with traditional mean-field results such as the LDA and with experiment where available, focusing in particular on the polarization and lattice constants in a few interesting ferroelectric materials. This work was performed in collaboration with Lubos Mitas and Jeffrey Grossman.
Quantum Monte Carlo method applied to non-Markovian barrier transmission
NASA Astrophysics Data System (ADS)
Hupin, Guillaume; Lacroix, Denis
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach based on quantum Monte Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. In all ranges of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as Nakajima-Zwanzig or time-convolutionless, shows that only the latter can be competitive if the expansion in terms of coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated by different approaches including the Markovian limit. Large differences with an exact result are seen in the latter case or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or quantum Monte Carlo method is used, a perfect agreement is obtained.
Evaluation of Electric Power Procurement Strategies by Stochastic Dynamic Programming
NASA Astrophysics Data System (ADS)
Saisho, Yuichi; Hayashi, Taketo; Fujii, Yasumasa; Yamaji, Kenji
In deregulated electricity markets, the role of a distribution company is to purchase electricity from the wholesale electricity market at randomly fluctuating prices and to provide it to its customers at a given fixed price. The company therefore takes on, instead of the customers, the risk stemming from the uncertainties of electricity prices and/or demand fluctuation. The way to avoid this risk is to make a bilateral contract with generating companies or to install its own power generation facility, which entails the need for a method of constructing an optimal strategy for electric power procurement. In this context, this research proposes a mathematical method, based on stochastic dynamic programming and additionally considering the characteristics of the start-up cost of an electric power generation facility, to evaluate strategies combining bilateral contracts and auto-generation with the company's own facility for procuring electric power in a deregulated electricity market. We first propose two approaches to solving the stochastic dynamic programming problem: a Monte Carlo simulation method, and a finite difference method for solving a partial differential equation for the total procurement cost of electric power. Finally, we discuss the influence of price uncertainty on optimal power procurement strategies.
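As a toy illustration of the Monte Carlo approach to evaluating procurement strategies, the sketch below compares an all-spot strategy with a fixed-price bilateral contract under a lognormal random-walk spot price. The price model, parameters, and strategies are illustrative assumptions, far simpler than the paper's stochastic dynamic program (no start-up costs, no dynamic decisions):

```python
import numpy as np

def expected_costs(n_paths=50_000, horizon=24, demand=1.0,
                   p0=50.0, sigma=0.1, contract_price=52.0, rng=None):
    """Monte Carlo comparison of two procurement strategies:
    buying all demand on the spot market vs. a fixed-price bilateral
    contract. Spot prices follow a driftless lognormal random walk
    (an illustrative assumption, not the paper's price model)."""
    rng = rng or np.random.default_rng(1)
    shocks = rng.normal(0.0, sigma, size=(n_paths, horizon))
    # Martingale price paths: E[price_t] = p0 for every t.
    prices = p0 * np.exp(np.cumsum(shocks, axis=1)
                         - 0.5 * sigma**2 * np.arange(1, horizon + 1))
    spot_cost = (prices * demand).sum(axis=1).mean()
    contract_cost = contract_price * demand * horizon
    return spot_cost, contract_cost

spot_cost, contract_cost = expected_costs()
```

In the paper's setting the comparison would instead be embedded in a dynamic program, with the Monte Carlo paths used to evaluate the cost-to-go of each candidate policy.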
Isolating intrinsic noise sources in a stochastic genetic switch
NASA Astrophysics Data System (ADS)
Newby, Jay M.
2012-04-01
The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promoter states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promoter of the other gene. When the repressor is bound to a promoter, the corresponding gene is not transcribed and no protein is produced. Either one of the promoters can be repressed at any given time, or both can be unrepressed, leaving three possible promoter states. This model is analysed in its bistable regime, in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promoter state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that significant differences in the random process emerge when the intrinsic noise source is removed.
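The Monte Carlo simulations referred to here are typically Gillespie-type stochastic simulations. As a sketch, the following simulates a single gene with a two-state promoter (a simpler relative of the mutual repressor model; all rate constants are hypothetical, not taken from the paper):

```python
import numpy as np

def gillespie_telegraph(t_max, k_on=0.5, k_off=0.5, k_prod=10.0, k_deg=1.0, rng=None):
    """Gillespie SSA for a single gene with a two-state promoter:
    ON <-> OFF switching, protein production only while ON, and
    first-order protein degradation. Returns the protein count at t_max."""
    rng = rng or np.random.default_rng(3)
    t, on, n = 0.0, 1, 0
    while True:
        rates = np.array([
            k_off if on else k_on,   # promoter flip
            k_prod if on else 0.0,   # protein production
            k_deg * n,               # protein degradation
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # time to next reaction
        if t > t_max:
            return n
        r = rng.random() * total            # pick which reaction fired
        if r < rates[0]:
            on = 1 - on
        elif r < rates[0] + rates[1]:
            n += 1
        else:
            n -= 1

# Long-run mean protein number is about k_prod * P(on) / k_deg.
samples = [gillespie_telegraph(50.0, rng=np.random.default_rng(i)) for i in range(200)]
mean_n = float(np.mean(samples))
```

The full mutual repressor model adds a second gene, dimerization, and repressor binding, but the event-by-event simulation loop has exactly this structure.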
Stochastic analysis of long dry spells in Calabria (Southern Italy)
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.
2017-02-01
A deficit in precipitation may greatly impact soil moisture, snowpack, streamflow, groundwater, and reservoir storage. Among the several approaches available to investigate this phenomenon, one of the most widely applied is the analysis of dry spells. In this study, a non-homogeneous Poisson model has been applied to a set of high-quality daily rainfall series recorded in southern Italy (Calabria region) during the period 1981-2010 for the stochastic analysis of dry spells. First, some statistical details of the Poisson model are presented. The proposed model is then applied to the analysis of long dry spells; in particular, a Monte Carlo technique is used to reproduce the characteristics of the process. As a result, the main characteristics of the long dry spells show patterns clearly related to geographical features of the study area, such as elevation and latitude. The results obtained from the stochastic modelling of the long dry spells prove that the proposed model is useful for the probability evaluation of drought, thus improving environmental planning and management.
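A non-homogeneous Poisson process of the kind used here can be simulated by Lewis-Shedler thinning; the sketch below uses a hypothetical sinusoidal annual intensity, not the rates fitted to the Calabrian rainfall series:

```python
import numpy as np

def nhpp_thinning(rate, rate_max, t_max, rng=None):
    """Simulate event times of a non-homogeneous Poisson process on
    [0, t_max] by Lewis-Shedler thinning: draw candidate events from a
    homogeneous process at rate_max, keep each with prob rate(t)/rate_max."""
    rng = rng or np.random.default_rng(11)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(events)
        if rng.random() < rate(t) / rate_max:
            events.append(t)

# Hypothetical seasonal intensity (events/day) with a one-year period.
rate = lambda t: 0.05 * (1.0 + np.sin(2 * np.pi * t / 365.0))
events = nhpp_thinning(rate, rate_max=0.1, t_max=3650.0)
# Expected count over 10 years is the integral of rate(t): 0.05 * 3650 = 182.5.
```

Thinning requires only that rate(t) never exceed the chosen rate_max, which makes it convenient for fitted intensities without closed-form integrals.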
Stochastic weighted particle methods for population balance equations
Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus
2011-08-10
Highlights: • Weight transfer functions for Monte Carlo simulation of coagulation. • Efficient support for single-particle growth processes. • Comparisons to analytic solutions and soot formation problems. • Better numerical accuracy for less common particles. - Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.
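For context, the direct simulation algorithm that the weighted methods are benchmarked against can be sketched for the simplest case of a constant coagulation kernel (parameters illustrative; the paper's weighted variants additionally carry per-particle statistical weights):

```python
import numpy as np

def dsa_coagulation(masses, kernel_const, t_max, volume=1.0, rng=None):
    """Direct simulation of coagulation with a constant kernel K:
    coagulation events fire at total rate K*N(N-1)/(2V), and each
    event merges a uniformly chosen pair of particles."""
    rng = rng or np.random.default_rng(5)
    particles = list(masses)
    t = 0.0
    while len(particles) > 1:
        n = len(particles)
        rate = kernel_const * n * (n - 1) / (2.0 * volume)
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            break
        i, j = rng.choice(n, size=2, replace=False)
        particles[i] += particles[j]   # merge conserves mass
        particles.pop(j)
    return particles

initial = [1.0] * 1000
final = dsa_coagulation(initial, kernel_const=0.002, t_max=1.0)
# Smoluchowski prediction: N(t) ≈ N0 / (1 + N0*K*t/(2V)) = 500 here.
```

Note that the particle count halves over the run, which is exactly the depletion problem for rare large particles that the weighted transfer functions are designed to mitigate.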
Bulk characterization of (U, Pu) mixed carbide fuel for distribution of plutonium
Devi, K. V. Vrinda; Khan, K. B.; Biju, K.; Kumar, Arun
2015-06-24
Homogeneous distribution of plutonium in (U, Pu) mixed fuels is important from the fuel performance as well as the reprocessing point of view. Radiation imaging and assay techniques are employed for the detection of Pu-rich agglomerates in the fuel. A simulation study of radiation transport was carried out to analyse the technique of autoradiography, so as to estimate the minimum detectability of Pu agglomerates in mixed carbide (MC) fuel with a nominal PuC content of 70%, using Monte Carlo simulations.
Stochastic resonance in nanomechanical systems
NASA Astrophysics Data System (ADS)
Badzey, Robert L.
The phenomenon of stochastic resonance is a counter-intuitive one: adding noise to a noisy nonlinear system under the influence of a modulation results in coherent behavior. The signature of the effect is a resonance in the signal-to-noise ratio of the response over a certain range of noise power; this behavior is absent if either the modulation or the noise is absent. Stochastic resonance has attracted considerable interest over the past several decades, having been seen in a great number of physical and biological systems. Here, observation of stochastic resonance is reported for nanomechanical systems consisting of doubly-clamped beam resonators fabricated from single-crystal silicon. Such oscillators have been found to display nonlinear and bistable behavior under the influence of large driving forces. This bistability is exploited to produce a controllable nanomechanical switch, a device that may be used as the basis for a new generation of computational memory elements. These oscillators possess large intrinsic resonance frequencies (MHz range or higher) due to their small size and relatively high stiffness; thus they have the potential to rival the current state of the art of electronic and magnetic storage technologies. This small size also allows them to be packed at densities which meet or exceed the superparamagnetic limit for magnetic storage media of 100 GB/in². Two different doubly-clamped beams were cooled to low temperatures (300 mK--4 K) and excited with a magnetomotive technique. They were driven into the nonlinear response regime and then modulated to induce switching between their bistable states. When the modulation was reduced, the switching died out. Application of noise, either with an external broadband source or via an increase in temperature, resulted in a distinct resonance in the signal-to-noise ratio. Aside from establishing the phenomenon of stochastic resonance in yet another physical system, the observation of this effect has
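The standard toy model in which stochastic resonance is usually demonstrated, an overdamped particle in a periodically modulated double well, can be simulated in a few lines; the parameters here are illustrative and unrelated to the nanobeam devices:

```python
import numpy as np

def driven_double_well(t_max=500.0, dt=0.01, amp=0.2, omega=0.05, D=0.3, rng=None):
    """Euler-Maruyama integration of the overdamped bistable Langevin
    equation dx = (x - x^3 + amp*sin(omega*t)) dt + sqrt(2D) dW.
    The sub-threshold modulation cannot switch wells by itself;
    noise-assisted switching is the stochastic-resonance mechanism."""
    rng = rng or np.random.default_rng(8)
    n = int(t_max / dt)
    x = np.empty(n)
    x[0] = 1.0                                # start in the right-hand well
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), size=n)
    for k in range(1, n):
        t = k * dt
        drift = x[k-1] - x[k-1]**3 + amp * np.sin(omega * t)
        x[k] = x[k-1] + drift * dt + noise[k]
    return x

x = driven_double_well()
# Noise-assisted switching between wells shows up as sign changes of x.
n_switches = int(np.sum(np.diff(np.sign(x)) != 0))
```

Measuring the signal-to-noise ratio at the drive frequency as a function of D would then trace out the resonance curve described in the abstract.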
Relative frequencies of constrained events in stochastic processes: An analytical approach
NASA Astrophysics Data System (ADS)
Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10⁴). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
NASA Astrophysics Data System (ADS)
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculating the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
Integrated Stochastic Evaluation of Flood and Vegetation Dynamics in Riverine Landscapes
NASA Astrophysics Data System (ADS)
Miyamoto, H.; Kimura, R.
2014-12-01
Areal expansion of trees on gravel beds and sand bars has been a serious problem for river management in Japan. From the viewpoints of ecological restoration and flood control, it is necessary to accurately predict vegetation dynamics over long periods of time. This presentation evaluates both vegetation overgrowth tendency and flood protection safety in an integrated manner for several vegetated channels in Kako River, Japan. The predominant tree species in Kako River are willows and bamboos. The evaluation employs a stochastic process model, developed for statistically evaluating flow and vegetation status in a river course through Monte Carlo simulation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. Through the Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of changes in discharge magnitude and channel geomorphology. The result shows that river channels with high flood protection priority can be identified among the several channel sections from the corresponding vegetation status. The present investigation suggests that stochastic analysis could be one of the powerful diagnostic methods for river management.
Stochastic methods for uncertainty quantification in radiation transport
Fichtl, Erin D; Prinja, Anil K; Warsa, James S
2009-01-01
The use of generalized polynomial chaos (gPC) expansions is investigated for uncertainty quantification in radiation transport. The gPC represents second-order random processes in terms of an expansion of orthogonal polynomials of random variables and is used to represent the uncertain input(s) and unknown(s). We assume a single uncertain input (the total macroscopic cross section), although this does not represent a limitation of the approaches considered here. Two solution methods are examined: the Stochastic Finite Element Method (SFEM) and the Stochastic Collocation Method (SCM). The SFEM entails taking Galerkin projections onto the orthogonal basis, which, for fixed source problems, yields a linear system of fully coupled equations for the PC coefficients of the unknown. For k-eigenvalue calculations, the SFEM system is non-linear and a Newton-Krylov method is employed to solve it. The SCM utilizes a suitable quadrature rule to compute the moments or PC coefficients of the unknown(s); thus the SCM solution involves a series of independent deterministic transport solutions. The accuracy and efficiency of the two methods are compared and contrasted. The PC coefficients are used to compute the moments and probability density functions of the unknown(s), which are shown to be accurate by comparison with Monte Carlo results. Our work demonstrates that stochastic spectral expansions are a viable alternative to sampling-based uncertainty quantification techniques, since both provide a complete characterization of the distribution of the flux and the k-eigenvalue. Furthermore, it is demonstrated that, unlike perturbation methods, SFEM and SCM can handle large parameter uncertainty.
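The SCM idea, computing moments from a quadrature rule applied to independent deterministic solves, can be shown with a deliberately trivial "solver". Here the transport solve is replaced by a one-line attenuation model and the uncertain cross section is Gaussian; both are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy stand-in for a deterministic transport solve: the response to an
# uncertain total cross section is a simple attenuation exp(-sigma * L).
L = 2.0
solve = lambda sigma: np.exp(-sigma * L)

# Uncertain input: sigma ~ Normal(mean=1.0, std=0.1).
mean_s, std_s = 1.0, 0.1

# Stochastic collocation: Gauss-Hermite quadrature (probabilists' form,
# weight exp(-x^2/2)) turns the moment integral into 8 independent solves.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
sigmas = mean_s + std_s * nodes
mean_scm = np.sum(weights * solve(sigmas)) / np.sqrt(2 * np.pi)

# Monte Carlo reference for the same moment.
rng = np.random.default_rng(2)
mean_mc = solve(mean_s + std_s * rng.normal(size=200_000)).mean()
```

The contrast in cost is the point: eight deterministic solves versus hundreds of thousands of samples for comparable accuracy in this smooth, one-parameter case. Higher moments and PC coefficients follow from the same quadrature with different integrands.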
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant margin. Each of the five models provides accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and on how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produce accurate results, with the stochastic reduced order modeling technique producing less error than exhaustive sampling for the majority of methods.
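Of the sampling schemes mentioned, Latin hypercube sampling is easy to sketch: each dimension of the unit cube is split into n equal strata, and every stratum is hit exactly once per dimension (a generic implementation, not the report's):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0,1)^d: each dimension is split into
    n_samples equal strata and every stratum contains exactly one point."""
    rng = rng or np.random.default_rng(9)
    # One point per stratum per dimension, jittered within the stratum.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])        # decorrelate the dimensions
    return u

pts = latin_hypercube(100, 3)
# Stratification: every column has exactly one point in each 1/100 bin.
```

The stratification is what lets LHS match plain Monte Carlo moment estimates with far fewer model evaluations, which is the trade-off exploited in the report's comparison.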
NASA Astrophysics Data System (ADS)
Shchurovskaya, M. V.; Alferov, V. P.; Geraskin, N. I.; Radaev, A. I.
2017-01-01
The results of the validation of a research reactor calculation using Monte Carlo and deterministic codes against experimental data, and based on code-to-code comparison, are presented. The continuous energy Monte Carlo code MCU-PTR and the nodal diffusion-based deterministic code TIGRIS were used for full 3-D calculations of the IRT MEPhI research reactor. The validation included investigations of the reactor with the existing high enriched uranium (HEU, 90 w/o) fuel and with low enriched uranium (LEU, 19.7 w/o, U-9%Mo) fuel.
Lambeth, Malcolm David Dick
2001-02-27
A fuel injector comprises first and second housing parts, the first housing part being located within a bore or recess formed in the second housing part, the housing parts defining therebetween an inlet chamber, a delivery chamber axially spaced from the inlet chamber, and a filtration flow path interconnecting the inlet and delivery chambers to remove particulate contaminants from the flow of fuel therebetween.
A non-stochastic Coulomb collision algorithm for particle-in-cell methods
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacon, Luis
2016-10-01
Coulomb collision modules in PIC simulations are typically Monte-Carlo-based. Monte Carlo is attractive for its simplicity, efficiency in high dimensions, and conservation properties. However, it is noisy, of low temporal order (typically O(√Δt)), and has to resolve the collision frequency for accuracy. In this study, we explore a non-stochastic, multiscale alternative to Monte Carlo for PIC. The approach is based on a Green-function-based reformulation of the Vlasov-Fokker-Planck equation, which can be readily incorporated in modern multiscale collisionless PIC algorithms. An asymptotic-preserving operator-splitting approach allows the collisional step to be treated independently from the particles while preserving the multiscale character of the method. A significant element of novelty in our algorithm is the use of a machine learning algorithm that avoids a velocity-space mesh for the collision step. The resulting algorithm is non-stochastic and first-order accurate in time. We will demonstrate the method with several relaxation examples.
Womersley, J. (Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Stochastic dynamics of cholera epidemics.
Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2010-05-01
We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the KwaZulu-Natal Province, South Africa, with possibly notable epidemiological implications.
Wavelet entropy of stochastic processes
NASA Astrophysics Data System (ADS)
Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.
2007-06-01
We compare two different definitions of the wavelet entropy associated with stochastic processes. The first is the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Methods 105 (2001) 65-75], and the second was introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find that the NTWS family performs better as a characterization method for these stochastic processes.
Stochastic dynamics of cholera epidemics
NASA Astrophysics Data System (ADS)
Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2010-05-01
We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the KwaZulu-Natal Province, South Africa, with possibly notable epidemiological implications.
Stochastic thermodynamics with information reservoirs
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Seifert, Udo
2014-10-01
We generalize stochastic thermodynamics to include information reservoirs. Such information reservoirs, which can be modeled as a sequence of bits, modify the second law. For example, work extraction from a system in contact with a single heat bath becomes possible if the system also interacts with an information reservoir. We obtain an inequality, and the corresponding fluctuation theorem, generalizing the standard entropy production of stochastic thermodynamics. From this inequality we can derive an information processing entropy production, which gives the second law in the presence of information reservoirs. We also develop a systematic linear response theory for information processing machines. For a unicyclic machine powered by an information reservoir, the efficiency at maximum power can deviate from the standard value of 1/2. For the case where energy is consumed to erase the tape, the efficiency at maximum erasure rate is found to be 1/2.
Stochastic approximation of dynamical exponent at quantum critical point
NASA Astrophysics Data System (ADS)
Yasuda, Shinya; Suwa, Hidemaro; Todo, Synge
2015-09-01
We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our novel method, the two-dimensional S = 1/2 quantum XY model in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1, i.e., the three-dimensional classical XY universality class. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition.
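The Robbins-Monro recursion this abstract relies on can be sketched in a few lines: a control parameter is nudged by a decaying gain toward the root of a function observed only through noisy (e.g. Monte Carlo) estimates. The target function, noise model, and gain schedule below are illustrative assumptions, not the temperature-tuning scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_gap(theta):
    """Stand-in for a Monte Carlo estimate whose mean crosses zero at the
    sought parameter value (here theta* = 2), with unit measurement noise."""
    return (theta - 2.0) + rng.standard_normal()

theta = 10.0
for n in range(1, 5001):
    # Robbins-Monro gains a_n = 1/n: sum a_n diverges, sum a_n^2 converges,
    # which guarantees almost-sure convergence to the root.
    theta -= (1.0 / n) * noisy_gap(theta)
```

Despite never seeing a noise-free evaluation, the iterate converges to the root of the mean response.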
Stochastic Approximation of Dynamical Exponent at Quantum Critical Point
NASA Astrophysics Data System (ADS)
Suwa, Hidemaro; Yasuda, Shinya; Todo, Synge
We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our novel method, the two-dimensional S = 1/2 quantum XY model, or equivalently the hard-core boson system, in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition. We will also discuss the system with random magnetic fields, or the dirty boson system, bearing a non-trivial dynamical exponent. Reference: S. Yasuda, H. Suwa, and S. Todo, Phys. Rev. B 92, 104411 (2015); arXiv:1506.04837
Stochastic background of atmospheric cascades
Wilk, G.; Wlodarczyk, Z.
1993-06-15
Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from fluctuations in the cascades themselves and is insensitive to the details of elementary interactions.
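A pure birth process with immigration of the kind invoked above is simple to simulate exactly with the Gillespie algorithm. The rates and observation time below are illustrative assumptions; the point is only that the resulting multiplicity distribution is over-dispersed (broader than Poisson):

```python
import numpy as np

rng = np.random.default_rng(2)

def cascade_size(t_end, birth=1.0, immigration=0.5):
    """Gillespie simulation of a pure birth process with immigration:
    each of the n particles branches at rate `birth`, and new particles
    immigrate at rate `immigration`. Returns the population at t_end."""
    t, n = 0.0, 0
    while True:
        total_rate = birth * n + immigration
        t += rng.exponential(1.0 / total_rate)
        if t > t_end:
            return n
        n += 1

counts = np.array([cascade_size(2.0) for _ in range(2000)])
fano = counts.var() / counts.mean()   # Fano factor > 1: broader than Poisson
```

The stationary solution of this process is negative binomial, so the Fano factor grows with the observation time, mirroring the broad multiplicity distributions discussed in the abstract.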
Stochastic Fluctuations in Gene Regulation
2005-04-01
AFRL-IF-RS-TR-2005-126, Final Technical Report, April 2005: Stochastic Fluctuations in Gene Regulation, Boston University. Approved for public release; distribution unlimited. AFRL Project Engineer: Peter J. Costianes/IFED/(315) 330-4030.
Stochastic resonance across bifurcation cascades
NASA Astrophysics Data System (ADS)
Nicolis, C.; Nicolis, G.
2017-03-01
The classical setting of stochastic resonance is extended to account for parameter variations leading to transitions between a unique stable state, bistability, and multistability regimes, across singularities of various kinds. Analytic expressions for the amplitude and the phase of the response in terms of key parameters are obtained. The conditions for optimal responses are derived in terms of the bifurcation parameter, the driving frequency, and the noise strength.
Optimality Functions in Stochastic Programming
2009-12-02
nonconvex. Non-convex stochastic optimization problems arise in such diverse applications as estimation of mixed logit models [2], engineering design... first-order necessary optimality conditions; see for example Propositions 3.3.1 and 3.3.5 in [7] or Theorem 2.2.4 in [25]. If the evaluation of f j... procedures for validation analysis of a candidate point x ∈ IRn. Since P may be nonconvex, we focus on first-order necessary optimality conditions as
Stochastic cooling technology at Fermilab
NASA Astrophysics Data System (ADS)
Pasquinelli, Ralph J.
2004-10-01
The first antiproton cooling systems were installed and commissioned at Fermilab in 1984-1985. In the interim period, there have been several major upgrades, system improvements, and complete reincarnation of cooling systems. This paper will present some of the technology that was pioneered at Fermilab to implement stochastic cooling systems in both the Antiproton Source and Recycler accelerators. Current performance data will also be presented.
Stochastic Modeling Of Biochemical Reactions
2006-11-01
chemical reactions. Often for these reactions, the dynamics of the first M-order statistical moments of the species populations do not form a closed... results a stochastic model for gene expression is investigated. We show that in gene expression mechanisms, in which a protein inhibits its own... chemical reactions [7, 8, 4, 9, 10]. Since one is often interested in only the first and second order statistical moments for the number of molecules of
Turbulence, Spontaneous Stochasticity and Climate
NASA Astrophysics Data System (ADS)
Eyink, Gregory
Turbulence is well-recognized as important in the physics of climate. Turbulent mixing plays a crucial role in the global ocean circulation. Turbulence also provides a natural source of variability, which bedevils our ability to predict climate. I shall review here a recently discovered turbulence phenomenon, called ``spontaneous stochasticity'', which makes classical dynamical systems as intrinsically random as quantum mechanics. Turbulent dissipation and mixing of scalars (passive or active) is now understood to require Lagrangian spontaneous stochasticity, which can be expressed by an exact ``fluctuation-dissipation relation'' for scalar turbulence (joint work with Theo Drivas). Path-integral methods such as developed for quantum mechanics become necessary to the description. There can also be Eulerian spontaneous stochasticity of the flow fields themselves, which is intimately related to the work of Kraichnan and Leith on unpredictability of turbulent flows. This leads to problems similar to those encountered in quantum field theory. To quantify uncertainty in forecasts (or hindcasts), we can borrow from quantum field-theory the concept of ``effective actions'', which characterize climate averages by a variational principle and variances by functional derivatives. I discuss some work with Tom Haine (JHU) and Santha Akella (NASA-Goddard) to make this a practical predictive tool. More ambitious application of the effective action is possible using Rayleigh-Ritz schemes.
Mechanical Autonomous Stochastic Heat Engine.
Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara
2016-07-01
Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.
Multiple fields in stochastic inflation
Assadullahi, Hooshyar; Firouzjahi, Hassan; Noorbala, Mahdiyar; Vennin, Vincent; Wands, David
2016-06-24
Stochastic effects in multi-field inflationary scenarios are investigated. A hierarchy of diffusion equations is derived, the solutions of which yield moments of the numbers of inflationary e-folds. Solving the resulting partial differential equations in multi-dimensional field space is more challenging than the single-field case. A few tractable examples are discussed, which show that the number of fields is, in general, a critical parameter. When more than two fields are present, for instance, the probability to explore arbitrarily large-field regions of the potential, otherwise inaccessible to single-field dynamics, becomes non-zero. In some configurations, this gives rise to an infinite mean number of e-folds, regardless of the initial conditions. Another difference with respect to single-field scenarios is that multi-field stochastic effects can be large even at sub-Planckian energy. This opens interesting new possibilities for probing quantum effects in inflationary dynamics, since the moments of the numbers of e-folds can be used to calculate the distribution of primordial density perturbations in the stochastic-δN formalism.
Fuel cell-fuel cell hybrid system
Geisbrecht, Rodney A.; Williams, Mark C.
2003-09-23
A device for converting chemical energy to electricity is provided, the device comprising a high temperature fuel cell with the ability for partially oxidizing and completely reforming fuel, and a low temperature fuel cell juxtaposed to said high temperature fuel cell so as to utilize remaining reformed fuel from the high temperature fuel cell. Also provided is a method for producing electricity comprising directing fuel to a first fuel cell, completely oxidizing a first portion of the fuel and partially oxidizing a second portion of the fuel, directing the second fuel portion to a second fuel cell, allowing the first fuel cell to utilize the first portion of the fuel to produce electricity; and allowing the second fuel cell to utilize the second portion of the fuel to produce electricity.
Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.
Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer
NASA Astrophysics Data System (ADS)
Castonguay, Thomas C.; Wang, Feng
2008-03-01
In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time-dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and by modeling a heated desorption problem for which exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
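The heat-equation integration step at the heart of a KMC-PHE-style scheme can be sketched with explicit finite differences on a rod. The grid size, boundary conditions, and parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def heat_step(T, dt, dx, alpha):
    """One explicit finite-difference step of the 1D heat equation with
    insulated (zero-flux) ends, of the kind integrated for a short time
    between stochastic events. Stable for alpha * dt / dx**2 <= 1/2."""
    lap = np.empty_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
    lap[0] = (T[1] - T[0]) / dx ** 2
    lap[-1] = (T[-2] - T[-1]) / dx ** 2
    return T + alpha * dt * lap

T = np.zeros(50)
T[25] = 100.0                      # localized hot spot on a cold rod
for _ in range(500):
    T = heat_step(T, dt=0.4, dx=1.0, alpha=1.0)
# The hot spot spreads while total heat (insulated ends) is conserved.
```

In a full KMC-PHE simulation, a chemical event (e.g. an endothermic desorption) would deposit or remove heat locally between batches of such steps.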
NASA Astrophysics Data System (ADS)
Gelß, Patrick; Matera, Sebastian; Schütte, Christof
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
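The Tensor Train format used above to break the curse of dimensionality can be illustrated with the standard TT-SVD construction: sequential truncated SVDs split a full tensor into a chain of low-rank cores. This is a generic sketch of the format, not the master-equation solver of the paper:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Tensor-Train decomposition by sequential truncated SVDs (TT-SVD).
    Returns a list of order-3 cores with shape (rank_in, dim, rank_out)."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))   # truncate tiny modes
        cores.append(U[:, :keep].reshape(rank, d, keep))
        mat = s[:keep, None] * Vt[:keep]
        rank = keep
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    return full.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(3)
x = rng.standard_normal((3, 4, 5))
x_rec = tt_to_full(tt_svd(x))                       # exact reconstruction
rank1 = np.multiply.outer(rng.standard_normal(3),
                          np.multiply.outer(rng.standard_normal(4),
                                            rng.standard_normal(5)))
ranks = [c.shape[2] for c in tt_svd(rank1)]         # all TT ranks collapse to 1
```

Storage in TT format scales linearly in the number of dimensions when the ranks stay bounded, which is what makes a direct master-equation solve feasible.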
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Kostur, M.; Machura, L.
2015-06-01
This work presents an updated and extended guide on methods of a proper acceleration of the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of the scientific computing on graphics cards and demonstrate them with two models of a well known phenomenon of the noise induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise and the dichotomous process also known as a random telegraph signal. The detailed discussion on various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
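The core numerical kernel behind such simulations is Euler-Maruyama integration of many independent trajectories, which maps naturally onto GPU threads. The sketch below uses the Gaussian white-noise case in a tilted washboard potential; the potential, tilt, and noise strength are illustrative assumptions, and NumPy vectorization stands in for the CUDA thread-per-trajectory layout:

```python
import numpy as np

rng = np.random.default_rng(4)

def brownian_motor(n_paths=2000, n_steps=4000, dt=1e-3, force=1.5, D=0.2):
    """Euler-Maruyama integration of overdamped motion in a tilted
    washboard potential: dx = (force - sin(2*pi*x)) dt + sqrt(2 D dt) xi.
    On a GPU each trajectory would map onto one thread; here NumPy
    vectorizes over trajectories instead."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        drift = force - np.sin(2.0 * np.pi * x)
        x += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
    return x

positions = brownian_motor()
v_mean = positions.mean() / 4.0    # mean velocity over total time T = 4
```

Above the critical tilt the ensemble acquires a positive mean velocity, the noise-induced transport observable studied in the paper.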
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization
NASA Astrophysics Data System (ADS)
Subramani, Deepak N.; Lermusiaux, Pierre F. J.
2016-04-01
A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.
Fuels research: Fuel thermal stability overview
NASA Technical Reports Server (NTRS)
Cohen, S. M.
1980-01-01
Alternative fuels or crude supplies are examined with respect to satisfying aviation fuel needs for the next 50 years. The thermal stability of potential future fuels is discussed and the effects of these characteristics on aircraft fuel systems are examined. Advanced fuel system technology and design guidelines for future fuels with lower thermal stability are reported.
Network motif identification in stochastic networks
NASA Astrophysics Data System (ADS)
Jiang, Rui; Tu, Zhidong; Chen, Ting; Sun, Fengzhu
2006-06-01
Network motifs have been identified in a wide range of networks across many scientific disciplines and are suggested to be the basic building blocks of most complex networks. Nonetheless, many networks come with intrinsic and/or experimental uncertainties and should be treated as stochastic networks. The building blocks in these networks thus may also have stochastic properties. In this article, we study stochastic network motifs derived from families of mutually similar but not necessarily identical patterns of interconnections. We establish a finite mixture model for stochastic networks and develop an expectation-maximization algorithm for identifying stochastic network motifs. We apply this approach to the transcriptional regulatory networks of Escherichia coli and Saccharomyces cerevisiae, as well as the protein-protein interaction networks of seven species, and identify several stochastic network motifs that are consistent with current biological knowledge. Keywords: expectation-maximization algorithm; mixture model; transcriptional regulatory network; protein-protein interaction network.
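The mixture-model-plus-EM machinery described above can be illustrated on a toy version of the problem: subgraph patterns encoded as binary edge vectors, modeled as a two-component mixture of independent Bernoulli distributions. The synthetic "motif families", sample sizes, and initialization are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic "motif families": edge-occurrence probabilities over the
# six possible directed edges of a 3-node pattern (purely illustrative).
true_p = np.array([[0.9, 0.9, 0.1, 0.1, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]])
labels = rng.integers(0, 2, 600)
X = (rng.random((600, 6)) < true_p[labels]).astype(float)

# EM for a two-component mixture of independent Bernoulli edge patterns.
w = np.array([0.5, 0.5])
p = rng.uniform(0.3, 0.7, (2, 6))
for _ in range(100):
    # E-step: posterior responsibility of each component for each sample.
    log_r = X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T + np.log(w)
    log_r -= log_r.max(axis=1, keepdims=True)
    resp = np.exp(log_r)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights and edge probabilities.
    w = resp.mean(axis=0)
    p = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
```

After convergence, the fitted component probabilities recover the two planted families up to relabeling, which is the sense in which "mutually similar but not necessarily identical" patterns form one stochastic motif.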
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.
Arampatzis, Georgios; Katsoulakis, Markos A
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
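The variance-reduction idea behind such coupled sensitivity estimators is easiest to see in the Common Random Number baseline the paper compares against: drive the perturbed and unperturbed simulations with the same noise so the finite-difference estimator's variance collapses. The toy observable below stands in for a KMC simulation output and is an assumption for illustration:

```python
import numpy as np

def observable(theta, z):
    """Toy stand-in for a stochastic simulation output driven by noise z,
    with sensitivity parameter theta (not a KMC model)."""
    return np.exp(theta + 0.5 * z)

rng = np.random.default_rng(5)
n, h, theta = 20000, 1e-2, 0.3

# Independent samples: separate noise for perturbed and unperturbed runs.
d_indep = (observable(theta + h, rng.standard_normal(n))
           - observable(theta, rng.standard_normal(n))) / h

# Coupled (common random numbers): the same noise drives both runs.
z = rng.standard_normal(n)
d_crn = (observable(theta + h, z) - observable(theta, z)) / h

var_ratio = d_indep.var() / d_crn.var()   # variance reduction from coupling
```

With independent noise the O(1/h) difference of two noisy quantities dominates the variance; coupling cancels it, which is the effect the goal-oriented couplings of the paper push further.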
NASA Astrophysics Data System (ADS)
Müller, Florian; Jenny, Patrick; Meyer, Daniel
2014-05-01
To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at few spatial locations only. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the transition of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance as well as stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
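The MLMC telescoping idea sketched above can be shown on a much simpler problem than reservoir flow: estimating E[X_T] for a scalar SDE, with coupled fine/coarse Euler paths sharing the same Brownian increments. The SDE, level count, and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def mlmc_level(level, n_samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Samples of P_l - P_{l-1} for the MLMC telescoping sum, using coupled
    fine/coarse Euler paths of dX = mu X dt + sigma X dW that share the
    same Brownian increments (coarse increments are pairwise sums)."""
    nf = 2 ** level
    dtf = T / nf
    dw = np.sqrt(dtf) * rng.standard_normal((n_samples, nf))
    xf = np.full(n_samples, x0)
    for i in range(nf):
        xf = xf * (1.0 + mu * dtf + sigma * dw[:, i])
    if level == 0:
        return xf                       # coarsest level: plain estimator
    xc = np.full(n_samples, x0)
    dtc = T / (nf // 2)
    for i in range(nf // 2):
        xc = xc * (1.0 + mu * dtc + sigma * (dw[:, 2 * i] + dw[:, 2 * i + 1]))
    return xf - xc

# The level means telescope to the finest-grid estimate of E[X_T] = exp(mu*T).
estimate = sum(mlmc_level(l, 20000).mean() for l in range(5))
```

Because the coupled differences have small variance, most samples can in practice be allocated to the cheap coarse levels, which is the source of the speed-up reported above.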
Not Available
1989-02-01
This report discusses the Omnibus Trade and Competitiveness Act of 1988 which requires GAO to examine fuel ethanol imports from Central America and the Caribbean and their impact on the U.S. fuel ethanol industry. Ethanol is the alcohol in beverages, such as beer, wine, and whiskey. It can also be used as a fuel by blending with gasoline. It can be made from renewable resources, such as corn, wheat, grapes, and sugarcane, through a process of fermentation. This report finds that, given current sugar and gasoline prices, it is not economically feasible for Caribbean ethanol producers to meet the current local feedstock requirement.
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the
Stochastic Vorticity and Associated Filtering Theory
Amirdjanova, A.; Kallianpur, G.
2002-12-19
The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.
Applications of stochastic optimization, Task 4
1994-12-01
This report illustrates the power of the new stochastic optimization and stochastic programming capabilities developed around the ASPEN simulator in solving various types of design and analysis problems for advanced energy systems. A case study is presented for the Lurgi air-blown dry ash gasifier IGCC system. In addition the stochastic optimization capability can also be used for off-line quality control. The methodology is presented in the context of a simple gas turbine combustor flowsheet.
Stochastic Linear Quadratic Optimal Control Problems
Chen, S.; Yong, J.
2001-07-01
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well.
Stochastic Blockmodels with Growing Number of Classes
2011-01-01
We provide simulations verifying the conditions sufficient for our results, and conclude by fitting a logit parameterization of a stochastic blockmodel. Subject terms: stochastic blockmodel, logit model, network confidence bounds.
Applying graphics processor units to Monte Carlo dose calculation in radiation therapy.
Bakhtiari, M; Malhotra, H; Jones, M D; Chaudhary, V; Walters, J P; Nazareth, D
2010-04-01
We investigate the potential of using a graphics processor unit (GPU) for Monte-Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration in the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
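A toy sketch of the kind of computation being accelerated (not the authors' code): a simplified 1D photon-transport Monte Carlo that tallies a percent depth dose curve. The coefficients and the isotropic-scatter model are assumptions chosen only for illustration; a clinical code would track energy, Compton kinematics, and secondary electrons.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_s = 0.03, 0.02            # hypothetical absorption/scattering coefficients (1/mm)
mu_t = mu_a + mu_s
n_photons, max_depth, bin_w = 20_000, 100.0, 1.0
dose = np.zeros(int(max_depth / bin_w))

for _ in range(n_photons):
    z, cos_t = 0.0, 1.0                            # photon starts at the surface, moving down
    while True:
        z += cos_t * rng.exponential(1.0 / mu_t)   # free flight to the next interaction
        if z < 0.0 or z >= max_depth:
            break                                  # photon escapes the medium
        if rng.random() < mu_a / mu_t:
            dose[int(z / bin_w)] += 1.0            # absorption: deposit energy locally
            break
        cos_t = rng.uniform(-1.0, 1.0)             # isotropic scatter (toy model)

pdd = 100.0 * dose / dose.max()                    # percent depth dose per 1 mm bin
print("depth of dose maximum (mm):", dose.argmax() * bin_w)
```

Because every photon history is independent, the outer loop parallelizes trivially, which is exactly the structure a GPU exploits.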
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
NASA Astrophysics Data System (ADS)
Gu, Xudong; Zhu, Weiqiu
2014-04-01
A new stochastic averaging method for predicting the response of vibro-impact (VI) systems to random perturbations is proposed. First, the free VI system (without damping and random perturbation) is analyzed. The impact condition for the displacement is transformed to that for the system energy. Thus, the motion of the free VI systems is divided into periodic motion without impact and quasi-periodic motion with impact according to the level of system energy. The energy loss during each impact is found to be related to the restitution factor and the energy level before impact. Under the assumption of light damping and weak random perturbation, the system energy is a slowly varying process and an averaged Itô stochastic differential equation for system energy can be derived. The drift and diffusion coefficients of the averaged Itô equation for system energy without impact are functions of the damping and the random excitations, and those for system energy with impact are functions of the damping, the random excitations and the impact energy loss. Finally, the averaged Fokker-Planck-Kolmogorov (FPK) equation associated with the averaged Itô equation is derived and solved to yield the stationary probability density of system energy. Numerical results for a nonlinear VI oscillator are obtained to illustrate the proposed stochastic averaging method. Monte-Carlo simulation (MCS) is also conducted to show that the proposed stochastic averaging method is quite effective.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro
2016-07-01
Understanding of stochastic phenomena is essential to the development of a highly sensitive resist for nanofabrication. In this study, we investigated the stochastic effects in a chemically amplified resist consisting of poly(4-hydroxystyrene-co-t-butyl methacrylate), triphenylsulfonium nonafluorobutanesulfonate (acid generator), and tri-n-octylamine (quencher). Scanning electron microscopy (SEM) images of resist patterns were analyzed by Monte Carlo simulation on the basis of the sensitization and reaction mechanisms of chemically amplified extreme ultraviolet resists. It was estimated that a ±0.82σ fluctuation of the number of protected units per polymer molecule led to line edge roughness formation. Here, σ is the standard deviation of the number of protected units per polymer molecule after postexposure baking (PEB). The threshold for the elimination of stochastic bridge generation was 4.38σ (the difference between the average number of protected units after PEB and the dissolution point). The threshold for the elimination of stochastic pinching was 2.16σ.
NASA Astrophysics Data System (ADS)
Pivovarov, Dmytro; Steinmann, Paul
2016-12-01
In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
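The quantile-to-scenario step described above can be sketched as follows. The quantile values are hypothetical stand-ins for one hour of decomposed prediction intervals; the ECDF fit is inverted by linear interpolation (inverse-transform sampling).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical wind-power quantiles for one hour, decomposed from prediction intervals.
probs     = np.array([0.05, 0.25, 0.50, 0.75, 0.95])   # cumulative probabilities
quantiles = np.array([10.0, 22.0, 30.0, 41.0, 58.0])   # wind power (MW)

def generate_scenarios(n):
    """Inverse-transform sampling from the piecewise-linear ECDF fit."""
    u = rng.uniform(probs[0], probs[-1], n)   # stay inside the fitted quantile range
    return np.interp(u, probs, quantiles)     # interpolate the quantile function

scenarios = generate_scenarios(1000)
print("scenario mean (MW):", round(scenarios.mean(), 1))
```

Each generated scenario is a plausible wind-power value for that hour; a set of such draws per hour is what feeds the stochastic SCUC model.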
Leinonen, Matti; Hakula, Harri; Hyvönen, Nuutti
2014-07-15
The aim of electrical impedance tomography is to determine the internal conductivity distribution of some physical body from boundary measurements of current and voltage. The most accurate forward model for impedance tomography is the complete electrode model, which consists of the conductivity equation coupled with boundary conditions that take into account the electrode shapes and the contact resistances at the corresponding interfaces. If the reconstruction task of impedance tomography is recast as a Bayesian inference problem, it is essential to be able to solve the complete electrode model forward problem with the conductivity and the contact resistances treated as a random field and random variables, respectively. In this work, we apply a stochastic Galerkin finite element method to the ensuing elliptic stochastic boundary value problem and compare the results with Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Guo, Kongming; Jiang, Jun; Xu, Yalan
2016-09-01
In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed curve attractors is proposed. The derived expression applies to systems with strong nonlinearities, requiring only a weak-noise condition. With the understanding that additive noise does not change the longitudinal distribution of the attractors, the high-dimensional probability density distribution is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the analytical distribution with the results of Monte Carlo simulations in several planar systems.
Numerical methods for the stochastic Landau-Lifshitz Navier-Stokes equations.
Bell, John B; Garcia, Alejandro L; Williams, Sarah A
2007-07-01
The Landau-Lifshitz Navier-Stokes (LLNS) equations incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. This paper examines explicit Eulerian discretizations of the full LLNS equations. Several computational fluid dynamics approaches are considered (including MacCormack's two-step Lax-Wendroff scheme and the piecewise parabolic method) and are found to give good results for the variance of momentum fluctuations. However, neither of these schemes accurately reproduces the fluctuations in energy or density. We introduce a conservative centered scheme with a third-order Runge-Kutta temporal integrator that does accurately produce fluctuations in density, energy, and momentum. A variety of numerical tests, including the random walk of a standing shock wave, are considered and results from the stochastic LLNS solver are compared with theory, when available, and with molecular simulations using a direct simulation Monte Carlo algorithm.
Stochastic modeling and vibration analysis of rotating beams considering geometric random fields
NASA Astrophysics Data System (ADS)
Choi, Chan Kyu; Yoo, Hong Hee
2017-02-01
Geometric parameters such as the thickness and width of a beam are random for various reasons including manufacturing tolerance and operation wear. Due to these random parameter properties, the vibration characteristics of the structure are also random. In this paper, we derive equations of motion to conduct stochastic vibration analysis of a rotating beam using the assumed mode method and stochastic spectral method. The accuracy of the proposed method is first verified by comparing analysis results to those obtained with Monte-Carlo simulation (MCS). The efficiency of the proposed method is then compared to that of MCS. Finally, probability densities of various modal and transient response characteristics of rotating beams are obtained with the proposed method.
Continuous Variable Teleportation Within Stochastic Electrodynamics
NASA Astrophysics Data System (ADS)
Carmichael, H. J.; Nha, Hyunchul
2004-12-01
Stochastic electrodynamics provides a local realistic interpretation of the continuous variable teleportation of coherent light. Time-domain simulations illustrate broadband features of the teleportation process.
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-01-01
Background: The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. Results: The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. Conclusion: We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods. PMID:17524148
Disentangling the importance of ecological niches from stochastic processes across scales
Chase, Jonathan M.; Myers, Jonathan A.
2011-01-01
Deterministic theories in community ecology suggest that local, niche-based processes, such as environmental filtering, biotic interactions and interspecific trade-offs largely determine patterns of species diversity and composition. In contrast, more stochastic theories emphasize the importance of chance colonization, random extinction and ecological drift. The schisms between deterministic and stochastic perspectives, which date back to the earliest days of ecology, continue to fuel contemporary debates (e.g. niches versus neutrality). As illustrated by the pioneering studies of Robert H. MacArthur and co-workers, resolution to these debates requires consideration of how the importance of local processes changes across scales. Here, we develop a framework for disentangling the relative importance of deterministic and stochastic processes in generating site-to-site variation in species composition (β-diversity) along ecological gradients (disturbance, productivity and biotic interactions) and among biogeographic regions that differ in the size of the regional species pool. We illustrate how to discern the importance of deterministic processes using null-model approaches that explicitly account for local and regional factors that inherently create stochastic turnover. By embracing processes across scales, we can build a more synthetic framework for understanding how niches structure patterns of biodiversity in the face of stochastic processes that emerge from local and biogeographic factors. PMID:21768151
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which are in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
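A minimal sketch of the GLUE idea, with a hypothetical one-parameter forward model standing in for the groundwater model: prior parameter samples are weighted by a likelihood measure of head misfit, and only "behavioral" runs are retained. The observed head, the linear model, and the behavioral threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

obs_head = 100.0                       # observed hydraulic head (m), hypothetical
def model_head(log10_K):               # stand-in forward model (purely illustrative)
    return 80.0 - 3.0 * log10_K

log10_K = rng.uniform(-8.0, -5.0, 50_000)       # prior samples of hydraulic conductivity
misfit = (model_head(log10_K) - obs_head) ** 2
likelihood = np.exp(-misfit / 2.0)              # a Gaussian-type likelihood measure
behavioral = likelihood > 0.5                   # GLUE keeps only "behavioral" runs

glue_est = np.average(log10_K[behavioral], weights=likelihood[behavioral])
print("behavioral fraction:", round(behavioral.mean(), 2))
print("GLUE estimate of log10(K):", round(glue_est, 2))
```

Unweighted ("standard") Monte Carlo would treat all 50,000 runs equally; the likelihood weighting is what lets the head data shrink the parameter uncertainty, and hence the capture zone.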
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome the unresolved issues in the S-FDTD (stochastic finite-difference time-domain) method for obtaining ensemble-average field values, recently reported by Smith and Furse, which attempts to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.
Stochastic simulation of charged particle transport on the massively parallel processor
NASA Technical Reports Server (NTRS)
Earl, James A.
1988-01-01
Computations of cosmic-ray transport based upon finite-difference methods are afflicted by instabilities, inaccuracies, and artifacts. To avoid these problems, researchers developed a Monte Carlo formulation which is closely related not only to the finite-difference formulation, but also to the underlying physics of transport phenomena. Implementations of this approach are currently running on the Massively Parallel Processor at Goddard Space Flight Center, whose enormous computing power overcomes the poor statistical accuracy that usually limits the use of stochastic methods. These simulations have progressed to a stage where they provide a useful and realistic picture of solar energetic particle propagation in interplanetary space.
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.
2015-09-22
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
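The gist of validating a PDF-method (Fokker-Planck) solution against Monte Carlo can be sketched on an Ornstein-Uhlenbeck process, whose stationary PDF is known in closed form. This is an illustrative stand-in, not the generator model of the paper; the parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
a, sigma = 1.0, 0.5                  # illustrative drift and noise intensity
dt, n_steps, n_paths = 0.01, 2000, 5000

# Euler-Maruyama Monte Carlo simulation of dX = -a X dt + sigma dW
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -a * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Stationary solution of the corresponding Fokker-Planck (PDF) equation:
# a Gaussian with variance sigma**2 / (2 a).
var_pde = sigma**2 / (2.0 * a)
print("MC variance :", round(x.var(), 4))
print("PDE variance:", var_pde)
```

The PDF-equation result is obtained here in closed form rather than numerically, but the comparison pattern (ensemble statistics of simulated paths versus the deterministic PDF solution) is the same.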
Hamilton's principle in stochastic mechanics
NASA Astrophysics Data System (ADS)
Pavon, Michele
1995-12-01
In this paper we establish three variational principles that provide new foundations for Nelson's stochastic mechanics in the case of nonrelativistic particles without spin. The resulting variational picture is much richer and of a different nature with respect to the one previously considered in the literature. We first develop two stochastic variational principles whose Hamilton-Jacobi-like equations are precisely the two coupled partial differential equations that are obtained from the Schrödinger equation (Madelung equations). The two problems are zero-sum, noncooperative, stochastic differential games that are familiar in the control theory literature. They are solved here by means of a new, absolutely elementary method based on Lagrange functionals. For both games the saddle-point equilibrium solution is given by the Nelson's process and the optimal controls for the two competing players are precisely Nelson's current velocity v and osmotic velocity u, respectively. The first variational principle includes as special cases both the Guerra-Morato variational principle [Phys. Rev. D 27, 1774 (1983)] and Schrödinger original variational derivation of the time-independent equation. It also reduces to the classical least action principle when the intensity of the underlying noise tends to zero. It appears as a saddle-point action principle. In the second variational principle the action is simply the difference between the initial and final configurational entropy. It is therefore a saddle-point entropy production principle. From the variational principles it follows, in particular, that both v(x,t) and u(x,t) are gradients of appropriate principal functions. In the variational principles, the role of the background noise has the intuitive meaning of attempting to contrast the more classical mechanical features of the system by trying to maximize the action in the first principle and by trying to increase the entropy in the second. Combining the two variational
Resolution for Stochastic Boolean Satisfiability
NASA Astrophysics Data System (ADS)
Teige, Tino; Fränzle, Martin
The stochastic Boolean satisfiability (SSAT) problem was introduced by Papadimitriou in 1985 by adding a probabilistic model of uncertainty to propositional satisfiability through randomized quantification. SSAT has many applications, e.g., in probabilistic planning and, more recently by integrating arithmetic, in probabilistic model checking. In this paper, we first present a new result on the computational complexity of SSAT: SSAT remains PSPACE-complete even for its restriction to 2CNF. Second, we propose a sound and complete resolution calculus for SSAT complementing the classical backtracking search algorithms.
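The semantics of SSAT (maximize over existential variables, average over randomized ones) can be made concrete with a tiny exhaustive evaluator. This sketch assumes uniform 1/2 probabilities for randomized variables and a CNF matrix; it is exponential in the number of variables, so it illustrates the semantics rather than a practical solver.

```python
# Tiny SSAT evaluator: variables are quantified in order, 'E' (existential) or
# 'R' (randomized, true with probability 1/2); the matrix is a CNF formula
# whose clauses are lists of (variable index, required polarity) literals.
def ssat_value(prefix, cnf, assignment=()):
    if len(assignment) == len(prefix):
        sat = all(any(assignment[v] == pol for v, pol in cl) for cl in cnf)
        return 1.0 if sat else 0.0
    v0 = ssat_value(prefix, cnf, assignment + (False,))
    v1 = ssat_value(prefix, cnf, assignment + (True,))
    return max(v0, v1) if prefix[len(assignment)] == 'E' else 0.5 * (v0 + v1)

# E x0, R x1 : (x0 or x1) and (not x0 or not x1), an XOR-like 2CNF instance
cnf = [[(0, True), (1, True)], [(0, False), (1, False)]]
print(ssat_value('ER', cnf))
```

Here the value is 0.5: whichever value the existential player picks for x0, the randomized x1 satisfies the XOR-like formula with probability 1/2. Note the example is 2CNF, the fragment the paper shows to remain PSPACE-complete.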
Stochastic elimination of cancer cells.
Michor, Franziska; Nowak, Martin A; Frank, Steven A; Iwasa, Yoh
2003-01-01
Tissues of multicellular organisms consist of stem cells and differentiated cells. Stem cells divide to produce new stem cells or differentiated cells. Differentiated cells divide to produce new differentiated cells. We show that such a tissue design can reduce the rate of fixation of mutations that increase the net proliferation rate of cells. It has, however, no consequence for the rate of fixation of neutral mutations. We calculate the optimum relative abundance of stem cells that minimizes the rate of generating cancer cells. There is a critical fraction of stem cell divisions that is required for a stochastic elimination ('wash out') of cancer cells. PMID:14561289
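The notion of stochastic elimination versus fixation can be illustrated with a standard Moran process (a simpler stand-in, assumed here for illustration, than the paper's stem-cell tissue model): a single mutant with relative fitness r either takes over the population or is washed out, and simulation matches the known fixation probability.

```python
import numpy as np

rng = np.random.default_rng(5)
N, r, trials = 20, 1.1, 2000          # population size, mutant fitness, MC trials

def fixes():
    """One Moran-process run: does a single mutant lineage reach fixation?"""
    i = 1                              # current number of mutant cells
    while 0 < i < N:
        p_up = (r * i / (r * i + N - i)) * ((N - i) / N)   # mutant born, resident dies
        p_dn = ((N - i) / (r * i + N - i)) * (i / N)       # resident born, mutant dies
        u = rng.random() * (p_up + p_dn)                   # condition on a state change
        i += 1 if u < p_up else -1
    return i == N

est = sum(fixes() for _ in range(trials)) / trials
exact = (1 - 1 / r) / (1 - 1 / r**N)   # known Moran fixation probability
print("simulated:", est, " exact:", round(exact, 3))
```

Even with a selective advantage (r > 1), most invading mutant lineages are stochastically eliminated; tissue architectures that reduce the effective number of long-lived (stem-cell) divisions push this wash-out probability higher still.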
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often accidents and systems failure are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models substantiated by sound theoretic foundation to address the occurrence of human errors in the processing of the space shuttle.
Stochastic thermodynamics of information processing
NASA Astrophysics Data System (ADS)
Cardoso Barato, Andre
2015-03-01
We consider two recent advancements on theoretical aspects of thermodynamics of information processing. First we show that the theory of stochastic thermodynamics can be generalized to include information reservoirs. These reservoirs can be seen as a sequence of bits which has its Shannon entropy changed due to the interaction with the system. Second we discuss bipartite systems, which provide a convenient description of Maxwell's demon. Analyzing a special class of bipartite systems we show that they can be used to study cellular information processing, allowing for the definition of an entropic rate that quantifies how much a cell learns about a fluctuating external environment and that is bounded by the thermodynamic entropy production.
Bifurcation and Optimal Stochastic Control.
1982-03-01
Stochastic Gain in Population Dynamics
NASA Astrophysics Data System (ADS)
Traulsen, Arne; Röhl, Torsten; Schuster, Heinz Georg
2004-07-01
We introduce an extension of the usual replicator dynamics to adaptive learning rates. We show that a population with a dynamic learning rate can gain an increased average payoff in transient phases and can also exploit external noise, leading the system away from the Nash equilibrium, in a resonancelike fashion. The payoff versus noise curve resembles the signal to noise ratio curve in stochastic resonance. Seen in this broad context, we introduce another mechanism that exploits fluctuations in order to improve properties of the system. Such a mechanism could be of particular interest in economic systems.
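The replicator dynamics being extended can be sketched as a discrete-time update with an explicit learning rate, here on a hypothetical two-strategy game (this illustrates the baseline dynamics only, not the authors' adaptive-rate or noise-exploitation scheme):

```python
import numpy as np

def replicator_step(x, A, eta):
    """One discrete replicator step with learning rate eta.

    x : strategy frequencies; A : payoff matrix.
    x_i' is proportional to x_i * (1 + eta * (f_i - fbar)) with f = A x.
    """
    f = A @ x
    fbar = x @ f
    x_new = x * (1.0 + eta * (f - fbar))
    return x_new / x_new.sum()

# hypothetical game where strategy 2 strictly dominates strategy 1
A = np.array([[3.0, 3.0], [5.0, 5.0]])
x = np.array([0.9, 0.1])
for _ in range(50):
    x = replicator_step(x, A, eta=0.1)
print(x)  # frequencies converge toward the dominant strategy
```

In the paper, eta itself becomes a dynamic quantity, which is what produces the transient payoff gain and the resonance-like response to noise.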
Dynamically orthogonal field equations for stochastic flows and particle dynamics
2011-02-01
where uncertainty ‘lives’ as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation...a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time varying stochastic subspace. These new
Accuracy of Monte Carlo Criticality Calculations During BR2 Operation
Kalcheva, Silva; Koonen, Edgar; Ponsard, Bernard
2005-08-15
The Belgian Material Test Reactor BR2 is a strongly heterogeneous high-flux engineering test reactor at SCK-CEN (Centre d'Etude de l'Energie Nucleaire) in Mol with a thermal power of 60 to 100 MW. It deploys highly enriched uranium, water-cooled concentric plate fuel elements, positioned inside a beryllium reflector with a complex hyperboloid arrangement of test holes. The objective of this paper is to validate the MCNP and ORIGEN-S three-dimensional (3-D) model for reactivity predictions of the entire BR2 core during reactor operation. We employ the Monte Carlo code MCNP-4C to evaluate the effective multiplication factor k{sub eff} and 3-D space-dependent specific power distribution. The one-dimensional code ORIGEN-S is used to calculate the isotopic fuel depletion versus burnup and to prepare a database with depleted fuel compositions. The approach taken is to evaluate the 3-D power distribution at each time step and along with the database to evaluate the 3-D isotopic fuel depletion at the next step and to deduce the corresponding shim rod positions of the reactor operation. The capabilities of both codes are fully exploited without constraints on the number of involved isotope depletion chains or an increase of the computational time. The reactor has a complex operation, with important shutdowns between cycles, and its reactivity is strongly influenced by poisons, mainly {sup 3}He and {sup 6}Li from the beryllium reflector, and the burnable absorbers {sup 149}Sm and {sup 10}B in the fresh UAl{sub x} fuel. The computational predictions for the shim rod positions at various restarts are within 0.5 $ ({beta}{sub eff} = 0.0072)
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
Xu Wu; Piyush Sabharwall; Jason Hales
2014-07-01
This report details the neutronics and fuel performance analysis for enhanced accident tolerance fuel, performed with the Monte Carlo reactor physics code Serpent and INL's fuel performance code BISON, respectively. The purpose is to evaluate two of the most promising candidate materials, FeCrAl and Silicon Carbide (SiC), as the fuel cladding under normal operating conditions. A substantial neutron penalty is identified when FeCrAl is used as monolithic cladding for current oxide fuel. From the reactor physics standpoint, application of the FeCrAl alloy as a coating layer on the surface of zircaloy cladding is possible without increasing fuel enrichment. Meanwhile, SiC brings extra reactivity and its neutron penalty is of no concern. Application of either FeCrAl or SiC could be favorable from the fuel performance standpoint. A detailed comparison between monolithic cladding and hybrid cladding (cladding + coating) is discussed. Hybrid cladding is more practical based on the economics evaluation during the transition from the current UO2/zircaloy system to an Accident Tolerant Fuel (ATF) system. However, a few issues remain to be resolved, such as the creep behavior of FeCrAl, coating spallation, interdiffusion with zirconium, etc. For SiC, its high thermal conductivity, excellent creep resistance, low thermal neutron absorption cross section, and irradiation stability (minimal swelling) make it an excellent candidate material for a future nuclear fuel/cladding system.
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion.
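The ensemble Kalman filter analysis step the abstract refers to can be sketched in its stochastic (perturbed-observation) form, here with a toy two-component state and a scalar observation (illustrative only; the paper's state is a discretized diffusion model):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_op, obs_var):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_members, n_state) forecast ensemble
    obs      : observed value (scalar here for brevity)
    obs_op   : linear observation operator H as an (n_state,) vector
    obs_var  : observation error variance
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)      # state anomalies
    HX = ensemble @ obs_op                    # ensemble in observation space
    HXa = HX - HX.mean()
    P_HH = HXa @ HXa / (n - 1) + obs_var      # innovation variance
    P_xH = X.T @ HXa / (n - 1)                # state-observation covariance
    K = P_xH / P_HH                           # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return ensemble + np.outer(perturbed - HX, K)

ens = rng.normal(0.0, 1.0, size=(500, 2))     # prior ensemble: N(0, I)
H = np.array([1.0, 0.0])                      # observe the first component
post = enkf_update(ens, obs=2.0, obs_op=H, obs_var=0.5)
print(post[:, 0].mean())  # pulled toward the observation (analytically ≈ 4/3)
```

The SC and KL variants discussed in the paper change how the ensemble is sampled, not the structure of this update.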
NASA Astrophysics Data System (ADS)
Radaev, A. I.; Schurovskaya, M. V.
2015-12-01
The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN type using the program based on the Monte Carlo code is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and calculation time is investigated.
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
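The one-dimensional random-walk machinery the abstract invokes is easy to check numerically: for a symmetric walk of N unit steps the mean-square displacement is N (a generic sketch, not the microtubule model itself):

```python
import random

def walk_msd(steps, trials, seed=1):
    """Mean-square displacement of a symmetric 1-D random walk,
    averaged over independent trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += rng.choice((-1, 1))  # unbiased unit step
        total += pos * pos
    return total / trials

# theory: <x^2> = N for a symmetric walk of N unit steps
print(walk_msd(100, 5000))  # ≈ 100
```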
Stochastic Modeling of Laminar-Turbulent Transition
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Choudhari, Meelan
2002-01-01
Stochastic versions of stability equations are developed in order to develop integrated models of transition and turbulence and to understand the effects of uncertain initial conditions on disturbance growth. Stochastic forms of the resonant triad equations, a high Reynolds number asymptotic theory, and the parabolized stability equations are developed.
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
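The quadratic covariation drift term mentioned in the abstract is the standard Stratonovich-to-Itô correction; in the scalar case it is the textbook identity (stated here for orientation only, not the paper's geometric generalization):

```latex
dX_t = a(X_t)\,dt + b(X_t)\circ dW_t
\quad\Longleftrightarrow\quad
dX_t = \Bigl( a(X_t) + \tfrac{1}{2}\, b(X_t)\, b'(X_t) \Bigr)\,dt + b(X_t)\, dW_t .
```

The extra drift $\tfrac{1}{2} b\,b'$ is what obscures, in the Itô form, the circulation and helicity properties that are manifest in the Stratonovich form.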
Stochastic and Coherence Resonance in Hippocampal Neurons
2007-11-02
...decreases the signal-to-noise ratio of subthreshold synaptic inputs. Stochastic resonance is a phenomenon in nonlinear systems whereby the introduction of noise enhances the detection of subthreshold signals; it is examined here in hippocampal neurons using both computer simulations and experimental...
Alternative jet aircraft fuels
NASA Technical Reports Server (NTRS)
Grobman, J.
1979-01-01
Potential changes in jet aircraft fuel specifications due to shifts in supply and quality of refinery feedstocks are discussed with emphasis on the effects these changes would have on the performance and durability of aircraft engines and fuel systems. Combustion characteristics, fuel thermal stability, and fuel pumpability at low temperature are among the factors considered. Combustor and fuel system technology needs for broad specification fuels are reviewed including prevention of fuel system fouling and fuel system technology for fuels with higher freezing points.
RHIC stochastic cooling motion control
Gassner, D.; DeSanto, L.; Olsen, R.H.; Fu, W.; Brennan, J.M.; Liaw, CJ; Bellavia, S.; Brodowski, J.
2011-03-28
Relativistic Heavy Ion Collider (RHIC) beams are subject to Intra-Beam Scattering (IBS) that causes an emittance growth in all three-phase space planes. The only way to increase integrated luminosity is to counteract IBS with cooling during RHIC stores. A stochastic cooling system for this purpose has been developed, it includes moveable pick-ups and kickers in the collider that require precise motion control mechanics, drives and controllers. Since these moving parts can limit the beam path aperture, accuracy and reliability is important. Servo, stepper, and DC motors are used to provide actuation solutions for position control. The choice of motion stage, drive motor type, and controls are based on needs defined by the variety of mechanical specifications, the unique performance requirements, and the special needs required for remote operations in an accelerator environment. In this report we will describe the remote motion control related beam line hardware, position transducers, rack electronics, and software developed for the RHIC stochastic cooling pick-ups and kickers.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in order 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
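A minimal simulated-annealing sketch on a toy one-dimensional multiminima function illustrates the method the paper adapts (the actual applications are CFD/MDO objectives with far costlier evaluations):

```python
import math
import random

def anneal(f, x0, step, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Minimal simulated annealing: accept uphill moves with
    probability exp(-delta_f / T) and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)      # random neighbor
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                     # accept (possibly uphill) move
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy objective with many local minima; global minimum f(0) = 0
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x, fx = anneal(f, x0=4.0, step=0.5)
print(round(x, 2), round(fx, 3))
```

The uphill-acceptance rule is what lets SA escape the local minima and roughness the abstract describes.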
Reforming of fuel inside fuel cell generator
Grimble, R.E.
1988-03-08
Disclosed is an improved method of reforming a gaseous reformable fuel within a solid oxide fuel cell generator, wherein the solid oxide fuel cell generator has a plurality of individual fuel cells in a refractory container, the fuel cells generating a partially spent fuel stream and a partially spent oxidant stream. The partially spent fuel stream is divided into two streams, spent fuel stream 1 and spent fuel stream 2. Spent fuel stream 1 is burned with the partially spent oxidant stream inside the refractory container to produce an exhaust stream. The exhaust stream is divided into two streams, exhaust stream 1 and exhaust stream 2, and exhaust stream 1 is vented. Exhaust stream 2 is mixed with spent fuel stream 2 to form a recycle stream. The recycle stream is mixed with the gaseous reformable fuel within the refractory container to form a fuel stream which is supplied to the fuel cells. Also disclosed is an improved apparatus which permits the reforming of a reformable gaseous fuel within such a solid oxide fuel cell generator. The apparatus comprises a mixing chamber within the refractory container, means for diverting a portion of the partially spent fuel stream to the mixing chamber, means for diverting a portion of exhaust gas to the mixing chamber where it is mixed with the portion of the partially spent fuel stream to form a recycle stream, means for injecting the reformable gaseous fuel into the recycle stream, and means for circulating the recycle stream back to the fuel cells. 1 fig.
Reforming of fuel inside fuel cell generator
Grimble, Ralph E.
1988-01-01
Disclosed is an improved method of reforming a gaseous reformable fuel within a solid oxide fuel cell generator, wherein the solid oxide fuel cell generator has a plurality of individual fuel cells in a refractory container, the fuel cells generating a partially spent fuel stream and a partially spent oxidant stream. The partially spent fuel stream is divided into two streams, spent fuel stream I and spent fuel stream II. Spent fuel stream I is burned with the partially spent oxidant stream inside the refractory container to produce an exhaust stream. The exhaust stream is divided into two streams, exhaust stream I and exhaust stream II, and exhaust stream I is vented. Exhaust stream II is mixed with spent fuel stream II to form a recycle stream. The recycle stream is mixed with the gaseous reformable fuel within the refractory container to form a fuel stream which is supplied to the fuel cells. Also disclosed is an improved apparatus which permits the reforming of a reformable gaseous fuel within such a solid oxide fuel cell generator. The apparatus comprises a mixing chamber within the refractory container, means for diverting a portion of the partially spent fuel stream to the mixing chamber, means for diverting a portion of exhaust gas to the mixing chamber where it is mixed with the portion of the partially spent fuel stream to form a recycle stream, means for injecting the reformable gaseous fuel into the recycle stream, and means for circulating the recycle stream back to the fuel cells.
Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core
NASA Astrophysics Data System (ADS)
Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier
2014-06-01
As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, beside the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from IRPHEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, to compute local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tallies preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard
Fortescue, P.; Zumwalt, L.R.
1961-11-28
A fuel element was developed for a gas cooled nuclear reactor. The element is constructed in the form of a compacted fuel slug including carbides of fissionable material in some cases with a breeder material carbide and a moderator which slug is disposed in a canning jacket of relatively impermeable moderator material. Such canned fuel slugs are disposed in an elongated shell of moderator having greater gas permeability than the canning material wherefore application of reduced pressure to the space therebetween causes gas diffusing through the exterior shell to sweep fission products from the system. Integral fission product traps and/or exterior traps as well as a fission product monitoring system may be employed therewith. (AEC)
Lui, C.K.
1989-04-04
This patent describes a method of forming a fuel bundle of a nuclear reactor. The method consists of positioning the fuel rods in the bottom plate, positioning the tie rod in the bottom plate with the key passed through the receptacle to the underside of the bottom plate and, after the tie rod is so positioned, turning the tie rod so that the key is in engagement with the underside of the bottom plate. Thereafter the top plate is mounted in engagement with the fuel rods with the upper end of the tie rod extending through the opening in the top plate and extending above the top plate, and the tie rod is secured to the upper side of said top plate, thus simultaneously securing the key to the underside of the bottom plate.
Stochastic Optimal Control for Series Hybrid Electric Vehicles
Malikopoulos, Andreas
2013-01-01
Increasing demand for improving fuel economy and reducing emissions has stimulated significant research and investment in hybrid propulsion systems. In this paper, we address the problem of optimizing online the supervisory control in a series hybrid configuration by modeling its operation as a controlled Markov chain using the average cost criterion. We treat the stochastic optimal control problem as a dual constrained optimization problem. We show that the control policy that yields higher probability distribution to the states with low cost and lower probability distribution to the states with high cost is an optimal control policy, defined as an equilibrium control policy. We demonstrate the effectiveness of the proposed controller in a series hybrid configuration and compare it with a thermostat-type controller.
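The average-cost criterion for a controlled Markov chain can be illustrated with a toy computation (hypothetical two-state chain and costs, not the paper's series-hybrid model): the long-run average cost is the stationary distribution weighted by per-state costs, and a policy that concentrates probability on low-cost states achieves a lower average cost.

```python
import numpy as np

def average_cost(P, cost):
    """Long-run average cost of an ergodic Markov chain: the stationary
    distribution (left eigenvector of P for eigenvalue 1) dotted with
    the per-state costs."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    return float(pi @ cost)

cost = np.array([1.0, 5.0])                   # state 0 cheap, state 1 costly
P_good = np.array([[0.9, 0.1], [0.8, 0.2]])   # policy favoring the cheap state
P_bad  = np.array([[0.2, 0.8], [0.1, 0.9]])   # policy favoring the costly state
print(average_cost(P_good, cost), average_cost(P_bad, cost))  # ≈ 1.44 vs ≈ 4.56
```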
Howard, R.C.; Bokros, J.C.
1962-03-01
A fueled matrix containing uncombined carbon is designed for use in graphite-moderated gas-cooled reactors designed for operation at temperatures (about 1500 deg F) at which conventional metallic cladding would ordinarily undergo undesired carburization or physical degeneration. The invention comprises, broadly, a fuel body containing uncombined carbon, clad with a nickel alloy containing over about 28 percent by weight copper in the preferred embodiment. This element is supported in the passageways in close tolerance with the walls of unclad graphite moderator material. (AEC)
Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations
NASA Astrophysics Data System (ADS)
Hermes, Matthew; Hirata, So
2015-06-01
Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.
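The core computational pattern the abstract describes, evaluating a high-dimensional integral by Metropolis Monte Carlo using only values of the potential, can be sketched with a toy harmonic potential standing in for the PES (the actual MC-XVSCF/MC-XVMP2 integrands are more involved):

```python
import math
import random

def metropolis_expectation(v, beta, n_samples, step=1.0, burn=1000, seed=42):
    """Estimate <x^2> under the weight exp(-beta * V(x)) by Metropolis
    sampling; only *values* of V are needed, never its derivatives."""
    rng = random.Random(seed)
    x, vx = 0.0, v(0.0)
    acc = 0.0
    for i in range(burn + n_samples):
        y = x + rng.uniform(-step, step)      # propose a move
        vy = v(y)
        if vy <= vx or rng.random() < math.exp(-beta * (vy - vx)):
            x, vx = y, vy                     # Metropolis accept/reject
        if i >= burn:
            acc += x * x
    return acc / n_samples

V = lambda x: 0.5 * x * x   # toy harmonic potential standing in for a PES
print(metropolis_expectation(V, beta=1.0, n_samples=200_000))  # ≈ 1.0
```

Because each proposal needs only an independent evaluation of V at one point, this loop parallelizes trivially, which is the property the abstract emphasizes.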
Thomas, Philipp; Matuschek, Hannes; Grima, Ramon
2012-01-01
The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to-date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlations coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with
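The Stochastic Simulation Algorithm that iNA's analytical methods complement can be sketched with a minimal Gillespie direct-method loop for a birth-death reaction (rates are illustrative; this is not iNA's implementation):

```python
import random

def gillespie_birth_death(k_birth, k_death, t_end, seed=3):
    """Gillespie direct method for the birth-death network
    0 -> X (propensity k_birth), X -> 0 (propensity k_death * n)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        a1, a2 = k_birth, k_death * n         # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)              # time to next reaction
        if t > t_end:
            return n
        n += 1 if rng.random() * a0 < a1 else -1   # pick which reaction fires

# the Chemical Master Equation gives a Poisson stationary state,
# mean k_birth / k_death = 10
samples = [gillespie_birth_death(10.0, 1.0, 50.0, seed=s) for s in range(400)]
print(sum(samples) / len(samples))  # ≈ 10
```

The ensemble averaging over many such runs is exactly the cost that the system size expansion methods in iNA avoid.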
Structural mapping of Maxwell Montes
NASA Technical Reports Server (NTRS)
Keep, Myra; Hansen, Vicki L.
1993-01-01
Four sets of structures were mapped in the western and southern portions of Maxwell Montes. An early north-trending set of penetrative lineaments is cut by dominant, spaced ridges and paired valleys that trend northwest. To the south the ridges and valleys splay and graben form in the valleys. The spaced ridges and graben are cut by northeast-trending graben. The northwest-trending graben formed synchronously with or slightly later than the spaced ridges. Formation of the northeast-trending graben may have overlapped with that of the northwest-trending graben, but occurred in a spatially distinct area (regions of 2 deg slope). Graben formation, with northwest-southeast extension, may be related to gravity-sliding. Individually and collectively these structures are too small to support the immense topography of Maxwell, and are interpreted as parasitic features above a larger mass that supports the mountain belt.
MOX LTA Fuel Cycle Analyses: Nuclear and Radiation Safety
Pavlovitchev, A.M.
2001-09-28
Tasks of nuclear safety assurance for the storage and transport of fresh mixed uranium-plutonium fuel of the VVER-1000 reactor are considered in view of the introduction of 3 MOX LTAs into the core. The precision code MCU, which implements the Monte Carlo method, is used for the calculations.
Challenges of Monte Carlo Transport
Long, Alex Roberts
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite - Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-15
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is because the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we develop a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially at low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows are considered. For these test cases, we demonstrate that variance reduction based on parallel processes is very robust and effective.
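The correlated-process idea above, solving an auxiliary process with a known solution alongside the main one and sharing the same noise, can be illustrated with a minimal Python sketch. This is not the authors' Fokker–Planck scheme; the Ornstein–Uhlenbeck "main" process and Brownian "auxiliary" process below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths, sigma = 1.0, 200, 20_000, 0.5
dt = T / n_steps

# Main process: Ornstein-Uhlenbeck dX = -X dt + sigma dW, X0 = 1,
# so the target is E[X_T] = exp(-T).
# Auxiliary process with known mean: Y = sigma * W, so E[Y_T] = 0.
# Both are driven by the *same* Brownian increments, so they stay correlated.
X = np.ones(n_paths)
Y = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += -X * dt + sigma * dW
    Y += sigma * dW

plain = X        # direct estimator samples of E[X_T]
corr = X - Y     # correlated estimator: identical mean, smaller variance
```

Because E[Y_T] = 0 exactly, subtracting Y leaves the mean unbiased while the shared noise cancels much of the fluctuation, the same mechanism the paper exploits.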
Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects
NASA Technical Reports Server (NTRS)
Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip
2007-01-01
When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data is usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well characterized doses of a well studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing), and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze whether radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model, tracking average pre-malignant cell numbers, would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend
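The headline effect, repeated inactivation/repopulation cycles producing a cell-number distribution with variance much larger than the mean even under Poisson initiation, can be reproduced in a toy Monte Carlo. All parameters and the geometric clone-size law below are illustrative choices, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(6)

def treat(n_fractions=20, init_mean=0.5, survive=0.5, mean_clone=2.0,
          n_patients=5_000):
    """Toy IIP cycle per fraction: Poisson initiation of pre-malignant
    cells, binomial inactivation (cell killing), then repopulation in
    which each surviving cell founds a geometric-sized clone."""
    cells = np.zeros(n_patients, dtype=np.int64)
    for _ in range(n_fractions):
        cells += rng.poisson(init_mean, n_patients)            # initiation
        cells = rng.binomial(cells, survive)                   # inactivation
        cells = np.array([rng.geometric(1.0 / mean_clone, k).sum()
                          for k in cells], dtype=np.int64)     # repopulation
    return cells

cells = treat()
fano = cells.var() / cells.mean()   # >> 1: overdispersed, unlike a Poisson
```

With survival probability 0.5 and mean clone size 2, each cycle is a critical branching step, so the mean stays bounded while the variance accumulates cycle after cycle.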
Rare events in stochastic populations under bursty reproduction
NASA Astrophysics Data System (ADS)
Be'er, Shay; Assaf, Michael
2016-11-01
Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.
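The reaction-step-size noise at the heart of this work can be seen in a minimal compound-Poisson comparison (an illustrative sketch, not the paper's WKB analysis): bursty and non-bursty influx matched to the same mean, with the bursty case producing a much broader population distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
t, rate, mean_burst, n_runs = 10.0, 1.0, 4, 5_000

# Bursty influx: events at `rate`; each event adds K ~ Geometric(1/mean_burst)
n_events = rng.poisson(rate * t, n_runs)
bursty = np.array([rng.geometric(1.0 / mean_burst, k).sum() for k in n_events])

# Mean-matched non-bursty influx: single arrivals at rate * mean_burst
single = rng.poisson(rate * mean_burst * t, n_runs)
```

Both channels deliver the same mean number of individuals, but the bursty channel's variance is inflated by the second moment of the burst-size distribution, consistent with the broadened quasi-stationary distributions reported above.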
Stochastic thermodynamics for active matter
NASA Astrophysics Data System (ADS)
Speck, Thomas
2016-05-01
The theoretical understanding of active matter, which is driven out of equilibrium by directed motion, is still fragmentary and model oriented. Stochastic thermodynamics, on the other hand, is a comprehensive theoretical framework for driven systems that allows one to define fluctuating work and heat. We apply these definitions to active matter, assuming that dissipation can be modelled by effective non-conservative forces. We show that, through the work, conjugate extensive and intensive observables can be defined even in non-equilibrium steady states lacking a free energy. As an illustration, we derive the expressions for the pressure and interfacial tension of active Brownian particles. The latter becomes negative despite the observed stable phase separation. We discuss this apparent contradiction, highlighting the role of fluctuations, and we offer a tentative explanation.
Stochastic sensing through covalent interactions
Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen
2013-03-26
A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P_SH type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include change in electrical current, change in force, and change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.
Thermodynamics of stochastic Turing machines.
Strasberg, Philipp; Cerrillo, Javier; Schaller, Gernot; Brandes, Tobias
2015-10-01
In analogy to Brownian computers we explicitly show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine). Our models are discrete state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation. The resulting master equation, which describes a simple one-step process on an enormously large state space, allows us to thoroughly investigate the thermodynamics of computation for this situation. Especially in the stationary regime we can well approximate the master equation by a simple Fokker-Planck equation in one dimension. We then show that the entropy production rate at steady state can be made arbitrarily small, but the total (integrated) entropy production is finite and grows logarithmically with the number of computational steps.
Stochastic hyperfine interactions modeling library
NASA Astrophysics Data System (ADS)
Zacate, Matthew O.; Evenson, William E.
2011-04-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A can be neglected.
Program summary
Program title: SHIML
Catalogue identifier: AEIF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL 3
No. of lines in distributed program, including test data, etc.: 8224
No. of bytes in distributed program, including test data, etc.: 312 348
Distribution format: tar.gz
Programming language: C
Computer: Any
Operating system: LINUX, OS X
RAM: Varies
Classification: 7.4
External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4]
Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction. When
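The key linear-algebra step described above, finding a biorthonormal set of left and right eigenvectors of a complex non-Hermitian Blume-type matrix, can be sketched in a few lines of NumPy (a generic sketch, not the library's C routines; the 2x2 matrix is a toy stand-in):

```python
import numpy as np

def left_right_eig(B):
    """Eigen-decomposition of a (generally non-Hermitian) complex matrix:
    columns of V are right eigenvectors, rows of W are left eigenvectors,
    and the two sets are biorthonormal, W @ V = I."""
    vals, V = np.linalg.eig(B)
    W = np.linalg.inv(V)   # rows w_i satisfy w_i B = vals_i w_i
    return vals, V, W

# Toy stand-in for a 2x2 Blume matrix of a fluctuating hyperfine field
B = np.array([[-1.0 + 2.0j, 0.5],
              [0.3, -0.5 - 1.0j]])
vals, V, W = left_right_eig(B)
```

For a diagonalizable B = V Λ V⁻¹, the rows of V⁻¹ are left eigenvectors and biorthonormality holds by construction; production code like SHIML works instead from separate LAPACK left/right eigenvector computations and normalizes the pairs explicitly.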
Heuristic-biased stochastic sampling
Bresina, J.L.
1996-12-31
This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
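The HBSS family described above is easy to state in code. In this minimal sketch (function names and example bias functions are illustrative), candidates are ranked by the heuristic and rank r is selected with probability proportional to bias(r); a bias concentrated on rank 1 recovers greedy search, while a constant bias recovers completely random search.

```python
import random

def hbss_schedule(tasks, heuristic, bias, rng=None):
    """One HBSS pass: repeatedly rank the remaining candidates by the
    heuristic (best, i.e. lowest, first) and pick the candidate at
    rank r with probability proportional to bias(r)."""
    rng = rng or random.Random(0)
    remaining = list(tasks)
    schedule = []
    while remaining:
        ranked = sorted(remaining, key=heuristic)
        weights = [bias(r) for r in range(1, len(ranked) + 1)]
        pick = rng.random() * sum(weights)
        acc = 0.0
        for task, w in zip(ranked, weights):
            acc += w
            if pick <= acc:
                schedule.append(task)
                remaining.remove(task)
                break
    return schedule

# Bias concentrated on rank 1 -> pure greedy ordering by the heuristic
greedy = hbss_schedule(range(5), heuristic=lambda t: t,
                       bias=lambda r: 1.0 if r == 1 else 0.0)
```

Intermediate choices such as bias(r) = 1/r or an exponential decay interpolate between the two extremes, which is how the confidence in the heuristic is encoded.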
Multiscale Stochastic Simulation and Modeling
James Glimm; Xiaolin Li
2006-01-10
Acceleration driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high resolution methods of numerical simulation of two (or more) distinct fluids, continues with analytic analysis of these solutions, and the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.
Robust stochastic mine production scheduling
NASA Astrophysics Data System (ADS)
Kumral, Mustafa
2010-06-01
The production scheduling of open pit mines aims to determine the extraction sequence of blocks such that the net present value (NPV) of a mining project is maximized under capacity and access constraints. This sequencing has a significant effect on the profitability of the mining venture. However, given that the values of coefficients in the optimization procedure are obtained in a medium of sparse data and unknown future events, implementations based on deterministic models may lead to destructive consequences for the company. In this article, a robust stochastic optimization (RSO) approach is used to deal with mine production scheduling in a manner such that the solution is insensitive to changes in input data. The approach seeks a trade-off between optimality and feasibility. The model is demonstrated on a case study. The findings showed that the approach can be used in mine production scheduling problems efficiently.
Stochastic resonance in attention control
NASA Astrophysics Data System (ADS)
Kitajo, K.; Yamanaka, K.; Ward, L. M.; Yamamoto, Y.
2006-12-01
We investigated the beneficial role of noise in a human higher brain function, namely visual attention control. We asked subjects to detect a weak gray-level target inside a marker box either in the left or the right visual field. Signal detection performance was optimized by presenting a low level of randomly flickering gray-level noise between and outside the two possible target locations. Further, we found that an increase in eye movement (saccade) rate helped to compensate for the usual deterioration in detection performance at higher noise levels. To our knowledge, this is the first experimental evidence that noise can optimize a higher brain function which involves distinct brain regions above the level of primary sensory systems -- switching behavior between multi-stable attention states -- via the mechanism of stochastic resonance.
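The stochastic-resonance mechanism invoked here can be demonstrated with a toy threshold detector (a generic numerical sketch, unrelated to the experimental protocol above): a subthreshold sinusoid is best recovered from threshold crossings at an intermediate, not minimal, noise level.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 20 * np.pi, 20_000)
signal = 0.8 * np.sin(t)          # subthreshold: never crosses 1.0 on its own

def detection_corr(noise_sd):
    """Correlation between the weak signal and the binary threshold-crossing
    output when zero-mean noise of the given amplitude is added."""
    out = ((signal + rng.normal(0.0, noise_sd, t.size)) > 1.0).astype(float)
    return np.corrcoef(out, signal)[0, 1]

corr = {sd: detection_corr(sd) for sd in (0.1, 0.3, 3.0)}
```

With too little noise the threshold is almost never crossed; with too much, crossings are nearly random; in between, crossings cluster around the signal peaks and carry the most information, which is the signature of stochastic resonance.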
ERIC Educational Resources Information Center
Nash, J. Thomas
1983-01-01
Trends in and factors related to the nuclear industry and nuclear fuel production are discussed. Topics addressed include nuclear reactors, survival of the U.S. uranium industry, production costs, budget cuts by the Department of Energy and U.S. Geological survey for resource studies, mining, and research/development activities. (JN)
ERIC Educational Resources Information Center
Hawkins, M. D.
1973-01-01
Discusses the theories, construction, operation, types, and advantages of fuel cells developed by the American space programs. Indicates that the cell is an ideal small-scale power source characterized by its compactness, high efficiency, reliability, and freedom from polluting fumes. (CC)
ERIC Educational Resources Information Center
Stover, Del
1991-01-01
Tough new environmental laws, coupled with fluctuating oil prices, are likely to prompt hundreds of school systems to examine alternative fuels. Literature reviews and interviews with 45 government, education, and industry officials provided data for a comparative analysis of gasoline, diesel, natural gas, methanol, and propane. (MLF)
2009-06-11
[Slide-deck extraction residue; recoverable content:] Swedish Biofuels AB; cellulosic and algal feedstocks that are non-competitive with food material; traditional fuels; JP-8; "What Are Biofuels?": cellulose ("second generation"), triglycerides (fats, oils, "first generation").
Large scale stochastic spatio-temporal modelling with PCRaster
NASA Astrophysics Data System (ADS)
Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.
2013-04-01
PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and of subregions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use
Multiple Stochastic Point Processes in Gene Expression
NASA Astrophysics Data System (ADS)
Murugan, Rajamanickam
2008-04-01
We generalize the idea of multiple-stochasticity in chemical reaction systems to gene expression. Using a Chemical Langevin Equation approach we investigate how this multiple-stochasticity can influence the overall molecular number fluctuations. We show that the main sources of this multiple-stochasticity in gene expression could be the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes such as the site-specific DNA-protein interactions and therefore can be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in the case of a gene expression system, the variances (φ) introduced by the randomness in transcription and translation initiation times approximately scale with the degree of condensation (s) of DNA or mRNA as φ ∝ s⁻⁶. From the theoretical analysis of the Fano factor as well as the coefficient of variation associated with the protein number fluctuations, we predict that (2) unlike the singly-stochastic case, where the Fano factor has been shown to be a monotonous function of translation rate, in the case of multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that the multiple-stochastic processes can also be tuned to behave like singly-stochastic point processes by adjusting the rate parameters.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared memory version. An FDDI network of 6 HP9000/735 machines was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
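The reproducibility trick mentioned above, giving each history a deterministically advanced seed so that tallies do not depend on how histories are distributed over processors, can be sketched as follows (a generic Python sketch, not KENO-Va's actual generator; the "tally" is a toy):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def track_particle(history_index, base_seed=12345):
    """Each history gets its own stream, seeded deterministically from
    (base_seed, history_index), the advanced-seed idea, so the result
    cannot depend on which worker happens to run it."""
    rng = np.random.default_rng([base_seed, history_index])
    return rng.exponential(1.0)   # toy tally: free path to absorption

def run_generation(n_histories, n_workers):
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        # ex.map preserves order, so the summation order is also fixed
        return sum(ex.map(track_particle, range(n_histories)))

serial = run_generation(1_000, 1)
parallel = run_generation(1_000, 4)
```

Because both the per-history stream and the summation order are functions of the history index alone, the serial and four-worker tallies agree bitwise.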
Stochastic system identification in structural dynamics
Safak, Erdal
1988-01-01
Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
Permanence of Stochastic Lotka-Volterra Systems
NASA Astrophysics Data System (ADS)
Liu, Meng; Fan, Meng
2017-04-01
This paper proposes a new definition of permanence for stochastic population models, which overcomes some limitations and deficiencies of the existing ones. Then, we explore the permanence of two-dimensional stochastic Lotka-Volterra systems in a general setting, which models several different interactions between two species such as cooperation, competition, and predation. Sharp sufficient criteria are established with the help of the Lyapunov direct method and some new techniques. This study reveals that the stochastic noises play an essential role in permanence and determine whether or not the systems are permanent.
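A minimal Euler–Maruyama simulation of a two-species stochastic competitive Lotka–Volterra system (all parameters are illustrative, not taken from the paper) shows the kind of behavior the permanence criteria characterize: with weak multiplicative noise, both species fluctuate around the deterministic coexistence equilibrium instead of going extinct.

```python
import numpy as np

rng = np.random.default_rng(3)

# dx = x (r1 - a11 x - a12 y) dt + s1 x dW1
# dy = y (r2 - a21 x - a22 y) dt + s2 y dW2
r = np.array([1.0, 0.8])
a = np.array([[1.0, 0.3],
              [0.2, 1.0]])
s = np.array([0.1, 0.1])
dt, n_steps = 0.01, 20_000
z = np.array([0.5, 0.5])
path = np.empty((n_steps, 2))
for i in range(n_steps):
    drift = z * (r - a @ z)                       # competitive LV drift
    noise = s * z * rng.normal(0.0, np.sqrt(dt), 2)
    z = np.maximum(z + drift * dt + noise, 1e-12)  # keep populations positive
    path[i] = z
```

The deterministic equilibrium solves r = a z (about z ≈ (0.81, 0.64) here); with small s the sample path stays in a neighborhood of it, which is what permanence formalizes, while large noise can destroy this behavior.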
A stochastic subgrid model for sheared turbulence
NASA Astrophysics Data System (ADS)
Bertoglio, J. P.
A new subgrid model for homogeneous turbulence is proposed. The model is used in a method of Large Eddy Simulation coupled with an E.D.Q.N.M. prediction of the statistical properties of the small scales. The model is stochastic in order to allow a 'disaveraging' of the information provided by the E.D.Q.N.M. closure. It is based on stochastic amplitude equations for two-point closures. It allows backflow of energy from the small scales, introduces stochasticity into L.E.S., and is well adapted to nonisotropic fields. A few results are presented here.
Connecting deterministic and stochastic metapopulation models.
Barbour, A D; McVinish, R; Pollett, P K
2015-12-01
In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given.
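The convergence of the stochastic model to its deterministic counterpart as the number of patches grows can be illustrated with a toy Levins-type simulation (a hedged sketch: the discrete-time dynamics and parameters below are illustrative, not Hanski's incidence function model itself).

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(n_patches, c=0.4, e=0.1, steps=500):
    """Discrete-time Levins-type dynamics: each empty patch is colonized
    with probability c*p (p = currently occupied fraction) and each
    occupied patch goes extinct with probability e, per time step."""
    occ = np.ones(n_patches, dtype=bool)
    for _ in range(steps):
        p = occ.mean()
        colonize = rng.random(n_patches) < c * p
        extinct = rng.random(n_patches) < e
        occ = (occ & ~extinct) | (~occ & colonize)
    return occ.mean()

# Deterministic Levins equilibrium: c p(1-p) = e p  =>  p* = 1 - e/c = 0.75
p_hat = simulate(20_000)
```

With 20,000 patches, each coupled to all others through the occupied fraction, the occupancy settles near the deterministic fixed point p* = 0.75, with fluctuations of order n^{-1/2}, in line with the approximation result above.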
Large Deviations for Stochastic Flows of Diffeomorphisms
2007-01-01
be the unique solution of the ordinary differential equation ∂η_{s,t}(x)/∂t = b(η_{s,t}(x), t), η_{s,s}(x) = x, 0 ≤ s ≤ t ≤ 1. (5.2) Then it follows that...solving finite-dimensional Itô stochastic differential equations. More precisely, suppose b, f_i, i = 1, ..., m are functions from R^d × [0, T] to R^d...s, T]. This stochastic process is called the solution of Itô's stochastic differential equation based on the Brownian motion F. From [15, Theorem
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-09-15
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
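The multilevel variance reduction idea in ingredient (3), shifting most samples onto a cheap correlated approximation and correcting with a few samples of the small-variance difference, can be sketched in a two-level toy. The `high_fidelity`/`low_fidelity` functions are stand-ins for the HDG and reduced basis solvers, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):
    """Expensive model (stand-in for the high-order HDG solve)."""
    return np.sin(x) + 0.05 * x**2

def low_fidelity(x):
    """Cheap correlated surrogate (stand-in for the reduced basis model)."""
    return x - x**3 / 6 + 0.05 * x**2

# Two-level estimator of E[HF(X)], X ~ N(0,1), via the telescoping identity
#   E[HF] = E[LF] + E[HF - LF]:
# many cheap LF samples, few samples of the small-variance correction.
x_many = rng.standard_normal(200_000)
x_few = rng.standard_normal(5_000)
correction = high_fidelity(x_few) - low_fidelity(x_few)
estimate = low_fidelity(x_many).mean() + correction.mean()
# Exact value: E[sin X] = 0 and E[0.05 X^2] = 0.05, so E[HF(X)] = 0.05.
```

Because HF and LF are strongly correlated, the correction term has a much smaller variance than HF itself, so almost all of the sampling cost lands on the cheap model, the same cost-shifting mechanism described in the abstract.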
NASA Astrophysics Data System (ADS)
Laborde, S.; Calvi, A.
2012-10-01
This article describes some results of the study "DYNAMITED". The study is funded by the European Space Agency (ESA) and performed by a consortium of European industries and university, led by EADS Astrium Satellites. One of the main objectives of the study is to assess and quantify the uncertainty in the spacecraft sine vibration test data. For a number of reasons as for example robustness and confidence in the notching of the input spectra and validation of the finite element model, it is important to study the effect of the sources of uncertainty on the test data including the frequency response functions and the modal parameters. In particular the paper provides an overview on the estimation of the scatter on the spacecraft dynamic response due to identified sources of test uncertainties and the calculation of a "notched" sine test input spectrum based on a stochastic methodology. By means of Monte Carlo simulation, a stochastic cloud of the output of interest can be generated and this provides an estimate of the global error on the test results. The cloud is generated by characterizing the assumed sources of test uncertainties by parameters of the structure finite element model and by quantifying the scatter of the parameters. The uncertain parameters are the input random variables of the Monte Carlo simulation. Some results on the application of the methods to telecom spacecraft sine vibration tests are illustrated.
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational costs (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, rarely available in other approaches, yet crucial information such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
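The mechanism that makes saturation quantiles cheap here, pushing the time-of-flight distribution through a monotone analytical mapping, can be sketched with placeholders. Both the lognormal time-of-flight law and the `saturation` mapping below are hypothetical stand-ins; the actual Buckley–Leverett mapping is derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def saturation(tau):
    """Hypothetical monotone (decreasing) mapping from time of flight to
    water saturation at a fixed (x, t); a stand-in for the analytically
    derived Buckley-Leverett mapping."""
    return 1.0 / (1.0 + tau)

# Time-of-flight distribution induced by the random permeability/porosity
# fields; a lognormal is used here purely for illustration.
tau = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
s = saturation(tau)

# Monotone decreasing mapping => the p-quantile of s is the mapping applied
# to the (1-p)-quantile of tau, so saturation quantiles come almost for free.
p10_direct = np.quantile(s, 0.10)
p10_mapped = saturation(np.quantile(tau, 0.90))
```

This is why, once the time-of-flight distribution is available (even from conventional Monte Carlo), quantiles such as P10/P50/P90 and tail probabilities of the saturation follow directly from the mapping rather than from expensive flow simulations.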
Determination of IRT-2M fuel burnup by gamma spectrometry.
Koleška, Michal; Viererbl, Ladislav; Marek, Milan; Ernest, Jaroslav; Šunka, Michal; Vinš, Miroslav
2016-01-01
A spectrometric system was developed for evaluating spent fuel in the LVR-15 research reactor, which employs highly enriched (36%) IRT-2M-type fuel. Such a system allows the measurement of detailed fission product profiles. Within these measurements, nuclides such as (137)Cs, (134)Cs, (144)Ce, (106)Ru and (154)Eu may be detected in fuel assemblies with different cooling times varying between 1.67 and 7.53 years. Burnup calculations using the MCNPX Monte Carlo code showed good agreement with measurements, though some discrepancies were observed in certain regions. These discrepancies are attributed to the evaluation of irradiation history, reactor regulation pattern and buildup schemes.
A deterministic alternative to the full configuration interaction quantum Monte Carlo method.
Tubman, Norm M; Lee, Joonho; Takeshita, Tyler Y; Head-Gordon, Martin; Whaley, K Birgitta
2016-07-28
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2.
Stochastic modeling of deterioration in nuclear power plant components
NASA Astrophysics Data System (ADS)
Yuan, Xianxun
2007-12-01
heterogeneity of individual units and additive measurement errors. Another common way to model deterioration in civil engineering is to treat the rate of deterioration as a random variable. In the context of condition-based maintenance, the thesis shows that the random variable rate (RV) model is inadequate to incorporate temporal variability, because the deterioration along a specific sample path becomes deterministic. This distinction between the RV and GP models has profound implications for the optimization of maintenance strategies. The thesis presents detailed practical applications of the proposed models to feeder pipe systems and fuel channels in CANDU nuclear reactors. In summary, a careful consideration of the nature of the uncertainties associated with deterioration is important for credible life-cycle management of engineering systems. If the deterioration process is affected by temporal uncertainty, it is important to model it as a stochastic process.
Calibration, characterisation and Monte Carlo modelling of a fast-UNCL
NASA Astrophysics Data System (ADS)
Tagziria, Hamid; Bagi, Janos; Peerani, Paolo; Belian, Antony
2012-09-01
This paper describes the calibration, characterisation and Monte Carlo modelling of a new IAEA Uranium Neutron Collar (UNCL) for LWR fuel, which can be operated in both passive and active modes. It can employ either 35 3He tubes (in its active configuration) or 44 tubes at 10 atm pressure (in its passive configuration), and it can be operated in fast mode (with a Cd liner) as its efficiency is higher than that of the standard UNCL. Furthermore, it has an adjustable internal cavity which allows the measurement of fuel assemblies of varying sizes, such as WWER, PWR and BWR. It is intended to be used with Cd liners in active mode (with an AmLi interrogation source in place) by the inspectorate for the determination of the 235U content in fresh fuel assemblies, especially in cases where high concentrations of burnable poisons make accurate assays problematic. A campaign of measurements has been carried out at the JRC Performance Laboratories (PERLA) in Ispra (Italy) using various radionuclide neutron sources (252Cf, 241AmLi and PuGa) and our BWR and PWR reference assemblies, in order to calibrate and characterise the counter, assess its performance and determine its optimum operational parameters. Furthermore, the fast-UNCL has been extensively modelled at JRC using the Monte Carlo code MCNP-PTA, which simulates both the neutron transport and the coincidence electronics. The model has been validated against our measurements, which agreed well with the calculations. The WWER1000 fuel assembly, for which there are no representative reference materials for an adequate calibration of the counter, has also been modelled, and the response of the counter to this fuel assembly has been simulated. Subsequently, numerical calibration curves have been obtained for the above fuel assemblies in various modes (fast and thermal). The sensitivity of the counter to fuel rod substitution, as well as other important aspects and parameters of the fast-UNCL performance, have been
Stochastic Stability and Performance Robustness of Linear Multivariable Systems
NASA Technical Reports Server (NTRS)
Ryan, Laurie E.; Stengel, Robert F.
1990-01-01
Stochastic robustness, a simple technique used to estimate the robustness of linear, time-invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters is discussed. The application of stochastic performance robustness, and the relationship between performance objectives and design parameters, are demonstrated by means of an example. The results show stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
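The stochastic robustness technique described in this abstract, estimating stability robustness by Monte Carlo sampling of parameter uncertainty, can be illustrated with a minimal sketch. The closed-loop matrix and the Gaussian perturbation model below are hypothetical, chosen only to show the estimator:

```python
import numpy as np

def instability_probability(A_nominal, sigma, n_trials, rng):
    """Monte Carlo stochastic robustness: perturb the closed-loop matrix
    with random parameter errors and count the fraction of samples that
    have an eigenvalue in the right half plane (i.e. are unstable)."""
    n = A_nominal.shape[0]
    unstable = 0
    for _ in range(n_trials):
        A = A_nominal + sigma * rng.standard_normal((n, n))
        if np.max(np.linalg.eigvals(A).real) > 0.0:
            unstable += 1
    return unstable / n_trials

rng = np.random.default_rng(6)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # stable nominal closed loop
p_small = instability_probability(A, 0.1, 2000, rng)
p_large = instability_probability(A, 1.0, 2000, rng)
print(p_small, p_large)   # probability of instability grows with uncertainty
```

The estimated probability of instability is the robustness measure; relating how it changes with sigma back to the design parameters is the kind of analysis the paper discusses.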
Stochastic pump effect and geometric phases in dissipative and stochastic systems
Sinitsyn, Nikolai
2008-01-01
The success of Berry phases in quantum mechanics stimulated the study of similar phenomena in other areas of physics, including the theory of living cell locomotion and motion of patterns in nonlinear media. More recently, geometric phases have been applied to systems operating in a strongly stochastic environment, such as molecular motors. We discuss such geometric effects in purely classical dissipative stochastic systems and their role in the theory of the stochastic pump effect (SPE).
Monte Carlo simulations of intensity profiles for energetic particle propagation
NASA Astrophysics Data System (ADS)
Tautz, R. C.; Bolte, J.; Shalchi, A.
2016-02-01
Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
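The kernel-function device mentioned in the Methods, smearing each computational particle so that a modest particle count yields a smooth intensity profile, can be sketched as follows. The Gaussian kernel, bandwidth, and 1-D diffusion setup are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def kernel_intensity(positions, grid, width):
    """Smear each computational particle with a normalized Gaussian kernel
    so a modest particle count yields a continuous intensity profile."""
    d = grid[:, None] - positions[None, :]
    k = np.exp(-0.5 * (d / width) ** 2) / (width * np.sqrt(2.0 * np.pi))
    return k.sum(axis=1) / len(positions)

rng = np.random.default_rng(1)
kappa, t, n = 1.0, 2.0, 2000        # diffusion coefficient, time, particles
# particle positions after diffusive spreading from a point injection
positions = rng.normal(0.0, np.sqrt(2.0 * kappa * t), size=n)
grid = np.linspace(-8.0, 8.0, 81)
profile = kernel_intensity(positions, grid, width=0.4)

# analytic solution of the 1-D diffusion equation for the same injection
analytic = np.exp(-grid**2 / (4.0 * kappa * t)) / np.sqrt(4.0 * np.pi * kappa * t)
print(np.max(np.abs(profile - analytic)))   # small for this particle count
```

When the particles really are diffusive, as assumed here, the kernel-smoothed profile matches the diffusion solution; the paper's point is that deviations appear during the ballistic and subdiffusive phases.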
Monte Carlo role in radiobiological modelling of radiotherapy outcomes
NASA Astrophysics Data System (ADS)
El Naqa, Issam; Pater, Piotr; Seuntjens, Jan
2012-06-01
Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities, and they are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated as the expected gain in tumour control probability (TCP) relative to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism, which has limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, MC-based methods have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful modelling of radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the role of MC as a promising computational vehicle. We highlight the current challenges, issues and future potential of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.
Lattice Monte Carlo simulation of Galilei variant anomalous diffusion
Guo, Gang; Bittig, Arne; Uhrmacher, Adelinde
2015-05-01
The observation of an increasing number of anomalous diffusion phenomena motivates efforts to reveal the actual origin of such stochastic processes. When analytical solutions are difficult to obtain, or when it is necessary to track the trajectories of particles, lattice Monte Carlo (LMC) simulation has been shown to be particularly useful. To develop such an LMC simulation algorithm for Galilei variant anomalous diffusion, we derive explicit solutions for the conditional and unconditional first passage time (FPT) distributions with double absorbing barriers. Based on the theory of random walks on lattices and the FPT distributions, we propose an LMC simulation algorithm and prove that it reproduces both the mean and the mean-square displacement exactly in the long-time limit. However, the error introduced in the second moment of the displacement diverges according to a power law as the simulation time progresses. We give an explicit criterion for choosing a lattice step small enough to keep the error within a specified tolerance. We further validate the LMC simulation algorithm and confirm the theoretical error analysis through numerical simulations. The numerical results agree with our theoretical predictions very well.
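The basic ingredients, a lattice random walk, double absorbing barriers, and first passage times, can be illustrated with a minimal sketch for an ordinary (non-anomalous) walk, where the mean exit time is known exactly. The Galilei variant case treated in the paper requires the derived FPT distributions instead of simple +-1 steps:

```python
import numpy as np

def first_passage_times(n_walks, start, barrier, rng):
    """Unbiased +-1 lattice walk with absorbing barriers at 0 and
    `barrier`; returns the first passage (exit) time of each walk."""
    fpt = np.empty(n_walks, dtype=np.int64)
    for w in range(n_walks):
        x, t = start, 0
        while 0 < x < barrier:
            x += 2 * int(rng.integers(0, 2)) - 1
            t += 1
        fpt[w] = t
    return fpt

rng = np.random.default_rng(2)
N, k = 10, 5
times = first_passage_times(4000, k, N, rng)
# for an unbiased walk the mean exit time from site k is k*(N - k) steps
print(times.mean())   # close to 25 for k=5, N=10
```

Comparing the empirical FPT statistics against the exact result is the same kind of validation the paper performs for its anomalous-diffusion algorithm.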
NASA Astrophysics Data System (ADS)
Hertfelder, C.; Kümmerer, B.
2001-03-01
The mathematical model describing a light beam prepared in an arbitrary quantum optical state is a quasifree quantum stochastic process on the C* algebra of the canonical commutation relations. For such quantum stochastic processes the concept of stochastic states is introduced. Stochastic quantum states have a classical analog in the following sense: if the light beam is prepared in a stochastic state, one can construct a generalized classical stochastic process such that the distributions of the quantum observables and the classical random variables coincide. A sufficient algebraic condition for the stochasticity of a quantum state is formulated. The introduced formalism generalizes the Wigner representation from a single field mode to a continuum of modes. For the special case of a single field mode, the stochasticity condition provides a new criterion for the positivity of the Wigner function related to the given state. As an example, the quantized electromagnetic field in empty space at temperature T=0 is discussed. It turns out that the corresponding classical stochastic process is not white noise but colored noise with a linearly increasing spectrum.
Liu, Meng; Wang, Ke; Wu, Qiong
2011-09-01
Stochastic competitive models with pollution and without pollution are proposed and studied. For the first system with pollution, sufficient criteria for extinction, nonpersistence in the mean, weak persistence in the mean, strong persistence in the mean, and stochastic permanence are established. The threshold between weak persistence in the mean and extinction for each population is obtained. It is found that stochastic disturbance is favorable for the survival of one species and is unfavorable for the survival of the other species. For the second system with pollution, sufficient conditions for extinction and weak persistence are obtained. For the model without pollution, a partial stochastic competitive exclusion principle is derived.
Stochastic differential equation model to Prendiville processes
NASA Astrophysics Data System (ADS)
Granita; Bahar, Arifah
2015-10-01
The Prendiville process is a variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work starts from the forward Kolmogorov equation of the continuous time Markov chain, which is then formulated as a central-difference approximation. The approximation is then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process is obtained from the stochastic differential equation, from which the mean and variance functions of the Prendiville process follow easily.
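The CTMC-to-SDE approximation described here can be sketched with an Euler-Maruyama simulation. The birth intensity beta*(N-x) and death intensity delta*x below (which give the linearly decreasing net growth rate) are one common parameterization of the Prendiville process, used purely for illustration; for the resulting linear drift the mean is known in closed form, which the simulation should reproduce:

```python
import numpy as np

rng = np.random.default_rng(3)
beta, delta, N = 0.5, 0.5, 100       # birth/death intensities, state-space cap
x0, T, dt, paths = 20.0, 5.0, 0.01, 2000

# Euler-Maruyama for  dX = (beta*(N - X) - delta*X) dt
#                          + sqrt(beta*(N - X) + delta*X) dW
x = np.full(paths, x0)
for _ in range(int(T / dt)):
    drift = beta * (N - x) - delta * x
    diffusion = np.sqrt(np.clip(beta * (N - x) + delta * x, 0.0, None))
    x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(paths)
    x = np.clip(x, 0.0, N)           # keep paths inside the finite interval

# the drift is linear, so the mean is known in closed form
m_inf = beta * N / (beta + delta)
mean_exact = m_inf + (x0 - m_inf) * np.exp(-(beta + delta) * T)
print(x.mean(), mean_exact)
```

The agreement between the simulated and closed-form means is exactly the kind of check made possible by the explicit solution the paper derives.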
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to their stochastic nature, the assessment of these algorithms is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic, even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
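The bootstrap idea, estimating the sampling distribution of almost any statistic from a small number of runs by resampling with replacement, can be sketched as follows. The solver results are synthetic and the statistic (the median) is chosen only for illustration:

```python
import numpy as np

def bootstrap_stat(samples, stat, n_boot, rng):
    """Estimate the sampling distribution of `stat` by resampling the
    observed runs with replacement."""
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    return stat(samples[idx], axis=1)

rng = np.random.default_rng(4)
# hypothetical best objective values from 15 runs of a stochastic solver
runs = rng.normal(loc=1.0, scale=0.2, size=15)

boot = bootstrap_stat(runs, np.median, 5000, rng)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)   # 95% bootstrap percentile interval for the median
```

The same resampled distributions, computed per algorithm and per problem, are what feed the bootstrap performance profiles the paper proposes.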
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter presents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and a promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Communication: Embedded fragment stochastic density functional theory
Neuhauser, Daniel; Baer, Roi; Rabani, Eran
2014-07-28
We develop a method in which the electronic densities of small fragments determined by Kohn-Sham density functional theory (DFT) are embedded using stochastic DFT to form the exact density of the full system. The new method preserves the scaling and the simplicity of the stochastic DFT but cures the slow convergence that occurs when weakly coupled subsystems are treated. It overcomes the spurious charge fluctuations that impair the applications of the original stochastic DFT approach. We demonstrate the new approach on a fullerene dimer and on clusters of water molecules and show that the density of states and the total energy can be accurately described with a relatively small number of stochastic orbitals.
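The stochastic-orbital idea underlying stochastic DFT, replacing explicit sums over states by averages over random vectors, reduces in its simplest form to Hutchinson's stochastic trace estimator. The sketch below illustrates that estimator on a generic symmetric matrix; it is not the embedded-fragment method itself:

```python
import numpy as np

def stochastic_trace(A, n_vec, rng):
    """Hutchinson estimator: the average of chi^T A chi over random +-1
    vectors chi is an unbiased estimate of Tr(A)."""
    est = 0.0
    for _ in range(n_vec):
        chi = rng.choice((-1.0, 1.0), size=A.shape[0])
        est += chi @ A @ chi
    return est / n_vec

rng = np.random.default_rng(5)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T / n                  # symmetric positive semi-definite test matrix

est = stochastic_trace(A, 400, rng)
exact = np.trace(A)
print(est, exact)
```

The statistical error decays with the number of random vectors, which is why curing slow convergence for weakly coupled fragments, as the paper does, matters so much in practice.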
Stochastic resonance during a polymer translocation process
NASA Astrophysics Data System (ADS)
Mondal, Debasish; Muthukumar, Murugappan
We study the translocation of a flexible polymer in a confined geometry subjected to a time-periodic external drive, in order to explore stochastic resonance. We describe the equilibrium translocation process in terms of a Fokker-Planck description and use a discrete two-state model to describe the effect of the external driving force on the translocation dynamics. We observe that no stochastic resonance is possible if the associated free-energy barrier is purely entropic in nature. The polymer chain experiences a stochastic resonance effect only in the presence of an energy threshold arising from the polymer-pore interaction. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.
Extending stochastic network calculus to loss analysis.
Luo, Chao; Yu, Li; Zheng, Jun
2013-01-01
Loss is an important parameter of Quality of Service (QoS). Although stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees have mainly focused on delay and backlog. Some efforts have been made to analyse loss with deterministic network calculus, but there are few results extending stochastic network calculus to loss analysis. In this paper, we introduce a new parameter, named the loss factor, into stochastic network calculus and then derive the loss bound from the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor.
Stochastic structure formation in random media
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.
2016-01-01
Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.