Variational Approach to Monte Carlo Renormalization Group
NASA Astrophysics Data System (ADS)
Wu, Yantao; Car, Roberto
2017-12-01
We present a Monte Carlo method for computing the renormalized coupling constants and the critical exponents within renormalization group theory. The scheme, which derives from a variational principle, overcomes critical slowing down by means of a bias potential that renders the coarse-grained variables uncorrelated. The two-dimensional Ising model is used to illustrate the method.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
In the first part of this study, Monte Carlo simulations were applied to investigate how uncertainty in both input variables and response measurements propagates into model predictions for nasal spray product performance design of experiment (DOE) models, under the initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation yield smaller model coefficient standard deviations than those from regression methods, suggesting that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
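The general idea described in this abstract can be illustrated with a short, generic sketch. Note that the design, coefficient values, and noise levels below are hypothetical and are not taken from the authors' nasal-spray models; the sketch only shows how Monte Carlo resampling of noisy inputs and responses yields coefficient standard deviations.

```python
# Minimal sketch (assumed linear DOE model, illustrative noise levels): propagate
# input and response measurement noise by Monte Carlo and inspect coefficient spread.
import numpy as np

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.5, 0.8])                       # intercept + two factor effects
levels = (-1.0, 0.0, 1.0)
X = np.array([[1.0, a, b] for a in levels for b in levels])  # 3x3 full-factorial design
y_true = X @ true_beta

n_sims, sd_x, sd_y = 5000, 0.05, 0.10                        # assumed measurement-noise levels
betas = np.empty((n_sims, 3))
for i in range(n_sims):
    X_noisy = X.copy()
    X_noisy[:, 1:] += rng.normal(0.0, sd_x, size=X[:, 1:].shape)   # input-variable error
    y_noisy = y_true + rng.normal(0.0, sd_y, size=y_true.shape)    # response measurement error
    betas[i], *_ = np.linalg.lstsq(X_noisy, y_noisy, rcond=None)   # refit the DOE model

print("Monte Carlo coefficient standard deviations:", betas.std(axis=0))
```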
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
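A standard way to draw correlated multivariate normal samples of the kind discussed here is a Cholesky factorization of the covariance matrix. The sketch below uses invented means, standard deviations, and a subjectively assessed correlation purely for illustration.

```python
# Sketch: sample two correlated normal variables via Cholesky factorization.
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([100.0, 50.0])            # hypothetical means of two correlated quantities
sd = np.array([15.0, 8.0])
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])             # subjectively assessed correlation
cov = np.outer(sd, sd) * corr

L = np.linalg.cholesky(cov)               # factor the covariance matrix
z = rng.standard_normal((10_000, 2))      # independent standard normals
samples = mean + z @ L.T                  # correlated draws

print("sample correlation matrix:\n", np.corrcoef(samples, rowvar=False))
```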
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
Magnetic properties of checkerboard lattice: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.; Hamedoun, M.; Benyoussef, A.
2017-12-01
The magnetic properties of a ferrimagnetic mixed-spin Ising model on the checkerboard lattice are studied using Monte Carlo simulations. The variation of total magnetization and magnetic susceptibility with the crystal field has been established. We obtain a transition from an ordered to a disordered phase at a critical value of the physical variables. The reduced transition temperature is obtained for different exchange interactions. The magnetic hysteresis cycles have been established, and multiple hysteresis cycles are obtained in the checkerboard lattice. The ferrimagnetic mixed-spin Ising model on the checkerboard lattice is very interesting from the experimental point of view. Mixed-spin systems have many technological applications, in domains such as opto-electronics, memory, nanomedicine and nano-biological systems. The obtained results show that the crystal field induces long-range spin-spin correlations even below the reduced transition temperature.
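For readers unfamiliar with the simulation technique used in such studies, the sketch below shows a standard single-spin-flip Metropolis Monte Carlo loop for a plain spin-1/2 ferromagnetic Ising model on a square lattice. It is only a toy illustration of the sampling method; it does not implement the authors' mixed-spin ferrimagnetic checkerboard model or the crystal-field term.

```python
# Toy Metropolis Monte Carlo for a spin-1/2 Ising model (illustrative parameters, k_B = 1).
import numpy as np

rng = np.random.default_rng(2)
L, T, J = 16, 2.0, 1.0                       # lattice size, temperature, exchange coupling
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb      # energy change for flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):                         # equilibration sweeps
    sweep(spins)
mags = []
for _ in range(500):                         # measurement sweeps
    sweep(spins)
    mags.append(abs(spins.mean()))
print("mean |magnetization| per site:", np.mean(mags))
```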
NASA Technical Reports Server (NTRS)
Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.
2014-01-01
This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
Ronald E. McRoberts; Veronica C. Lessard
2001-01-01
Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
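The comparison made in this abstract can be sketched generically. The snippet below contrasts a simple Rosenblueth-style two-point estimate (model evaluated at mean plus/minus one standard deviation for every variable, 2^n points with equal weights for uncorrelated symmetric inputs) with brute-force Monte Carlo for an invented nonlinear response; the means, coefficients of variation, and model are hypothetical, not the groundwater flow model of the paper.

```python
# Sketch: two-point estimate vs Monte Carlo for an assumed two-variable model.
from itertools import product
import numpy as np

def model(S, K):                     # illustrative nonlinear response (hypothetical)
    return K / S

mu = np.array([1e-3, 5.0])           # assumed means (e.g., storage coefficient, conductivity)
cv = np.array([0.10, 0.10])          # assumed coefficients of variation
sd = mu * cv

# Two-point estimate: 2^n model evaluations at mean +/- one standard deviation.
points = [model(mu[0] + s0 * sd[0], mu[1] + s1 * sd[1])
          for s0, s1 in product((-1, 1), repeat=2)]
pe_mean, pe_sd = np.mean(points), np.std(points)

rng = np.random.default_rng(3)       # brute-force Monte Carlo for comparison
S = rng.normal(mu[0], sd[0], 100_000)
K = rng.normal(mu[1], sd[1], 100_000)
mc = model(S, K)
print(f"two-point: mean={pe_mean:.1f}, sd={pe_sd:.1f}")
print(f"Monte Carlo: mean={mc.mean():.1f}, sd={mc.std():.1f}")
```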
Solid-propellant rocket motor ballistic performance variation analyses
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1975-01-01
Results are presented of research aimed at improving the assessment of off-nominal internal ballistic performance, including tailoff and thrust imbalance, of two large solid-rocket motors (SRMs) firing in parallel. Previous analyses using the Monte Carlo technique were refined to permit evaluation of the effects of radial and circumferential propellant temperature gradients. Sample evaluations of the effect of the temperature gradients are presented. A separate theoretical investigation of the effect of strain rate on the burning rate of propellant indicates that the thermoelastic coupling may cause substantial variations in burning rate during highly transient operating conditions. The Monte Carlo approach was also modified to permit evaluation of the effects on performance of variation in the characteristics between lots of propellants and other materials. This permits the variabilities for the total SRM population to be determined. A sample case shows, however, that the effect of these between-lot variations on thrust imbalance within pairs of SRMs is minor in comparison to the effect of the within-lot variations. The revised Monte Carlo and design analysis computer programs are presented, along with instructions, including format requirements for preparation of input data, and illustrative examples.
NASA Astrophysics Data System (ADS)
Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca
2017-07-01
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily based on coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
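The standard MLMC estimator summarized above (coarsest-grid expectation plus expectations of level-to-level differences) can be written in a few lines. In the sketch below the "grid solve" is a cheap stand-in function whose discretization error shrinks with level; it illustrates the MLMC telescoping sum only, not the multilevel control variate (MLCV) extension proposed in the paper.

```python
# Sketch of the MLMC telescoping estimator on a toy "solver".
import numpy as np

rng = np.random.default_rng(4)

def solve(omega, level):
    # Stand-in for a PDE solve on grid `level`: discretization error shrinks
    # geometrically with level (purely illustrative).
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3.0 * omega)

def mlmc_estimate(n_per_level):
    total = 0.0
    for level, n in enumerate(n_per_level):
        omega = rng.uniform(0.0, np.pi, size=n)          # random inputs for this level
        if level == 0:
            total += solve(omega, 0).mean()              # coarsest-grid expectation
        else:
            total += (solve(omega, level) - solve(omega, level - 1)).mean()  # correction
    return total

# Fewer samples on finer (more expensive) levels, since the differences are less variable.
print("MLMC estimate:", mlmc_estimate([20_000, 5_000, 1_000, 200]))
```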
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This avoids computing a considerable number of derivatives, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
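The Monte Carlo error-propagation application mentioned here is easy to sketch: sample the measurement vector from its distribution, push each sample through the nonlinear transformation, and take the sample mean and covariance of the results. The mean vector, covariance matrix, and transformation below are invented for illustration.

```python
# Sketch: Monte Carlo propagation of a nonlinearly transformed random vector.
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([10.0, 0.5])                        # assumed expectation of the measurements
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])                    # assumed covariance matrix

def transform(x):                                 # nonlinear function of the measurements
    return np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])

samples = rng.multivariate_normal(mu, cov, size=50_000)
y = np.apply_along_axis(transform, 1, samples)

print("Monte Carlo expectation:", y.mean(axis=0))
print("Monte Carlo covariance:\n", np.cov(y, rowvar=False))
```

No derivatives of the transformation are needed, which is the efficiency argument made in the abstract.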
NASA Astrophysics Data System (ADS)
Umezawa, Naoto; Tsuneyuki, Shinji; Ohno, Takahisa; Shiraishi, Kenji; Chikyow, Toyohiro
2005-03-01
The transcorrelated (TC) method is a useful approach to optimize the Jastrow-Slater-type many-body wave function F D. The basic idea of the TC method [1] is based on the similarity transformation of a many-body Hamiltonian H with respect to the Jastrow factor F, H_TC = F^{-1} H F, in order to incorporate the correlation effect into H_TC. Both F and D are optimized by minimizing the variance sigma^2 = Integral |H_TC D - E D|^2 d^{3N}x. The optimization of F is implemented by variational Monte Carlo calculation, and D is determined by the TC self-consistent-field equation for the one-body wave functions phi_mu(x), which is derived from the functional derivative of sigma^2 with respect to phi_mu(x). In this talk, we will present the results given by the transcorrelated variational Monte Carlo (TC-VMC) method for the ground state [2] and the excited states of atoms [3]. [1] S. F. Boys and N. C. Handy, Proc. Roy. Soc. A 309, 209; 310, 43; 310, 63; 311, 309 (1969). [2] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 119, 10015 (2003). [3] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 121, 7070 (2004).
Lebel, Etienne P; Paunonen, Sampo V
2011-04-01
Implicit measures have contributed to important insights in almost every area of psychology. However, various issues and challenges remain concerning their use, one of which is their considerable variation in reliability, with many implicit measures having questionable reliability. The goal of the present investigation was to examine an overlooked consequence of this liability with respect to replication, when such implicit measures are used as dependent variables in experimental studies. Using a Monte Carlo simulation, the authors demonstrate that a higher level of unreliability in such dependent variables is associated with substantially lower levels of replicability. The results imply that this overlooked consequence can have far-reaching repercussions for the development of a cumulative science. The authors recommend the routine assessment and reporting of the reliability of implicit measures and also urge the improvement of implicit measures with low reliability.
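The core simulation logic described in this abstract can be reproduced in miniature: add measurement error to a latent group difference so that the dependent variable has a chosen reliability, run many simulated experiments, and record how often the effect replicates at p < .05. The sample size, effect size, and reliabilities below are illustrative assumptions, not the authors' simulation settings.

```python
# Sketch: how unreliability of a dependent measure lowers replication rates.
import numpy as np

rng = np.random.default_rng(6)

def replication_rate(reliability, n=40, d=0.5, n_studies=5000):
    """Fraction of two-group experiments (true latent effect d) reaching p < .05
    when the dependent measure has the given reliability."""
    error_sd = np.sqrt(1.0 / reliability - 1.0)       # noise chosen so the reliability holds
    hits = 0
    for _ in range(n_studies):
        g1 = rng.normal(0.0, 1.0, n) + rng.normal(0.0, error_sd, n)
        g2 = rng.normal(d,   1.0, n) + rng.normal(0.0, error_sd, n)
        t = (g2.mean() - g1.mean()) / np.sqrt(g1.var(ddof=1) / n + g2.var(ddof=1) / n)
        hits += abs(t) > 1.98                          # approximate two-sided .05 criterion
    return hits / n_studies

for rel in (0.9, 0.7, 0.5, 0.3):
    print(f"reliability {rel:.1f}: replication rate ~ {replication_rate(rel):.2f}")
```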
Variability Analysis of MOS Differential Amplifier
NASA Astrophysics Data System (ADS)
Aoki, Masakazu; Seto, Kenji; Yamawaki, Taizo; Tanaka, Satoshi
Variation characteristics of a MOS differential amplifier are evaluated by using concise statistical model parameters for SPICE simulation. We find that the variation in the differential-mode gain, Adm, induced by the current-factor variation, Δβ0, in the Id-variation of the differential MOS transistors is more than one order of magnitude larger than that induced by the threshold voltage variation, ΔVth, which has been regarded as a major factor for circuit variations in SoC's (2). The results obtained by the Monte Carlo simulations are verified by theoretical analysis combined with sensitivity analysis, which clarifies the specific device-parameter dependences of the variation in Adm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo
Zhao, Luning; Neuscamman, Eric
2017-05-17
We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators' optical band gaps.
Sanabria, Eduardo Alfredo; Quiroga, Lorena Beatriz; Martino, Adolfo Ludovico
2012-03-01
We studied the variation of thermal parameters of Odontophrynus occidentalis between seasons (wet and dry) in the Monte desert (Argentina). In the field we measured body temperatures, microhabitat temperatures, and operative temperatures, while in the laboratory we measured the selected body temperatures. Our results show a change in the thermal parameters of O. occidentalis that is related to environmental constraints of their thermal niche. Environmental thermal constraints are present in both seasons (dry and wet), producing variations in the thermal parameters studied. Apparently because of these environmental restrictions, the toads in nature always show body temperatures below the set point. Acclimatization is an advantage for toads because it allows them to bring their body temperatures to the set point more frequently. The selected body temperature has seasonal intraindividual variability. These variations can be due to the thermo-sensitivity of the toads and the life histories of individuals that limit their allocation and acquisition of resources. Possibly the range of variation found in selected body temperature is a consequence of the thermal environmental variation along the year. Such variations of thermal parameters are commonly found in desert environments and in nocturnal ectotherms. The plasticity of selected body temperature allows O. occidentalis to have longer periods of activity for foraging and reproduction, while maintaining reasonably high performance at different temperatures. The plasticity in seasonal variation of the thermal parameters has been poorly studied, and is greatly advantageous to desert species during changes in both seasonal and daily temperature, as these environments are known for their high environmental variability. © 2012 WILEY PERIODICALS, INC.
[Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].
Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin
2018-04-01
Due to global climate change and frequent human activities in recent years, the purely stochastic components of a hydrological sequence are mixed with one or several variation ingredients, including jump, trend, period and dependency. It is urgently needed to clarify which indices should be used to quantify the degree of their variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had a jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had a periodic component, only the Bartels statistic could detect the variation of the sequence. When the sequence had a dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For the four variation types, both the Hurst variability and the Bartels variability increased as the variation range increased. Thus, they could be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature of all stations showed an upward trend or jump, indicating that the entire basin has experienced warming in recent years and that the temperature variability in the upper and lower reaches was much higher. This case study showed the practicability of the proposed method.
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have encountered growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS UNIX Library/Standalone uses the Latin Hypercube Sampling method (LHS) to generate samples. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
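The stratified-sampling-and-random-pairing procedure described above can be written in a few lines. The sketch below is a generic LHS implementation with illustrative target distributions; it does not use or reproduce the LHS UNIX Library/Standalone code itself.

```python
# Sketch: basic Latin Hypercube Sampling with random column pairing.
import numpy as np
from scipy.stats import norm, lognorm

def latin_hypercube(n_samples, n_vars, rng):
    """Split [0, 1) into n_samples equal-probability strata per variable, draw one
    point per stratum, then pair the columns by independent random permutations."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(7)
u = latin_hypercube(10, 2, rng)
# Map the uniform strata onto the desired input distributions (illustrative choices).
x1 = norm.ppf(u[:, 0], loc=5.0, scale=1.0)
x2 = lognorm.ppf(u[:, 1], s=0.5, scale=2.0)
print(np.column_stack([x1, x2]))
```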
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
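The post-processing idea can be illustrated with a generic reweighting sketch: record one tally contribution per source particle together with the sampled source variables, then weight each record by the ratio of the new source density to the sampling density. The fake "tally file", energy range, and spectra below are assumptions for illustration only; this is not the authors' BNCT workflow or code.

```python
# Sketch: re-evaluating a tally for a different source spectrum from recorded per-particle data.
import numpy as np

rng = np.random.default_rng(8)

# Pretend per-particle records from one run with a flat sampling source in energy.
n = 200_000
energy = rng.uniform(0.0, 10.0, n)              # sampled source energy (arbitrary units)
tally = np.exp(-0.3 * energy) * rng.random(n)   # hypothetical per-particle tally contribution

def reweighted_mean(tally, energy, new_pdf, old_pdf=lambda e: np.full_like(e, 0.1)):
    """Estimate the tally under a different source spectrum without re-running
    transport: weight each recorded particle by new_pdf(E) / old_pdf(E)."""
    w = new_pdf(energy) / old_pdf(energy)
    return np.mean(w * tally)

# Example: a peaked source spectrum (approximately normalized on [0, 10]).
peaked = lambda e: np.exp(-0.5 * ((e - 2.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
print("flat source  :", np.mean(tally))
print("peaked source:", reweighted_mean(tally, energy, peaked))
```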
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability for many materials, particularly brittle and quasibrittle materials. For example, the coefficient of variation can reach 100% or even more. The lifetime distribution is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability
Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.
2012-01-01
Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbit, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
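The reliability question studied in this abstract can be probed with a small simulation: give each simulated subject a true mean and a true trial-to-trial standard deviation, generate two parallel sets of trials, compute the individual standard deviation (ISD) on each set, and correlate the two. The population parameters and numbers of trials below are illustrative assumptions, not the authors' simulation design.

```python
# Sketch: split-replication reliability of individual standard deviations (ISDs).
import numpy as np

rng = np.random.default_rng(9)

def isd_reliability(n_subjects=200, n_trials=20, n_reps=2):
    true_mean = rng.normal(0.0, 1.0, n_subjects)
    true_sd = rng.uniform(0.5, 1.5, n_subjects)         # each subject's true trial-to-trial IIV
    isds = np.empty((n_reps, n_subjects))
    for r in range(n_reps):
        trials = rng.normal(true_mean[:, None], true_sd[:, None], (n_subjects, n_trials))
        isds[r] = trials.std(axis=1, ddof=1)            # ISD on this set of occasions
    return np.corrcoef(isds[0], isds[1])[0, 1]          # correlation of parallel ISDs

for n_trials in (5, 10, 20, 50):
    rs = [isd_reliability(n_trials=n_trials) for _ in range(50)]
    print(f"{n_trials:3d} trials: ISD reliability ~ {np.mean(rs):.2f}")
```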
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
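For a chain with a small, enumerable state space the Monte Carlo idea is easy to demonstrate: run many independent copies for t steps, form the empirical state distribution, and compute half the L1 distance to the stationary distribution. The three-state chain below is a toy example, not the phylogenetic tree space application of the paper, and no GPU acceleration is shown.

```python
# Sketch: Monte Carlo estimate of total variation distance to stationarity for a toy chain.
import numpy as np

rng = np.random.default_rng(10)
P = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()                                      # stationary distribution

def tv_at_time(t, n_chains=50_000, start=0):
    states = np.full(n_chains, start)
    for _ in range(t):
        u = rng.random(n_chains)
        cum = np.cumsum(P[states], axis=1)
        states = (u[:, None] < cum).argmax(axis=1)      # next state for every chain
    emp = np.bincount(states, minlength=3) / n_chains
    return 0.5 * np.abs(emp - pi).sum()                 # empirical total variation distance

for t in (1, 5, 10, 20):
    print(f"t = {t:2d}: estimated TV distance = {tv_at_time(t):.3f}")
```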
NASA Astrophysics Data System (ADS)
Thomas, Yoann; Mazurié, Joseph; Alunno-Bruscia, Marianne; Bacher, Cédric; Bouget, Jean-François; Gohin, Francis; Pouvreau, Stéphane; Struski, Caroline
2011-11-01
In order to assess the potential of various marine ecosystems for shellfish aquaculture and to evaluate their carrying capacities, there is a need to clarify the response of exploited species to environmental variations using robust ecophysiological models and available environmental data. For a large range of applications and comparison purposes, a non-specific approach based on 'generic' individual growth models offers many advantages. In this context, we simulated the response of blue mussel ( Mytilus edulis L.) to the spatio-temporal fluctuations of the environment in Mont Saint-Michel Bay (North Brittany) by forcing a generic growth model based on Dynamic Energy Budgets with satellite-derived environmental data (i.e. temperature and food). After a calibration step based on data from mussel growth surveys, the model was applied over nine years on a large area covering the entire bay. These simulations provide an evaluation of the spatio-temporal variability in mussel growth and also show the ability of the DEB model to integrate satellite-derived data and to predict spatial and temporal growth variability of mussels. Observed seasonal, inter-annual and spatial growth variations are well simulated. The large-scale application highlights the strong link between food and mussel growth. The methodology described in this study may be considered as a suitable approach to account for environmental effects (food and temperature variations) on physiological responses (growth and reproduction) of filter feeders in varying environments. Such physiological responses may then be useful for evaluating the suitability of coastal ecosystems for shellfish aquaculture.
Ground-state calculations of confined hydrogen molecule H2 using variational Monte Carlo method
NASA Astrophysics Data System (ADS)
Doma, S. B.; El-Gammal, F. N.; Amer, A. A.
2018-07-01
The variational Monte Carlo method is used to evaluate the ground-state energy of a confined hydrogen molecule H2. We consider the case of a hydrogen molecule confined by a hard prolate spheroidal cavity when the nuclear positions are clamped at the foci (on-focus case). The case of off-focus nuclei, in which the two nuclei are not clamped to the foci, is also studied. This case provides flexibility for the treatment of the molecular properties by selecting an arbitrary size and shape for the confining spheroidal box. A simple chemical analysis concerning the catalytic role of an enzyme is investigated. An accurate trial wave function depending on many variational parameters is used for this purpose. The obtained results for the case of clamped foci exhibit good accuracy compared with the high-precision variational data presented previously. In the case of off-focus nuclei, an improvement is obtained with respect to the most recent uncorrelated results existing in the literature.
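For readers unfamiliar with variational Monte Carlo, the basic workflow (sample configurations from the square of a trial wave function by Metropolis moves, average the local energy, vary the parameters) can be shown on a much simpler problem. The sketch below treats a one-dimensional harmonic oscillator with a Gaussian trial function, not the confined H2 molecule of the paper; with psi_alpha(x) = exp(-alpha x^2) the local energy is E_L(x) = alpha + x^2 (1/2 - 2 alpha^2), exactly 1/2 at alpha = 1/2.

```python
# Toy variational Monte Carlo: 1D harmonic oscillator (hbar = m = omega = 1).
import numpy as np

rng = np.random.default_rng(11)

def vmc_energy(alpha, n_steps=100_000, step=1.0):
    x, energies = 0.0, []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > n_steps // 10:                            # discard equilibration
            energies.append(alpha + x**2 * (0.5 - 2.0 * alpha**2))
    return np.mean(energies)

for alpha in (0.3, 0.4, 0.5, 0.6):
    print(f"alpha = {alpha:.1f}: <E> ~ {vmc_energy(alpha):.3f}")   # minimum at alpha = 0.5
```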
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
Variational Monte Carlo study of pentaquark states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
NASA Astrophysics Data System (ADS)
Kamibayashi, Yuki; Miura, Shinichi
2016-08-01
In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth-order approximation of a density operator. To reveal the parameter dependence of various physical quantities, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth-order approximation for the oscillators. We then apply our methods to realistic systems such as a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
Emmetropisation and the aetiology of refractive errors
Flitcroft, D I
2014-01-01
The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
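One simple way to see how the within-subject coefficient of variation behaves in a one-way random-effects layout is to simulate replicate measurements and apply a basic estimator (square root of the within-subject mean square divided by the grand mean). The subject means, number of replicates, and true WSCV below are illustrative assumptions, not the paper's MRI example or its maximum likelihood estimator.

```python
# Sketch: Monte Carlo check of a simple within-subject CV estimator.
import numpy as np

rng = np.random.default_rng(12)

def wscv(y):
    """Rows = subjects, columns = replicate measurements."""
    return np.sqrt(y.var(axis=1, ddof=1).mean()) / y.mean()

n_subjects, n_reps, true_wscv = 30, 3, 0.08
def simulate():
    means = rng.normal(100.0, 15.0, n_subjects)                     # hypothetical true levels
    return rng.normal(means[:, None], true_wscv * means[:, None],   # error SD proportional to level
                      (n_subjects, n_reps))

estimates = [wscv(simulate()) for _ in range(2000)]
print(f"true WSCV = {true_wscv:.3f}, mean estimate = {np.mean(estimates):.3f}, "
      f"SD = {np.std(estimates):.3f}")
```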
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compare well with some of the existing deterministic and probabilistic methods and are found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
Monte Carlo investigation of thrust imbalance of solid rocket motor pairs
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1976-01-01
The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distributions such as average, dispersion, and shape parameters. Many dispersion and shape parameters are unbound parameters, i.e. equations that do not require the calculation of the average. Unbound parameters are computed with a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices, which were proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, where we set out new techniques and methods that provide a huge improvement in the efficiency of the selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project concerning the analysis of period search methods.
Constancy and asynchrony of Osmoderma eremita populations in tree hollows.
Ranius, Thomas
2001-01-01
A species-rich beetle fauna is associated with old, hollow trees. Many of these species are regarded as endangered, but there is little understanding of the population structure and extinction risks of these species. In this study I show that one of the most endangered beetles, Osmoderma eremita, has a population structure which conforms to that of a metapopulation, with each tree possibly sustaining a local population. This was revealed by performing a mark-release-recapture experiment in 26 trees over a 5-year period. The spatial variability between trees was much greater than temporal variability between years. The population size was on average 11 adults tree⁻¹ year⁻¹, but differed widely between trees (0-85 adults tree⁻¹ year⁻¹). The population size in each tree varied moderately between years [mean coefficient of variation (C.V.)=0.51], but more widely than from sampling errors alone (P=0.008, Monte Carlo simulation). The population size variability in all trees combined, however, was not larger than expected from sampling errors alone in a constant population (C.V.=0.15, P=0.335, Monte Carlo simulation). Thus, the fluctuations of local populations cancel each other out when they are added together. This pattern can arise only when the fluctuations occur asynchronously between trees. The asynchrony of the fluctuations justifies the assumption usually made in metapopulation modelling, that local populations within a metapopulation fluctuate independently of one another. The asynchrony might greatly increase persistence time at the metapopulation level (per stand), compared to the local population level (per tree). The total population size of O. eremita in the study area was estimated to be 3,900 individuals. Other localities sustaining O. eremita are smaller in area, and most of these must be enlarged to allow long-term metapopulation persistence and to satisfy genetic considerations of the O. eremita populations.
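The Monte Carlo test described here, comparing the observed coefficient of variation against the scatter expected from sampling error alone, can be sketched as follows. This is a schematic reconstruction under a Poisson sampling assumption; the yearly counts and the number of simulations are illustrative, not the study's data.

    import numpy as np

    rng = np.random.default_rng(1)
    observed = np.array([9, 14, 7, 12, 13])      # hypothetical yearly totals
    cv_obs = observed.std(ddof=1) / observed.mean()

    # Null model: a constant population whose yearly counts vary only
    # through Poisson sampling error around the observed mean.
    n_sim = 10000
    sims = rng.poisson(observed.mean(), size=(n_sim, observed.size))
    cv_null = sims.std(axis=1, ddof=1) / sims.mean(axis=1)

    p_value = np.mean(cv_null >= cv_obs)   # fraction of null CVs at least as large
    print(f"observed CV = {cv_obs:.2f}, Monte Carlo p = {p_value:.3f}")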
ERIC Educational Resources Information Center
Leite, Walter L.; Zuo, Youzhen
2011-01-01
Among the many methods currently available for estimating latent variable interactions, the unconstrained approach is attractive to applied researchers because of its relatively easy implementation with any structural equation modeling (SEM) software. Using a Monte Carlo simulation study, we extended and evaluated the unconstrained approach to…
The influence of synaptic size on AMPA receptor activation: a Monte Carlo model.
Montes, Jesus; Peña, Jose M; DeFelipe, Javier; Herreras, Oscar; Merchan-Perez, Angel
2015-01-01
Physiological and electron microscope studies have shown that synapses are functionally and morphologically heterogeneous and that variations in size of synaptic junctions are related to characteristics such as release probability and density of postsynaptic AMPA receptors. The present article focuses on how these morphological variations impact synaptic transmission. We based our study on Monte Carlo computational simulations of simplified model synapses whose morphological features have been extracted from hundreds of actual synaptic junctions reconstructed by three-dimensional electron microscopy. We have examined the effects that parameters such as synaptic size or density of AMPA receptors have on the number of receptors that open after release of a single synaptic vesicle. Our results indicate that the maximum number of receptors that will open after the release of a single synaptic vesicle may show a ten-fold variation in the whole population of synapses. When individual synapses are considered, there is also stochastic variability, which is maximal in small synapses with low numbers of receptors. The number of postsynaptic receptors and the size of the synaptic junction are the most influential parameters, while the packing density of receptors or the concentration of extrasynaptic transporters have little or no influence on the opening of AMPA receptors. PMID:26107874
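A toy calculation helps convey why stochastic variability is largest in small synapses. The sketch below uses a simple binomial opening model with a hypothetical opening probability, which is far cruder than the particle-based Monte Carlo simulations used in the study.

    import numpy as np

    rng = np.random.default_rng(2)
    p_open = 0.4            # hypothetical opening probability per receptor
    trials = 20000          # vesicle-release events per synapse size

    for n_receptors in (10, 50, 200):
        opened = rng.binomial(n_receptors, p_open, size=trials)
        cv = opened.std(ddof=1) / opened.mean()
        print(f"N = {n_receptors:4d}: mean opened = {opened.mean():6.1f}, CV = {cv:.2f}")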
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
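For contrast with the event-driven scheme described above, a standard rejection-free kinetic Monte Carlo step, which does assume exponentially distributed waiting times, can be sketched as follows; the rates are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def kmc_step(rates):
        # One rejection-free kinetic Monte Carlo move.
        # rates: array of transition rates out of the current state.
        # Returns (chosen transition index, time increment).
        total = rates.sum()
        dt = rng.exponential(1.0 / total)          # exponential waiting time
        idx = rng.choice(len(rates), p=rates / total)
        return idx, dt

    rates = np.array([0.5, 1.5, 0.2])              # illustrative transition rates
    t, state_log = 0.0, []
    for _ in range(5):
        idx, dt = kmc_step(rates)
        t += dt
        state_log.append((t, idx))
    print(state_log)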
NASA Astrophysics Data System (ADS)
Lossa, Geoffrey; Deblecker, Olivier; Grève, Zacharie De
2018-05-01
In this work, we highlight the influence of the material uncertainties (magnetic permeability, electric conductivity of a Mn-Zn ferrite core, and electric permittivity of wire insulation) on the RLC parameters of a wound inductor extracted with the finite element method. To that end, the finite element method is embedded in a Monte Carlo simulation. We show that treating these material properties as real random variables leads to significant variations in the distributions of the RLC parameters.
Rotation to a Partially Specified Target Matrix in Exploratory Factor Analysis: How Many Targets?
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2013-01-01
The purpose of this study was to explore the influence of the number of targets specified on the quality of exploratory factor analysis solutions with a complex underlying structure and incomplete substantive measurement theory. Three Monte Carlo studies were performed based on the ratio of the number of observed variables to the number of…
A Variational Monte Carlo Approach to Atomic Structure
ERIC Educational Resources Information Center
Davis, Stephen L.
2007-01-01
The practicality and usefulness of variational Monte Carlo calculations in atomic structure are demonstrated. It is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, l-dependence of the orbital energies, and singlet-triplet energy splitting and ionization energy trends in atomic structure theory.
Allen, Bruce C; Hack, C Eric; Clewell, Harvey J
2007-08-01
A Bayesian approach, implemented using Markov Chain Monte Carlo (MCMC) analysis, was applied with a physiologically-based pharmacokinetic (PBPK) model of methylmercury (MeHg) to evaluate the variability of MeHg exposure in women of childbearing age in the U.S. population. The analysis made use of the newly available National Health and Nutrition Survey (NHANES) blood and hair mercury concentration data for women of age 16-49 years (sample size, 1,582). Bayesian analysis was performed to estimate the population variability in MeHg exposure (daily ingestion rate) implied by the variation in blood and hair concentrations of mercury in the NHANES database. The measured variability in the NHANES blood and hair data represents the result of a process that includes interindividual variation in exposure to MeHg and interindividual variation in the pharmacokinetics (distribution, clearance) of MeHg. The PBPK model includes a number of pharmacokinetic parameters (e.g., tissue volumes, partition coefficients, rate constants for metabolism and elimination) that can vary from individual to individual within the subpopulation of interest. Using MCMC analysis, it was possible to combine prior distributions of the PBPK model parameters with the NHANES blood and hair data, as well as with kinetic data from controlled human exposures to MeHg, to derive posterior distributions that refine the estimates of both the population exposure distribution and the pharmacokinetic parameters. In general, based on the populations surveyed by NHANES, the results of the MCMC analysis indicate that a small fraction, less than 1%, of the U.S. population of women of childbearing age may have mercury exposures greater than the EPA RfD for MeHg of 0.1 microg/kg/day, and that there are few, if any, exposures greater than the ATSDR MRL of 0.3 microg/kg/day. The analysis also indicates that typical exposures may be greater than previously estimated from food consumption surveys, but that the variability in exposure within the population of U.S. women of childbearing age may be less than previously assumed.
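A minimal random-walk Metropolis sketch, shown below, illustrates how a prior on a population exposure parameter can be combined with biomarker data to obtain a posterior. The one-parameter dose-to-blood relationship, the synthetic data, and all numerical values are placeholders, not the PBPK model or NHANES data used in the study.

    import numpy as np

    rng = np.random.default_rng(4)
    blood_hg = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # synthetic biomarker data (ug/L)

    def log_post(log_dose):
        # Prior: lognormal on daily intake (placeholder hyperparameters)
        lp = -0.5 * ((log_dose - np.log(0.05)) / 1.0) ** 2
        # Likelihood: blood level proportional to intake, with lognormal error
        pred = np.log(20.0) + log_dose                         # hypothetical kinetic factor
        lp += -0.5 * np.sum(((np.log(blood_hg) - pred) / 0.5) ** 2)
        return lp

    chain, x = [], np.log(0.05)
    for _ in range(5000):
        prop = x + rng.normal(0.0, 0.1)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    print("posterior median intake (ug/kg/day):", np.exp(np.median(chain[1000:])))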
Climate variation explains a third of global crop yield variability
Ray, Deepak K.; Gerber, James S.; MacDonald, Graham K.; West, Paul C.
2015-01-01
Many studies have examined the role of mean climate change in agriculture, but an understanding of the influence of inter-annual climate variations on crop yields in different regions remains elusive. We use detailed crop statistics time series for ~13,500 political units to examine how recent climate variability led to variations in maize, rice, wheat and soybean crop yields worldwide. While some areas show no significant influence of climate variability, in substantial areas of the global breadbaskets, >60% of the yield variability can be explained by climate variability. Globally, climate variability accounts for roughly a third (~32–39%) of the observed yield variability. Our study uniquely illustrates spatial patterns in the relationship between climate variability and crop yield variability, highlighting where variations in temperature, precipitation or their interaction explain yield variability. We discuss key drivers for the observed variations to target further research and policy interventions geared towards buffering future crop production from climate variability. PMID:25609225
Auxiliary-field quantum Monte Carlo simulations of neutron matter in chiral effective field theory.
Wlazłowski, G; Holt, J W; Moroz, S; Bulgac, A; Roche, K J
2014-10-31
We present variational Monte Carlo calculations of the neutron matter equation of state using chiral nuclear forces. The ground-state wave function of neutron matter, containing nonperturbative many-body correlations, is obtained from auxiliary-field quantum Monte Carlo simulations of up to about 340 neutrons interacting on a 10³ discretized lattice. The evolution Hamiltonian is chosen to be attractive and spin independent in order to avoid the fermion sign problem and is constructed to best reproduce broad features of the chiral nuclear force. This is facilitated by choosing a lattice spacing of 1.5 fm, corresponding to a momentum-space cutoff of Λ=414 MeV/c, a resolution scale at which strongly repulsive features of nuclear two-body forces are suppressed. Differences between the evolution potential and the full chiral nuclear interaction (Entem and Machleidt Λ=414 MeV [L. Coraggio et al., Phys. Rev. C 87, 014322 (2013).
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. The variability of each selected fault parameter is described by a truncated normal distribution, characterized by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values and the parameter uncertainty maps determine the sensitivity of hazard assessment to the variability of every logic-tree branch. These branches of the logic tree, analyzed through the Monte Carlo approach, are maximum magnitudes, fault length, fault width, fault dip and slip rates. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity of each parameter to overall variability is determined by varying each fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
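The sampling strategy described, truncated normal draws for each fault parameter with overall versus one-at-a-time variability, might be sketched as below. The hazard calculation is replaced by a placeholder function and the parameter values are illustrative.

    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(5)

    def trunc_normal(mean, sd, n, nsig=2.0):
        # Truncated normal within +/- nsig standard deviations of the mean
        return truncnorm.rvs(-nsig, nsig, loc=mean, scale=sd, size=n, random_state=rng)

    def pga_475yr(mmax, slip_rate):
        # Placeholder for the full hazard computation at one site
        return 0.1 * mmax + 0.5 * slip_rate

    n_sim = 200
    base = {"mmax": 6.8, "slip_rate": 0.4}          # illustrative means
    sd = {"mmax": 0.2, "slip_rate": 0.1}

    # Overall variability: vary all parameters simultaneously
    pga_all = pga_475yr(trunc_normal(base["mmax"], sd["mmax"], n_sim),
                        trunc_normal(base["slip_rate"], sd["slip_rate"], n_sim))

    # Sensitivity: vary one parameter, fix the others at their means
    pga_mmax = pga_475yr(trunc_normal(base["mmax"], sd["mmax"], n_sim), base["slip_rate"])
    print("CoV overall:", pga_all.std() / pga_all.mean(),
          "CoV from Mmax alone:", pga_mmax.std() / pga_mmax.mean())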
Probabilistic Fiber Composite Micromechanics
NASA Technical Reports Server (NTRS)
Stock, Thomas A.
1996-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intra-ply level, and the related effects of these on composite properties.
Rossi, Marco; Stockman, Gert-Jan; Rogier, Hendrik; Vande Ginste, Dries
2016-01-01
The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna’s variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500. PMID:27447632
NASA Astrophysics Data System (ADS)
Yu, Huiling; Liang, Hao; Lin, Xue; Zhang, Yizhuo
2018-04-01
A nondestructive methodology is proposed to determine the modulus of elasticity (MOE) of Fraxinus mandschurica samples by using near-infrared (NIR) spectroscopy. The test data consisted of 150 NIR absorption spectra of the wood samples obtained using an NIR spectrometer over the wavelength range of 900 to 1900 nm. To eliminate high-frequency noise and systematic baseline variations, Savitzky-Golay convolution combined with standard normal variate and detrending transformation were applied as data pretreatment methods. Uninformative variable elimination (UVE), improved by the evolutionary Monte Carlo (EMC) algorithm and the successive projections algorithm (SPA), selected three characteristic variables from the full set of 117 variables. The predictive ability of the models was evaluated in terms of the root-mean-square error of prediction (RMSEP) and the coefficient of determination (Rp2) in the prediction set. In comparison with the predicted results of all the models established in the experiments, UVE-EMC-SPA-LS-SVM presented the best results with the smallest RMSEP of 0.652 and the highest Rp2 of 0.887. Thus, it is feasible to accurately determine the MOE of F. mandschurica using NIR spectroscopy.
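The two figures of merit quoted above are computed on the prediction set as in the sketch below; the measured and predicted MOE arrays are placeholders.

    import numpy as np

    y_true = np.array([10.2, 11.5, 9.8, 12.1, 10.9])   # measured MOE (GPa), illustrative
    y_pred = np.array([10.0, 11.9, 9.5, 12.4, 10.6])   # model predictions, illustrative

    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rp2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    print(f"RMSEP = {rmsep:.3f}, Rp2 = {rp2:.3f}")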
Event-Based control of depth of hypnosis in anesthesia.
Merigo, Luca; Beschi, Manuel; Padula, Fabrizio; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio
2017-08-01
In this paper, we propose the use of an event-based control strategy for the closed-loop control of the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. A new event generator with high noise-filtering properties is employed in addition to a PIDPlus controller. The tuning of the parameters is performed off-line using genetic algorithms on a given data set of patients. The effectiveness and robustness of the method are verified in simulation by implementing a Monte Carlo method to address the intra-patient and inter-patient variability. A comparison with a standard PID control structure shows that the event-based control system achieves a reduction of the total variation of the manipulated variable of 93% in the induction phase and of 95% in the maintenance phase. The use of event-based automatic control in anesthesia yields a fast induction phase with bounded overshoot and an acceptable disturbance rejection. A comparison with a standard PID control structure shows that the technique effectively mimics the behavior of the anesthesiologist by providing a significant decrement of the total variation of the manipulated variable. Copyright © 2017 Elsevier B.V. All rights reserved.
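The reduction figures refer to the total variation of the manipulated variable, i.e. the sum of absolute increments of the control signal. A minimal computation with synthetic signals is sketched below; the rounding step is only a crude stand-in for a send-on-delta event generator.

    import numpy as np

    def total_variation(u):
        # Sum of absolute changes of the control signal between samples
        return np.sum(np.abs(np.diff(u)))

    t = np.linspace(0.0, 10.0, 1001)
    rng = np.random.default_rng(6)
    u_pid = 2.0 + 0.5 * np.sin(3.0 * t) + 0.05 * rng.normal(size=t.size)
    u_event = np.round(u_pid, 1)   # crude quantized/held signal standing in for event-based updates

    print("TV (continuous updates):", total_variation(u_pid))
    print("TV (event-like updates):", total_variation(u_event))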
NASA Astrophysics Data System (ADS)
Cranmer, S. R.; Woolsey, L. N.
2013-12-01
Despite many years of study, the basic physical processes responsible for producing the solar wind are not known -- or at least not universally agreed upon. There are many proposed solutions to the intertwined problems of coronal heating, wind acceleration, and particle energization, and the real problem is how to choose between them. Confirming that any one proposed mechanism is acting in the heliosphere is difficult, not only because measurements are limited, but also because many of the suggested processes act on a huge range of spatial scales (from centimeters to astronomical units) with complex feedback effects that are not yet understood. This presentation will attempt to summarize the outstanding questions in our understanding of the gradual energization of protons, electrons, and heavy ions in the solar wind. The focus will be on the collisionless dissipation of turbulent fluctuations that originate at the solar surface, and how the Turbulent Dissipation Challenge can help identify the dominant physical processes that transfer energy to the particles. We will also discuss the importance of making the best use of in-situ and remote-sensing measurements that probe the highly variable corona and heliosphere. There is key information in the variability that sometimes gets ignored when theorists attempt to model the properties of well-known "mean states" (i.e., fast wind streams from polar coronal holes). Instead, it could be a more convincing test for models to reproduce the full statistical ensemble of plasma/field states observed at a given place in the heliosphere. As an example of this, we will present preliminary results of a Monte Carlo model that aims to reproduce the full distribution of variations in the proton temperature anisotropy and plasma beta measured in the solar wind at 1 AU.
Efficiency versus bias: the role of distributional parameters in count contingent behaviour models
Joseph Englin; Arwin Pang; Thomas Holmes
2011-01-01
One of the challenges facing many applications of non-market valuations is to find data with enough variation in the variable(s) of interest to estimate econometrically their effects on the quantity demanded. A solution to this problem was the introduction of stated preference surveys. These surveys can introduce variation into variables where there is no natural...
NASA Astrophysics Data System (ADS)
Carpenter, Matthew H.; Jernigan, J. G.
2007-05-01
We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters proportional to the number of photons, therefore a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte-Carlo" method, requires a new approach for quantifying errors as compared to common analysis methods for fitting models of low dimensionality. This presentation will explore the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
Time dependent variation of carrying capacity of prestressed precast beam
NASA Astrophysics Data System (ADS)
Le, Tuan D.; Konečný, Petr; Matečková, Pavlína
2018-04-01
The article deals with the evaluation of the time-dependent carrying capacity of a precast concrete element. Variation of the resistance is an inherent property of laboratory as well as in-situ members. Thus the specification of the highest possible laboratory sample resistance is important for the evaluation of laboratory experiments with respect to the loading capabilities of the test machine. The ultimate capacity is evaluated through the bending moment resistance of a simply supported prestressed concrete beam. A probabilistic assessment is applied, considering the scatter of the random variables of concrete compressive strength and effective height of the cross section. The Monte Carlo simulation technique is used to investigate the performance of the cross section of the beam under changes of the tendons' positions and the compressive strength of concrete.
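A heavily simplified sketch of such a probabilistic assessment treats the compressive strength and the effective depth as random variables and evaluates the bending resistance with a textbook rectangular stress block. All sectional values below are hypothetical and the resistance formula is a generic simplification, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100000

    # Random variables (illustrative means and scatter)
    f_c = rng.normal(45e6, 4e6, n)        # concrete compressive strength (Pa)
    d   = rng.normal(0.55, 0.01, n)       # effective depth of tendons (m)

    # Deterministic quantities (illustrative)
    A_p, f_p, b = 1.0e-3, 1.4e9, 0.3      # tendon area (m^2), tendon stress (Pa), width (m)

    a = A_p * f_p / (0.8 * f_c * b)       # depth of the rectangular stress block (m)
    M_R = A_p * f_p * (d - a / 2.0)       # bending moment resistance (N*m)

    print("mean M_R [kNm]:", M_R.mean() / 1e3, " 5% fractile [kNm]:", np.percentile(M_R, 5) / 1e3)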
Slob, Wout
2006-07-01
Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
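The proposed two-step approach might be sketched as follows: a parametric consumption model (frequency and amount) supplies the consumption side, and Monte Carlo sampling combines it with concentration variability. The distributions and the limit value are placeholders.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 200000

    # Step 1 (parametric consumption model, fitted elsewhere): an individual consumes
    # the food on a given day with probability p, and if so in a lognormal amount (g).
    p_consume = 0.3
    amount = rng.lognormal(mean=np.log(150.0), sigma=0.5, size=n)
    consumed = rng.uniform(size=n) < p_consume

    # Step 2: combine with the variation in residue concentration (mg/kg).
    conc = rng.lognormal(mean=np.log(0.02), sigma=0.8, size=n)
    intake = np.where(consumed, amount / 1000.0 * conc, 0.0)     # mg/day

    limit = 0.02                                                 # hypothetical acute limit (mg/day)
    print("fraction of person-days above the limit:", np.mean(intake > limit))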
Efficient inference for genetic association studies with multiple outcomes.
Ruffieux, Helene; Davison, Anthony C; Hager, Jorg; Irincheeva, Irina
2017-10-01
Combined inference for heterogeneous high-dimensional data is critical in modern biology, where clinical and various kinds of molecular data may be available from a single study. Classical genetic association studies regress a single clinical outcome on many genetic variants one by one, but there is an increasing demand for joint analysis of many molecular outcomes and genetic variants in order to unravel functional interactions. Unfortunately, most existing approaches to joint modeling are either too simplistic to be powerful or are impracticable for computational reasons. Inspired by Richardson and others (2010, Bayesian Statistics 9), we consider a sparse multivariate regression model that allows simultaneous selection of predictors and associated responses. As Markov chain Monte Carlo (MCMC) inference on such models can be prohibitively slow when the number of genetic variants exceeds a few thousand, we propose a variational inference approach which produces posterior information very close to that of MCMC inference, at a much reduced computational cost. Extensive numerical experiments show that our approach outperforms popular variable selection methods and tailored Bayesian procedures, dealing within hours with problems involving hundreds of thousands of genetic variants and tens to hundreds of clinical or molecular outcomes. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Estimating Divergence Parameters With Small Samples From a Large Number of Loci
Wang, Yong; Hey, Jody
2010-01-01
Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are often used in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [ McCarty, J.; Parrinello, M. J. Chem. Phys. 2017 , 147 , 204109 ]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all the six dihedral angles are biased, we extract much better collective variables able to describe in exquisite detail the protein complex free energy surface in a low dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and from the introduction of a new basis set to describe the dihedral angles arrangement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Luning; Neuscamman, Eric
We present a modification to variational Monte Carlo’s linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators’ optical band gaps.
Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.
2008-01-01
Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time-consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
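The core of the methodology, drawing small populations from a large Weibull-distributed parent and watching the scatter of the estimated L10 life shrink with sample size, can be sketched as below; the Weibull shape and scale values are illustrative rather than the AL6061 parameters.

    import numpy as np

    rng = np.random.default_rng(9)
    shape, scale = 2.0, 1.0e6          # illustrative Weibull parameters (cycles)
    n_trials = 2000

    for n_specimens in (10, 35, 100):
        # Draw repeated test populations and estimate L10 (10th percentile life) from each
        lives = scale * rng.weibull(shape, size=(n_trials, n_specimens))
        l10 = np.percentile(lives, 10, axis=1)
        spread = (l10.max() - l10.min()) / np.median(l10)
        print(f"n = {n_specimens:4d}: relative spread of L10 estimates = {spread:.2f}")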
Quantum Monte Carlo calculations of weak transitions in A = 6 – 10 nuclei
Pastore, S.; Baroni, A.; Carlson, J.; ...
2018-02-26
Ab initio calculations of the Gamow-Teller (GT) matrix elements in the β decays of ⁶He and ¹⁰C and electron captures in ⁷Be are carried out using both variational and Green's function Monte Carlo wave functions obtained from the Argonne v18 two-nucleon and Illinois-7 three-nucleon interactions, and axial many-body currents derived from either meson-exchange phenomenology or chiral effective field theory. The agreement with experimental data is excellent for the electron captures in ⁷Be, while theory overestimates the ⁶He and ¹⁰C data by ~2% and ~10%, respectively. We show that for these systems correlations in the nuclear wave functions are crucial to explain the data, while many-body currents increase the one-body GT contributions by ~2-3%. These findings suggest that the longstanding gA problem, i.e., the systematic overprediction (~20% in A ≤ 18 nuclei) of GT matrix elements in shell-model calculations, may be resolved, at least partially, by correlation effects.
A variational Monte Carlo study of different spin configurations of electron-hole bilayer
NASA Astrophysics Data System (ADS)
Sharma, Rajesh O.; Saini, L. K.; Bahuguna, Bhagwati Prasad
2018-05-01
We report quantum Monte Carlo results for a mass-asymmetric electron-hole bilayer (EHBL) system with different spin configurations. In particular, we apply a variational Monte Carlo method to estimate the ground-state energy, condensate fraction and pair-correlation function at fixed density rs = 5 and interlayer distance d = 1 a.u. We find that the spin configuration of the EHBL system which consists of only up-electrons in one layer and down-holes in the other, i.e. a ferromagnetic arrangement within layers and an anti-ferromagnetic one across the layers, is more stable than the other spin configurations considered in this study.
Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.
1997-01-01
The properties of ceramic matrix composites (CMCs) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMCs. In this process, two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte Carlo simulation technique, and the agreement between the two solutions is excellent, as shown via select examples.
Zhang, Di; Savandi, Ali S.; Demarco, John J.; Cagnon, Chris H.; Angel, Erin; Turner, Adam C.; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-01-01
The larger coverage afforded by wider z-axis beams in multidetector CT (MDCT) creates larger cone angles and greater beam divergence, which results in substantial surface dose variation for helical and contiguous axial scans. This study evaluates the variation of absorbed radiation dose in both cylindrical and anthropomorphic phantoms when performing helical or contiguous axial scans. The approach used here was to perform Monte Carlo simulations of a 64 slice MDCT. Simulations were performed with different radiation profiles (simulated beam widths) for a given collimation setting (nominal beam width) and for different pitch values and tube start angles. The magnitude of variation at the surface was evaluated under four different conditions: (a) a homogeneous CTDI phantom with different combinations of pitch and simulated beam widths, (b) a heterogeneous anthropomorphic phantom with one measured beam collimation and various pitch values, (c) a homogeneous CTDI phantom with fixed beam collimation and pitch, but with different tube start angles, and (d) pitch values that should minimize variations of surface dose—evaluated for both homogeneous and heterogeneous phantoms. For the CTDI phantom simulations, peripheral dose patterns showed variation with percent ripple as high as 65% when pitch is 1.5 and simulated beam width is equal to the nominal collimation. For the anterior surface dose on an anthropomorphic phantom, the percent ripple was as high as 40% when the pitch is 1.5 and simulated beam width is equal to the measured beam width. Low pitch values were shown to cause beam overlaps which created new peaks. Different x-ray tube start angles create shifts of the peripheral dose profiles. The start angle simulations showed that for a given table position, the surface dose could vary dramatically with minimum values that were 40% of the peak when all conditions are held constant except for the start angle. The last group of simulations showed that an “ideal” pitch value can be determined which reduces surface dose variations, but this pitch value must take into account the measured beam width. These results reveal the complexity of estimating surface dose and demonstrate a range of dose variability at surface positions for both homogeneous cylindrical and heterogeneous anthropomorphic phantoms. These findings have potential implications for small-sized dosimeter measurements in phantoms, such as with TLDs or small Farmer chambers. PMID:19378763
Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory
NASA Astrophysics Data System (ADS)
Heittmann, Arne; Noll, Tobias G.
2013-07-01
A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS, which are based on the electrochemical metallization effect (ECM), this variability is, to some extent, caused by discrete events such as electrochemical reactions, which occur on the atomic scale and at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit, an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations in a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
Max-Moerbeck, W.; Hovatta, T.; Richards, J. L.; ...
2014-09-22
In order to determine the location of the gamma-ray emission site in blazars, we investigate the time-domain relationship between their radio and gamma-ray emission. Light-curves for the brightest detected blazars from the first 3 years of the mission of the Fermi Gamma-ray Space Telescope are cross-correlated with 4 years of 15 GHz observations from the OVRO 40-m monitoring program. The large sample and long light-curve duration enable us to carry out a statistically robust analysis of the significance of the cross-correlations, which is investigated using Monte Carlo simulations including the uneven sampling and noise properties of the light-curves. Modeling the light-curves as red noise processes with power-law power spectral densities, we find that only one of 41 sources with high quality data in both bands shows correlations with significance larger than 3σ (AO0235+164), with only two more exceeding even 2.25σ (PKS 1502+106 and B2 2308+34). Additionally, we find correlated variability in Mrk 421 when including a strong flare that occurred in July-September 2012. These results demonstrate very clearly the difficulty of measuring statistically robust multiwavelength correlations and the care needed when comparing light-curves even when many years of data are used. This should serve as a caution. In all four sources the radio variations lag the gamma-ray variations, suggesting that the gamma-ray emission originates upstream of the radio emission. Continuous simultaneous monitoring over a longer time period is required to obtain high significance levels in cross-correlations between gamma-ray and radio variability in most blazars.
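The significance assessment described, cross-correlating two light curves and calibrating the peak against uncorrelated red-noise simulations, can be sketched as below. The generator is a simplified Timmer and Koenig construction on an even grid, ignoring the uneven sampling and noise terms that the study explicitly models, and the 'observed' pair is itself simulated here as a stand-in for real data.

    import numpy as np

    rng = np.random.default_rng(10)
    n, beta = 512, 2.0                      # points per light curve, PSD slope

    def rednoise(n, beta):
        # Simplified Timmer & Koenig generator: random phases, power-law amplitudes
        freqs = np.fft.rfftfreq(n, d=1.0)
        amp = np.zeros_like(freqs)
        amp[1:] = freqs[1:] ** (-beta / 2.0)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
        x = np.fft.irfft(amp * np.exp(1j * phases), n)
        return (x - x.mean()) / x.std()

    def peak_ccf(a, b):
        # Peak of the cross-correlation over all lags (series are standardized)
        return np.max(np.correlate(a, b, mode="full")) / len(a)

    observed = peak_ccf(rednoise(n, beta), rednoise(n, beta))   # stand-in for radio vs gamma-ray
    null = np.array([peak_ccf(rednoise(n, beta), rednoise(n, beta)) for _ in range(500)])
    print("chance probability of a peak this high:", np.mean(null >= observed))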
1984-07-01
piecewise constant energy dependence. This is a seven-dimensional problem with time dependence, three spatial and two angular or directional variables and...in extending the computer implementation of the method to time and energy dependent problems, and to solving and validating this technique on a...problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep
Hoogerheide, E S S; Azevedo Filho, J A; Vencovsky, R; Zucchi, M I; Zago, B W; Pinheiro, J B
2017-05-31
The cultivated garlic (Allium sativum L.) displays a wide phenotypic diversity, which is derived from natural mutations and phenotypic plasticity, due to dependence on soil type, moisture, latitude, altitude and cultural practices, leading to a large number of cultivars. This study aimed to evaluate the genetic variability shown by 63 garlic accessions belonging to the Instituto Agronômico de Campinas and the Escola Superior de Agricultura "Luiz de Queiroz" germplasm collections. We evaluated ten quantitative characters in experimental trials conducted at two localities of the State of São Paulo, Monte Alegre do Sul and Piracicaba, during the agricultural year of 2007, in a randomized block design with five replications. The Mahalanobis distance was used to measure genetic dissimilarities. The UPGMA method and Tocher's method were used as clustering procedures. Results indicated significant variation among accessions (P < 0.01) for all evaluated characters, except for the percentage of secondary bulb growth in MAS, indicating the existence of genetic variation for bulb production. Germplasm evaluation across different environments is more reliable for characterizing the genotypic variability among garlic accessions, since it diminishes environmental effects in the clustering of genotypes.
Modeling intersubject variability of bronchial doses for inhaled radon progeny.
Hofmann, Werner; Winkler-Heil, Renate; Hussain, Majid
2010-10-01
The main sources of intersubject variations considered in the present study were: (1) size and structure of nasal and oral passages, affecting extrathoracic deposition and, in further consequence, the fraction of the inhaled activity reaching the bronchial region; (2) size and asymmetric branching of the human bronchial airway system, leading to variations of diameters, lengths, branching angles, etc.; (3) respiratory parameters, such as tidal volume and breathing frequency; (4) mucociliary clearance rates; and (5) thickness of the bronchial epithelium and depth of target cells, related to airway diameters. For the calculation of deposition fractions, retained surface activities, and bronchial doses, parameter values were randomly selected from their corresponding probability density functions, derived from experimental data, by applying Monte Carlo methods. Bronchial doses, expressed in mGy WLM⁻¹, were computed for specific mining conditions, i.e., for defined size distributions, unattached fractions, and physical activities. Resulting bronchial dose distributions could be approximated by lognormal distributions. Geometric standard deviations illustrating intersubject variations ranged from about 2 in the trachea to about 7 in peripheral bronchiolar airways. The major sources of the intersubject variability of bronchial doses for inhaled radon progeny are the asymmetry and variability of the linear airway dimensions, the filtering efficiency of the nasal passages, and the thickness of the bronchial epithelium, while fluctuations of the respiratory parameters and mucociliary clearance rates seem to compensate each other.
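The dose-variability calculation proceeds by random selection of each input parameter from its probability density function. A stripped-down sketch of this sampling loop follows, with purely illustrative distributions and an arbitrary dose kernel in place of the deposition and dosimetry models.

    import numpy as np

    rng = np.random.default_rng(11)
    n_subjects = 50000

    # Illustrative parameter distributions (not the fitted values of the study)
    airway_diam = rng.lognormal(np.log(4.0e-3), 0.2, n_subjects)   # bronchial diameter (m)
    nasal_filter = rng.beta(5.0, 2.0, n_subjects)                  # fraction passing the nose
    target_depth = rng.lognormal(np.log(30e-6), 0.3, n_subjects)   # target cell depth (m)

    # Arbitrary placeholder kernel: dose rises with nasal penetration and falls
    # with airway size and epithelial depth.
    dose = 10.0 * nasal_filter / (airway_diam / 4.0e-3) / (target_depth / 30e-6)

    gsd = np.exp(np.log(dose).std(ddof=1))     # geometric standard deviation
    print("geometric standard deviation of bronchial dose:", round(gsd, 2))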
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Monte Carlo Models to Constrain Temperature Variation in the Lowermost Mantle
NASA Astrophysics Data System (ADS)
Nowacki, A.; Walker, A.; Davies, C. J.
2017-12-01
The three dimensional temperature variation in the lowermost mantle is diagnostic of the pattern of mantle convection and controls the extraction of heat from the outer core. Direct measurement of mantle temperature is impossible and the temperature in the lowermost mantle is poorly constrained. However, since temperature variations indirectly impact many geophysical observables, it is possible to isolate the thermal signal if mantle composition and the physical properties of mantle minerals are known. Here we describe a scheme that allows seismic, geodynamic, and thermal properties of the core and mantle to be calculated given an assumed temperature (T) and mineralogical (X) distribution in the mantle while making use of a self consistent parameterisation of the thermoelastic properties of mantle minerals. For a given T and X, this scheme allows us to determine the misfit between our model and observations for the long-wavelength surface geoid, core-mantle boundary topography, inner-core radius, total surface heat-flux and p- and s-wave tomography. The comparison is quick, taking much less than a second, and can accommodate uncertainty in the mineralogical parameterisation. This makes the scheme well-suited to use in a Monte Carlo approach to the determination of the long-wavelength temperature and composition of the lowermost mantle. We present some initial results from our model, which include the robust generation of a thermal boundary layer in the one-dimensional thermal structure.
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were taken: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast has been set up and simulated, varying the geometrical parameters of the source; simulation results have been analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation has been performed in order to create the necessary dataset to investigate the inter-event variability of the run-up height along the coast; the inter-event variability of the run-up height along the coast has been investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore of the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters (evaluated results) leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) lies between 0.14 and 3.11 with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as follows: μ + σ ≈ 2μ.
The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, including inundation and risk maps, evacuation plans, and other related steps.
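As a rough illustration of the Monte Carlo step described above, the sketch below (with purely hypothetical response coefficients and source-parameter distributions, not values from the study) approximates run-up at one coastal point as a linear combination of perturbations in fault dislocation and rupture length, samples the source parameters, and computes the coefficient of variation of the resulting run-up heights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 1600

# Hypothetical linear response coefficients for one coastal point,
# standing in for coefficients derived from the perturbed COMCOT runs.
r0, a_slip, a_len = 2.0, 0.8, 0.3   # base run-up [m] and sensitivities

# Assumed source-parameter distributions (illustrative values only).
slip = rng.normal(loc=8.0, scale=2.0, size=n_scenarios)       # fault dislocation [m]
length = rng.normal(loc=100.0, scale=20.0, size=n_scenarios)  # rupture length [km]

# Superposition: run-up approximated as a linear combination of the effects.
runup = r0 + a_slip * (slip - 8.0) + a_len * (length - 100.0) / 10.0
runup = np.clip(runup, 0.0, None)

cov = runup.std(ddof=1) / runup.mean()   # coefficient of variation at this point
print(f"mean run-up = {runup.mean():.2f} m, CoV = {cov:.2f}")
```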
A suite of global, cross-scale topographic variables for environmental and biodiversity modeling
NASA Astrophysics Data System (ADS)
Amatulli, Giuseppe; Domisch, Sami; Tuanmu, Mao-Ning; Parmentier, Benoit; Ranipeta, Ajay; Malczyk, Jeremy; Jetz, Walter
2018-03-01
Topographic variation underpins a myriad of patterns and processes in hydrology, climatology, geography and ecology and is key to understanding the variation of life on the planet. A fully standardized and global multivariate product of different terrain features has the potential to support many large-scale research applications; however, to date, such datasets have been unavailable. Here we used the digital elevation model products of global 250 m GMTED2010 and near-global 90 m SRTM4.1dev to derive a suite of topographic variables: elevation, slope, aspect, eastness, northness, roughness, terrain ruggedness index, topographic position index, vector ruggedness measure, profile/tangential curvature, first/second-order partial derivatives, and 10 geomorphological landform classes. We aggregated each variable to 1, 5, 10, 50 and 100 km spatial grains using several aggregation approaches. While a cross-correlation underlines the high similarity of many variables, a more detailed view in four mountain regions reveals local differences, as well as scale variations in the aggregated variables at different spatial grains. All newly-developed variables are available for download at Data Citation 1 and for download and visualization at http://www.earthenv.org/topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric
In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.
Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric
2017-10-28
In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.
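For readers unfamiliar with the quantity being matched: in variational Monte Carlo the energy variance is estimated from samples of the local energy E_L(R) = [HΨ(R)]/Ψ(R) drawn from |Ψ|². The sketch below uses synthetic local-energy samples purely for illustration; in practice they come from the VMC random walk, and variance matching adjusts one state's wave function flexibility until the two variances agree.

```python
import numpy as np

def energy_and_variance(local_energies):
    """Monte Carlo estimates of <H> and the energy variance <H^2> - <H>^2."""
    e = np.asarray(local_energies)
    mean = e.mean()
    var = e.var(ddof=1)
    stderr = e.std(ddof=1) / np.sqrt(e.size)
    return mean, var, stderr

# Synthetic samples standing in for local energies of two trial wave functions.
rng = np.random.default_rng(1)
ground = rng.normal(-1.100, 0.05, size=50_000)
excited = rng.normal(-0.850, 0.09, size=50_000)

for label, samples in [("ground", ground), ("excited", excited)]:
    mean, var, err = energy_and_variance(samples)
    print(f"{label:8s}: E = {mean:.4f} +/- {err:.4f}, variance = {var:.4f}")
```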
Angle-of-Attack-Modulated Terminal Point Control for Neptune Aerocapture
NASA Technical Reports Server (NTRS)
Queen, Eric M.
2004-01-01
An aerocapture guidance algorithm based on a calculus of variations approach is developed, using angle of attack as the primary control variable. Bank angle is used as a secondary control to alleviate angle of attack extremes and to control inclination. The guidance equations are derived in detail. The controller has very small onboard computational requirements and is robust to atmospheric and aerodynamic dispersions. The algorithm is applied to aerocapture at Neptune. Three versions of the controller are considered with varying angle of attack authority. The three versions of the controller are evaluated using Monte Carlo simulations with expected dispersions.
NASA Astrophysics Data System (ADS)
Hu, Zhaoying; Tulevski, George S.; Hannon, James B.; Afzali, Ali; Liehr, Michael; Park, Hongsik
2015-06-01
Carbon nanotubes (CNTs) have been widely studied as a channel material of scaled transistors for high-speed and low-power logic applications. In order to have sufficient drive current, it is widely assumed that CNT-based logic devices will have multiple CNTs in each channel. Understanding the effects of the number of CNTs on device performance can aid in the design of CNT field-effect transistors (CNTFETs). We have fabricated multi-CNT-channel CNTFETs with an 80-nm channel length using precise self-assembly methods. We describe compact statistical models and Monte Carlo simulations to analyze failure probability and the variability of the on-state current and threshold voltage. The results show that multichannel CNTFETs are more resilient to process variation and random environmental fluctuations than single-CNT devices.
A probabilistic model of a porous heat exchanger
NASA Technical Reports Server (NTRS)
Agrawal, O. P.; Lin, X. A.
1995-01-01
This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of heat transfer in superfluid flow through a porous medium is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for the random spatial variations.
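A minimal numerical sketch of the Neumann expansion idea (illustrative matrices, not the porous heat exchanger model): the random system matrix is split as K = K0 + ΔK, the response is expanded as u ≈ (I − A + A² − …)K0⁻¹f with A = K0⁻¹ΔK, and statistics of the response are accumulated over random realizations.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# A fixed, well-conditioned mean matrix (illustrative tridiagonal system).
K0 = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
f = np.ones(n)
K0_inv = np.linalg.inv(K0)

def neumann_solution(dK, terms=3):
    """Approximate (K0 + dK)^-1 f by a truncated Neumann series."""
    A = K0_inv @ dK
    u = K0_inv @ f
    term = u.copy()
    for _ in range(terms):
        term = -A @ term
        u = u + term
    return u

# Random perturbations of the matrix (e.g., random material properties), small relative to K0.
samples = np.array([neumann_solution(np.diag(rng.normal(0.0, 0.2, size=n)))
                    for _ in range(5000)])
print("mean response:", samples.mean(axis=0))
print("std  response:", samples.std(axis=0))
```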
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Monte Carlo modeling the phase diagram of magnets with the Dzyaloshinskii - Moriya interaction
NASA Astrophysics Data System (ADS)
Belemuk, A. M.; Stishov, S. M.
2017-11-01
We use classical Monte Carlo calculations to model the high-pressure behavior of the phase transition in helical magnets. We vary values of the exchange interaction constant J and the Dzyaloshinskii-Moriya interaction constant D, which is equivalent to changing spin-spin distances, as occurs in real systems under pressure. The system under study is self-similar at D/J = constant, and its properties are defined by the single variable J/T, where T is temperature. The existence of the first-order phase transition critically depends on the ratio D/J. A variation of J strongly affects the phase transition temperature and the width of the fluctuation region (the 'hump'), as follows from the system's self-similarity. The high-pressure behavior of the spin system depends on the evolution of the interaction constants J and D on compression. Our calculations are relevant to the high-pressure phase diagrams of the helical magnets MnSi and Cu2OSeO3.
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also performed to evaluate the impact of each kinetic parameter on the fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
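A hedged sketch of the parametric-uncertainty step (the paper's kinetic model is more detailed; the distributions and the simple yield relation below are purely illustrative): sample the kinetic parameters, evaluate the yield for each sample, and summarize the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative parameters sampled from assumed distributions (not fitted values).
mu_max = rng.normal(0.30, 0.03, n)   # 1/h, maximum specific growth rate
y_xs   = rng.normal(0.10, 0.01, n)   # g biomass / g glucose
y_px   = rng.normal(4.6, 0.3, n)     # g ethanol / g biomass (hypothetical growth-linked yield)

# Hypothetical relation: product yield driven by growth, as the sensitivity analysis suggests.
yield_ps = np.clip(y_xs * y_px, 0.0, 0.51)   # g ethanol / g glucose, capped near theoretical max

lo, hi = np.percentile(yield_ps, [2.5, 97.5])
print(f"median Y_P/S = {np.median(yield_ps):.3f} g/g, 95% interval [{lo:.3f}, {hi:.3f}]")
```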
Pérez-López, Paula; Montazeri, Mahdokht; Feijoo, Gumersindo; Moreira, María Teresa; Eckelman, Matthew J
2018-06-01
The economic and environmental performance of microalgal processes has been widely analyzed in recent years. However, few studies propose an integrated process-based approach to evaluate economic and environmental indicators simultaneously. Biodiesel is usually the single product and the effect of environmental benefits of co-products obtained in the process is rarely discussed. In addition, there is wide variation of the results due to inherent variability of some parameters as well as different assumptions in the models and limited knowledge about the processes. In this study, two standardized models were combined to provide an integrated simulation tool allowing the simultaneous estimation of economic and environmental indicators from a unique set of input parameters. First, a harmonized scenario was assessed to validate the joint environmental and techno-economic model. The findings were consistent with previous assessments. In a second stage, a Monte Carlo simulation was applied to evaluate the influence of variable and uncertain parameters in the model output, as well as the correlations between the different outputs. The simulation showed a high probability of achieving favorable environmental performance for the evaluated categories and a minimum selling price ranging from $11 gal⁻¹ to $106 gal⁻¹. Greenhouse gas emissions and minimum selling price were found to have the strongest positive linear relationship, whereas eutrophication showed weak correlations with the other indicators (namely greenhouse gas emissions, cumulative energy demand and minimum selling price). Process parameters (especially biomass productivity and lipid content) were the main source of variation, whereas uncertainties linked to the characterization methods and economic parameters had limited effect on the results. Copyright © 2018 Elsevier B.V. All rights reserved.
Charlesworth, Brian; Charlesworth, Deborah; Coyne, Jerry A; Langley, Charles H
2016-08-01
The 1966 GENETICS papers by John Hubby and Richard Lewontin were a landmark in the study of genome-wide levels of variability. They used the technique of gel electrophoresis of enzymes and proteins to study variation in natural populations of Drosophila pseudoobscura, at a set of loci that had been chosen purely for technical convenience, without prior knowledge of their levels of variability. Together with the independent study of human populations by Harry Harris, this seminal study provided the first relatively unbiased picture of the extent of genetic variability in protein sequences within populations, revealing that many genes had surprisingly high levels of diversity. These papers stimulated a large research program that found similarly high electrophoretic variability in many different species and led to statistical tools for interpreting the data in terms of population genetics processes such as genetic drift, balancing and purifying selection, and the effects of selection on linked variants. The current use of whole-genome sequences in studies of variation is the direct descendant of this pioneering work. Copyright © 2016 by the Genetics Society of America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
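The actual AMEER run-stream format is not reproduced here; the sketch below only illustrates the sampling pattern described in the abstract, drawing hypothetical input variables from the three supported distribution types (normal, uniform, Rayleigh) and writing one input deck per trajectory.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trajectories = 100

# Hypothetical trajectory input variables with the three supported distribution types.
spec = {
    "ALPHA0":  ("normal",   {"loc": 0.0, "scale": 0.5}),        # initial angle of attack [deg]
    "VEL0":    ("uniform",  {"low": 7500.0, "high": 7600.0}),   # initial velocity [m/s]
    "WINDMAG": ("rayleigh", {"scale": 5.0}),                    # wind magnitude [m/s]
}

samplers = {"normal": rng.normal, "uniform": rng.uniform, "rayleigh": rng.rayleigh}

for i in range(n_trajectories):
    draws = {name: samplers[kind](**params) for name, (kind, params) in spec.items()}
    # Hypothetical namelist-style input deck; the real AMEER run stream differs.
    lines = [f" {name} = {value:.4f}" for name, value in draws.items()]
    with open(f"case_{i:03d}.inp", "w") as fh:
        fh.write("\n".join(lines) + "\n")
```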
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many issues must nevertheless be addressed to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the quantitative background needed to implement them, or because of the computational demands of using Markov chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
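A minimal sketch of the idea under stated assumptions (hypothetical weights; the framework in the paper uses a full dasymetric model): a known coarse-unit population total is disaggregated to fine cells by dasymetric weights, and Monte Carlo realizations of the cell counts are drawn to serve as inputs to a downstream simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

total_population = 10_000   # known count for the coarse spatial unit
# Dasymetric weights per fine-resolution cell (e.g., from land cover); hypothetical values.
weights = np.array([0.05, 0.10, 0.00, 0.35, 0.20, 0.30])
probs = weights / weights.sum()

# Monte Carlo realizations of population counts at the fine resolution.
realizations = rng.multinomial(total_population, probs, size=10_000)

print("mean counts per cell:", realizations.mean(axis=0).round(1))
print("std  counts per cell:", realizations.std(axis=0).round(1))
```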
Variable sexually dimorphic gene expression in laboratory strains of Drosophila melanogaster.
Baker, Dean A; Meadows, Lisa A; Wang, Jing; Dow, Julian At; Russell, Steven
2007-12-10
Wild-type laboratory strains of model organisms are typically kept in isolation for many years, with the action of genetic drift and selection on mutational variation causing lineages to diverge with time. Studies of the natural populations from which such strains were established show that gender-specific interactions in particular drive many aspects of sequence-level and transcriptional-level variation. Here, our goal was to identify genes that display transcriptional variation between laboratory strains of Drosophila melanogaster, and to explore evidence of gender-biased interactions underlying that variability. Transcriptional variation among the laboratory genotypes studied occurs more frequently in males than in females. Qualitative differences are also apparent, suggesting that genes within particular functional classes disproportionately display variation in gene expression. Our analysis indicates that genes with reproductive functions are most often divergent between genotypes in both sexes; however, a large proportion of female variation can also be attributed to genes without expression in the ovaries. The present study clearly shows that transcriptional variation between common laboratory strains of Drosophila can differ dramatically due to sexual dimorphism. Much of this variation reflects sex-specific challenges associated with divergent physiological trade-offs, morphology and regulatory pathways operating within males and females.
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansätze is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
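These are not the NTF data reduction equations, but the sketch below illustrates the same propagation idea with the isentropic Mach relation: sample the measured total and static pressures with assumed standard uncertainties, push each sample through the relation, and read the combined uncertainty off the output distribution.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
gamma = 1.4

# Illustrative measured values and standard uncertainties (not NTF numbers).
p_total = rng.normal(150_000.0, 75.0, n)    # Pa
p_static = rng.normal(100_000.0, 60.0, n)   # Pa

# Isentropic relation: M = sqrt( 2/(gamma-1) * ((p0/p)^((gamma-1)/gamma) - 1) )
mach = np.sqrt(2.0 / (gamma - 1.0) * ((p_total / p_static) ** ((gamma - 1.0) / gamma) - 1.0))

print(f"M = {mach.mean():.4f} +/- {mach.std(ddof=1):.4f} (combined standard uncertainty)")
```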
Characterization of Surface Reflectance Variation Effects on Remote Sensing
NASA Technical Reports Server (NTRS)
Pearce, W. A.
1984-01-01
The use of Monte Carlo radiative transfer codes to simulate the effects on remote sensing in visible and infrared wavelengths of variables which affect classification is examined. These variables include detector viewing angle, atmospheric aerosol size distribution, aerosol vertical and horizontal distribution (e.g., finite clouds), the form of the bidirectional ground reflectance function, and horizontal variability of reflectance type and reflectivity (albedo). These simulations are used to characterize the sensitivity of observables (intensity and polarization) to variations in the underlying physical parameters both to improve algorithms for the removal of atmospheric effects and to identify techniques which can improve classification accuracy. It was necessary to revise and validate the simulation codes (CTRANS, ARTRAN, and the Mie scattering code) to improve efficiency and accommodate a new operational environment, and to build the basic software tools for acquisition and off-line manipulation of simulation results. Initial calculations compare cases in which increasing amounts of aerosol are shifted into the stratosphere, maintaining a constant optical depth. In the case of moderate aerosol optical depth, the effect on the spread function is to scale it linearly as would be expected from a single scattering model. Varying the viewing angle appears to provide the same qualitative effect as modifying the vertical optical depth (for Lambertian ground reflectance).
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a systematic and nonlinear problem, MPP-based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
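A minimal sketch of the direct Monte Carlo check mentioned above (hypothetical limit state and distributions, not the five limit states of the study): failure is counted whenever the limit-state function is non-positive, and the coefficient of variation of the estimated failure probability indicates whether enough samples were drawn.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical random variables standing in for the hydraulic and structural inputs.
foundation_depth = rng.normal(12.0, 1.0, n)                          # m, capacity side
local_scour = rng.lognormal(mean=np.log(6.0), sigma=0.35, size=n)    # m, demand side

# Limit state: g <= 0 means failure (scour reaches the foundation depth).
g = foundation_depth - local_scour
pf = np.mean(g <= 0.0)
cov_pf = np.sqrt((1.0 - pf) / (pf * n))   # coefficient of variation of the MCS estimate

print(f"failure probability ~ {pf:.2e} (estimator CoV ~ {cov_pf:.2f})")
```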
NASA Technical Reports Server (NTRS)
Stock, Thomas A.
1995-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intraply level, and the related effects of these on composite properties.
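The actual procedures use full micromechanics relations; the sketch below illustrates the idea with only the longitudinal rule of mixtures, sampling constituent moduli and volume ratios from illustrative distributions and then correlating the response with each predictor, analogous to the regression step mentioned above.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000

# Illustrative constituent distributions for a graphite/epoxy ply (not calibrated values).
E_f = rng.normal(230e9, 10e9, n)                      # fiber modulus, Pa
E_m = rng.normal(3.5e9, 0.3e9, n)                     # matrix modulus, Pa
V_f = np.clip(rng.normal(0.60, 0.03, n), 0.0, 1.0)    # fiber volume ratio
V_v = np.clip(rng.normal(0.02, 0.01, n), 0.0, 0.1)    # void volume ratio

# Longitudinal modulus by the rule of mixtures, with voids reducing the matrix share.
E_11 = E_f * V_f + E_m * (1.0 - V_f - V_v)

print(f"E11 = {E_11.mean()/1e9:.1f} GPa +/- {E_11.std(ddof=1)/1e9:.1f} GPa")
# Correlation of the response with each predictor, analogous to the regression study.
for name, x in [("E_f", E_f), ("E_m", E_m), ("V_f", V_f), ("V_v", V_v)]:
    print(f"corr(E11, {name}) = {np.corrcoef(E_11, x)[0, 1]:+.2f}")
```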
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
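For reference, the free-energy bound being optimized can be written in standard variational Bayes notation (the paper's hierarchical form is more elaborate):

```latex
\ln p(y \mid m)
  = \underbrace{\big\langle \ln p(y,\theta \mid m) - \ln q(\theta) \big\rangle_{q(\theta)}}_{\text{free energy } F[q]}
  + \mathrm{KL}\!\big[\, q(\theta) \,\|\, p(\theta \mid y, m) \,\big]
  \;\ge\; F[q].
```

Maximizing F[q] therefore both tightens the bound on the log-evidence used for model comparison and drives q(θ) toward the posterior density used for identification.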
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the more popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. Mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
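A minimal sketch of the cut-set calculation (hypothetical basic events, probabilities, and minimal cut sets, not those of the Tehran West Town plant): the top-event probability is bounded from the minimal cut sets and cross-checked by sampling the basic events directly.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical basic-event probabilities (per observation period).
p = {"operator_error": 0.05, "pump_failure": 0.02, "design_fault": 0.01, "power_loss": 0.03}

# Hypothetical minimal cut sets: the top event occurs if all events in any one set occur.
cut_sets = [("operator_error",), ("pump_failure", "power_loss"), ("design_fault", "power_loss")]

# Upper bound on the top-event probability from the minimal cut sets.
p_cut = [np.prod([p[e] for e in cs]) for cs in cut_sets]
p_top_bound = 1.0 - np.prod([1.0 - pc for pc in p_cut])

# Monte Carlo cross-check by sampling the basic events directly.
n = 1_000_000
draws = {e: rng.random(n) < pe for e, pe in p.items()}
top = np.zeros(n, dtype=bool)
for cs in cut_sets:
    member = np.ones(n, dtype=bool)
    for e in cs:
        member &= draws[e]
    top |= member

print(f"cut-set bound = {p_top_bound:.4f}")
print(f"Monte Carlo   = {top.mean():.4f}")
```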
NASA Astrophysics Data System (ADS)
Aguirre, E. E.; Karchewski, B.
2017-12-01
DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known, as the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results costing time and money. The present study examines the extent to which random variations in electrical properties (i.e. electrical conductivity) affect potential difference readings and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a Finite Difference (FD) approximation of an appropriate simplification of Maxwell's equations implemented in Matlab. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance, and are assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance in potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of standard deviation in the material properties (<10% of mean), we observed a linear correlation between variance of resistivity and variance in apparent resistivity.
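One detail worth making explicit is how node resistivities can be drawn so that they are log-normal with a prescribed arithmetic mean and standard deviation; the finite-difference solve itself is omitted in this sketch.

```python
import numpy as np

def lognormal_field(mean, std, size, rng):
    """Log-normal samples whose arithmetic mean and standard deviation match the targets."""
    var = std ** 2
    sigma2 = np.log(1.0 + var / mean ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

rng = np.random.default_rng(10)
rho = lognormal_field(mean=100.0, std=10.0, size=(200, 100), rng=rng)  # ohm-m, std = 10% of mean
print(f"sample mean = {rho.mean():.1f} ohm-m, sample std = {rho.std():.1f} ohm-m")
```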
Conspicuous plumage colours are highly variable
Szecsenyi, Beatrice; Nakagawa, Shinichi; Peters, Anne
2017-01-01
Elaborate ornamental traits are often under directional selection for greater elaboration, which in theory should deplete underlying genetic variation. Despite this, many ornamental traits appear to remain highly variable and how this essential variation is maintained is a key question in evolutionary biology. One way to address this question is to compare differences in intraspecific variability across different types of traits to determine whether high levels of variation are associated with specific trait characteristics. Here we assess intraspecific variation in more than 100 plumage colours across 55 bird species to test whether colour variability is linked to their level of elaboration (indicated by degree of sexual dichromatism and conspicuousness) or their condition dependence (indicated by mechanism of colour production). Conspicuous colours had the highest levels of variation and conspicuousness was the strongest predictor of variability, with high explanatory power. After accounting for this, there were no significant effects of sexual dichromatism or mechanisms of colour production. Conspicuous colours may entail higher production costs or may be more sensitive to disruptions during production. Alternatively, high variability could also be related to increased perceptual difficulties inherent to discriminating highly elaborate colours. Such psychophysical effects may constrain the exaggeration of animal colours. PMID:28100823
Conspicuous plumage colours are highly variable.
Delhey, Kaspar; Szecsenyi, Beatrice; Nakagawa, Shinichi; Peters, Anne
2017-01-25
Elaborate ornamental traits are often under directional selection for greater elaboration, which in theory should deplete underlying genetic variation. Despite this, many ornamental traits appear to remain highly variable and how this essential variation is maintained is a key question in evolutionary biology. One way to address this question is to compare differences in intraspecific variability across different types of traits to determine whether high levels of variation are associated with specific trait characteristics. Here we assess intraspecific variation in more than 100 plumage colours across 55 bird species to test whether colour variability is linked to their level of elaboration (indicated by degree of sexual dichromatism and conspicuousness) or their condition dependence (indicated by mechanism of colour production). Conspicuous colours had the highest levels of variation and conspicuousness was the strongest predictor of variability, with high explanatory power. After accounting for this, there were no significant effects of sexual dichromatism or mechanisms of colour production. Conspicuous colours may entail higher production costs or may be more sensitive to disruptions during production. Alternatively, high variability could also be related to increased perceptual difficulties inherent to discriminating highly elaborate colours. Such psychophysical effects may constrain the exaggeration of animal colours. © 2017 The Author(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackman, T.M.
1987-01-01
A theoretical investigation of the interaction potential between the helium atom and the antihydrogen atom was performed for the purpose of determining the feasibility of antihydrogen atom containment. The interaction potential showed an energy barrier to collapse of this system. A variational estimate of the height of this energy barrier and estimates of lifetime with respect to electron-positron annihilation were determined by the Variational Monte Carlo method. This calculation allowed for an improvement over an SCF result through the inclusion of explicit correlation factors in the trial wave function. An estimate of the correlation energy of this system was determined by the Green's Function Monte Carlo (GFMC) method.
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
Case, J.B.; Buesch, D.C.
2004-01-01
Predictions of waste canister and repository driftwall temperatures as functions of space and time are important to evaluate pre-closure performance of the proposed repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Variations in the lithostratigraphic features in densely welded and crystallized rocks of the 12.8-million-year-old Topopah Spring Tuff, especially the porosity resulting from lithophysal cavities, affect thermal properties. A simulated emplacement drift is based on projecting lithophysal cavity porosity values 50 to 800 m from the Enhanced Characterization of the Repository Block cross drift. Lithophysal cavity porosity varies from 0.00 to 0.05 cm³/cm³ in the middle nonlithophysal zone and from 0.03 to 0.28 cm³/cm³ in the lower lithophysal zone. A ventilation model and computer program titled "Monte Carlo Simulation of Ventilation" (MCSIMVENT), which is based on a composite thermal-pulse calculation, simulates statistical variability and uncertainty of rock-mass thermal properties and ventilation performance along a simulated emplacement drift for a pre-closure period of 50 years. Although ventilation efficiency is relatively insensitive to thermal properties, variations in lithophysal porosity along the drift can result in peak driftwall temperatures ranging from 40 to 85 °C for the pre-closure period. Copyright © 2004 by ASME.
Use of randomized sampling for analysis of metabolic networks.
Schellenberger, Jan; Palsson, Bernhard Ø
2009-02-27
Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
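A minimal hit-and-run sampler of the steady-state flux space {v : Sv = 0, lb ≤ v ≤ ub} is sketched below for a toy three-reaction network; production tools (e.g., ACHR-type samplers) add warm-up, thinning, and numerical safeguards not shown here.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(11)

# Toy network with one internal metabolite A: v1 produces A, v2 and v3 consume it.
S = np.array([[1.0, -1.0, -1.0]])          # steady state: v1 - v2 - v3 = 0
lb = np.array([0.0, 0.0, 0.0])
ub = np.array([10.0, 10.0, 10.0])

N = null_space(S)                           # directions that keep S v = 0
v = np.array([2.0, 1.0, 1.0])               # a feasible starting flux vector

samples = []
for _ in range(20_000):
    d = N @ rng.normal(size=N.shape[1])     # random direction within the null space
    d /= np.linalg.norm(d)
    # Feasible step interval [t_min, t_max] so that lb <= v + t*d <= ub.
    with np.errstate(divide="ignore", invalid="ignore"):
        t_lo = np.where(d != 0, (lb - v) / d, -np.inf)
        t_hi = np.where(d != 0, (ub - v) / d, np.inf)
    t_min = np.max(np.minimum(t_lo, t_hi))
    t_max = np.min(np.maximum(t_lo, t_hi))
    v = v + rng.uniform(t_min, t_max) * d
    samples.append(v.copy())

samples = np.array(samples)
print("flux means:", samples.mean(axis=0).round(2))
print("flux stds :", samples.std(axis=0).round(2))
```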
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
Path integral Monte Carlo and the electron gas
NASA Astrophysics Data System (ADS)
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
The Timescale-dependent Color Variability of Quasars Viewed with GALEX
NASA Astrophysics Data System (ADS)
Zhu, Fei-Fan; Wang, Jun-Xian; Cai, Zhen-Yi; Sun, Yu-Han
2016-11-01
In a recent work by Sun et al., the color variation of quasars, namely the bluer-when-brighter trend, was found to be timescale dependent using the SDSS g/r band light curves in Stripe 82. Such timescale dependence, i.e., bluer variation at shorter timescales, supports the thermal fluctuation origin of the UV/optical variation in quasars, and can be modeled well with the inhomogeneous accretion disk model. In this paper, we extend the study to much shorter wavelengths in the rest frame (down to extreme UV) using GALaxy Evolution eXplorer (GALEX) photometric data of quasars collected in two ultraviolet bands (near-UV and far-UV). We develop Monte Carlo simulations to correct for possible biases due to the considerably larger photometric uncertainties in the GALEX light curves (particularly in the far-UV, compared with the SDSS g/r bands), which otherwise could produce artificial results. We securely confirm the previously discovered timescale dependence of the color variability with independent data sets and at shorter wavelengths. We further find that the slope of the correlation between the amplitude of the color variation and timescale appears even steeper than predicted by the inhomogeneous disk model, which assumes that disk fluctuations follow a damped random walk (DRW) process. The much flatter structure function observed in the far-UV compared with that at longer wavelengths implies deviation from the DRW process in the inner disk, where rest-frame extreme UV radiation is produced.
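A hedged sketch of the kind of simulation involved (illustrative DRW parameters and noise levels, not the GALEX values): generate a damped random walk light curve on irregular epochs, add photometric noise, and compute a structure function binned by time lag.

```python
import numpy as np

rng = np.random.default_rng(12)

tau, sigma_inf = 200.0, 0.2                    # days, mag: DRW timescale and asymptotic rms
t = np.sort(rng.uniform(0.0, 3000.0, 300))     # irregular sampling epochs [days]

# Exact DRW (Ornstein-Uhlenbeck) recursion for irregular time steps.
mag = np.empty_like(t)
mag[0] = rng.normal(0.0, sigma_inf)
for i in range(1, t.size):
    a = np.exp(-(t[i] - t[i - 1]) / tau)
    mag[i] = a * mag[i - 1] + rng.normal(0.0, sigma_inf * np.sqrt(1.0 - a ** 2))

obs = mag + rng.normal(0.0, 0.05, mag.size)    # add photometric noise (illustrative level)

# Structure function: rms magnitude difference binned by time lag.
dt = np.abs(t[:, None] - t[None, :])
dm2 = (obs[:, None] - obs[None, :]) ** 2
i, j = np.triu_indices(t.size, k=1)
bins = np.logspace(0.5, 3.3, 12)
idx = np.digitize(dt[i, j], bins)
sf = [np.sqrt(dm2[i, j][idx == k].mean()) for k in range(1, len(bins)) if np.any(idx == k)]
print(np.round(sf, 3))
```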
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
Determination of low-Z elements in individual environmental particles using windowless EPMA.
Ro, C U; Osán, J; Van Grieken, R
1999-04-15
The determination of low-Z elements such as carbon, nitrogen, and oxygen in atmospheric aerosol particles is of interest in studying environmental pollution. The conventional electron probe microanalysis technique is limited for the determination of low-Z elements, mainly because the Be window in an energy-dispersive X-ray (EDX) detector hinders the detection of characteristic X-rays from light elements. The feasibility of low-Z element determination in individual particles using a windowless EDX detector is investigated. To develop a method capable of identifying chemical species of individual particles, both the matrix and the geometric effects of particles have to be evaluated. X-rays of low-Z elements generated by an electron beam are so soft that important matrix effects, mostly due to X-ray absorption, exist even within particles in the micrometer size range. Also, the observed radiation, especially that of light elements, experiences different extents of absorption, depending on the shape and size of the particles. A Monte Carlo calculation is applied to explain the variation of observed X-ray intensities according to the geometric and chemical compositional variation of individual particles, at different primary electron beam energies. A comparison is carried out between simulated and experimental data, collected for standard individual particles with chemical compositions as generally observed in marine and continental aerosols. Despite the many fundamental problematic analytical factors involved in the observation of X-rays from low-Z elements, the Monte Carlo calculation proves to be quite reliable for evaluating these matrix and geometric effects. Practical aspects of the Monte Carlo calculation for the determination of light elements in individual particles are also considered.
Relative importance of climatic, geographic and socio-economic determinants of malaria in Malawi
2013-01-01
Background: Malaria transmission is influenced by variations in meteorological conditions, which impact the biology of the parasite and its vector, but also by socio-economic conditions, such as levels of urbanization, poverty and education, which impact human vulnerability and vector habitat. The many potential drivers of malaria, both extrinsic, such as climate, and intrinsic, such as population immunity, are often difficult to disentangle. This presents a challenge for the modelling of malaria risk in space and time. Methods: A statistical mixed model framework is proposed to model malaria risk at the district level in Malawi, using an age-stratified spatio-temporal dataset of malaria cases from July 2004 to June 2011. Several climatic, geographic and socio-economic factors thought to influence malaria incidence were tested in an exploratory model. In order to account for the unobserved confounding factors that influence malaria, which are not accounted for using measured covariates, a generalized linear mixed model was adopted, which included structured and unstructured spatial and temporal random effects. A hierarchical Bayesian framework using Markov chain Monte Carlo simulation was used for model fitting and prediction. Results: Using a stepwise model selection procedure, several explanatory variables were identified to have significant associations with malaria, including climatic, cartographic and socio-economic data. Once intervention variations, unobserved confounding factors and spatial correlation were considered in a Bayesian framework, a final model emerged with statistically significant predictor variables limited to average precipitation (quadratic relation) and average temperature during the three months previous to the month of interest. Conclusions: When modelling malaria risk in Malawi it is important to account for spatial and temporal heterogeneity and correlation between districts. Once observed and unobserved confounding factors are allowed for, precipitation and temperature in the months prior to the malaria season of interest are found to significantly determine spatial and temporal variations of malaria incidence. Climate information was found to improve the estimation of malaria relative risk in 41% of the districts in Malawi, particularly at higher altitudes where transmission is irregular. This highlights the potential value of climate-driven seasonal malaria forecasts. PMID:24228784
Diffusion Monte Carlo study of strongly interacting two-dimensional Fermi gases
Galea, Alexander; Dawkins, Hillary; Gandolfi, Stefano; ...
2016-02-01
Ultracold atomic Fermi gases have been a popular topic of research, with attention being paid recently to two-dimensional (2D) gases. In this work, we perform T=0 ab initio diffusion Monte Carlo calculations for a strongly interacting two-component Fermi gas confined to two dimensions. We first go over finite-size systems and the connection to the thermodynamic limit. After that, we illustrate pertinent 2D scattering physics and properties of the wave function. We then show energy results for the strong-coupling crossover, in between the Bose-Einstein condensation (BEC) and Bardeen-Cooper-Schrieffer (BCS) regimes. Our energy results for the BEC-BCS crossover are parametrized to produce an equation of state, which is used to determine Tan's contact. We carry out a detailed comparison with other microscopic results. Lastly, we calculate the pairing gap for a range of interaction strengths in the strong coupling regime, following from variationally optimized many-body wave functions.
NASA Astrophysics Data System (ADS)
Krtičková, I.; Krtička, J.
2018-06-01
Stars that exhibit a B[e] phenomenon comprise a very diverse group of objects in different evolutionary stages. These objects show common spectral characteristics, including the presence of Balmer lines in emission, forbidden lines and strong infrared excess due to dust. Observations of emission lines indicate illumination by an ultraviolet ionizing source, which is key to understanding the elusive nature of these objects. We study the ultraviolet variability of many B[e] stars to specify the geometry of the circumstellar environment and its variability. We analyse massive hot B[e] stars from our Galaxy and from the Magellanic Clouds. We study the ultraviolet broad-band variability derived from the flux-calibrated data. We determine variations of individual lines and the correlation with the total flux variability. We detected variability of the spectral energy distribution and of the line profiles. The variability has several sources of origin, including light absorption by the disc, pulsations, luminous blue variable type variations, and eclipses in the case of binaries. The stellar radiation of most B[e] stars is heavily obscured by circumstellar material. This suggests that the circumstellar material is present not only in the disc but also above its plane. The flux and line variability is consistent with a two-component model of a circumstellar environment composed of a dense disc and an ionized envelope. Observations of B[e] supergiants show that many of these stars have nearly the same luminosity, about 1.9 × 10⁵ L⊙, and similar effective temperatures.
Andrew D. Richardson; David Y. Hollinger; John D. Aber; Scott V. Ollinger; Bobby H. Braswell
2007-01-01
Tower-based eddy covariance measurements of forest-atmosphere carbon dioxide (CO2) exchange from many sites around the world indicate that there is considerable year-to-year variation in net ecosystem exchange (NEE). Here, we use a statistical modeling approach to partition the interannual variability in NEE (and its component fluxes, ecosystem...
Ground state of excitonic molecules by the Green's-function Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.A.; Vashishta, P.; Kalia, R.K.
1983-12-26
The ground-state energy of excitonic molecules is evaluated as a function of the ratio of electron and hole masses, σ, with use of the Green's-function Monte Carlo method. For all σ, the Green's-function Monte Carlo energies are significantly lower than the variational estimates and in favorable agreement with experiments. In excitonic rydbergs, the binding energy of the positronium molecule (σ = 1) is predicted to be −0.06 and, for σ << 1, the Green's-function Monte Carlo energies agree with the 'exact' limiting behavior, E = −2.346 + 0.764σ.
EPE analysis of sub-N10 BEoL flow with and without fully self-aligned via using Coventor SEMulator3D
NASA Astrophysics Data System (ADS)
Franke, Joern-Holger; Gallagher, Matt; Murdoch, Gayle; Halder, Sandip; Juncker, Aurelie; Clark, William
2017-03-01
During the last few decades, the semiconductor industry has been able to scale device performance up while driving costs down. What started off as simple geometrical scaling, driven mostly by advances in lithography, has recently been accompanied by advances in processing techniques and in device architectures. The trend to combine efforts using process technology and lithography is expected to intensify, as further scaling becomes ever more difficult. One promising component of future nodes is "scaling boosters", i.e. processing techniques that enable further scaling. An indispensable component in developing these ever more complex processing techniques is semiconductor process modeling software. Visualization of complex 3D structures in SEMulator3D, along with budget analysis on film thicknesses, CD and etch budgets, allows process integrators to compare flows before any physical wafers are run. Hundreds of "virtual" wafers allow comparison of different processing approaches, along with EUV or DUV patterning options for defined layers and different overlay schemes. This "virtual fabrication" technology produces massively parallel process variation studies that would be highly time-consuming or expensive in experiment. Here, we focus on one particular scaling booster, the fully self-aligned via (FSAV). We compare metal-via-metal (me-via-me) chains with self-aligned and fully-self-aligned vias using a calibrated model for imec's N7 BEoL flow. To model overall variability, 3D Monte Carlo modeling of as many variability sources as possible is critical. We use Coventor SEMulator3D to extract minimum me-me distances and contact areas and show how fully self-aligned vias allow a better me-via distance control and tighter via-me contact area variability compared with the standard self-aligned via (SAV) approach.
Reynolds, Richard J; Fenster, Charles B
2008-05-01
Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness data sets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in the pollinator importance of particular visitor species. The approaches may be extended to estimate the variance of products of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
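A minimal sketch of the simulation approach described above, using hypothetical visitation-rate and effectiveness samples rather than the Silene caroliniana data: each Monte Carlo replicate resamples both data sets, and the product of their means gives one draw of pollinator importance, from which the mean and percentile confidence limits follow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical field data: visit rate (visits/hour) and per-visit effectiveness
# (e.g., seeds set per visit) for one visitor species.
visitation = rng.normal(loc=2.5, scale=0.6, size=40)
effectiveness = rng.normal(loc=1.2, scale=0.4, size=25)

n_boot = 100_000
# Resample each data set independently; the product of the resampled means is
# one Monte Carlo draw of pollinator importance.
vis_means = rng.choice(visitation, size=(n_boot, visitation.size), replace=True).mean(axis=1)
eff_means = rng.choice(effectiveness, size=(n_boot, effectiveness.size), replace=True).mean(axis=1)
importance = vis_means * eff_means

print("mean importance:", importance.mean())
print("95% confidence limits:", np.percentile(importance, [2.5, 97.5]))
```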
Mercury's Seasonal Sodium Exosphere: MESSENGER Orbital Observations
NASA Technical Reports Server (NTRS)
Cassidy, Timothy A.; Merkel, Aimee W.; Burger, Matthew H.; Sarantos, Menelaos; Killen, Rosemary M.; McClintock, William E.; Vervack, Ronald J., Jr.
2014-01-01
The Mercury Atmospheric and Surface Composition Spectrometer (MASCS) Ultraviolet and Visible Spectrometer (UVVS) on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft now orbiting Mercury provides the first close-up look at the planet's sodium exosphere. UVVS has observed the exosphere from orbit almost daily for over 10 Mercury years. In this paper we describe and analyze a subset of these data: altitude profiles taken above the low-latitude dayside and south pole. The observations show spatial and temporal variation but there is little or no year-to-year variation; we do not see the episodic variability reported by ground-based observers. We used these altitude profiles to make estimates of sodium density and temperature. The bulk of the exosphere is about 1200 K, much warmer than Mercury's surface. This value is consistent with some ground-based measurements and suggests that photon-stimulated desorption is the primary ejection process. We also observe a tenuous energetic component but do not see evidence of the predicted thermalized (or partially thermalized) sodium near Mercury's surface temperature. Overall we do not see the variable mixture of temperatures predicted by most Monte Carlo models of the exosphere.
Bayesian spatio-temporal modeling of particulate matter concentrations in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Manga, Edna; Awang, Norhashidah
2016-06-01
This article presents an application of a Bayesian spatio-temporal Gaussian process (GP) model to particulate matter concentrations from Peninsular Malaysia. We analyze daily PM10 concentration levels from 35 monitoring sites in June and July 2011. The spatio-temporal model, set in a Bayesian hierarchical framework, allows for the inclusion of informative covariates, meteorological variables, and spatio-temporal interactions. Posterior density estimates of the model parameters are obtained by Markov chain Monte Carlo methods. Preliminary data analysis indicates that information on PM10 levels at sites classified as industrial locations could explain part of the space-time variation. We include the site-type indicator in our modeling efforts. The parameter estimates for the fitted GP model show significant spatio-temporal structure and a positive effect of the location-type explanatory variable. We also compute validation criteria for the out-of-sample sites, which show the adequacy of the model for predicting PM10 at unmonitored sites.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yanlong
2017-09-01
Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with a periodically arched shape are studied by Monte Carlo simulation (MCS) and by interval analysis based on the Taylor series. Given random variations of all the uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of the uncertainty in the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter, a conclusion also confirmed by the interval analysis based on the Taylor series. Our methodology provides a strategy to reduce the discrepancy between the designed and realized band gaps by improving the accuracy of specially selected uncertain design variables of periodic structures.
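As a rough illustration of the Monte Carlo sensitivity screening described above, the sketch below samples two uncertain inputs and inspects their scatter against a toy band-gap function. The band-gap expression and the 5% input scatter are invented for illustration and are not the paper's undulated-beam model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical uncertain inputs: a geometrical parameter (normalized arch height h)
# and a material parameter (normalized Young's modulus E), each with 5% scatter.
h = rng.normal(1.0, 0.05, n)
E = rng.normal(1.0, 0.05, n)

# Toy stand-in for the band-gap width (NOT the paper's beam model): the geometry
# enters more strongly than the material stiffness.
gap = 2.0 * h**2 + 0.5 * np.sqrt(E)

# Qualitative sensitivities from the scatter, plus Monte Carlo bounds on the gap.
print("corr(gap, h):", np.corrcoef(h, gap)[0, 1])
print("corr(gap, E):", np.corrcoef(E, gap)[0, 1])
print("MC bounds on gap:", gap.min(), gap.max())
```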
NASA Technical Reports Server (NTRS)
Loeb, N. G.; Varnai, Tamas; Winker, David M.
1998-01-01
Recent observational studies have shown that satellite retrievals of cloud optical depth based on plane-parallel model theory suffer from systematic biases that depend on viewing geometry, even when observations are restricted to overcast marine stratus layers, arguably the closest to plane-parallel in nature. At moderate to low sun elevations, the plane-parallel model significantly overestimates the reflectance dependence on view angle in the forward-scattering direction but shows a similar dependence in the backscattering direction. Theoretical simulations show that the likely cause of this discrepancy is that the plane-parallel model assumption does not account for subpixel-scale variations in cloud-top height (i.e., "cloud bumps"). Monte Carlo simulations comparing 1D model radiances to radiances from overcast cloud fields with 1) cloud-top height variations but constant cloud volume extinction, 2) flat tops but horizontal variations in cloud volume extinction, and 3) variations in both cloud-top height and cloud extinction are performed over an approximately 4 km x 4 km domain (roughly the size of an individual GAC AVHRR pixel). The comparisons show that when cloud-top height variations are included, departures from 1D theory are remarkably similar (qualitatively) to those obtained observationally. In contrast, when clouds are assumed flat and only cloud extinction is variable, reflectance differences are much smaller and do not show any view-angle dependence. When both cloud-top height and cloud extinction variations are included, however, large increases in cloud extinction variability can enhance the reflectance differences. The reason 3D-1D reflectance differences are more sensitive to cloud-top height variations in the forward-scattering direction (at moderate to low sun elevations) is that photons leaving the cloud field in that direction experience fewer scattering events (low-order scattering) and are restricted to the topmost portions of the cloud. While reflectance deviations from 1D theory are much larger for bumpy clouds than for flat clouds with variable cloud extinction, differences in cloud albedo are comparable for these two cases.
2012-10-10
functions) that you define as important outputs of the model. Think of the Monte Carlo simulation approach as picking golf balls out of a large...the large model as a very large basket, wherein many baby baskets reside. Each baby basket has its own set of colored golf balls that are bouncing...around. Sometimes these baby baskets are linked with each other (if there is a correlation between the variables), forcing the golf balls to bounce in
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
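For readers unfamiliar with the kernel, a common quasi-periodic covariance is a periodic (exponential-sine-squared) term damped by a squared-exponential envelope; the sketch below draws one synthetic light curve from such a GP prior. The specific parameterization and parameter values are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amplitude, length_scale, gamma, period):
    """Quasi-periodic covariance: a periodic (exp-sine-squared) term multiplied by
    a squared-exponential decay, a common choice for spotted-star light curves."""
    dt = t1[:, None] - t2[None, :]
    periodic = np.exp(-gamma * np.sin(np.pi * dt / period) ** 2)
    decay = np.exp(-dt ** 2 / (2.0 * length_scale ** 2))
    return amplitude ** 2 * periodic * decay

# Draw one synthetic "light curve" from the GP prior (hypothetical hyperparameters).
t = np.linspace(0.0, 60.0, 300)  # days
K = quasi_periodic_kernel(t, t, amplitude=1.0, length_scale=25.0, gamma=5.0, period=7.3)
rng = np.random.default_rng(1)
flux = rng.multivariate_normal(np.zeros_like(t), K + 1e-8 * np.eye(t.size))
print("synthetic flux samples:", flux[:5])
```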
Lapotre, Mathieu G.A.; Ehlmann, B. L.; Minson, Sarah E.; Arvidson, R. E.; Ayoub, F.; Fraeman, A. A.; Ewing, R. C.; Bridges, N. T.
2017-01-01
During its ascent up Mount Sharp, the Mars Science Laboratory Curiosity rover traversed the Bagnold Dune Field. We model sand modal mineralogy and grain size at four locations near the rover traverse, using orbital shortwave infrared single scattering albedo spectra and a Markov-Chain Monte Carlo implementation of Hapke's radiative transfer theory to fully constrain uncertainties and permitted solutions. These predictions, evaluated against in situ measurements at one site from the Curiosity rover, show that XRD-measured mineralogy of the basaltic sands is within the 95% confidence interval of model predictions. However, predictions are relatively insensitive to grain size and are non-unique, especially when modeling the composition of minerals with solid solutions. We find an overall basaltic mineralogy and show subtle spatial variations in composition in and around the Bagnold dunes, consistent with a mafic enrichment of sands with cumulative transport distance by sorting of olivine, pyroxene, and plagioclase grains during aeolian saltation. Furthermore, the large variations in Fe and Mg abundances (~20 wt%) at the Bagnold Dunes suggest that compositional variability induced by wind sorting may be enhanced by local mixing with proximal sand sources. Our estimates demonstrate a method for orbital quantification of composition with rigorous uncertainty determination and provide key constraints for interpreting in situ measurements of compositional variability within martian aeolian sandstones.
Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space
NASA Astrophysics Data System (ADS)
Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus
2018-05-01
We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.
Monte Carlo explicitly correlated second-order many-body perturbation theory
NASA Astrophysics Data System (ADS)
Johnson, Cole M.; Doran, Alexander E.; Zhang, Jinmei; Valeev, Edward F.; Hirata, So
2016-10-01
A stochastic algorithm is proposed and implemented that computes a basis-set-incompleteness (F12) correction to an ab initio second-order many-body perturbation energy as a short sum of 6- to 15-dimensional integrals of Gaussian-type orbitals, an explicit function of the electron-electron distance (geminal), and its associated excitation amplitudes held fixed at the values suggested by Ten-no. The integrals are directly evaluated (without a resolution-of-the-identity approximation or an auxiliary basis set) by the Metropolis Monte Carlo method. Applications of this method to 17 molecular correlation energies and 12 gas-phase reaction energies reveal that both the nonvariational and variational formulas for the correction give reliable correlation energies (98% or higher) and reaction energies (within 2 kJ mol-1 with a smaller statistical uncertainty) near the complete-basis-set limits by using just the aug-cc-pVDZ basis set. The nonvariational formula is found to be 2-10 times less expensive to evaluate than the variational one, though the latter yields energies that are bounded from below and is, therefore, slightly but systematically more accurate for energy differences. Being capable of using virtually any geminal form, the method confirms the best overall performance of the Slater-type geminal among 6 forms satisfying the same cusp conditions. Not having to precompute lower-dimensional integrals analytically, to store them on disk, or to transform them in a nonscalable dense-matrix-multiplication algorithm, the method scales favorably with both system size and computer size; the cost increases only as O(n4) with the number of orbitals (n), and its parallel efficiency reaches 99.9% of the ideal case on going from 16 to 4096 computer processors.
Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas
2017-05-01
This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
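The following sketch illustrates the general idea of mapping a spatially correlated variation field onto circuit components. The exponential correlation model, correlation length, width-variation magnitude, and component coordinates are all hypothetical; they are not the calibrated wafer data or netlist extraction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Grid over the chip (mm) on which the correlated width variation is sampled.
xs, ys = np.meshgrid(np.linspace(0, 5, 20), np.linspace(0, 5, 20))
pts = np.column_stack([xs.ravel(), ys.ravel()])

# Assumed exponential spatial correlation (correlation length 2 mm) and a
# 2 nm (1-sigma) waveguide-width variation.
corr_len, sigma_w = 2.0, 2.0
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = sigma_w**2 * np.exp(-d / corr_len)

# One "virtual wafer": a correlated width-error field via Cholesky factorization.
width_error = np.linalg.cholesky(cov + 1e-9 * np.eye(len(pts))) @ rng.standard_normal(len(pts))

# Map the field to component locations (hypothetical coordinates); each component
# would then be re-simulated with its local width error.
components = {"ring_1": (1.2, 3.4), "ring_2": (4.0, 0.8)}
for name, (cx, cy) in components.items():
    idx = np.argmin((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2)
    print(name, "delta_width_nm =", round(width_error[idx], 3))
```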
Pragmatic geometric model evaluation
NASA Astrophysics Data System (ADS)
Pamer, Robert
2015-04-01
Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data, such as geological maps, borehole data, and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, also because of differing data rights, data policies, and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore of structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to calculate essentially two model variations that can be seen as geometric extremes of all available input data. This does not lead to a probability distribution for the spatial position of geometric elements, but it defines zones of major (or minor, respectively) geometric variation due to data uncertainty. Both model evaluations are then analyzed together to give ranges of possible model outcomes in metric units.
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to selected circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1 and 3%) due to self-compensating RC effects associated with dense layouts, and overlay errors for layouts without self-compensating RC effects. The delay response of each double-patterned interconnect structure is fit with a second-order polynomial model in the focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1 ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography-aware circuit analysis by extending it to cell-level applications, utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability-aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability-aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entirely new realm of circuit analysis and optimization and provides a foundation for path-level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability-aware compact models, the process-dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000.
Path-level variability analysis also provides an accurate estimate (within 1%) of the ring oscillator period in well under a second. Another significant advantage of variability-aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability-aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability-aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuation, which affects transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations depends on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus-exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
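The nine-term Bossung-style compact model mentioned above can be illustrated with a small least-squares fit. The sketch below uses a hypothetical response surface, not the dissertation's characterization data or its exact fitting equations: it fits delay over a focus/exposure window with all monomials focus^i * exposure^j for i, j = 0..2 (nine coefficients).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical characterization data: cell delay (ps) over a focus/exposure window.
# The "true" response used to generate the data is invented for illustration.
focus = rng.uniform(-0.10, 0.10, 200)      # focus offset (um)
exposure = rng.uniform(-0.05, 0.05, 200)   # normalized dose offset
delay = (20.0 + 150.0 * focus**2 - 40.0 * exposure
         + 300.0 * exposure * focus**2 + rng.normal(0, 0.05, focus.size))

# Nine-term Bossung-style surrogate: all products focus^i * exposure^j, i, j = 0..2.
A = np.column_stack([focus**i * exposure**j for i in range(3) for j in range(3)])
coeffs, *_ = np.linalg.lstsq(A, delay, rcond=None)

pred = A @ coeffs
print("max |residual| (ps):", np.abs(delay - pred).max())
```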
Mapping local anisotropy axis for scattering media using backscattering Mueller matrix imaging
NASA Astrophysics Data System (ADS)
He, Honghui; Sun, Minghao; Zeng, Nan; Du, E.; Guo, Yihong; He, Yonghong; Ma, Hui
2014-03-01
Mueller matrix imaging techniques can be used to detect micro-structural variations in superficial biological tissues, including the sizes and shapes of cells, the structures within cells, and the densities of organelles. Many tissues contain anisotropic fibrous micro-structures, such as collagen fibers, elastin fibers, and muscle fibers. Changes in these fibrous structures are potentially good indicators of some pathological variations. In this paper, we propose a quantitative analysis technique based on the Mueller matrix for mapping the local anisotropy axis of scattering media. By conducting both experiments on a silk sample and Monte Carlo simulations based on the sphere-cylinder scattering model (SCSM), we extract anisotropy-axis parameters from different backscattering Mueller matrix elements. Moreover, we examine the possible applications of these parameters to biological tissues. Preliminary experimental results on human cancerous samples show that these parameters are capable of mapping the local axis of fibers. Since many pathological changes, including early-stage cancers, affect the well-aligned structures of tissues, the experimental results indicate that these parameters can be used as potential tools for biomedical diagnosis in clinical applications.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Recent advances and future prospects for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Influence of Monte Carlo variance with fluence smoothing in VMAT treatment planning with Monaco TPS.
Sarkar, B; Manikandan, A; Nandy, M; Munshi, A; Sayan, P; Sujatha, N
2016-01-01
The study aimed to investigate the interplay between Monte Carlo variance (MCV) and the fluence smoothing factor (FSF) in volumetric modulated arc therapy treatment planning, using a sample set of complex treatment planning cases and an X-ray Voxel Monte Carlo-based treatment planning system equipped with tools to tune fluence smoothness as well as MCV. The dosimetric (dose to tumor volume and organs at risk) and physical characteristics (treatment time, number of segments, and so on) of a set of 45 treatment plans, covering all combinations of 1%, 3%, and 5% MCV and FSF values of 1, 3, and 5, were evaluated for the five esophageal carcinoma cases under study. Increasing the FSF reduces the treatment time. Variation of MCV and FSF gives maximum planning target volume (PTV), heart, and lung dose variations of 3.6%, 12.8%, and 4.3%, respectively. The heart dose variation was the highest among all organs at risk. The highest variation of spinal cord dose was 0.6 Gy. Variation of MCV and FSF significantly influences the organ-at-risk (OAR) doses but not PTV coverage or dose homogeneity. Variation in FSF causes differences in the dosimetric and physical parameters of the treatment plans, whereas variation of MCV does not. An MCV of 3% or less does not improve the plan quality significantly (physically or clinically) compared with an MCV greater than 3%. The use of an MCV between 3% and 5% gives results similar to 1% with less calculation time. The minimal differences detected in plan quality suggest that the optimum FSF can be set between 3 and 5.
Spatiotemporal drought variability of the eastern Tibetan Plateau during the last millennium
NASA Astrophysics Data System (ADS)
Deng, Yang; Gou, Xiaohua; Gao, Linlin; Yang, Meixue; Zhang, Fen
2017-09-01
The Tibetan Plateau is the headwater region of many major Asian rivers and is very susceptible to climate change. Therefore, knowledge about climate and its spatiotemporal variability in this area is very important for ecological conservation, water resource management, and social development. The aim of this study was to reconstruct and analyze the hydroclimatic variation on the eastern Tibetan Plateau (ETP) over many centuries and to explore possible forcing factors of regional hydroclimatic variability. We used 118 tree-ring chronologies from the ETP to reconstruct the gridded May-July Standardized Precipitation Evapotranspiration Index for the ETP over the last millennium. The reconstruction was developed using an ensemble point-by-point reconstruction method, and a searching-region method was used to locate the candidate tree-ring chronologies. The reconstructions capture the spatial and temporal features of the regional drought variation well. The drought variations south and north of 32.5°N are notably different, which may be related to the divergent influence of the North Atlantic Oscillation on the climate systems in the south and north, as well as to differences in local climate. Spectral analysis and series comparison suggest that the drought variation in the northeastern Tibetan Plateau has possibly been influenced by solar activity on centennial and longer timescales.
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
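As a concrete companion to the Metropolis discussion above, here is a minimal random-walk Metropolis sampler in Python; the bimodal target density is an arbitrary example chosen for illustration, not one taken from the text.

```python
import numpy as np

def metropolis(log_p, x0, step, n_samples, rng):
    """Random-walk Metropolis: sample from a density known only up to normalization."""
    x, lp, samples = x0, log_p(x0), []
    for _ in range(n_samples):
        x_new = x + rng.normal(0.0, step)
        lp_new = log_p(x_new)
        # Accept with probability min(1, p(x_new) / p(x)).
        if np.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Example target: an unnormalized bimodal density with modes at -2 and +2.
log_p = lambda x: np.log(np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2))
chain = metropolis(log_p, x0=0.0, step=1.0, n_samples=50_000, rng=np.random.default_rng(0))
print("sample mean (symmetric target, expect ~0):", chain.mean())
```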
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigen states of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30,960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to the variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or to the L(sub 50) life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
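A simplified sketch of one virtual sudden-death campaign of the kind described above, assuming a two-parameter Weibull life distribution; the slope, target L10, group sizes, and the median-rank fitting shortcut are illustrative choices, not the exact procedure of the study.

```python
import numpy as np

rng = np.random.default_rng(5)
beta_true, L10_true = 1.5, 100.0                          # Weibull slope and target L10 (hours)
eta_true = L10_true / (-np.log(0.9)) ** (1 / beta_true)   # characteristic life

def sudden_death_L10(n_groups, group_size):
    """One virtual sudden-death campaign: only the first failure of each group is observed."""
    lives = eta_true * rng.weibull(beta_true, size=(n_groups, group_size))
    firsts = np.sort(lives.min(axis=1))                    # sudden-death data
    # Weibull fit to the group minima via median ranks and a log-log regression.
    F = (np.arange(1, n_groups + 1) - 0.3) / (n_groups + 0.4)
    slope, intercept = np.polyfit(np.log(firsts), np.log(-np.log(1 - F)), 1)
    beta_hat = slope
    eta_min = np.exp(-intercept / slope)
    # The minimum of m Weibull lives is Weibull with eta reduced by m**(1/beta).
    eta_hat = eta_min * group_size ** (1 / beta_hat)
    return eta_hat * (-np.log(0.9)) ** (1 / beta_hat)

estimates = [sudden_death_L10(n_groups=12, group_size=6) for _ in range(2000)]
print("scatter in estimated L10 (5th/50th/95th pct):", np.percentile(estimates, [5, 50, 95]))
```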
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
Variations in Modeled Dengue Transmission over Puerto Rico Using a Climate Driven Dynamic Model
NASA Technical Reports Server (NTRS)
Morin, Cory; Monaghan, Andrew; Crosson, William; Quattrochi, Dale; Luvall, Jeffrey
2014-01-01
Dengue fever is a mosquito-borne viral disease reemerging throughout much of the tropical Americas. Dengue virus transmission is explicitly influenced by climate and the environment through its primary vector, Aedes aegypti. Temperature regulates Ae. aegypti development, survival, and replication rates as well as the incubation period of the virus within the mosquito. Precipitation provides water for many of the preferred breeding habitats of the mosquito, including buckets, old tires, and other places where water can collect. Because of variations in topography, ocean influences, and atmospheric processes, temperature and rainfall patterns vary across Puerto Rico, and so do dengue virus transmission rates. Using NASA's TRMM (Tropical Rainfall Measuring Mission) satellite for precipitation input, ground-based observations for temperature input, and laboratory-confirmed dengue cases reported by the Centers for Disease Control and Prevention for parameter calibration, we modeled dengue transmission at the county level across Puerto Rico from 2010-2013 using a dynamic dengue transmission model that includes interacting vector ecology and epidemiological components. Employing a Monte Carlo approach, we performed ensembles of several thousand model simulations for each county in order to resolve the model uncertainty arising from using different combinations of parameter values that are not well known. The top 1% of model simulations that best reproduced the reported dengue case data were then analyzed to determine the most important parameters for dengue virus transmission in each county, as well as the relative influence of climate variability on transmission. These results can be used by public health workers to implement dengue control methods that are targeted for specific locations and climate conditions.
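The ensemble-and-rank strategy described above can be sketched with a deliberately simplified stand-in model: sample poorly known parameters, run the model for each draw, score the fit to a reported case curve, and keep the best 1%. The SIR-style toy model, parameter ranges, and synthetic case curve below are hypothetical and much simpler than the coupled vector-host model used in the study.

```python
import numpy as np

rng = np.random.default_rng(11)
weeks = 52
# Hypothetical reported weekly case curve (a smooth peak around week 30).
reported = 800.0 * np.exp(-0.5 * ((np.arange(weeks) - 30) / 6.0) ** 2)

def toy_model(beta, gamma):
    """Very simplified SIR stand-in for the full vector-host model (weekly steps)."""
    S, I, cases = 1e5, 10.0, []
    for _ in range(weeks):
        new = min(beta * S * I / 1e5, S)   # new infections this week
        S, I = S - new, I + new - gamma * I
        cases.append(new)
    return np.array(cases)

# Monte Carlo ensemble over poorly known parameters.
n_sim = 10_000
betas = rng.uniform(0.5, 3.0, n_sim)
gammas = rng.uniform(0.3, 1.5, n_sim)
errors = np.array([np.sum((toy_model(b, g) - reported) ** 2) for b, g in zip(betas, gammas)])

# Keep the best-fitting 1% of simulations and inspect their parameter values.
best = np.argsort(errors)[: n_sim // 100]
print("beta range of top 1%: ", betas[best].min(), betas[best].max())
print("gamma range of top 1%:", gammas[best].min(), gammas[best].max())
```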
THE COLOR VARIABILITY OF QUASARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Kasper B.; Rix, Hans-Walter; Knecht, Matthias
2012-01-10
We quantify quasar color variability using an unprecedented variability database: ugriz photometry of 9093 quasars from Sloan Digital Sky Survey (SDSS) Stripe 82, observed over 8 years at ~60 epochs each. We confirm previous reports that quasars become bluer when brightening. We find a redshift dependence of this blueing in a given set of bands (e.g., g and r), but show that it is the result of the flux contribution from less-variable or delayed emission lines in the different SDSS bands at different redshifts. After correcting for this effect, quasar color variability is remarkably uniform, and independent not only of redshift, but also of quasar luminosity and black hole mass. The color variations of individual quasars, as they vary in brightness on year timescales, are much more pronounced than the ranges in color seen in samples of quasars across many orders of magnitude in luminosity. This indicates distinct physical mechanisms behind quasar variability and the observed range of quasar luminosities at a given black hole mass: quasar variations cannot be explained by changes in the mean accretion rate. We do find some dependence of the color variability on the characteristics of the flux variations themselves, with fast, low-amplitude brightness variations producing more color variability. The observed behavior could arise if quasar variability results from flares or ephemeral hot spots in an accretion disk.
Quantitative image feature variability amongst CT scanners with a controlled scan protocol
NASA Astrophysics Data System (ADS)
Ger, Rachel B.; Zhou, Shouhao; Chi, Pai-Chun Melinda; Goff, David L.; Zhang, Lifei; Lee, Hannah J.; Fuller, Clifton D.; Howell, Rebecca M.; Li, Heng; Stafford, R. Jason; Court, Laurence E.; Mackin, Dennis S.
2018-02-01
Radiomics studies often analyze patient computed tomography (CT) images acquired from different CT scanners. This may result in differences in imaging parameters, e.g. different manufacturers, different acquisition protocols, etc. However, quantifiable differences in radiomics features can occur depending on acquisition parameters. A controlled protocol may allow these effects to be minimized, thus allowing larger patient cohorts from many different CT scanners. In order to test radiomics feature variability across different CT scanners, a radiomics phantom was developed with six different cartridges encased in high-density polystyrene. A harmonized protocol was developed to control for tube voltage, tube current, scan type, pitch, CTDIvol, convolution kernel, display field of view, and slice thickness across different manufacturers. The radiomics phantom was imaged on 18 scanners using the control protocol. A linear mixed effects model was created to assess the impact of inter-scanner variability, with decomposition of feature variation between scanners and cartridge materials. The inter-scanner variability was compared to the residual variability (the unexplained variability) and to the inter-patient variability using two different patient cohorts. The patient cohorts consisted of 20 non-small cell lung cancer (NSCLC) and 30 head and neck squamous cell carcinoma (HNSCC) patients. The inter-scanner standard deviation was at least half of the residual standard deviation for 36 of 49 quantitative image features. The ratio of inter-scanner to patient coefficient of variation was above 0.2 for 22 and 28 of the 49 features for NSCLC and HNSCC patients, respectively. Inter-scanner variability was a significant factor compared to patient variation in this small study for many of the features. Further analysis with a larger cohort will allow a more thorough analysis with additional variables in the model to truly isolate the inter-scanner difference.
Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Two types of rolling-element bearings representing radial loaded and thrust loaded bearings were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5321 bearings. A simple algebraic relation was established for the upper and lower L(sub 10) life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variations between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percentage of individual components failed in the Monte Carlo simulation and that predicted.
Allometric constraints to inversion of canopy structure from remote sensing
NASA Astrophysics Data System (ADS)
Wolf, A.; Berry, J. A.; Asner, G. P.
2008-12-01
Canopy radiative transfer models employ a large number of vegetation architectural and leaf biochemical attributes. Studies of leaf biochemistry show a wide array of chemical and spectral diversity, suggesting that several leaf biochemical constituents can be independently retrieved from multi-spectral remotely sensed imagery. In contrast, attempts to exploit multi-angle imagery to retrieve canopy structure only succeed in finding two or three of the many unknown canopy architectural attributes. We examine a database of over 5000 destructive tree harvests from Eurasia to show that allometry - the covariation of plant form across a broad range of plant size and canopy density - restricts the architectural diversity of plant canopies to a single composite variable ranging from young canopies with many short trees with small crowns to older canopies with fewer trees and larger crowns. Moreover, these architectural attributes are closely linked to biomass via allometric constraints such as the "self-thinning law". We use the measured variance and covariance of plant canopy architecture in these stands to drive the radiative transfer model DISORD, which employs the Li-Strahler geometric optics model. The correlations introduced in the Monte Carlo study are used to determine which attributes of canopy architecture lead to important variation that can be observed by multi-angle or multi-spectral satellite observations, using the sun-view geometry characteristic of MODIS observations in different biomes located at different latitude bands. We conclude that although multi-angle/multi-spectral remote sensing is only sensitive to some of the many unknown canopy attributes that ecologists would wish to know, the strong allometric covariation between these attributes and others permits a large number of inferences, such as forest biomass, that can serve as meaningful next-generation vegetation products useful for data assimilation.
Aerocapture Performance Analysis for a Neptune-Triton Exploration Mission
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Westhelle, Carlos H.; Masciarelli, James P.
2004-01-01
A systems analysis has been conducted for a Neptune-Triton Exploration Mission in which aerocapture is used to capture a spacecraft at Neptune. Aerocapture uses aerodynamic drag instead of propulsion to decelerate from the interplanetary approach trajectory to a captured orbit during a single pass through the atmosphere. After capture, propulsion is used to move the spacecraft from the initial captured orbit to the desired science orbit. A preliminary assessment identified that a spacecraft with a lift-to-drag ratio of 0.8 was required for aerocapture. Performance analyses of the 0.8 L/D vehicle were performed using a high-fidelity flight simulation within a Monte Carlo executive to determine mission success statistics. The simulation was the Program to Optimize Simulated Trajectories (POST), modified to include Neptune-specific atmospheric and planet models, spacecraft aerodynamic characteristics, and interplanetary trajectory models. To these were added autonomous guidance and pseudo flight controller models. The Monte Carlo analyses incorporated approach trajectory delivery errors, aerodynamic characteristic uncertainties, and atmospheric density variations. Monte Carlo analyses were performed for a reference set of uncertainties and for sets of uncertainties modified to produce increased and reduced atmospheric variability. For the reference uncertainties, the 0.8 L/D flat-bottom ellipsled vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 360 m/s delta-V budget for apoapsis and periapsis adjustment. Monte Carlo analyses were also performed for a guidance system that modulates both bank angle and angle of attack with the reference set of uncertainties. An alpha and bank modulation guidance system reduces the 99.87 percentile delta-V by 173 m/s (48%), to 187 m/s, for the reference set of uncertainties.
Comments on historical variation & desired condition as tools for terrestrial landscape analysis
Constance I. Millar
1997-01-01
Historic (natural or reference) variability and desired condition are key ecosystem-management concepts advocated in many approaches to terrestrial landscape analysis. Historical variation is considered to be a conservative indicator of sustainability. If current conditions are outside the range of historic values, management actions are described to realign the system...
The Known Mix: A Taste of Variation
ERIC Educational Resources Information Center
Canada, Daniel L.
2008-01-01
To create an environment in which all students have opportunities to notice, describe, and wonder about variability, this article takes a context familiar to many teachers--sampling colored chips from a jar--and shows how this context was used to explicitly focus on variation in the classroom. The sampling activity includes physical as well as…
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of constants, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
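A crude Monte Carlo baseline for the option-pricing problem described above, assuming a single-asset European call under geometric Brownian motion with hypothetical parameters; the paper's focus is on multidimensional integrals and quasi-Monte Carlo (lattice and Sobol) points, which this plain-MC sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Risk-neutral Monte Carlo price of a European call under geometric Brownian motion.
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # hypothetical market parameters
n_paths = 1_000_000

Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)  # terminal prices
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)

print(f"MC price = {price:.4f} +/- {stderr:.4f}")
```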
Robatjazi, Mostafa; Baghani, Hamid Reza; Mahdavic, Seied Rabi; Felici, Giuseppe
2018-05-01
A shielding disk is used in IOERT procedures to absorb radiation behind the target and protect the underlying healthy tissues. Setup variations of the shielding disk can affect the corresponding in-vivo dose distribution. In this study, the changes in dosimetric parameters due to disk setup variations are evaluated using the EGSnrc Monte Carlo (MC) code. The results can help the treatment team to decide on the required level of accuracy in the setup procedure and on the dose delivered to the target volume during IOERT. Copyright © 2018 Elsevier Ltd. All rights reserved.
Blunt, Nick S.; Neuscamman, Eric
2017-11-16
We present a simple and efficient wave function ansatz for the treatment of excited charge-transfer states in real-space quantum Monte Carlo methods. Using the recently-introduced variation-after-response method, this ansatz allows a crucial orbital optimization step to be performed beyond a configuration interaction singles expansion, while only requiring calculation of two Slater determinant objects. As a result, we demonstrate this ansatz for the illustrative example of the stretched LiF molecule, for a range of excited states of formaldehyde, and finally for the more challenging ethylene-tetrafluoroethylene molecule.
An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.
Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca
2002-09-01
In the usual regression setting, one regression line is computed for a whole data set. In a more complex situation, each person may be observed, for example, at several points in time, and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years, methods have been developed that allow convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods, an example from dentistry is presented in order to demonstrate their application and use. The data set of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. It is illustrated how the corresponding models can be estimated conveniently via MCMC simulation, in particular Gibbs sampling, using the freely available software BUGS. In addition, it is explored how the measurement error may influence the estimates of the corresponding coefficients. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
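The 'regression dilution bias' mentioned above is easy to reproduce by simulation: adding classical measurement error to the covariate attenuates the estimated slope by roughly var(x)/(var(x)+var(error)). A minimal sketch with invented values:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000

# True relationship: y depends on the true covariate x with slope 1.0.
x_true = rng.normal(0.0, 1.0, n)
y = 1.0 * x_true + rng.normal(0.0, 0.5, n)

# Observed covariate is measured with error (classical measurement error, sd = 1).
x_obs = x_true + rng.normal(0.0, 1.0, n)

slope_true = np.polyfit(x_true, y, 1)[0]
slope_obs = np.polyfit(x_obs, y, 1)[0]
print("slope using true covariate:       ", round(slope_true, 3))  # close to 1.0
print("slope using error-prone covariate:", round(slope_obs, 3))   # attenuated, near 0.5
```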
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
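The projection step described above can be illustrated with a minimal sketch. The Python example below is an editorial illustration, not the authors' rule-based, network-free code: it applies the same equation-free pattern to a single degradation reaction A -> 0, running a short burst of exact Gillespie simulation, estimating the coarse derivative by finite differencing, and projecting forward past many individual reaction events. The rate constant and step sizes are assumptions.

```python
# Equation-free (projective) acceleration of an exact stochastic simulation,
# sketched for the single reaction A -> 0 with rate constant k.
import numpy as np

rng = np.random.default_rng(1)
k = 0.1                                  # degradation rate constant (1/time)

def gillespie_burst(n, t_burst):
    """Exact SSA for A -> 0 over a window of length t_burst; returns the final count."""
    t = 0.0
    while n > 0:
        dt = rng.exponential(1.0 / (k * n))
        if t + dt > t_burst:
            break
        t += dt
        n -= 1
    return n

n = 10_000
t, t_end = 0.0, 20.0
t_burst, t_project = 0.2, 1.8            # short exact burst, then coarse projection

while t < t_end:
    n_after = gillespie_burst(n, t_burst)
    dn_dt = (n_after - n) / t_burst                       # finite-difference derivative
    n = max(0, int(round(n_after + dn_dt * t_project)))   # projection skips many events
    t += t_burst + t_project
    print(f"t = {t:5.1f}  projected n = {n:6d}  exact mean = {10_000 * np.exp(-k * t):8.1f}")
# The projection uses a simple forward-Euler step, so a small systematic error
# accumulates; shorter projection intervals trade speed for accuracy.
```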
From stage to age in variable environments: life expectancy and survivorship.
Tuljapurkar, Shripad; Horvitz, Carol C
2006-06-01
Stage-based demographic data are now available on many species of plants and some animals, and they often display temporal and spatial variability. We provide exact formulas to compute age-specific life expectancy and survivorship from stage-based data for three models of temporal variability: cycles, serially independent random variation, and a Markov chain. These models provide a comprehensive description of patterns of temporal variation. Our formulas describe the effects of cohort (birth) environmental condition on mortality at all ages, and of the effects on survivorship of environmental variability experienced over the course of life. This paper complements existing methods for time-invariant stage-based data, and adds to the information on population growth and dynamics available from stochastic demography.
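For orientation, the sketch below computes life expectancy and survivorship for the time-invariant case that this paper generalizes, using the standard fundamental-matrix result; the stage-transition matrix is an illustrative placeholder, not data from the study.

```python
# Time-invariant stage-based demography: column j of U gives the fate of an
# individual currently in stage j (survival and transitions only, no reproduction).
import numpy as np

U = np.array([[0.2, 0.0, 0.0],    # stay juvenile
              [0.5, 0.6, 0.0],    # juvenile -> subadult, stay subadult
              [0.0, 0.3, 0.85]])  # subadult -> adult, stay adult

N = np.linalg.inv(np.eye(3) - U)       # fundamental matrix: expected visits per stage
life_expectancy = N.sum(axis=0)        # expected lifespan by starting stage

# Survivorship l(a): probability of still being alive at age a, by birth stage.
survivorship = [np.linalg.matrix_power(U, a).sum(axis=0) for a in range(11)]

print("life expectancy by starting stage:", np.round(life_expectancy, 2))
print("survivorship to age 10 from stage 1: %.3f" % survivorship[10][0])
```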
Ball, Gregory F; Balthazart, Jacques
2008-05-12
Investigations of the cellular and molecular mechanisms of physiology and behaviour have generally avoided attempts to explain individual differences. The goal has rather been to discover general processes. However, understanding the causes of individual variation in many phenomena of interest to avian eco-physiologists will require a consideration of such mechanisms. For example, in birds, changes in plasma concentrations of steroid hormones are important in the activation of social behaviours related to reproduction and aggression. Attempts to explain individual variation in these behaviours as a function of variation in plasma hormone concentrations have generally failed. Cellular variables related to the effectiveness of steroid hormone action have been useful in some cases. Steroid hormone target sensitivity can be affected by variables such as metabolizing enzyme activity, hormone receptor expression, and receptor cofactor expression. At present, no general theory has emerged that might provide clear guidance when trying to explain individual variability in birds or in any other group of vertebrates. One strategy is to learn from studies of large units of intraspecific variation, such as population or sex differences, to provide ideas about variables that might be important in explaining individual variation. This approach, along with the use of newly developed molecular genetic tools, represents a promising avenue for avian eco-physiologists to pursue.
A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Lund, Jay R.
2011-05-01
Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used both to prescribe non-dominated solutions and to predict the outcome of a decision-making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, often neglected by other methods which assume perfect cooperation among decision makers. To deal with uncertainty in input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results include possible effects of uncertainty in input variables on outcomes. The method can handle multi-criteria multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, development of a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, reducing the computational burden substantially. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
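A minimal sketch of the MCGT idea, with made-up payoffs rather than the Delta case-study data: draw the uncertain payoffs many times, solve each sampled deterministic two-player game for its pure-strategy Nash equilibria (one simple non-cooperative stability notion), and tally how often each outcome is stable.

```python
# Monte-Carlo sampling over uncertain payoffs, with a pure-strategy Nash check
# applied to each sampled deterministic game.
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
n_samples, n_strategies = 10_000, 3
tally = Counter()

for _ in range(n_samples):
    a = rng.normal(size=(n_strategies, n_strategies))  # payoff to decision maker 1 (placeholder)
    b = rng.normal(size=(n_strategies, n_strategies))  # payoff to decision maker 2 (placeholder)
    for i in range(n_strategies):
        for j in range(n_strategies):
            best_for_1 = a[i, j] >= a[:, j].max()       # no profitable deviation for 1
            best_for_2 = b[i, j] >= b[i, :].max()       # no profitable deviation for 2
            if best_for_1 and best_for_2:
                tally[(i, j)] += 1

for outcome, count in tally.most_common():
    print(f"outcome {outcome}: stable in {100.0 * count / n_samples:.1f}% of sampled games")
```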
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
The choice of product indicators in latent variable interaction models: post hoc analyses.
Foldnes, Njål; Hagtvet, Knut Arne
2014-09-01
The unconstrained product indicator (PI) approach is a simple and popular approach for modeling nonlinear effects among latent variables. This approach leaves the practitioner to choose the PIs to be included in the model, introducing arbitrariness into the modeling. In contrast to previous Monte Carlo studies, we evaluated the PI approach by 3 post hoc analyses applied to a real-world case adopted from a research effort in social psychology. The measurement design applied 3 and 4 indicators for the 2 latent 1st-order variables, leaving the researcher with a choice among more than 4,000 possible PI configurations. Sixty so-called matched-pair configurations that have been recommended in previous literature are of special interest. In the 1st post hoc analysis we estimated the interaction effect for all PI configurations, keeping the real-world sample fixed. The estimated interaction effect was substantially affected by the choice of PIs, also across matched-pair configurations. Subsequently, a post hoc Monte Carlo study was conducted, with varying sample sizes and data distributions. Convergence, bias, Type I error and power of the interaction test were investigated for each matched-pair configuration and the all-pairs configuration. Variation in estimates across matched-pair configurations for a typical sample was substantial. The choice of specific configuration significantly affected convergence and the interaction test's outcome. The all-pairs configuration performed overall better than the matched-pair configurations. A further advantage of the all-pairs over the matched-pairs approach is its unambiguity. The final study evaluates the all-pairs configuration for small sample sizes and compares it to the non-PI approach of latent moderated structural equations. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Framework for a hydrologic climate-response network in New England
Lent, Robert M.; Hodgkins, Glenn A.; Dudley, Robert W.; Schalk, Luther F.
2015-01-01
Many climate-related hydrologic variables in New England have changed in the past century, and many are expected to change during the next century. It is important to understand and monitor these changes because they can affect human water supply, hydroelectric power generation, transportation infrastructure, and stream and riparian ecology. This report describes a framework for hydrologic monitoring in New England by means of a climate-response network. The framework identifies specific inland hydrologic variables that are sensitive to climate variation; identifies geographic regions with similar hydrologic responses; proposes a fixed-station monitoring network composed of existing streamflow, groundwater, lake ice, snowpack, and meteorological data-collection stations for evaluation of hydrologic response to climate variation; and identifies streamflow basins for intensive, process-based studies and for estimates of future hydrologic conditions.
Impact of Variations on 1-D Flow in Gas Turbine Engines via Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Ngo, Khiem Viet; Tumer, Irem
2004-01-01
The unsteady compressible inviscid flow is characterized by the conservation of mass, momentum, and energy, or simply the Euler equations. In this paper, a study of the subsonic one-dimensional Euler equations with local preconditioning is presented using a modal analysis approach. Specifically, this study investigates the behavior of airflow in a gas turbine engine using the specified conditions at the inflow and outflow boundaries of the compressor, combustion chamber, and turbine, to determine the impact of variations in pressure, velocity, temperature, and density at low Mach numbers. Two main questions motivate this research: 1) Is there any aerodynamic problem with the existing gas turbine engines that could impact aircraft performance? 2) If yes, what aspect of a gas turbine engine could be improved via design to alleviate that impact and to optimize aircraft performance? This paper presents an initial attempt to model the flow behavior in terms of its eigenfrequencies under assumed uncertainties or variations (perturbations). The flow behavior is explored using simulation outputs from a customer-deck model obtained from Pratt & Whitney. Variations of the main variables (i.e., pressure, temperature, velocity, density) about their mean states at the inflow and outflow boundaries of the compressor, combustion chamber, and turbine are modeled. Flow behavior is analyzed for the high-pressure compressor and combustion chamber utilizing the conditions on their left and right boundaries. In the same fashion, similar analyses are carried out for the high-pressure and low-pressure turbines. In each case, the eigenfrequencies that are obtained for different boundary conditions are examined closely based on their probabilistic distributions, obtained from a Monte Carlo simulation with 10,000 samples. Furthermore, the characteristic waves and wave response are analyzed and contrasted among different cases, with and without preconditioners. The results reveal the existence of flow instabilities due to the combined effect of variations and excessive pressures in the case of the combustion chamber and high-pressure turbine. Finally, a discussion is presented on potential impacts of the instabilities and what can be improved via design to alleviate them for better aircraft performance.
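As a hedged illustration of the Monte Carlo step described above (using placeholder mean-flow values rather than the Pratt & Whitney customer-deck data), the sketch below propagates assumed variations in pressure, density, and velocity at a station into the characteristic wave speeds u, u + c, and u - c of the 1-D Euler equations, where c = sqrt(gamma*p/rho).

```python
# Monte Carlo propagation of boundary-state variations into characteristic speeds.
import numpy as np

rng = np.random.default_rng(3)
gamma, n = 1.4, 10_000

p   = rng.normal(2.0e6, 0.05e6, n)   # pressure [Pa] (assumed mean and spread)
rho = rng.normal(8.0, 0.2, n)        # density [kg/m^3] (assumed)
u   = rng.normal(150.0, 5.0, n)      # axial velocity [m/s] (assumed)

c = np.sqrt(gamma * p / rho)         # local speed of sound
for name, speed in [("u", u), ("u+c", u + c), ("u-c", u - c)]:
    print(f"{name:4s}: mean = {speed.mean():8.1f} m/s, std = {speed.std():6.1f} m/s")
```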
A multi-wavelength search for photometric variability in L dwarfs
NASA Astrophysics Data System (ADS)
Gelino, Christopher Ryan
Previous studies investigating L-dwarf variability have been conducted in the optical I filter. These studies have shown that some, but not all, L dwarfs are variable in this filter. In this dissertation I increase the number of L dwarfs observed for variations in the I filter from 10 to 25, with another three from the original ten reexamined here. I find that at least 7 and possibly as many as 12 are variable. One of these variable objects has a puzzling saw-tooth pattern in part of its light curve and another displays a feature that could indicate the creation and dissipation of a large storm. There is no evidence for significant differences between the variable and non-variable objects in their colors, Hα emission, or Li I absorption. I argue that the lack of a correlation between Hα and variability, coupled with low magnetic Reynolds number and ionization fraction in the upper atmosphere, suggests a non-magnetic origin for the variations and favors non-uniform condensate coverage. Furthermore, the absence of significant periodicity in these objects could indicate that these clouds evolve rapidly on timescales of hours to days. In addition to the optical survey, I also present the first multi-wavelength near-infrared search for photometric variations in L dwarfs. I was unable to detect any definite variability in these eleven targets. The upper limits for the amplitude of possible variations suggest that L dwarfs display smaller variations in K than in J and H. The small number of objects on which this conclusion is based and the lack of variability detections underscore the importance of more work in this wavelength regime.
Simulating Silicon Photomultiplier Response to Scintillation Light
Jha, Abhinav K.; van Dam, Herman T.; Kupinski, Matthew A.; Clarkson, Eric
2015-01-01
The response of a Silicon Photomultiplier (SiPM) to optical signals is affected by many factors including photon-detection efficiency, recovery time, gain, optical crosstalk, afterpulsing, dark count, and detector dead time. Many of these parameters vary with overvoltage and temperature. When used to detect scintillation light, there is a complicated non-linear relationship between the incident light and the response of the SiPM. In this paper, we propose a combined discrete-time discrete-event Monte Carlo (MC) model to simulate SiPM response to scintillation light pulses. Our MC model accounts for all relevant aspects of the SiPM response, some of which were not accounted for in the previous models. We also derive and validate analytic expressions for the single-photoelectron response of the SiPM and the voltage drop across the quenching resistance in the SiPM microcell. These analytic expressions consider the effect of all the circuit elements in the SiPM and accurately simulate the time-variation in overvoltage across the microcells of the SiPM. Consequently, our MC model is able to incorporate the variation of the different SiPM parameters with varying overvoltage. The MC model is compared with measurements on SiPM-based scintillation detectors and with some cases for which the response is known a priori. The model is also used to study the variation in SiPM behavior with SiPM-circuit parameter variations and to predict the response of a SiPM-based detector to various scintillators. PMID:26236040
Structural and optical behavior due to thermal effects in end-pumped Yb:YAG disk lasers.
Sazegari, Vahid; Milani, Mohammad Reza Jafari; Jafari, Ahmad Khayat
2010-12-20
We employ a Monte Carlo ray-tracing code along with the ANSYS package to predict the optical and structural behavior in end-pumped CW Yb:YAG disk lasers. The presence of inhomogeneous temperature, stress, and strain distributions is responsible for many deleterious effects on laser action through disk fracture, strain-induced birefringence, and thermal lensing. Thermal lensing, in turn, results in optical phase distortion in solid-state lasers. Furthermore, the dependence of optical phase distortion on variables such as the heat transfer coefficient, the cooling fluid temperature, and crystal thickness is discussed.
Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contessi, L.; Lovato, A.; Pederiva, F.
Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.
Microwave reflection, transmission, and absorption by human brain tissue
NASA Astrophysics Data System (ADS)
Ansari, M. A.; Akhlaghipour, N.; Zarei, M.; Niknam, A. R.
2018-04-01
These days, the biological effects of electromagnetic (EM) radiation on the brain, especially in the frequency range of mobile communications, have caught the attention of many scientists. Therefore, in this paper, the propagation of mobile phone electromagnetic waves in brain tissue is investigated analytically and numerically. The brain is modeled by three layers consisting of the skull, grey matter, and white matter. First, we analytically calculate the microwave reflection, transmission, and absorption coefficients using the signal flow graph technique. The effects of microwave frequency and of variations in layer thickness on the propagation of microwaves through the brain are studied. Then, the penetration of microwaves into the layers is numerically investigated by the Monte Carlo method. It is shown that the analytical results are in good agreement with those obtained by the Monte Carlo method. Our results indicate that the absorbed microwave energy depends on the microwave frequency and the thickness of the brain layers, and that the absorption coefficient is optimized at a number of frequencies. These findings can be used for comparing the absorbed microwave energy in a child's and an adult's brain.
A Sustained Dietary Change Increases Epigenetic Variation in Isogenic Mice
Cowley, Mark J.; Preiss, Thomas; Martin, David I. K.; Suter, Catherine M.
2011-01-01
Epigenetic changes can be induced by adverse environmental exposures, such as nutritional imbalance, but little is known about the nature or extent of these changes. Here we have explored the epigenomic effects of a sustained nutritional change, excess dietary methyl donors, by assessing genomic CpG methylation patterns in isogenic mice exposed for one or six generations. We find stochastic variation in methylation levels at many loci; exposure to methyl donors increases the magnitude of this variation and the number of variable loci. Several gene ontology categories are significantly overrepresented in genes proximal to these methylation-variable loci, suggesting that certain pathways are susceptible to environmental influence on their epigenetic states. Long-term exposure to the diet (six generations) results in a larger number of loci exhibiting epigenetic variability, suggesting that some of the induced changes are heritable. This finding presents the possibility that epigenetic variation within populations can be induced by environmental change, providing a vehicle for disease predisposition and possibly a substrate for natural selection. PMID:21541011
Global characteristics of stream flow seasonality and variability
Dettinger, M.D.; Diaz, Henry F.
2000-01-01
Monthly stream flow series from 1345 sites around the world are used to characterize geographic differences in the seasonality and year-to-year variability of stream flow. Stream flow seasonality varies regionally, depending on the timing of maximum precipitation, evapotranspiration, and contributions from snow and ice. Lags between peaks of precipitation and stream flow vary smoothly from long delays in high-latitude and mountainous regions to short delays in the warmest sectors. Stream flow is most variable from year to year in dry regions of the southwest United States and Mexico, the Sahel, and southern continents, and it varies more (relatively) than precipitation in the same regions. Tropical rivers have the steadiest flows. El Niño variations are correlated with stream flow in many parts of the Americas, Europe, and Australia. Many stream flow series from North America, Europe, and the Tropics reflect North Pacific climate, whereas series from the eastern United States, Europe, and tropical South America and Africa reflect North Atlantic climate variations.
Reboredo, Fernando A; Kim, Jeongnim
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
NASA Astrophysics Data System (ADS)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
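The posterior-characterization step mentioned above can be sketched with a toy random-walk Metropolis sampler; the scalar "forward model", noise level, and prior below are illustrative assumptions standing in for the flowline model, not the authors' setup.

```python
# Random-walk Metropolis sampling of a posterior over one uncertain parameter.
import numpy as np

rng = np.random.default_rng(4)

def forward(theta):
    """Toy forward model mapping an uncertain parameter to an observable."""
    return 2.0 * theta + 0.5 * theta ** 2

theta_true = 1.3
data = forward(theta_true) + rng.normal(0.0, 0.1, size=20)   # noisy "observations"

def log_posterior(theta):
    # Gaussian likelihood with sd 0.1, standard normal prior on theta.
    return -0.5 * np.sum((data - forward(theta)) ** 2) / 0.1 ** 2 - 0.5 * theta ** 2

samples, theta, lp = [], 0.0, log_posterior(0.0)
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.05)          # symmetric proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5_000:])                  # discard burn-in
print(f"posterior mean = {post.mean():.3f}, sd = {post.std():.3f} (truth {theta_true})")
```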
NASA Astrophysics Data System (ADS)
Cannavacciuolo, Luigi; Skov Pedersen, Jan; Schurtenberger, Peter
2002-03-01
Results of an extensive Monte Carlo (MC) study on both single and many semiflexible charged chains with excluded volume (EV) are summarized. The model employed has been tailored to mimic wormlike micelles in solution. Simulations have been performed at different ionic strengths of added salt, charge densities, chain lengths and volume fractions Φ, covering the dilute to concentrated regime. At infinite dilution the scattering functions can be fitted by the same fitting functions as for uncharged semiflexible chains with EV, provided that an electrostatic contribution b_el is added to the bare Kuhn length. The scaling of b_el is found to be more complex than the Odijk-Skolnick-Fixman predictions, and qualitatively compatible with more recent variational calculations. Universality in the scaling of the radius of gyration is found if all lengths are rescaled by the total Kuhn length. At finite concentrations, the simple model used is able to reproduce the structural peak in the scattering function S(q) observed in many experiments, as well as other properties of polyelectrolytes (PELs) in solution. Universal behaviour of the forward scattering S(0) is established after a rescaling of Φ. MC data are found to be in very good agreement with experimental scattering measurements with equilibrium PELs, which are giant wormlike micelles formed in mixtures of nonionic and ionic surfactants in dilute aqueous solution, with added salt.
Stan: A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
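A minimal example of calling Stan from Python, assuming the pystan 2.x interface that was contemporary with the Stan 2.14 release described in this abstract (later pystan versions changed this API); the Bernoulli model and data are toy placeholders.

```python
# Toy Bernoulli model fit with Stan via pystan (2.x interface assumed).
import pystan

model_code = """
data {
  int<lower=0> N;
  int<lower=0, upper=1> y[N];
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);
  y ~ bernoulli(theta);       // log probability accumulated over the data
}
"""

data = {"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]}

sm = pystan.StanModel(model_code=model_code)        # compile the Stan program
fit = sm.sampling(data=data, iter=2000, chains=4)   # NUTS/HMC sampling
print(fit)                                          # posterior summary for theta
```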
Spin-driven structural effects in alkali-doped 4He clusters from quantum calculations.
Bovino, S; Coccia, E; Bodo, E; Lopez-Durán, D; Gianturco, F A
2009-06-14
In this paper, we carry out variational Monte Carlo and diffusion Monte Carlo (DMC) calculations for Li2(1Σg+)(4He)N and Li2(3Σu+)(4He)N with N up to 30 and discuss in detail the results of our computations. After a comparison of our DMC energies with the "exact" discrete variable representation values for the species with one 4He, in order to test the quality of our computations at 0 K, we analyze the structural features of the whole range of doped clusters. We find that both species reside on the droplet surface, but that their orientation is spin driven, i.e., the singlet molecule is perpendicular and the triplet one is parallel to the droplet's surface. We have also computed quantum vibrational relaxation rates for both dimers in collision with a single 4He and we find them to differ by orders of magnitude at the estimated surface temperature. Our results therefore confirm the findings from a great number of experimental data present in the current literature and provide one of the first attempts at giving an accurate, fully quantum picture for the nanoscopic properties of alkali dimers in 4He clusters.
NASA Astrophysics Data System (ADS)
Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.
An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes, and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near-perfect park orbit injection was achieved, followed by a trans-Mars injection with errors of less than 2 sigma.
Sanabria, Eduardo; Quiroga, Lorena; Vergara, Cristina; Banchig, Mariana; Rodriguez, Cesar; Ontivero, Emanuel
2018-05-01
Rhinella spinulosa is distributed from Peru to Argentina (from 1200 to 5000 m elevation), inhabiting arid mountain valleys of the Andes, characterized by salty soils. The variations in soil salinity, caused by high evapotranspiration of water, can create an osmotic constraint and high thermal oscillations for the metamorphosed Andean toad (R. spinulosa), affecting its thermoregulation and extreme thermal tolerances. We investigated the changes in thermal tolerance parameters (critical thermal maximum and crystallization temperature) of a population of metamorphosed R. spinulosa from the Monte Desert of San Juan, Argentina, under different substrate salinity conditions. Our results suggest that the locomotor performance of metamorphs of R. spinulosa is affected by increasing salinity concentrations in the environment where they develop. On the other hand, the thermal extremes of metamorphs of R. spinulosa also showed changes associated with different salinity conditions. According to other studies on different organisms, the increase of the osmolarity of the internal medium may increase the thermal tolerance of this species. More studies are needed to understand the thermo-osmolar adjustments of the metamorphs of toads to environmental variability. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimized MPPT algorithm for boost converters taking into account the environmental variables
NASA Astrophysics Data System (ADS)
Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel
2016-07-01
This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion of PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical dependency among parameters such as the input voltage, the output voltage, and the duty cycle of the PWM switching control of the MPPT. As a direct consequence, many converters connected to the same load perturb each other through the output voltage variations induced by fluctuations on the HVDC bus, essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable that compensates for local and external variations, thereby taking the environmental variables into account.
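The dependency the authors point out can be seen from the ideal boost-converter relation Vin = (1 - D) * Vout (continuous conduction assumed): holding the PV panel at its maximum-power voltage requires recomputing the duty cycle whenever the bus voltage moves. A minimal sketch with placeholder values:

```python
# Duty cycle needed to hold the PV input at its maximum-power voltage as the
# HVDC bus voltage fluctuates (ideal boost converter, continuous conduction).
v_mpp = 30.0                                # PV voltage at the maximum power point [V] (assumed)
for v_bus in (380.0, 400.0, 420.0):         # HVDC bus fluctuation (assumed)
    d = 1.0 - v_mpp / v_bus                 # duty cycle that keeps Vin = v_mpp
    print(f"bus = {v_bus:5.1f} V  ->  duty cycle = {d:.4f}")
# A tracker that ignores the bus variation (fixed D) lets the panel voltage
# drift away from v_mpp, which is the perturbation the paper proposes to
# compensate with an internally computed correction variable.
```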
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty inherent in computed risks, which arises because the dose-response model parameters are estimated from limited epidemiological data, is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
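For reference, the sketch below evaluates the exponential model, the conventional beta-Poisson approximation, and the exact beta-Poisson form (via the Kummer confluent hypergeometric function) at placeholder parameter values; it is an editorial illustration, not the OpenBUGS analysis of the case-study data.

```python
# Standard microbial dose-response forms at illustrative parameter values.
import numpy as np
from scipy.special import hyp1f1   # Kummer confluent hypergeometric function

def p_exponential(dose, r):
    """Exponential dose-response model."""
    return 1.0 - np.exp(-r * dose)

def p_beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson model."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def p_beta_poisson_exact(dose, alpha, beta):
    """Exact beta-Poisson probability of infection via the Kummer function."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

doses = np.array([0.1, 1.0, 10.0, 100.0])
alpha, beta, r = 0.3, 40.0, 0.01          # placeholder parameters
print("exponential        :", np.round(p_exponential(doses, r), 4))
print("beta-Poisson approx:", np.round(p_beta_poisson_approx(doses, alpha, beta), 4))
print("beta-Poisson exact :", np.round(p_beta_poisson_exact(doses, alpha, beta), 4))
# The approximation is commonly cited as adequate when beta is large relative
# to both 1 and alpha; the abstract proposes criteria to validate its use.
```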
Method for measuring changes in light absorption of highly scattering media
Bigio, Irving J.; Johnson, Tamara M.; Mourant, Judith R.
2002-01-01
The noninvasive measurement of variations in absorption that are due to changes in concentrations of biochemically relevant compounds in tissue is important in many clinical settings. One problem with such measurements is that the pathlength traveled by the collected light through the tissue depends on the scattering properties of the tissue. It is demonstrated, using both Monte Carlo simulations and experimental measurements, that for an appropriate separation between light-delivery and light-collection fibers, the pathlength of the collected photons is insensitive to scattering parameters for the range of parameters typically found in tissue. This is important for developing rapid, noninvasive, inexpensive, and accurate methods for measuring absorption changes in tissue.
Using altimetry to help explain patchy changes in hydrographic carbon measurements
NASA Astrophysics Data System (ADS)
Rodgers, Keith B.; Key, Robert M.; Gnanadesikan, Anand; Sarmiento, Jorge L.; Aumont, Olivier; Bopp, Laurent; Doney, Scott C.; Dunne, John P.; Glover, David M.; Ishida, Akio; Ishii, Masao; Jacobson, Andrew R.; Lo Monaco, Claire; Maier-Reimer, Ernst; Mercier, Herlé; Metzl, Nicolas; PéRez, Fiz F.; Rios, Aida F.; Wanninkhof, Rik; Wetzel, Patrick; Winn, Christopher D.; Yamanaka, Yasuhiro
2009-09-01
Here we use observations and ocean models to identify mechanisms driving large seasonal to interannual variations in dissolved inorganic carbon (DIC) and dissolved oxygen (O2) in the upper ocean. We begin with observations linking variations in upper ocean DIC and O2 inventories with changes in the physical state of the ocean. Models are subsequently used to address the extent to which the relationships derived from short-timescale (6 months to 2 years) repeat measurements are representative of variations over larger spatial and temporal scales. The main new result is that convergence and divergence (column stretching) attributed to baroclinic Rossby waves can make a first-order contribution to DIC and O2 variability in the upper ocean. This results in a close correspondence between natural variations in DIC and O2 column inventory variations and sea surface height (SSH) variations over much of the ocean. Oceanic Rossby wave activity is an intrinsic part of the natural variability in the climate system and is elevated even in the absence of significant interannual variability in climate mode indices. The close correspondence between SSH and both DIC and O2 column inventories for many regions suggests that SSH changes (inferred from satellite altimetry) may prove useful in reducing uncertainty in separating natural and anthropogenic DIC signals (using measurements from Climate Variability and Predictability's CO2/Repeat Hydrography program).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B R; McEwen, M R
2015-06-15
Purpose: To investigate uncertainties in small field output factors and detector specific correction factors from variations in field size for nominally identical fields using measurements and Monte Carlo simulations. Methods: Repeated measurements of small field output factors are made with the Exradin W1 (plastic scintillation detector) and the PTW microDiamond (synthetic diamond detector) in beams from the Elekta Precise linear accelerator. We investigate corrections for a 0.6x0.6 cm^2 nominal field size shaped with secondary photon jaws at 100 cm source to surface distance (SSD). Measurements of small field profiles are made in a water phantom at 10 cm depth using both detectors and are subsequently used for accurate detector positioning. Supplementary Monte Carlo simulations with EGSnrc are used to calculate the absorbed dose to the detector and absorbed dose to water under the same conditions when varying field size. The jaws in the BEAMnrc model of the accelerator are varied by a reasonable amount to investigate the same situation without the influence of measurement uncertainties (such as detector positioning or variation in beam output). Results: For both detectors, small field output factor measurements differ by up to 11 % when repeated measurements are made in nominally identical 0.6x0.6 cm^2 fields. Variations in the FWHM of measured profiles are consistent with field size variations reported by the accelerator. Monte Carlo simulations of the dose to detector vary by up to 16 % under worst case variations in field size. These variations are also present in calculations of absorbed dose to water. However, calculated detector specific correction factors are within 1 % when varying field size because of cancellation of effects. Conclusion: Clinical physicists should be aware of potentially significant uncertainties in measured output factors required for dosimetry of small fields due to field size variations for nominally identical fields.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Statistical Analysis of Tsunami Variability
NASA Astrophysics Data System (ADS)
Zolezzi, Francesca; Del Giudice, Tania; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
The purpose of this paper was to investigate statistical variability of seismically generated tsunami impact. The specific goal of the work was to evaluate the variability in tsunami wave run-up due to uncertainty in fault rupture parameters (source effects) and to the effects of local bathymetry at an individual location (site effects). This knowledge is critical to development of methodologies for probabilistic tsunami hazard assessment. Two types of variability were considered: inter-event and intra-event. Generally, inter-event variability refers to the differences in tsunami run-up at a given location for a number of different earthquake events. The focus of the current study was to evaluate the variability of tsunami run-up at a given point for a given magnitude earthquake. In this case, the variability is expected to arise from lack of knowledge regarding the specific details of the fault rupture "source" parameters. As sufficient field observations are not available to resolve this question, numerical modelling was used to generate run-up data. A scenario magnitude 8 earthquake in the Hellenic Arc was modelled. This is similar to the event thought to have caused the infamous 1303 tsunami. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7° E and 33.8° E. Specific source parameters (e.g. fault rupture length and displacement) were varied, and the effects on wave height were determined. A Monte Carlo approach considering the statistical distribution of the underlying parameters was used to evaluate the variability in wave height at locations along the coast. The results were evaluated in terms of the coefficient of variation of the simulated wave run-up (standard deviation divided by mean value) for each location. The coefficient of variation along the coast was between 0.14 and 3.11, with an average value of 0.67. The variation was higher in areas with an irregular coastline. This level of variability is similar to that seen in ground motion attenuation correlations used for seismic hazard assessment. The second issue was intra-event variability. This refers to the differences in tsunami wave run-up along a section of coast during a single event. Intra-event variability was investigated directly by considering field observations. The tsunami events used in the statistical evaluation were selected on the basis of the completeness and reliability of the available data. Tsunamis considered for the analysis included the recent and well-surveyed tsunami of Boxing Day 2004 (Great Indian Ocean Tsunami), Java 2006, Okushiri 1993, Kocaeli 1999, Messina 1908, and a case study of several historic events in Hawaii. Basic statistical analysis was performed on the field observations from these tsunamis. For events with very wide survey regions, the run-up heights have been grouped in order to maintain a homogeneous distance from the source. Where more than one survey was available for a given event, the original datasets were maintained separately to avoid combination of non-homogeneous data. The observed run-up measurements were used to evaluate the minimum, maximum, average, standard deviation and coefficient of variation for each data set. The minimum coefficient of variation was 0.12, measured for the 2004 Boxing Day tsunami at Nias Island (7 data points), while the maximum was 0.98 for the Okushiri 1993 event (93 data points). The average coefficient of variation was of the order of 0.45.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
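The antithetic-variates idea mentioned above can be illustrated on a generic one-dimensional integral (the plasma transport application itself is far more involved); the sketch below compares crude and antithetic Monte Carlo at an equal number of integrand evaluations.

```python
# Antithetic variates: pair each uniform draw U with 1 - U; for a monotone
# integrand the pair members are negatively correlated, which lowers variance.
import numpy as np

rng = np.random.default_rng(5)
g = lambda u: np.exp(u)                 # monotone integrand; exact mean is e - 1

n_pairs = 100_000
u = rng.uniform(size=n_pairs)
u_crude = rng.uniform(size=2 * n_pairs)

crude = g(u_crude)                            # 2*n_pairs independent evaluations
antithetic = 0.5 * (g(u) + g(1.0 - u))        # same evaluation count, paired

print(f"exact         : {np.e - 1:.5f}")
print(f"crude MC      : {crude.mean():.5f}  (std err {crude.std(ddof=1) / np.sqrt(2 * n_pairs):.5f})")
print(f"antithetic MC : {antithetic.mean():.5f}  (std err {antithetic.std(ddof=1) / np.sqrt(n_pairs):.5f})")
# At the same number of evaluations of g, the antithetic estimator's standard
# error is several times smaller here.
```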
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning
2018-05-01
QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
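As a minimal illustration of the variational Monte Carlo algorithm listed among QMCPACK's methods, the textbook sketch below samples |psi|^2 for the 1-D harmonic oscillator with trial wavefunction psi(x; a) = exp(-a x^2) and averages the local energy; this is a toy example, not a QMCPACK workflow.

```python
# Variational Monte Carlo for the 1-D harmonic oscillator (hbar = m = omega = 1):
# Metropolis sampling of |psi|^2 and averaging of the local energy
# E_L(x) = a + x^2 * (1/2 - 2 a^2); the exact minimum <E> = 1/2 is at a = 1/2.
import numpy as np

rng = np.random.default_rng(6)

def vmc_energy(a, n_steps=200_000, step=1.0):
    x, e_sum, n_kept = 0.0, 0.0, 0
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
            x = x_new
        if i > n_steps // 10:                              # discard equilibration
            e_sum += a + x ** 2 * (0.5 - 2.0 * a ** 2)     # local energy
            n_kept += 1
    return e_sum / n_kept

for a in (0.3, 0.4, 0.5, 0.6):
    print(f"a = {a:.1f}  <E> = {vmc_energy(a):.4f}")       # minimum near a = 0.5
```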
Uncertainty and operational considerations in mass prophylaxis workforce planning.
Hupert, Nathaniel; Xiong, Wei; King, Kathleen; Castorena, Michelle; Hawkins, Caitlin; Wu, Cindie; Muckstadt, John A
2009-12-01
The public health response to an influenza pandemic or other large-scale health emergency may include mass prophylaxis using multiple points of dispensing (PODs) to deliver countermeasures rapidly to affected populations. Computer models created to date to determine "optimal" staffing levels at PODs typically assume stable patient demand for service. The authors investigated POD function under dynamic and uncertain operational environments. The authors constructed a Monte Carlo simulation model of mass prophylaxis (the Dynamic POD Simulator, or D-PODS) to assess the consequences of nonstationary patient arrival patterns on POD function under a variety of POD layouts and staffing plans. The performance of a standard POD layout is compared under steady-state and variable patient arrival rates that mimic real-life variation in patient demand. To achieve similar performance, PODs functioning under nonstationary patient arrival rates require higher staffing levels than would be predicted using the assumption of stationary arrival rates. Furthermore, PODs may develop severe bottlenecks unless staffing levels vary over time to meet changing patient arrival patterns. Efficient POD networks therefore require command and control systems capable of dynamically adjusting intra- and inter-POD staff levels to meet demand. In addition, under real-world operating conditions of heightened uncertainty, fewer large PODs will require a smaller total staff than many small PODs to achieve comparable performance. Modeling environments that capture the effects of fundamental uncertainties in public health disasters are essential for the realistic evaluation of response mechanisms and policies. D-PODS quantifies POD operational efficiency under more realistic conditions than have been modeled previously. The authors' experiments demonstrate that effective POD staffing plans must be responsive to variation and uncertainty in POD arrival patterns. These experiments highlight the need for command and control systems to be created to manage emergency response successfully.
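A minimal discrete-time sketch of the effect examined above (an editorial illustration, not the D-PODS model): a single dispensing station with fixed staffing, fed either by a stationary arrival stream or by a midday surge with the same daily total; all rates are placeholders.

```python
# Stationary vs nonstationary Poisson arrivals at a fixed-staff dispensing station.
import numpy as np

rng = np.random.default_rng(7)

minutes = 12 * 60                      # one 12-hour dispensing day
staff = 10
service_rate = 1.0 / 3.0               # patients per staff-minute (3-minute service, assumed)

t = np.arange(minutes)
base = np.full(minutes, 2.0)                                   # patients per minute
surge = 2.0 + 6.0 * np.exp(-0.5 * ((t - 360) / 90.0) ** 2)     # midday surge
surge *= base.sum() / surge.sum()                              # same expected daily total

def peak_queue(arrival_rate):
    """Crude minute-by-minute queue simulation; returns the worst backlog seen."""
    queue, worst = 0, 0
    for lam in arrival_rate:
        queue += rng.poisson(lam)                               # new arrivals
        queue -= min(queue, rng.poisson(staff * service_rate))  # completed services
        worst = max(worst, queue)
    return worst

print("peak queue, stationary arrivals:", peak_queue(base))
print("peak queue, surge arrivals     :", peak_queue(surge))
# With identical total demand and staffing, the surge profile drives the queue
# far higher, which is the bottleneck behavior the abstract describes.
```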
Genetic variation of piperidine alkaloids in Pinus ponderosa: a common garden study.
Gerson, Elizabeth A; Kelsey, Rick G; St Clair, J Bradley
2009-02-01
Previous measurements of conifer alkaloids have revealed significant variation attributable to many sources, environmental and genetic. The present study takes a complementary and intensive, common garden approach to examine genetic variation in Pinus ponderosa var. ponderosa alkaloid production. Additionally, this study investigates the potential trade-off between seedling growth and alkaloid production, and associations between topographic/climatic variables and alkaloid production. Piperidine alkaloids were quantified in foliage of 501 nursery seedlings grown from seed sources in west-central Washington, Oregon and California, roughly covering the western half of the native range of ponderosa pine. A nested mixed model was used to test differences among broad-scale regions and among families within regions. Alkaloid concentrations were regressed on seedling growth measurements to test metabolite allocation theory. Likewise, climate characteristics at the seed sources were also considered as explanatory variables. Quantitative variation from seedling to seedling was high, and regional variation exceeded variation among families. Regions along the western margin of the species range exhibited the highest alkaloid concentrations, while those further east had relatively low alkaloid levels. Qualitative variation in alkaloid profiles was low. All measures of seedling growth related negatively to alkaloid concentrations on a natural log scale; however, coefficients of determination were low. At best, annual height increment explained 19.4 % of the variation in ln(total alkaloids). Among the climate variables, temperature range showed a negative, linear association that explained 41.8 % of the variation. Given the wide geographic scope of the seed sources and the uniformity of resources in the seedlings' environment, observed differences in alkaloid concentrations are evidence for genetic regulation of alkaloid secondary metabolism in ponderosa pine. The theoretical trade-off with seedling growth appeared to be real, however slight. The climate variables provided little evidence for adaptive alkaloid variation, especially within regions.
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW
2007-01-01
Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and thereby allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, the proportion of missing data ranged from 0% to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrapping only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
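A rough sketch of the bootstrap-plus-imputation selection loop follows; it uses a crude stochastic single imputation and a lasso step as stand-ins for proper multiple imputation and backward stepwise selection, so it only illustrates the inclusion-frequency idea rather than the authors' exact procedure. The data, missingness rate, and number of rounds are invented.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)

# Toy prognostic dataset with missing values (all values hypothetical).
n, p = 300, 6
X_full = rng.normal(size=(n, p))
y = 1.5 * X_full[:, 0] - 1.0 * X_full[:, 2] + rng.normal(scale=1.0, size=n)
X = X_full.copy()
X[rng.random((n, p)) < 0.2] = np.nan          # ~20% missing completely at random

def impute_once(X):
    """Crude stochastic single imputation: draw each missing value from a normal
    distribution fit to the observed values of its column (stand-in for proper MI)."""
    Xi = X.copy()
    for j in range(X.shape[1]):
        miss = np.isnan(Xi[:, j])
        obs = Xi[~miss, j]
        Xi[miss, j] = rng.normal(obs.mean(), obs.std(), miss.sum())
    return Xi

inclusion = np.zeros(p)
n_rounds = 100
for _ in range(n_rounds):
    idx = rng.integers(0, n, n)               # bootstrap resample of subjects
    Xi = impute_once(X[idx])                  # impute within the bootstrap sample
    model = LassoCV(cv=5).fit(Xi, y[idx])     # selection step (stand-in for stepwise)
    inclusion += (np.abs(model.coef_) > 1e-6)

print("inclusion frequencies:", inclusion / n_rounds)
```

Variables whose inclusion frequency stays high across resampled, re-imputed data sets are the ones robust to both sampling and imputation variation.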
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
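The contrast between simple Monte Carlo and importance sampling for such a reliability integral can be sketched as follows; the limit-state function and the normal strength/stress parameters are invented for illustration and do not correspond to any particular lamina failure criterion.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative limit state: failure occurs when strength R is less than stress S.
mu_R, sd_R = 600.0, 40.0      # hypothetical lamina strength (MPa)
mu_S, sd_S = 450.0, 30.0      # hypothetical applied stress (MPa)

def g(R, S):
    return R - S              # g < 0 defines the failure surface

# Simple Monte Carlo
N = 100_000
R = rng.normal(mu_R, sd_R, N)
S = rng.normal(mu_S, sd_S, N)
pf_simple = np.mean(g(R, S) < 0)

# Importance sampling: draw R from a density shifted toward the failure region
# and reweight each sample by the ratio of the true to the sampling density.
mu_R_shift = 500.0
R_is = rng.normal(mu_R_shift, sd_R, N)
S_is = rng.normal(mu_S, sd_S, N)
weights = norm.pdf(R_is, mu_R, sd_R) / norm.pdf(R_is, mu_R_shift, sd_R)
pf_is = np.mean((g(R_is, S_is) < 0) * weights)

print(f"simple Monte Carlo   Pf ~ {pf_simple:.4e}")
print(f"importance sampling  Pf ~ {pf_is:.4e}")
```

Both estimators target the same failure probability (about 1.3e-3 for these parameters), but the importance-sampled version concentrates its samples near the failure surface and so achieves a lower variance for the same sample count.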
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboredo, Fernando A.; Kim, Jeongnim
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
NASA Astrophysics Data System (ADS)
Wang, Wan-Sheng; Xiang, Yuan-Yuan; Wang, Qiang-Hua; Wang, Fa; Yang, Fan; Lee, Dung-Hai
2012-01-01
We study the electronic instabilities of near 1/4 electron doped graphene using the singular-mode functional renormalization group, with a self-adaptive k mesh to improve the treatment of the van Hove singularities, and the variational Monte Carlo method. At 1/4 doping the system is a chiral spin-density wave state exhibiting the anomalous quantized Hall effect. When the doping deviates from 1/4, the d_{x²-y²} + i d_{xy} Cooper pairing becomes the leading instability. Our results suggest that near 1/4 electron or hole doping (away from the neutral point) the graphene is either a Chern insulator or a topological superconductor.
Individual Colorimetric Observer Model
Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent
2016-01-01
This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jeongnim; Reboredo, Fernando A.
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
CFRP variable curvature mirror used for realizing non-moving-element optical zoom imaging
NASA Astrophysics Data System (ADS)
Zhao, Hui; Fan, Xuewu; Pang, Zhihai; Ren, Guorui; Wang, Wei; Xie, Yongjie; Ma, Zhen; Du, Yunfei; Su, Yu; Wei, Jingxuan
2014-12-01
In recent years, much attention has been paid to eliminating moving elements while realizing optical zoom imaging. Compared with conventional optical zooming techniques, removing moving elements brings many benefits, such as reductions in weight, volume and power consumption. The key to implementing non-moving-element optical zooming lies in the design of the variable curvature mirror (VCM). In order to obtain sufficient optical magnification, the VCM should be capable of generating a large sagitta variation. Hence, the mirror material should not be brittle; in other words, its ultimate strength should be high enough to ensure that the mirror surface is not damaged during large curvature variations. In addition, the material should have a relatively low Young's modulus, because less force is then required to generate a given deformation. Among the available candidate materials (for instance SiC and Zerodur), CFRP (carbon fiber reinforced polymer) satisfies all these requirements, as much related research has shown. In this paper, a CFRP VCM is designed, fabricated and tested. With a diameter of 100 mm, a thickness of 2 mm and an initial curvature radius of 1740 mm, this component can change its curvature radius from 1705 mm to 1760 mm, which corresponds to a sagitta variation of nearly 23 μm. The work reported further demonstrates the suitability of CFRP for constructing variable curvature mirrors capable of generating a large sagitta variation.
Power counting to better jet observables
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2014-12-01
Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia 8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Long-term optical flux and colour variability in quasars
NASA Astrophysics Data System (ADS)
Sukanya, N.; Stalin, C. S.; Jeyakumar, S.; Praveen, D.; Dhani, Arnab; Damle, R.
2016-02-01
We have used optical V and R band observations from the Massive Compact Halo Object (MACHO) project on a sample of 59 quasars behind the Magellanic clouds to study their long term optical flux and colour variations. These quasars, lying in the redshift range of 0.2 < z < 2.8 and having apparent V band magnitudes between 16.6 and 20.1 mag, have observations ranging from 49 to 1353 epochs spanning over 7.5 yr, with sampling intervals between 2 and 10 days. All the quasars show variability during the observing period. The normalised excess variance (Fvar) in the V and R bands lies in the ranges 0.2% < Fvar(V) < 1.6% and 0.1% < Fvar(R) < 1.5%, respectively. In a large fraction of the sources, Fvar is larger in the V band than in the R band. From the z-transformed discrete cross-correlation function analysis, we find that there is no lag between the V and R band variations. Adopting the Markov Chain Monte Carlo (MCMC) approach, and properly taking into account the correlation between the errors in colours and magnitudes, it is found that the majority of sources show a bluer-when-brighter trend, while a minor fraction of quasars show the opposite behaviour. This is similar to the results obtained from another two independent algorithms, namely the weighted linear least squares fit (FITEXY) and the bivariate correlated errors and intrinsic scatter regression (BCES). However, the ordinary least squares (OLS) fit, normally used in the colour variability studies of quasars, indicates that all the quasars studied here show a bluer-when-brighter trend. It is therefore very clear that the OLS algorithm cannot be used for the study of colour variability in quasars.
NASA Astrophysics Data System (ADS)
Soundharajan, Bankaru-Swamy; Adeloye, Adebayo J.; Remesan, Renji
2016-07-01
This study employed a Monte-Carlo simulation approach to characterise the uncertainties in climate change induced variations in storage requirements and performance (reliability (time- and volume-based), resilience, vulnerability and sustainability) of surface water reservoirs. Using a calibrated rainfall-runoff (R-R) model, the baseline runoff scenario was first simulated. The R-R inputs (rainfall and temperature) were then perturbed using plausible delta-changes to produce simulated climate change runoff scenarios. Stochastic models of the runoff were developed and used to generate ensembles of both the current and climate-change-perturbed future runoff scenarios. The resulting runoff ensembles were used to force simulation models of reservoir behaviour, producing 'populations' of the storage capacity required to meet demands and of the performance indices. Comparing these parameters between the current and the perturbed scenarios provided the population of climate change effects, which was then analysed to determine the variability in the impacts. The methodology was applied to the Pong reservoir on the Beas River in northern India. The reservoir serves irrigation and hydropower needs and the hydrology of the catchment is highly influenced by Himalayan seasonal snow and glaciers, and Monsoon rainfall, both of which are predicted to change due to climate change. The results show that the required reservoir capacity is highly variable, with a coefficient of variation (CV) as high as 0.3 as the future climate becomes drier. Of the performance indices, vulnerability recorded the highest variability (CV up to 0.5), while the volume-based reliability was the least variable. Such variabilities or uncertainties will, no doubt, complicate the development of climate change adaptation measures; however, knowledge of their sheer magnitudes as obtained in this study will help in the formulation of appropriate policy and technical interventions for sustaining and possibly enhancing water security for irrigation and other uses served by Pong reservoir.
David Medvigy; Su-Jong Jeong; Kenneth L. Clark; Nicholas S. Skowronski; Karina V. R. Schäfer
2013-01-01
Seasonal variation in photosynthetic capacity is an important part of the overall seasonal variability of temperate deciduous forests. However, it has only recently been introduced in a few terrestrial biosphere models, and many models still do not include it. The biases that result from this omission are not well understood. In this study, we use the Ecosystem...
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
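The spreadsheet-style Monte Carlo propagation described here is equally easy to sketch in a few lines of Python; the functional relationship, the input means, and the IQC-based standard deviations below are invented placeholders, including the empirically derived 'constant' k with its own uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical functional relationship y = k * c / v, where the 'constant' k
# also carries an empirically derived uncertainty (all values are illustrative).
c = rng.normal(5.2, 0.15, N)     # measured concentration, IQC-based SD
v = rng.normal(1.8, 0.05, N)     # measured volume, IQC-based SD
k = rng.normal(0.85, 0.02, N)    # empirical 'constant' with its own uncertainty

y = k * c / v

print(f"measurand y: mean = {y.mean():.3f}, standard uncertainty = {y.std(ddof=1):.3f}")
print(f"95% coverage interval: {np.percentile(y, [2.5, 97.5])}")
```

The standard deviation of the simulated output is the combined standard uncertainty, and percentiles give a coverage interval directly, without any of the partial derivatives required by the GUM modelling approach.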
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Baselining Fugitive and Vented Emissions Across Canadian Energy Developments
NASA Astrophysics Data System (ADS)
O'Connell, L.; Risk, D. A.; Fougère, C. R.; Lavoie, M.; Atherton, E. E.; Baillie, J.; MacKay, K.; Marshall, A. D.
2016-12-01
A recent trilateral accord between North American governments pledges to cut energy sector methane emissions 40-45 per cent below 2012 levels by 2025. Effective methane-reduction policy relies on accurate and spatially extensive emissions data. In this study, we assessed the feasibility of bottom-up data collection for Canadian energy developments, using vehicle-based emission screening and volumetric measurement, combined with forward-looking infrared (FLIR) detection for pinpointing sources. We analyzed trends across many Canadian developments using an 80,000 km survey campaign conducted in 2015-16 in which CO2, CH4, H2S, and δ13CH4 were measured in proximity to over ten thousand well pads. We found that emissions varied according to infrastructure age, operator size, product, and extraction style. Using these data, we conducted an analysis across several variables to evaluate the potential success of non-exhaustive campaigns for capturing trends, and super-emitters, across the Canadian industry. We found that campaigns would be fiscally feasible, and could be statistically significant depending on scale. However, success was very sensitive to the degree of variation amongst operators and developments, for which we suggest a Monte-Carlo-type optimization approach that balances survey coverage with attention to specific localized threats. Similar analyses should be conducted in other accord countries because effective and harmonized oversight could help accelerate emissions reductions.
A seismic hazard uncertainty analysis for the New Madrid seismic zone
Cramer, C.H.
2001-01-01
A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
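Monte Carlo sampling of a logic tree can be sketched generically as below; the branch values, weights, and the stand-in hazard function are invented and are not the New Madrid consensus parameters or a real ground-motion attenuation relation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical logic-tree branches with weights (values illustrative only).
magnitudes   = ([7.5, 7.8, 8.1],       [0.25, 0.5, 0.25])
recurrence   = ([250.0, 500.0, 750.0], [0.3, 0.5, 0.2])    # years between large events
attenuations = (["relA", "relB"],      [0.6, 0.4])

def sample_branch(values, weights):
    return values[rng.choice(len(values), p=weights)]

def hazard(mag, recur, atten):
    """Stand-in hazard calculation (not a real ground-motion attenuation model)."""
    base = 0.1 * 10 ** (0.3 * (mag - 7.5)) * (500.0 / recur)
    return base * (1.1 if atten == "relB" else 1.0)

samples = np.array([hazard(sample_branch(*magnitudes),
                           sample_branch(*recurrence),
                           sample_branch(*attenuations)) for _ in range(20_000)])
print(f"mean hazard ~ {samples.mean():.3f} g, COV = {samples.std() / samples.mean():.2f}")
```

Repeating the random branch selection many times turns the discrete logic tree into a distribution of hazard values, whose coefficient of variation is the knowledge-based uncertainty discussed in the abstract.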
Sarsa, Antonio; Le Sech, Claude
2011-09-13
The variational Monte Carlo method is a powerful tool for determining approximate wave functions of atoms, molecules, and solids up to relatively large systems. In the present work, we extend the variational Monte Carlo approach to study confined systems. Important properties of atoms, such as the spatial distribution of the electronic charge, the energy levels, or the filling of electronic shells, are modified under confinement. An expression for the energy very similar to the estimator used for free systems is derived. This opens the possibility of studying confined systems with few changes relative to the solution of the corresponding free systems. This is illustrated by the study of the helium atom in its ¹S ground state and the first ³S excited state confined by spherical, cylindrical, and planar impenetrable surfaces. The average interelectronic distances are also calculated. They generally decrease when the confinement is stronger; however, for excited states confined by open surfaces (cylinders, planes) they present a minimum around the radii corresponding to ionization. The ground ²S and the first ²P and ²D excited states of the lithium atom are calculated under spherical constraints for different confinement radii. A crossing between the ²S and ²P states is observed around rc = 3 atomic units, illustrating the modification of the atomic energy levels under confinement. Finally, the carbon atom is studied in spherical symmetry using both variational and diffusion Monte Carlo methods. It is shown that the hybridized sp³ state becomes lower in energy than the ³P ground state due to a modification and mixing of the atomic s and p orbitals under strong confinement. This result suggests a model, at least of pedagogical interest, for interpreting the basic properties of the carbon atom in chemistry.
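A minimal variational Monte Carlo loop for a single confined electron (a hydrogen-like atom inside an impenetrable sphere) illustrates the kind of estimator the abstract refers to; the cutoff trial function, the confinement radius, and the finite-difference local energy are simplifying assumptions, not the authors' helium, lithium, or carbon calculations.

```python
import numpy as np

rng = np.random.default_rng(7)

def psi(r_vec, alpha, rc):
    """Trial wave function for a hydrogen-like atom confined in a sphere of
    radius rc: exp(-alpha*r) times a cutoff factor that vanishes at r = rc."""
    r = np.linalg.norm(r_vec)
    return np.exp(-alpha * r) * max(1.0 - r / rc, 0.0)

def local_energy(r_vec, alpha, rc, h=1e-4):
    """E_L = -0.5 * laplacian(psi)/psi - 1/r, using a finite-difference Laplacian."""
    r = np.linalg.norm(r_vec)
    p0 = psi(r_vec, alpha, rc)
    lap = 0.0
    for d in range(3):
        step = np.zeros(3); step[d] = h
        lap += psi(r_vec + step, alpha, rc) + psi(r_vec - step, alpha, rc) - 2 * p0
    lap /= h ** 2
    return -0.5 * lap / p0 - 1.0 / r

def vmc_energy(alpha, rc=4.0, n_steps=20000, delta=0.4):
    """Metropolis sampling of |psi|^2; returns the variational energy estimate."""
    r_vec = np.array([0.5, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = r_vec + rng.uniform(-delta, delta, 3)
        inside = np.linalg.norm(trial) < rc
        if inside and psi(trial, alpha, rc) ** 2 >= rng.random() * psi(r_vec, alpha, rc) ** 2:
            r_vec = trial
        if i > 2000:                       # discard equilibration steps
            energies.append(local_energy(r_vec, alpha, rc))
    return np.mean(energies)

for alpha in (0.8, 1.0, 1.2):
    print(f"alpha = {alpha}: E ~ {vmc_energy(alpha):.3f} hartree")
```

The only structural difference from a free-atom calculation is the cutoff factor in the trial function (and the rejection of moves outside the sphere), which is the point the abstract makes about reusing the free-system estimator.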
A Monte Carlo investigation of thrust imbalance of solid rocket motor pairs
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.; Johnson, J. S., Jr.
1974-01-01
A technique is described for theoretical, statistical evaluation of the thrust imbalance of pairs of solid-propellant rocket motors (SRMs) firing in parallel. Sets of the significant variables, determined as a part of the research, are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs. The performance model is upgraded to include the effects of statistical variations in the ovality and alignment of the motor case and mandrel. Effects of cross-correlations of variables are minimized by selecting for the most part completely independent input variables, over forty in number. The imbalance is evaluated in terms of six time-varying parameters as well as eleven single-valued ones which themselves are subject to statistical analysis. A sample study of the thrust imbalance of 50 pairs of 146 in. dia. SRMs of the type to be used on the space shuttle is presented. The FORTRAN IV computer program of the analysis and complete instructions for its use are included. Performance computation time for one pair of SRMs is approximately 35 seconds on the IBM 370/155 using the FORTRAN H compiler.
Global-scale modes of surface temperature variability on interannual to century timescales
NASA Technical Reports Server (NTRS)
Mann, Michael E.; Park, Jeffrey
1994-01-01
Using 100 years of global temperature anomaly data, we have performed a singular value decomposition of temperature variations in narrow frequency bands to isolate coherent spatio-temporal modes of global climate variability. Statistical significance is determined from confidence limits obtained by Monte Carlo simulations. Secular variance is dominated by a globally coherent trend, with nearly all grid points warming in phase at varying amplitude. A smaller, but significant, share of the secular variance corresponds to a pattern dominated by warming and subsequent cooling in the high latitude North Atlantic with a roughly centennial timescale. Spatial patterns associated with significant peaks in variance within a broad period range from 2.8 to 5.7 years exhibit characteristic El Niño-Southern Oscillation (ENSO) patterns. A recent transition to a regime of higher ENSO frequency is suggested by our analysis. An interdecadal mode with a 15-to-18-year period and a mode centered at a 7-to-8-year period both exhibit predominantly a North Atlantic Oscillation (NAO) temperature pattern. A potentially significant decadal mode centered on an 11-to-12-year period also exhibits an NAO temperature pattern and may be modulated by the century-scale North Atlantic variability.
Phylogenetic, ecological, and allometric correlates of cranial shape in Malagasy lemuriforms.
Baab, Karen L; Perry, Jonathan M G; Rohlf, F James; Jungers, William L
2014-05-01
Adaptive radiations provide important insights into many aspects of evolution, including the relationship between ecology and morphological diversification as well as between ecology and speciation. Many such radiations include divergence along a dietary axis, although other ecological variables may also drive diversification, including differences in diel activity patterns. This study examines the role of two key ecological variables, diet and activity patterns, in shaping the radiation of a diverse clade of primates, the Malagasy lemurs. When phylogeny was ignored, activity pattern and several dietary variables predicted a significant proportion of cranial shape variation. However, when phylogeny was taken into account, only typical diet accounted for a significant proportion of shape variation. One possible explanation for this discrepancy is that this radiation was characterized by a relatively small number of dietary shifts (and possibly changes in body size) that occurred in conjunction with the divergence of major clades. This pattern may be difficult to detect with the phylogenetic comparative methods used here, but may characterize not just lemurs but other mammals. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; ...
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Comparison of flank modification on Ascraeus and Arsia Montes volcanoes, Mars
NASA Technical Reports Server (NTRS)
Zimbelman, James R.
1993-01-01
Geologic mapping of the Tharsis Montes on Mars is in progress as part of the Mars Geologic Mapping Program of NASA. Mapping of the southern flanks of Ascraeus Mons at 1:500,000 scale was undertaken first, followed by detailed mapping of Arsia Mons; mapping of Pavonis Mons will begin later this year. Results indicate that each of the Tharsis volcanoes displays unique variations on the general 'theme' of a martian shield volcano. Here we concentrate on the flank characteristics of Ascraeus Mons and Arsia Mons, the northernmost and southernmost of the Tharsis Montes, as illustrative of the most prominent trends.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
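The category-weighted sampling idea can be sketched with a toy grid box split into a small rainy/cloudy fraction and a large clear fraction; the area fractions, rate distribution, and sample allocations are invented, and the code is a simple stratified estimator in the spirit of SILHS rather than its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy grid box: a cloudy/rainy fraction where the microphysics rate is large and
# variable, and a clear fraction where it is negligible (numbers are illustrative).
frac = {"cloudy": 0.1, "clear": 0.9}                 # area fraction of each category

def process_rate(category, n):
    if category == "cloudy":
        return rng.lognormal(mean=0.0, sigma=1.0, size=n)   # large, variable rate
    return np.zeros(n)                                       # negligible in clear air

def estimate(n_samples, sample_frac):
    """Category-weighted estimate of the grid-box-mean rate.

    sample_frac prescribes how many of the n_samples to draw from each category;
    each category's sample mean is then weighted by its area fraction."""
    total = 0.0
    for cat, area in frac.items():
        n_cat = int(round(n_samples * sample_frac[cat]))
        if n_cat == 0:
            continue
        total += area * process_rate(cat, n_cat).mean()
    return total

uniform = {"cloudy": 0.1, "clear": 0.9}              # proportional sampling
weighted = {"cloudy": 0.8, "clear": 0.2}             # oversample the rainy portion

for label, sf in [("proportional", uniform), ("oversample cloud", weighted)]:
    ests = [estimate(20, sf) for _ in range(2000)]
    print(f"{label:18s} mean = {np.mean(ests):.3f}, std = {np.std(ests):.3f}")
```

Both allocations are unbiased, but oversampling the category where the rate actually varies cuts the estimator's spread for the same 20 function evaluations, which is the error reduction the abstract reports for the rain-evaporation region.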
Kirby, James B.; Bollen, Kenneth A.
2009-01-01
Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054
Nordey, Thibault; Léchaudel, Mathieu; Génard, Michel; Joas, Jacques
2014-11-01
Managing fruit quality is complex because many different attributes have to be taken into account, which are themselves subjected to spatial and temporal variations. Heterogeneous fruit quality has been assumed to be partly related to temperature and maturity gradients within the fruit. To test this assumption, we measured the spatial variability of certain mango fruit quality traits: colour of the peel and of the flesh, and sourness and sweetness, at different stages of fruit maturity using destructive methods as well as vis-NIR reflectance. The spatial variability of mango quality traits was compared to internal variations in thermal time, simulated by a physical model, and to internal variations in maturity, using ethylene content as an indicator. All the fruit quality indicators analysed showed significant spatial and temporal variations, regardless of the measurement method used. The heterogeneity of internal fruit quality traits was not correlated with the marked internal temperature gradient we modelled. However, variations in ethylene content revealed a strong internal maturity gradient which was correlated with the spatial variations in measured mango quality traits. Nonetheless, alone, the internal maturity gradient did not explain the variability of fruit quality traits, suggesting that other factors, such as gas, abscisic acid and water gradients, are also involved. Copyright © 2014 Elsevier GmbH. All rights reserved.
Gu, Junchen; Stevens, Michael; Xing, Xiaoyun; Li, Daofeng; Zhang, Bo; Payton, Jacqueline E; Oltz, Eugene M; Jarvis, James N; Jiang, Kaiyu; Cicero, Theodore; Costello, Joseph F; Wang, Ting
2016-04-07
DNA methylation is an important epigenetic modification involved in many biological processes and diseases. Many studies have mapped DNA methylation changes associated with embryogenesis, cell differentiation, and cancer at a genome-wide scale. Our understanding of genome-wide DNA methylation changes in a developmental or disease-related context has been steadily growing. However, the investigation of which CpGs are variably methylated in different normal cell or tissue types is still limited. Here, we present an in-depth analysis of 54 single-CpG-resolution DNA methylomes of normal human cell types by integrating high-throughput sequencing-based methylation data. We found that the ratio of methylated to unmethylated CpGs is relatively constant regardless of cell type. However, which CpGs made up the unmethylated complement was cell-type specific. We categorized the 26,000,000 human autosomal CpGs based on their methylation levels across multiple cell types to identify variably methylated CpGs and found that 22.6% exhibited variable DNA methylation. These variably methylated CpGs formed 660,000 variably methylated regions (VMRs), encompassing 11% of the genome. By integrating a multitude of genomic data, we found that VMRs enrich for histone modifications indicative of enhancers, suggesting their role as regulatory elements marking cell type specificity. VMRs enriched for transcription factor binding sites in a tissue-dependent manner. Importantly, they enriched for GWAS variants, suggesting that VMRs could potentially be implicated in disease and complex traits. Taken together, our results highlight the link between CpG methylation variation, genetic variation, and disease risk for many human cell types. Copyright © 2016 Gu et al.
Optimizing Experimental Designs: Finding Hidden Treasure.
USDA-ARS?s Scientific Manuscript database
Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
Han, Bin; Xu, X. George; Chen, George T. Y.
2011-01-01
Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) in localization of intrafractional and interfractional motions of lung tumor and in quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they pass through position detectors, patient 4DCT phantom, and finally scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates linear proton track and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from entrance to distal of the target. Conclusions: MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing proton energy, testing parameters of image reconstruction algorithm, and comparing to ground truth 4DCT. The future study will demonstrate the feasibility of using the time resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923
Many-body optimization using an ab initio Monte Carlo method.
Haubein, Ned C; McMillan, Scott A; Broadbelt, Linda J
2003-01-01
Advances in computing power have made it possible to study solvated molecules using ab initio quantum chemistry. Inclusion of discrete solvent molecules is required to determine geometric information about solute/solvent clusters. Monte Carlo methods are well suited to finding minima in many-body systems, and ab initio methods are applicable to the widest range of systems. A first principles Monte Carlo (FPMC) method was developed to find minima in many-body systems, and emphasis was placed on implementing moves that increase the likelihood of finding minimum energy structures. Partial optimization and molecular interchange moves aid in finding minima and overcome the incomplete sampling that is unavoidable when using ab initio methods. FPMC was validated by studying the boron trifluoride-water system, and then the method was used to examine the methyl carbenium ion in water to demonstrate its application to solvation problems.
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
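The practical point that a beta distribution is no harder to sample than a triangular one can be shown in a few lines; the bounds and mode are arbitrary, and the shape parameters are set with the common PERT-style rule rather than any prescription from this article.

```python
import numpy as np

rng = np.random.default_rng(11)

# A bounded resource variable on [low, high] with a most-likely value 'mode'.
low, mode, high = 10.0, 30.0, 100.0
N = 100_000

# Triangular model: fully specified by the three values above.
tri = rng.triangular(low, mode, high, N)

# Beta model: shape parameters chosen (here by the common PERT-style rule) so the
# distribution shares the same bounds and mode but has smoother tails.
a = 1.0 + 4.0 * (mode - low) / (high - low)
b = 1.0 + 4.0 * (high - mode) / (high - low)
beta = low + (high - low) * rng.beta(a, b, N)

for name, x in [("triangular", tri), ("beta (PERT)", beta)]:
    print(f"{name:12s} mean = {x.mean():6.2f}  P10 = {np.percentile(x, 10):6.2f}  "
          f"P90 = {np.percentile(x, 90):6.2f}")
```

With the same three specification values, the two models give noticeably different means and percentiles, which is exactly the sensitivity the abstract warns can matter in probabilistic resource assessments.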
Improving multivariate Horner schemes with Monte Carlo tree search
NASA Astrophysics Data System (ADS)
Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.
2013-11-01
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
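A compact sketch of the greedy 'most-occurring variable first' Horner scheme that the Monte Carlo tree search is meant to improve upon is given below; the sparse-dictionary representation and the example polynomial are choices made purely for illustration.

```python
from collections import Counter

def horner_order(poly):
    """Greedy ordering: variables that occur in the most terms come first."""
    counts = Counter()
    for exps in poly:
        for v, e in enumerate(exps):
            if e > 0:
                counts[v] += 1
    return [v for v, _ in counts.most_common()]

def horner_eval(poly, x, order=None):
    """Evaluate a sparse multivariate polynomial {exponent tuple: coefficient}
    by recursively factoring out one variable at a time (Horner's scheme)."""
    if order is None:
        order = horner_order(poly)
    if not order:
        return sum(poly.values())
    v, rest = order[0], order[1:]
    by_power = {}                        # group terms by their power of variable v
    for exps, coeff in poly.items():
        reduced = exps[:v] + (0,) + exps[v + 1:]
        sub = by_power.setdefault(exps[v], {})
        sub[reduced] = sub.get(reduced, 0.0) + coeff
    result = 0.0
    for power in range(max(by_power), -1, -1):   # Horner recursion in x[v]
        inner = horner_eval(by_power[power], x, rest) if power in by_power else 0.0
        result = result * x[v] + inner
    return result

# p(x, y, z) = 3*x^2*y + 2*x*y*z + y^2 + 5, evaluated at (1.5, 2.0, -1.0)
poly = {(2, 1, 0): 3.0, (1, 1, 1): 2.0, (0, 2, 0): 1.0, (0, 0, 0): 5.0}
print(horner_eval(poly, (1.5, 2.0, -1.0)))       # 16.5
```

The Monte Carlo tree search described in the paper explores alternative variable orderings instead of committing to this greedy one, trading search time during code generation for cheaper evaluation afterwards.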
NASA Astrophysics Data System (ADS)
Baracchini, Theo; King, Aaron A.; Bouma, Menno J.; Rodó, Xavier; Bertuzzo, Enrico; Pascual, Mercedes
2017-10-01
Seasonal patterns in cholera dynamics exhibit pronounced variability across geographical regions, showing single or multiple peaks at different times of the year. Although multiple hypotheses related to local climate variables have been proposed, an understanding of this seasonal variation remains incomplete. The historical Bengal region, which encompasses the full range of cholera's seasonality observed worldwide, provides a unique opportunity to gain insights on underlying environmental drivers. Here, we propose a mechanistic, rainfall-temperature driven, stochastic epidemiological model which explicitly accounts for the fluctuations of the aquatic reservoir, and analyze with this model the historical dataset of cholera mortality in the Bengal region. Parameters are inferred with a recently developed sequential Monte Carlo method for likelihood maximization in partially observed Markov processes. Results indicate that the hydrological regime is a major driver of the seasonal dynamics of cholera. Rainfall tends to buffer the propagation of the disease in wet regions due to the longer residence times of water in the environment and an associated dilution effect, whereas it enhances cholera resurgence in dry regions. Moreover, the dynamics of the environmental water reservoir determine whether the seasonality is unimodal or bimodal, as well as its phase relative to the monsoon. Thus, the full range of seasonal patterns can be explained based solely on the local variation of rainfall and temperature. Given the close connection between cholera seasonality and environmental conditions, a deeper understanding of the underlying mechanisms would allow the better management and planning of public health policies with respect to climate variability and climate change.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor into the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved into the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier's analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
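A bare-bones Monte Carlo estimate of first-order Sobol indices (one of the two variance-based approaches the abstract mentions) might look like the following; the three-input toy model stands in for the actual camera-LiDAR calibration and its parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    """Toy calibration-error model: a nonlinear combination of three inputs
    (purely illustrative, not the paper's camera-LiDAR model)."""
    return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

N, k = 50_000, 3
A = rng.uniform(-1, 1, (N, k))           # two independent input sample matrices
B = rng.uniform(-1, 1, (N, k))
fA, fB = model(A), model(B)
var_total = np.var(np.concatenate([fA, fB]))

for i in range(k):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                 # A with column i taken from B
    # Saltelli-style estimator of the first-order Sobol index of input i.
    S_i = np.mean(fB * (model(AB_i) - fA)) / var_total
    print(f"input {i}: first-order Sobol index ~ {S_i:.3f}")
```

Inputs whose index is near zero (here the third one, which only enters through an interaction) contribute little variance on their own, which is the kind of weighting of uncertainty contributors the sensitivity analysis provides.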
NASA Astrophysics Data System (ADS)
Brown, Casey; Carriquiry, Miguel
2007-11-01
This paper explores the performance of a system of economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on stakeholders of shared water supply. The system is composed of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows. The insurance is designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Insurance provides the irregularly needed funds for exercising the water options. The combined option contract - reservoir index insurance system creates risk sharing between sectors that is currently lacking in many shared water situations. Contracts are designed for a shared agriculture - urban water system in Metro Manila, Philippines, using optimization and Monte Carlo analysis. Observed reservoir inflows are used to simulate contract performance. Results indicate the option - insurance design effectively smooths water supply costs of hydrologic variability for both agriculture and urban water.
NASA Astrophysics Data System (ADS)
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
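For orientation, a plain (unmodified) Shepard interpolant takes only a few lines; the local Taylor expansions, local metrics, and error-based weights that give the method described here its accuracy are omitted, and the test function is an arbitrary stand-in for something like a kinetic Monte Carlo turnover-frequency map.

```python
import numpy as np

def shepard_interpolate(x_query, x_data, f_data, power=4.0, eps=1e-12):
    """Basic Shepard (inverse-distance-weighted) interpolation.

    The modified scheme of the paper additionally attaches a local expansion and a
    local, anisotropic metric to each data point; here each data point contributes
    only its value, weighted by an isotropic inverse distance."""
    d = np.linalg.norm(x_data - x_query, axis=1)
    if np.any(d < eps):                       # query coincides with a data point
        return f_data[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * f_data) / np.sum(w)

# Reconstruct a rapidly varying 2D function from scattered evaluations.
rng = np.random.default_rng(9)
x_data = rng.uniform(-1, 1, (200, 2))
f = lambda x: np.tanh(10 * x[..., 0]) + 0.1 * x[..., 1]   # sharp front along x0
f_data = f(x_data)

x_test = rng.uniform(-1, 1, (5, 2))
for xq in x_test:
    print(f"x = {xq.round(2)}  exact = {f(xq): .3f}  "
          f"Shepard = {shepard_interpolate(xq, x_data, f_data): .3f}")
```

The locally rapid variation along the first coordinate is exactly the situation where an isotropic weight struggles and where the paper's local metrics and error estimates pay off.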
Chiral topological phases from artificial neural networks
NASA Astrophysics Data System (ADS)
Kaubruegger, Raphael; Pastori, Lorenzo; Budich, Jan Carl
2018-05-01
Motivated by recent progress in applying techniques from the field of artificial neural networks (ANNs) to quantum many-body physics, we investigate to what extent the flexibility of ANNs can be used to efficiently study systems that host chiral topological phases such as fractional quantum Hall (FQH) phases. With benchmark examples, we demonstrate that training ANNs of restricted Boltzmann machine type in the framework of variational Monte Carlo can numerically solve FQH problems to good approximation. Furthermore, we show by explicit construction how n-body correlations can be kept at an exact level with ANN wave functions exhibiting polynomial scaling with power n in system size. Using this construction, we analytically represent the paradigmatic Laughlin wave function as an ANN state.
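A minimal sketch of the restricted Boltzmann machine ansatz used in such variational Monte Carlo studies is shown below: it evaluates the unnormalized amplitude psi(s) = exp(a·s) ∏_j 2 cosh(b_j + Σ_i W_ji s_i) for a spin configuration. The parameters here are random real-valued placeholders; practical calculations typically use complex parameters, and the FQH-specific construction of the paper is not reproduced.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM wave-function amplitude psi(s).

    s : (N,) configuration of +/-1 variables
    a : (N,) visible biases, b : (M,) hidden biases, W : (M, N) couplings
    psi(s) = exp(a.s) * prod_j 2*cosh(b_j + sum_i W_ji s_i)
    """
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(2)
N, M = 8, 16                        # visible and hidden unit counts (illustrative)
a = 0.1 * rng.standard_normal(N)
b = 0.1 * rng.standard_normal(M)
W = 0.1 * rng.standard_normal((M, N))
s = rng.choice([-1.0, 1.0], size=N)
print(rbm_amplitude(s, a, b, W))
```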
Neutrinoless Double Beta Decay Matrix Elements in Light Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastore, S.; Carlson, J.; Cirigliano, V.
We present the first ab initio calculations of neutrinoless double-β decay matrix elements in A=6-12 nuclei using variational Monte Carlo wave functions obtained from the Argonne v18 two-nucleon potential and Illinois-7 three-nucleon interaction. We study both light Majorana neutrino exchange and potentials arising from a large class of multi-TeV mechanisms of lepton-number violation. Our results provide benchmarks to be used in testing many-body methods that can be extended to the heavy nuclei of experimental interest. In light nuclei we also study the impact of two-body short-range correlations and the use of different forms for the transition operators, such as those corresponding to different orders in chiral effective theory.
Rhythms in the endocrine system of fish: a review.
Cowan, Mairi; Azpeleta, Clara; López-Olmeda, Jose Fernando
2017-12-01
The environment which living organisms inhabit is not constant, and many factors, such as light, temperature, and food availability, display cyclic and predictable variations. To adapt to these cyclic changes, animals present biological rhythms in many of their physiological variables, timing their functions to occur when the possibility of success is greatest. Among these variables, many endocrine factors have been described as displaying rhythms in vertebrates. The aim of the present work is to provide a thorough review of the existing knowledge on the rhythms of the endocrine system of fish by examining the hormones that show rhythmicity, how environmental factors control these rhythms, and the variation in the responses of the endocrine system depending on the time of day. We mainly focused on the hypothalamic-pituitary axis, which can be considered the master axis of the endocrine system of vertebrates and regulates a great variety of functions, including reproduction, growth, metabolism, energy homeostasis, stress response, and osmoregulation. In addition, the rhythms of other hormones, such as melatonin and the factors produced in the gastrointestinal system of fish, are reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.
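For contrast with the histogram-free scheme described above, the sketch below shows a conventional Wang-Landau sampler on a binned, discrete energy grid for a small one-dimensional Ising chain; the system size, flatness criterion, and stopping threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 12                                    # small periodic Ising chain (illustrative)
spins = rng.choice([-1, 1], size=L)

def energy(s):
    return int(-np.sum(s * np.roll(s, 1)))

levels = list(range(-L, L + 1, 4))        # allowed energies for an even-length chain
ln_g = {E: 0.0 for E in levels}           # log density of states, stored on a grid
hist = {E: 0 for E in levels}
ln_f = 1.0                                # modification factor ln(f)
E = energy(spins)

while ln_f > 1e-4:
    for _ in range(10000):
        i = rng.integers(L)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
        E_new = E + dE
        # accept with probability min(1, g(E)/g(E_new))
        if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
            spins[i] *= -1
            E = E_new
        ln_g[E] += ln_f
        hist[E] += 1
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean():       # flatness check
        hist = {k: 0 for k in hist}
        ln_f *= 0.5                              # refine the modification factor

print({E: round(v - min(ln_g.values()), 2) for E, v in ln_g.items()})
```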
Kelly, Greg
2006-12-01
Body temperature is a complex, non-linear data point, subject to many sources of internal and external variation. While these sources of variation significantly complicate interpretation of temperature data, disregarding knowledge in favor of oversimplifying complex issues would represent a significant departure from practicing evidence-based medicine. Part 1 of this review outlines the historical work of Wunderlich on temperature and the origins of the concept that a healthy normal temperature is 98.6 degrees F (37.0 degrees C). Wunderlich's findings and methodology are reviewed and his results are contrasted with findings from modern clinical thermometry. Endogenous sources of temperature variability, including variations caused by site of measurement, circadian, menstrual, and annual biological rhythms, fitness, and aging are discussed. Part 2 will review the effects of exogenous masking agents - external factors in the environment, diet, or lifestyle that can influence body temperature, as well as temperature findings in disease states.
Assessment of Normal Variability in Peripheral Blood Gene Expression
Campbell, Catherine; Vernon, Suzanne D.; Karem, Kevin L.; ...
2002-01-01
Peripheral blood is representative of many systemic processes and is an ideal sample for expression profiling of diseases that have no known or accessible lesion. Peripheral blood is a complex mixture of cell types and some differences in peripheral blood gene expression may reflect the timing of sample collection rather than an underlying disease process. For this reason, it is important to assess study design factors that may cause variability in gene expression not related to what is being analyzed. Variation in the gene expression of circulating peripheral blood mononuclear cells (PBMCs) from three healthy volunteers sampled three times on one day each week for one month was examined for 1,176 genes printed on filter arrays. Less than 1% of the genes showed any variation in expression that was related to the time of collection, and none of the changes were noted in more than one individual. These results suggest that observed variation was due to experimental variability.
(abstract) Short Time Period Variations in Jupiter's Synchrotron Radiation
NASA Technical Reports Server (NTRS)
Bolton, S. J.; Klein, M. J.; Gulkis, S.; Foster, R.; Heiles, C.; Pater, I. de
1994-01-01
The long-term variability of Jupiter's synchrotron radiation on yearly time scales has been established for some time. For many years, theorists have speculated about the effects that variations in the solar wind, solar flux, Io, the Io torus, and Jupiter's magnetic field have on the ultra-relativistic electron population responsible for the emission. Early observational results suggested the additional possibility of a short-term variability, on timescales of days to weeks. In 1989 a program was initiated to investigate the existence of short-term variability using the 85-foot Hat Creek radio telescope operating at 1400 MHz. The availability of a dedicated telescope provided the opportunity, for the first time, to obtain numerous observations over the full Jupiter rotation period. These and future observations will enable two important studies: characterization and confirmation of possible short-term variations, and investigation of the stability of Jupiter's synchrotron emission beaming curve. Analysis of Hat Creek observations and early results from the Maryland Point Naval Research Laboratory will be presented.
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
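A minimal bootstrap (sampling importance resampling) particle filter for a generic scalar state-space model is sketched below to illustrate the predict-correct-resample cycle; the linear growth and observation models and their noise levels are placeholders, not the stand-yield model of the report.

```python
import numpy as np

rng = np.random.default_rng(4)

def sir_particle_filter(y_obs, n_particles=1000, proc_sd=0.5, obs_sd=1.0):
    """Bootstrap SIR filter for x_t = x_{t-1} + 1 + process noise, y_t = x_t + obs noise."""
    particles = rng.normal(0.0, 1.0, n_particles)      # sample from the initial prior
    means = []
    for y in y_obs:
        # predictor: propagate particles through the (toy) state/growth model
        particles = particles + 1.0 + rng.normal(0.0, proc_sd, n_particles)
        # corrector: weight particles by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        w /= w.sum()
        # resample (multinomial) to obtain an equally weighted particle set
        particles = rng.choice(particles, size=n_particles, p=w)
        means.append(particles.mean())
    return np.array(means)

# simulate a short series and filter it
true_x = np.cumsum(np.ones(10)) + rng.normal(0, 0.5, 10)
y = true_x + rng.normal(0, 1.0, 10)
print(sir_particle_filter(y).round(2))
```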
ERIC Educational Resources Information Center
Vasu, Ellen Storey
1978-01-01
The effects of the violation of the assumption of normality in the conditional distributions of the dependent variable, coupled with the condition of multicollinearity upon the outcome of testing the hypothesis that the regression coefficient equals zero, are investigated via a Monte Carlo study. (Author/JKS)
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
APOSTLE: 11 TRANSIT OBSERVATIONS OF TrES-3b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundurthy, P.; Becker, A. C.; Agol, E.
2013-02-10
The Apache Point Survey of Transit Lightcurves of Exoplanets (APOSTLE) observed 11 transits of TrES-3b over two years in order to constrain system parameters and look for transit timing and depth variations. We describe an updated analysis protocol for APOSTLE data, including the reduction pipeline, transit model, and Markov Chain Monte Carlo analyzer. Our estimates of the system parameters for TrES-3b are consistent with previous estimates to within the 2σ confidence level. We improved the errors (by 10%-30%) on system parameters such as the orbital inclination (i_orb), impact parameter (b), and stellar density (ρ_*) compared to previous measurements. The near-grazing nature of the system, and incomplete sampling of some transits, limited our ability to place reliable uncertainties on individual transit depths and hence we do not report strong evidence for variability. Our analysis of the transit timing data shows no evidence for transit timing variations and our timing measurements are able to rule out super-Earth and gas giant companions in low-order mean motion resonance with TrES-3b.
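A generic random-walk Metropolis sampler of the kind used in such analyses is sketched below, fitting a toy box-shaped transit dip to synthetic photometry; the model, priors, noise level, and step sizes are illustrative and do not reproduce the APOSTLE transit model or analyzer.

```python
import numpy as np

rng = np.random.default_rng(5)

def box_transit(t, depth, t0, dur):
    """Toy transit model: a flat light curve with a box-shaped dip."""
    f = np.ones_like(t)
    f[np.abs(t - t0) < dur / 2] -= depth
    return f

# synthetic data with known parameters
t = np.linspace(0, 1, 400)
y = box_transit(t, 0.01, 0.5, 0.1) + rng.normal(0, 0.002, t.size)

def log_post(theta):
    depth, t0, dur = theta
    if depth <= 0 or dur <= 0 or not (0 < t0 < 1):     # flat priors with bounds
        return -np.inf
    resid = y - box_transit(t, depth, t0, dur)
    return -0.5 * np.sum((resid / 0.002) ** 2)

theta = np.array([0.02, 0.4, 0.2])                     # starting point
lp = log_post(theta)
step = np.array([0.002, 0.01, 0.01])                   # proposal scales
chain = []
for _ in range(20000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

print(np.mean(chain[5000:], axis=0))   # posterior means after burn-in
```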
NASA Technical Reports Server (NTRS)
Verger, F. (Principal Investigator); Monget, J. M.; Guerin, O.; Poisson, R. M.; Thomas, Y.
1977-01-01
The author has identified the following significant results. For interpretation of Isle of Jersey imagery, two types of taxons were defined according to their variability in time. On the whole, taxons with a stable spectral signature were contrasted with those with a strongly varying spectral signature; the two taxon types thus showed low diachronic variation and strong diachronic variation, respectively. Imagery interpretation was restricted to the landward part of the Fromentine area, including the sand beaches, which were often difficult to separate spectrally from the barren coastal dunes in the southern part of Noirmoutier Island as well as along the Breton marsh. From 1972 to 1976, sandbanks reduced in area. Two high river discharge images acquired two years apart showed an identical outline for the Bilho bank to seaward, whereas upstream the bank receded over the same period to a line joining Paimboeuf to Montoir. The Brillantes bank has receded at both ends, partly due to dredging operations in the access channel to Donges harbor.
Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes
NASA Astrophysics Data System (ADS)
Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew
2018-03-01
We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6 °N, 37.4 °E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to the real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
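The alternation of Gibbs and Metropolis-within-Gibbs updates mentioned above can be illustrated with a much simpler toy model: a Gaussian likelihood whose mean is updated by a conjugate Gibbs draw and whose scale is updated by a random-walk Metropolis step. The model and priors below are placeholders, not the Matérn/SPDE hierarchy of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(3.0, 2.0, size=200)       # synthetic data: mean 3, sd 2
n = y.size
tau0 = 10.0                               # prior sd for mu ~ N(0, tau0^2)

mu, log_sigma = 0.0, 0.0
samples = []
for it in range(5000):
    sigma2 = np.exp(2 * log_sigma)
    # Gibbs step: conjugate normal update for mu given sigma
    prec = n / sigma2 + 1.0 / tau0**2
    mu = rng.normal(np.sum(y) / sigma2 / prec, np.sqrt(1.0 / prec))
    # Metropolis step: random-walk update for log(sigma), flat prior on log(sigma)
    def log_target(ls):
        s2 = np.exp(2 * ls)
        return -n * ls - np.sum((y - mu) ** 2) / (2 * s2)
    prop = log_sigma + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_target(prop) - log_target(log_sigma):
        log_sigma = prop
    samples.append((mu, np.exp(log_sigma)))

print(np.mean(samples[1000:], axis=0))    # posterior means of (mu, sigma)
```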
Bayesian nonparametric dictionary learning for compressed sensing MRI.
Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping
2014-12-01
We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
Magnetic properties of dendrimer structures with different coordination numbers: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Masrour, R.; Jabar, A.
2016-11-01
We investigate the magnetic properties of Cayley trees of large molecules with dendrimer structure using Monte Carlo simulations. The thermal magnetization and magnetic susceptibility of a dendrimer structure are given for different coordination numbers, Z=3, 4, and 5, and different generations, g=2 and 3. The variation of the magnetization with the exchange interactions and crystal fields is also presented for this system. The magnetic hysteresis cycles have been established.
A Variable-Selection Heuristic for K-Means Clustering.
ERIC Educational Resources Information Center
Brusco, Michael J.; Cradit, J. Dennis
2001-01-01
Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)
Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...
2018-04-19
QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles on the way to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters (the Weibull shape and scale) among the top three influential variables emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
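The "brute force" estimate of a first-order Sobol index mentioned above, S_i = Var(E[Y|X_i]) / Var(Y), can be written as a double-loop Monte Carlo calculation; the sketch below does this for a hypothetical three-input energy function, with input names and the model itself as stand-ins rather than the study's wind-assessment model.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    """Hypothetical annual energy output as a function of three uncertain inputs in [0, 1)."""
    shape, scale, losses = x[..., 0], x[..., 1], x[..., 2]
    return scale**3 * (1.0 + 0.2 * shape) * (1.0 - losses)

def first_order_sobol(i, n_outer=500, n_inner=500, dim=3):
    """Brute-force S_i: the outer loop fixes X_i, the inner loop averages over the rest."""
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.random((n_inner, dim))
        x[:, i] = rng.random()              # hold X_i fixed within the inner loop
        cond_means[k] = model(x).mean()
    total_var = model(rng.random((n_outer * n_inner, dim))).var()
    return cond_means.var() / total_var     # Var(E[Y|X_i]) / Var(Y)

for i, name in enumerate(["Weibull shape", "Weibull scale", "losses"]):
    print(name, round(first_order_sobol(i), 3))
```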
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored 'on-the-fly' during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
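Given a scored LET spectrum, the two averages compared above follow from simple sums: the fluence-weighted mean is Φ-LET = Σ φ_i L_i / Σ φ_i and the dose-weighted mean is D-LET = Σ φ_i L_i² / Σ φ_i L_i, since the dose contributed by each spectral component scales with φ_i L_i. The sketch below evaluates both, with and without a high-LET cutoff, for a made-up spectrum (not data from the reported simulations).

```python
import numpy as np

# hypothetical binned LET spectrum: bin-centre LET values (keV/um) and proton fluences
let = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
fluence = np.array([1.0e6, 8.0e5, 3.0e5, 5.0e4, 1.0e4, 2.0e3])

phi_let = np.sum(fluence * let) / np.sum(fluence)            # fluence-weighted average
d_let = np.sum(fluence * let**2) / np.sum(fluence * let)     # dose-weighted average

# effect of a low-energy (high-LET) cutoff, as discussed above
mask = let < 15.0
phi_let_cut = np.sum(fluence[mask] * let[mask]) / np.sum(fluence[mask])
d_let_cut = np.sum(fluence[mask] * let[mask]**2) / np.sum(fluence[mask] * let[mask])

print(f"phi-LET = {phi_let:.2f}, D-LET = {d_let:.2f} keV/um")
print(f"with cutoff: phi-LET = {phi_let_cut:.2f}, D-LET = {d_let_cut:.2f} keV/um")
```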
Environmental Variation Generates Environmental Opportunist Pathogen Outbreaks.
Anttila, Jani; Kaitala, Veijo; Laakso, Jouni; Ruokolainen, Lasse
2015-01-01
Many socio-economically important pathogens persist and grow in the outside-host environment and opportunistically invade host individuals. The environmental growth and opportunistic nature of these pathogens have received little attention in epidemiology. Environmental reservoirs are, however, an important source of novel diseases. Thus, attempts to control these diseases require different approaches than in traditional epidemiology, which focuses on obligatory parasites. Conditions in the outside-host environment are prone to fluctuate over time. This variation is a potentially important driver of epidemiological dynamics and affects the evolution of novel diseases. Using a modelling approach that combines traditional SIRS models with environmental opportunist pathogens and environmental variability, we show that the epidemiological dynamics of opportunist diseases are profoundly driven by the quality of environmental variability, such as the long-term predictability and magnitude of fluctuations. When comparing periodic and stochastic environmental factors, for a given variance, stochastic variation is more likely to cause outbreaks than periodic variation. This is due to the extreme values being further away from the mean. Moreover, the effects of variability depend on the underlying biology of the epidemiological system and on which part of the system is affected. Variation in host susceptibility leads to more severe pathogen outbreaks than variation in pathogen growth rate in the environment. Positive correlation between variation in both targets can cancel the effect of variation altogether. Moreover, the severity of outbreaks is significantly reduced by an increase in the duration of immunity. Uncovering these issues helps in understanding and controlling diseases caused by environmental pathogens.
A probabilistic Hu-Washizu variational principle
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
Predictive and postdictive analysis of forage yield trials
USDA-ARS?s Scientific Manuscript database
Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
Analytic variance estimates of Swank and Fano factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
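Using the conventional moment definition of the Swank factor, I = M1²/(M0·M2), and the Fano factor as variance over mean of the detector output, the sketch below estimates both from simulated outputs and uses a simple bootstrap for the coefficient of variation; this is a stand-in for, not a reproduction of, the analytic variance estimators derived in the work, and the output distribution is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def swank(x):
    return np.mean(x) ** 2 / np.mean(x ** 2)      # I = M1^2 / (M0*M2) for samples

def fano(x):
    return np.var(x, ddof=1) / np.mean(x)         # variance over mean

# hypothetical detector outputs (e.g., optical quanta per absorbed x ray)
outputs = rng.gamma(shape=20.0, scale=50.0, size=20000)

# simple bootstrap estimate of the coefficient of variation of each factor
boot_s, boot_f = [], []
for _ in range(200):
    resample = rng.choice(outputs, size=outputs.size, replace=True)
    boot_s.append(swank(resample))
    boot_f.append(fano(resample))

for name, est, boot in [("Swank", swank(outputs), boot_s), ("Fano", fano(outputs), boot_f)]:
    cv = np.std(boot, ddof=1) / np.mean(boot)
    print(f"{name} factor = {est:.3f}, bootstrap CV = {cv:.2e}")
```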
A Monte Carlo model for 3D grain evolution during welding
NASA Astrophysics Data System (ADS)
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales
NASA Astrophysics Data System (ADS)
Allen, Scott T.; Keim, Richard F.; McDonnell, Jeffrey J.
2015-03-01
Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with three objectives: (1) to quantify the spatial variability in event-scale samples, (2) to determine if there are persistent controls over the variability and how these affect variability of seasonally accumulated throughfall, and (3) to analyze the distribution of measured throughfall isotopic composition associated with varying sampling regimes. We measured throughfall over two three-month periods in western Oregon, USA, under a Douglas-fir canopy. The mean spatial range of δ18O for each event was 1.6‰ and 1.2‰ through Fall 2009 (11 events) and Spring 2010 (7 events), respectively. However, the spatial pattern of isotopic composition was not temporally stable, causing season-total throughfall to be less variable than event throughfall (1.0‰; range of cumulative δ18O for Fall 2009). Isotopic composition was not spatially autocorrelated and not explained by location relative to tree stems. Sampling error analysis for both field measurements and Monte-Carlo simulated datasets representing different sampling schemes revealed standard deviations of differences from the true mean as high as 0.45‰ (δ18O) and 1.29‰ (d-excess). The magnitude of this isotopic variation suggests that small sample sizes are a source of substantial experimental error.
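The Monte Carlo sampling-error analysis described above can be sketched as repeated subsampling of collectors from a spatial field and recording how far each subset mean falls from the true mean; the synthetic field statistics below are illustrative, not the Oregon measurements.

```python
import numpy as np

rng = np.random.default_rng(9)

# synthetic "true" field of event throughfall d18O at many collector positions (permil)
field = rng.normal(-8.0, 0.6, size=500)
true_mean = field.mean()

for n_collectors in (5, 10, 20, 40):
    errors = []
    for _ in range(5000):                                  # Monte Carlo subsampling
        subset = rng.choice(field, size=n_collectors, replace=False)
        errors.append(subset.mean() - true_mean)
    print(f"n={n_collectors:3d}: sd of error = {np.std(errors):.3f} permil")
```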
Rodrigues-Filho, J L; Abe, D S; Gatti-Junior, P; Medeiros, G R; Degani, R M; Blanco, F P; Faria, C R L; Campanelli, L; Soares, F S; Sidagis-Galli, C V; Teixeira-Silva, V; Tundisi, J E M; Matsmura-Tundisi, T; Tundisi, J G
2015-08-01
The Xingu River, one of the most important of the Amazon Basin, is characterized by clear and transparent waters that drain a 509,685 km² watershed with distinct hydrological and ecological conditions and anthropogenic pressures along its course. As in other basins of the Amazon system, studies in the Xingu are scarce. Furthermore, the imminent construction of the Belo Monte dam for hydropower production, which will alter the environmental conditions in the lower middle portion of the basin, underscores the importance of studies that generate relevant information to support a more balanced and equitable development in the Amazon region. Thus, the aim of this study was to analyze the water quality in the Xingu River and its tributaries, focusing on spatial patterns through the use of multivariate statistical techniques and identifying which water quality parameters were most important for the environmental changes in the watershed. Data sampling was carried out during two complete hydrological cycles at twenty-five sampling stations. Data for twenty-seven variables were analyzed with Spearman's correlation coefficients, cluster analysis (CA), and principal component analysis (PCA). The results showed high auto-correlation between some variables (> 0.7); these variables were removed from the multivariate analyses because they provided redundant information about the environment. The CA resulted in the formation of six clusters, which were clearly observed in the PCA and were characterized by different water quality. The statistical results allowed us to identify high spatial variation in water quality, which was related to specific features of the environment, different uses, influences of anthropogenic activities, and geochemical characteristics of the drained basins. It was also demonstrated that most of the sampling stations in the Xingu River basin showed good water quality, due to the absence of local impacts and the high self-purification capacity of the river itself.
Knight, Josh; Wells, Susan; Marshall, Roger; Exeter, Daniel; Jackson, Rod
2017-01-01
Many national cardiovascular disease (CVD) risk factor management guidelines now recommend that drug treatment decisions should be informed primarily by patients' multi-variable predicted risk of CVD, rather than on the basis of single risk factor thresholds. To investigate the potential impact of treatment guidelines based on CVD risk thresholds at a national level requires individual level data representing the multi-variable CVD risk factor profiles for a country's total adult population. As these data are seldom, if ever, available, we aimed to create a synthetic population, representing the joint CVD risk factor distributions of the adult New Zealand population. A synthetic population of 2,451,278 individuals, representing the actual age, gender, ethnicity and social deprivation composition of people aged 30-84 years who completed the 2013 New Zealand census, was generated using Monte Carlo sampling. Each 'synthetic' person was then probabilistically assigned values of the remaining cardiovascular disease (CVD) risk factors required for predicting their CVD risk, based on data from the national census, national hospitalisation and drug dispensing databases, and a large regional cohort study, using Monte Carlo sampling and multiple imputation. Where possible, the synthetic population CVD risk distributions for each non-demographic risk factor were validated against independent New Zealand data sources. We were able to develop a synthetic national population with realistic multi-variable CVD risk characteristics. The construction of this population is the first step in the development of a micro-simulation model intended to investigate the likely impact of a range of national CVD risk management strategies that will inform CVD risk management guideline updates in New Zealand and elsewhere.
Groenenberg, Jan E; Koopmans, Gerwin F; Comans, Rob N J
2010-02-15
Ion binding models such as the nonideal competitive adsorption-Donnan model (NICA-Donnan) and model VI successfully describe laboratory data of proton and metal binding to purified humic substances (HS). In this study model performance was tested in more complex natural systems. The speciation predicted with the NICA-Donnan model and the associated uncertainty were compared with independent measurements in soil solution extracts, including the free metal ion activity and fulvic (FA) and humic acid (HA) fractions of dissolved organic matter (DOM). Potentially important sources of uncertainty are the DOM composition and the variation in binding properties of HS. HS fractions of DOM in soil solution extracts varied between 14 and 63% and consisted mainly of FA. Moreover, binding parameters optimized for individual FA samples show substantial variation. Monte Carlo simulations show that uncertainties in predicted metal speciation, for metals with a high affinity for FA (Cu, Pb), are largely due to the natural variation in binding properties (i.e., the affinity) of FA. Predictions for metals with a lower affinity (Cd) are more prone to uncertainties in the fraction FA in DOM and the maximum site density (i.e., the capacity) of the FA. Based on these findings, suggestions are provided to reduce uncertainties in model predictions.
Fixed-node quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Anderson, James B.
Quantum Monte Carlo methods cannot at present provide exact solutions of the Schrödinger equation for systems with more than a few electrons. However, quantum Monte Carlo calculations can provide highly accurate, very low energy solutions for many systems ranging up to several hundred electrons. These systems include atoms such as Be and Fe, molecules such as H2O, CH4, and HF, and condensed materials such as solid N2 and solid silicon. The quantum Monte Carlo predictions of their energies and structures may not be 'exact', but they are the best available. Most of the Monte Carlo calculations for these systems have been carried out using approximately correct fixed nodal hypersurfaces and they have come to be known as 'fixed-node quantum Monte Carlo' calculations. In this paper we review these 'fixed node' calculations and the accuracies they yield.
Genetic variation of piperidine alkaloids in Pinus ponderosa: a common garden study
Gerson, Elizabeth A.; Kelsey, Rick G.; St Clair, J. Bradley
2009-01-01
Background and Aims Previous measurements of conifer alkaloids have revealed significant variation attributable to many sources, environmental and genetic. The present study takes a complementary and intensive, common garden approach to examine genetic variation in Pinus ponderosa var. ponderosa alkaloid production. Additionally, this study investigates the potential trade-off between seedling growth and alkaloid production, and associations between topographic/climatic variables and alkaloid production. Methods Piperidine alkaloids were quantified in foliage of 501 nursery seedlings grown from seed sources in west-central Washington, Oregon and California, roughly covering the western half of the native range of ponderosa pine. A nested mixed model was used to test differences among broad-scale regions and among families within regions. Alkaloid concentrations were regressed on seedling growth measurements to test metabolite allocation theory. Likewise, climate characteristics at the seed sources were also considered as explanatory variables. Key Results Quantitative variation from seedling to seedling was high, and regional variation exceeded variation among families. Regions along the western margin of the species range exhibited the highest alkaloid concentrations, while those further east had relatively low alkaloid levels. Qualitative variation in alkaloid profiles was low. All measures of seedling growth related negatively to alkaloid concentrations on a natural log scale; however, coefficients of determination were low. At best, annual height increment explained 19.4% of the variation in ln(total alkaloids). Among the climate variables, temperature range showed a negative, linear association that explained 41.8% of the variation. Conclusions Given the wide geographic scope of the seed sources and the uniformity of resources in the seedlings' environment, observed differences in alkaloid concentrations are evidence for genetic regulation of alkaloid secondary metabolism in ponderosa pine. The theoretical trade-off with seedling growth appeared to be real, however slight. The climate variables provided little evidence for adaptive alkaloid variation, especially within regions.
Sierra, Carlos A; Loescher, Henry W; Harmon, Mark E; Richardson, Andrew D; Hollinger, David Y; Perakis, Steven S
2009-10-01
Interannual variation of carbon fluxes can be attributed to a number of biotic and abiotic controls that operate at different spatial and temporal scales. Type and frequency of disturbance, forest dynamics, and climate regimes are important sources of variability. Assessing the variability of carbon fluxes from these specific sources can enhance the interpretation of past and current observations. Being able to separate the variability caused by forest dynamics from that induced by climate will also give us the ability to determine if the current observed carbon fluxes are within an expected range or whether the ecosystem is undergoing unexpected change. Sources of interannual variation in ecosystem carbon fluxes from three evergreen ecosystems, a tropical, a temperate coniferous, and a boreal forest, were explored using the simulation model STANDCARB. We identified key processes that introduced variation in annual fluxes, but their relative importance differed among the ecosystems studied. In the tropical site, intrinsic forest dynamics contributed approximately 30% of the total variation in annual carbon fluxes. In the temperate and boreal sites, where many forest processes occur over longer temporal scales than those at the tropical site, climate controlled more of the variation among annual fluxes. These results suggest that climate-related variability affects the rates of carbon exchange differently among sites. Simulations in which temperature, precipitation, and radiation varied from year to year (based on historical records of climate variation) had less net carbon stores than simulations in which these variables were held constant (based on historical records of monthly average climate), a result caused by the functional relationship between temperature and respiration. This suggests that, under a more variable temperature regime, large respiratory pulses may become more frequent and high enough to cause a reduction in ecosystem carbon stores. Our results also show that the variation of annual carbon fluxes poses an important challenge in our ability to determine whether an ecosystem is a source, a sink, or is neutral in regard to CO2 at longer timescales. In simulations where climate change negatively affected ecosystem carbon stores, there was a 20% chance of committing Type II error, even with 20 years of sequential data.
Antihydrogen from positronium impact with cold antiprotons: a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Cassidy, D. B.; Merrison, J. P.; Charlton, M.; Mitroy, J.; Ryzhikh, G.
1999-04-01
A Monte Carlo simulation of the reaction to form antihydrogen by positronium impact upon antiprotons has been undertaken. Total and differential cross sections have been utilized as inputs to the simulation which models the conditions foreseen in planned antihydrogen formation experiments using positrons and antiprotons held in Penning traps. Thus, predictions of antihydrogen production rates, angular distributions and the variation of the mean antihydrogen temperature as a function of incident positronium kinetic energy have been produced.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10⁶ Monte Carlo steps.
Monthly and spatially resolved black carbon emission inventory of India: uncertainty analysis
NASA Astrophysics Data System (ADS)
Paliwal, Umed; Sharma, Mukesh; Burkhart, John F.
2016-10-01
Black carbon (BC) emissions from India for the year 2011 are estimated to be 901.11 ± 151.56 Gg yr⁻¹ based on a new ground-up, GIS-based inventory. The grid-based, spatially resolved emission inventory includes, in addition to conventional sources, emissions from kerosene lamps, forest fires, diesel-powered irrigation pumps and electricity generators at mobile towers. The emissions have been estimated at district level and were spatially distributed onto grids at a resolution of 40 × 40 km². The uncertainty in emissions has been estimated using a Monte Carlo simulation by considering the variability in activity data and emission factors. Monthly variation of BC emissions has also been estimated to account for the seasonal variability. To the total BC emissions, domestic fuels contributed most significantly (47 %), followed by industry (22 %), transport (17 %), open burning (12 %) and others (2 %). The spatial and seasonal resolution of the inventory will be useful for modeling BC transport in the atmosphere for air quality, global warming and other process-level studies that require greater temporal resolution than traditional inventories.
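A minimal sketch of the Monte Carlo uncertainty propagation described above, combining activity-data and emission-factor variability through E = Σ_s A_s × EF_s, is shown below; the sector list, values, and lognormal spreads are placeholders, not the inventory's inputs.

```python
import numpy as np

rng = np.random.default_rng(10)

# hypothetical sectors: (mean activity [Tg fuel], mean EF [g BC / kg fuel], relative sd)
sectors = {
    "domestic":  (300.0, 1.0, 0.30),
    "industry":  (150.0, 0.8, 0.25),
    "transport": (100.0, 1.1, 0.20),
}

n_draws = 100000
total = np.zeros(n_draws)
for activity, ef, rel_sd in sectors.values():
    a = rng.lognormal(np.log(activity), rel_sd, n_draws)   # activity draws
    f = rng.lognormal(np.log(ef), rel_sd, n_draws)         # emission-factor draws
    total += a * f                                          # Tg fuel * g/kg = Gg BC

print(f"BC emissions: {total.mean():.1f} +/- {total.std():.1f} Gg/yr "
      f"(2.5-97.5%: {np.percentile(total, 2.5):.1f}-{np.percentile(total, 97.5):.1f})")
```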
NASA Astrophysics Data System (ADS)
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions of the factors.
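As a point of reference for the Monte Carlo benchmark mentioned at the end of the abstract, the sketch below prices an up-and-out call by path simulation; it uses plain geometric Brownian motion with illustrative parameters rather than the stochastic-volatility-with-jumps models the method targets.

```python
import numpy as np

rng = np.random.default_rng(11)

def up_and_out_call_mc(s0=100.0, k=100.0, barrier=130.0, r=0.03, sigma=0.2,
                       t=1.0, n_steps=252, n_paths=200000):
    """Monte Carlo price of a (discretely monitored) up-and-out call under GBM."""
    dt = t / n_steps
    s = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)           # paths that have not hit the barrier
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= s < barrier                        # knock out paths crossing the barrier
    payoff = np.where(alive, np.maximum(s - k, 0.0), 0.0)
    disc = np.exp(-r * t) * payoff
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)

price, se = up_and_out_call_mc()
print(f"up-and-out call ~ {price:.3f} +/- {se:.3f}")
```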
Entropy as a collective variable
NASA Astrophysics Data System (ADS)
Parrinello, Michele
Sampling complex free energy surfaces that exhibit long-lived metastable states separated by kinetic bottlenecks is one of the most pressing issues in the atomistic simulation of matter. Not surprisingly, many solutions to this problem have been suggested. Many of them are based on the identification of appropriate collective variables that span the manifold of the slowly varying modes of the system. While much effort has been put into devising, and even constructing on the fly, appropriate collective variables, there is still a cogent need for simple, generic, physically transparent, and yet effective collective variables. Motivated by the physical observation that in many cases transitions between one metastable state and another result from a trade-off between enthalpy and entropy, we introduce collective variables that are able to represent these two physical properties in a simple way. We use these variables in the context of the recently introduced variationally enhanced sampling and apply them with success to the simulation of crystallization from the liquid and to conformational transitions in proteins.
Longitudinal Variations in the Variability of Spread F Occurrence
NASA Astrophysics Data System (ADS)
Groves, K. M.; Bridgwood, C.; Carrano, C. S.
2017-12-01
The complex dynamics of the equatorial ionosphere have attracted the interest and attention of researchers for many decades. The relatively local processes that give rise to large meridional gradients have been well documented and the associated terminology has entered the common lexicon of ionospheric research (e.g., fountain effect, equatorial anomaly, bubbles, Spread F). Zonal variations have also been noted, principally at the level of determining longitudinal differences in seasonal activity patterns. Due to a historical lack of high resolution ground-based observations at low latitudes, the primary source of data for such analyses has been space-based observations from satellites such as ROCSAT, DMSP, and C/NOFS that measure in situ electron density variations. An important longitudinal variation in electron density structure associated with non-migrating diurnal tides was discovered by Immel et al. in 2006 using data from the FUV sensor aboard the NASA IMAGE satellite. These satellite observations have been very helpful in identifying the structural characteristics of the equatorial ionosphere and the occurrence of Spread F, but they provide little insight into variations in scintillation features and potential differences in bubble development characteristics. Moreover, space-based studies tend towards the statistics of occurrence frequency over periods of weeks to months. A recent analysis of daily spread F occurrence as determined by low latitude VHF scintillation activity shows statistical results that are consistent with previous space-based observations, but the level of variability in the occurrence data shows marked variations with longitude. For example, the American sector shows very low in-season variability while the African and Asian sectors exhibit true day-to-day variability regardless of seasonal variations. The results have significant implications for space weather as they suggest that long-term forecasts of equatorial scintillation may be meaningful within specific longitude boundaries.
Catto, Cyril; Charest-Tardif, Ginette; Rodriguez, Manuel; Tardif, Robert
2013-01-01
The variability of trihalomethane (THM) levels in drinking water raises the question of whether or not short-term variations (within-day) should be accounted for when assessing exposure to contaminants suspected of being carcinogenic and reprotoxic agents. The purpose of this study was to determine the magnitude of the impact on predicted biological levels of THMs (internal doses) exerted by within-day variations of THMs in drinking water. A database extracted from a campaign in the Québec City distribution system served to produce 81, 79 and 64 concentration profiles for the three most abundant THMs, namely chloroform (TCM), dichlorobromomethane (DCBM) and chlorodibromomethane (CDBM), respectively. Using a physiologically based toxicokinetic modeling approach, we simulated exposures (1.5 l water per day and a 10-min shower) based on each of these profiles and predicted, for 2000 individuals (Monte-Carlo simulations), maximum blood concentrations (Cmax), areas under the time versus blood concentrations curve (24 h-AUCcv) and total absorbed doses (ADs). Three different hypotheses were tested: [A] assuming a constant THM concentration in water (e.g., mean value of a day); [B] accounting for within-day variations in THM levels; and [C] a worst-case scenario assuming within-day variations and showering while THM levels were maximal. For each exposure profile, exposure indicator and individual, we calculated the ratios of values obtained according to each hypothesis (e.g., CmaxB/CmaxA and CmaxC/CmaxA) and the values corresponding to the 5th and 95th percentiles of these ratios. The closer these percentiles are to the value of 1, the smaller the error associated with assuming constant THM concentrations rather than their actual variability. Results showed that the minimal gap between these percentiles was TCM-AD(B)/TCM-AD(A) (5th=0.91; 95th=1.09), whereas the maximal gap was CDBM-Cmax(C)/CDBM-Cmax(A) (5th=0.50; 95th=3.40). Overall, TCM and ADs were the less affected (TCM
Ramet demography of a nurse bromeliad in Brazilian restingas.
Sampaio, Michelle C; Picó, F Xavier; Scarano, Fabio R
2005-04-01
Restingas are sandy coastal plains that stand between the sea and the Brazilian Atlantic forest mountains. The predominant restinga vegetation type in northern Rio de Janeiro, Brazil, is characterized by the formation of islands that begins with colonization by some pioneer herbs and/or woody plants. Pioneer plants are stress-resistant and nurse many other less-resistant plant species. Determining the spatiotemporal variation in the dynamics of nurse plants is essential to understand the ecological functioning of restingas as a whole. The goal of this study was to analyze the spatiotemporal variation in population dynamics of the nurse bromeliad Aechmea nudicaulis. We monitored A. nudicaulis ramets in different habitats, microhabitats, and years. We analyzed the spatiotemporal variation in demographic traits and in population growth rate. Results showed young ramet traits were more variable at the microhabitat level, and when variable, vegetative ramet traits varied at all spatiotemporal scales. Overall, λ values indicated that A. nudicaulis basically remained spatiotemporally stable as most of the λ values did not significantly differ from unity. Hence, the stability of A. nudicaulis in different microhabitats and habitats in the restinga may create several settlement opportunities for many other less-resistant species.
Response of Marine Taxa to Climate Variability in the Southeast U.S.
NASA Astrophysics Data System (ADS)
Morley, J. W.; Pinsky, M. L.; Batt, R. D.
2016-02-01
Climate change has led to large-scale redistributions of marine taxa in many coastal regions around North America. Specifically, marine populations respond to spatial shifts in their preferred temperature conditions, or thermal envelope, as they shift across a seascape. The influence of climate change on the coastal fisheries of the southeast U.S. has been largely unexplored. We analyzed 25 years of trawl survey data (1990-2014) from the Southeast Area Monitoring and Assessment Program (SEAMAP), which samples the nearshore continental shelf of the South Atlantic Bight during spring, summer, and fall. Bottom temperatures exhibited no trend over this period and the assemblage showed no net shift north or south. However, taxa distributions were sensitive to interannual temperature variation. Annual projections of taxa thermal envelopes explained variation in centroid location for many species, particularly during spring. Accordingly, long-term latitudinal shifts in taxa-specific thermal envelopes, which trended to the north or south depending on the species, were highly correlated with centroid shifts during spring. One explanation for our results is that the phenology of taxa migration is adaptable to temperature variation. In particular, the inshore-offshore movement of species during spring and fall appears quite responsive to interannual temperature variability.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
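A minimal sketch of the core calculation: a quadratic response surface with single and two-factor interaction terms evaluated over Monte Carlo-dispersed inputs. The coefficients, input names, and distributions below are illustrative placeholders, not the study's fitted regression.

```python
import numpy as np

rng = np.random.default_rng(1)

def regression_rate(x1, x2, x3):
    """Quadratic response surface with single and two-factor interaction terms.
    Coefficients are illustrative placeholders."""
    b = dict(b0=1.0, b1=0.8, b2=0.5, b3=0.3,
             b11=-0.2, b22=-0.1, b33=-0.05,
             b12=0.15, b13=0.05, b23=0.1)
    return (b['b0'] + b['b1']*x1 + b['b2']*x2 + b['b3']*x3
            + b['b11']*x1**2 + b['b22']*x2**2 + b['b33']*x3**2
            + b['b12']*x1*x2 + b['b13']*x1*x3 + b['b23']*x2*x3)

# Monte Carlo dispersion of operating-condition and mixture-fraction inputs
n = 10000
x1 = rng.normal(0.5, 0.05, n)   # e.g. normalized oxidizer flux (assumed)
x2 = rng.normal(0.3, 0.03, n)   # e.g. one mixture component fraction (assumed)
x3 = rng.normal(0.2, 0.02, n)

r = regression_rate(x1, x2, x3)
print(f"mean rate = {r.mean():.3f}, std = {r.std():.3f}, "
      f"p95 = {np.percentile(r, 95):.3f}")
```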
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
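A minimal numerical illustration of the claim, comparing the spread of integration errors from pseudo-random points and scrambled Sobol points on a smooth test integrand; it assumes SciPy's stats.qmc module (SciPy 1.7 or later) and is not tied to the specific ensemble definition used in the paper.

```python
import numpy as np
from scipy.stats import qmc  # requires SciPy >= 1.7

rng = np.random.default_rng(2)
f = lambda x: np.prod(1.5 * np.sqrt(x), axis=1)  # smooth integrand on [0,1]^2, exact integral = 1
n, d, reps = 1024, 2, 500

mc_err, qmc_err = [], []
for k in range(reps):
    x_mc = rng.random((n, d))
    mc_err.append(f(x_mc).mean() - 1.0)
    sob = qmc.Sobol(d=d, scramble=True, seed=k)
    x_q = sob.random(n)
    qmc_err.append(f(x_q).mean() - 1.0)

print(f"MC  error std : {np.std(mc_err):.2e}")
print(f"QMC error std : {np.std(qmc_err):.2e}")  # typically much smaller for smooth integrands
```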
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation were modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation, 95 %, and 67 % confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. Distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into the faster algorithmic random number generator or create a buffer.
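A minimal variational Monte Carlo sketch of the quality test mentioned, for the 1D harmonic oscillator (ħ = m = ω = 1) with a Gaussian trial wavefunction exp(-αx²); here the variates come from NumPy rather than from neutron-monitor data, so only the test itself is illustrated. The variational minimum is at α = 0.5, where the sampled energy should equal the exact ground-state value 0.5.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_energy(x, alpha):
    # For psi(x) = exp(-alpha x^2), H = -1/2 d^2/dx^2 + 1/2 x^2 gives
    # E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=50000, step=1.0):
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * (rng.random() - 0.5)
        # Metropolis acceptance with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)

for alpha in (0.3, 0.5, 0.7):
    print(f"alpha = {alpha}: <E> = {vmc_energy(alpha):.4f}")
# The minimum over alpha occurs at 0.5, with <E> = 0.5 (the exact ground state).
```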
Arrieira, Rodrigo Leite; Schwind, Leilane Talita Fatoreto; Joko, Ciro Yoshio; Alves, Geziele Mucio; Velho, Luiz Felipe Machado; Lansac-Tôha, Fábio Amodêo
2016-10-01
Planktonic testate amoebae in floodplains exhibit a broad range of morphological variability. The variation in size is already known, but how this variation manifests in other morphological variables remains to be determined. This study aimed to identify the relationships between testate amoebae morphology and environmental factors in four neotropical floodplains. We conducted detailed morphometric analyses on 27 common species of planktonic testate amoebae from the genera Arcella, Centropyxis, Cucurbitella, Suiadifflugia, Difflugia, Lesquereusia and Netzelia. We sampled subsurface water from each of 72 lakes in four Brazilian floodplains. Our goals were to assess: (1) the range of their morphological variability (a) over space within each floodplain, and (b) among the four floodplains, and (c) over time, and (2) which environmental factors explained this variation. Mean shell height and breadth varied considerably among the different floodplain lakes, especially in the Pantanal and Amazonian floodplains. The morphological variability of testate amoebae was correlated to environmental conditions (ammonia, nitrate, phosphate, chlorophyll-a, turbidity, temperature, and depth). Thus, understanding the morphological variation of testate amoeba species can elucidate many questions involving the ecology of these organisms. Furthermore, it could support molecular studies, the use of these organisms as bioindicators, and environmental reconstruction, among other applications. Copyright © 2016 Elsevier GmbH. All rights reserved.
Tessonnier, Thomas; Mairani, Andrea; Chen, Wenjing; Sala, Paola; Cerutti, Francesco; Ferrari, Alfredo; Haberer, Thomas; Debus, Jürgen; Parodi, Katia
2018-01-09
Due to their favorable physical and biological properties, helium ion beams are increasingly considered a promising alternative to proton beams for radiation therapy. Hence, this work aims at comparing in-silico the treatment of brain and ocular meningiomas with protons and helium ions, using for the first time a dedicated Monte Carlo (MC) based treatment planning engine (MCTP) thoroughly validated both in terms of physical and biological models. Starting from clinical treatment plans of four patients undergoing proton therapy with a fixed relative biological effectiveness (RBE) of 1.1 and a fraction dose of 1.8 Gy(RBE), new treatment plans were optimized with MCTP for both protons (with variable and fixed RBE) and helium ions (with variable RBE) under the same constraints derived from the initial clinical plans. The resulting dose distributions were dosimetrically compared in terms of dose volume histograms (DVH) parameters for the planning target volume (PTV) and the organs at risk (OARs), as well as dose difference maps. In most of the cases helium ion plans provided a similar PTV coverage as protons with a consistent trend of superior OAR sparing. The latter finding was attributed to the ability of helium ions to offer sharper distal and lateral dose fall-offs, as well as a more favorable differential RBE variation in target and normal tissue. Although more studies are needed to investigate the clinical potential of helium ions for different tumour entities, the results of this work based on an experimentally validated MC engine support the promise of this modality with state-of-the-art pencil beam scanning delivery, especially in case of tumours growing in close proximity of multiple OARs such as meningiomas.
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
NASA Astrophysics Data System (ADS)
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and jointed rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on the rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
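A minimal sketch of the Monte Carlo idea: in the Kuz-Ram framework the fragment size distribution follows a Rosin-Rammler curve set by a mean size X50 and a uniformity index n. Here both are simply sampled as uncertain inputs (illustrative distributions) rather than computed from the full Kuznetsov and Cunningham relations used by the simulator.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim = 20000

# Uncertain Kuz-Ram parameters (illustrative distributions, not site data)
x50 = rng.normal(0.30, 0.05, n_sim)   # mean fragment size X50 [m]
uni = rng.normal(1.4, 0.15, n_sim)    # Rosin-Rammler uniformity index n

def percent_passing(x, x50, n):
    """Rosin-Rammler cumulative passing fraction used in the Kuz-Ram model."""
    return 1.0 - np.exp(-0.693 * (x / x50) ** n)

sieve = 0.5  # screen size of interest [m]
p = percent_passing(sieve, x50, uni)
print(f"P(passing {sieve} m): mean = {p.mean():.2f}, "
      f"5th-95th pct = {np.percentile(p, 5):.2f}-{np.percentile(p, 95):.2f}")
```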
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J; Owrangi, A; Jiang, R
2014-06-01
Purpose: This study investigated the performance of the anisotropic analytical algorithm (AAA) in dose calculation in radiotherapy concerning a small finger joint. Monte Carlo simulation (EGSnrc code) was used in this dosimetric evaluation. Methods: A heterogeneous finger joint phantom containing a vertical water layer (bone joint or cartilage) sandwiched by two bones with dimension 2 × 2 × 2 cm³ was irradiated by 6 MV photon beams (field size = 4 × 4 cm²). The central beam axis was along the length of the bone joint and the isocenter was set to the center of the joint. The joint width and beam angle were varied from 0.5–2 mm and 0°–15°, respectively. Depth doses were calculated using the AAA and DOSXYZnrc. For dosimetric comparison and normalization, dose calculations were repeated in a water phantom using the same beam geometry. Results: Our AAA and Monte Carlo results showed that the AAA underestimated the joint doses by 10%–20%, and could not predict joint dose variation with changes of joint width and beam angle. The calculated bone dose enhancement for the AAA was lower than for Monte Carlo, and the depth of maximum dose for the phantom was smaller than that for the water phantom. From the Monte Carlo results, there was a decrease of joint dose as its width increased. This reflects that the smaller the joint width, the more the bone scatter contributed to the depth dose. Moreover, the joint dose was found to decrease slightly with an increase of beam angle. Conclusion: The AAA could not handle variations of joint dose well with changes of joint width and beam angle based on our finger joint phantom. Monte Carlo results showed that the joint dose decreased with increase of joint width and beam angle. This dosimetry comparison should be useful to radiation staff in radiotherapy related to small bone joints.
Computing the Polarimetric and Photometric Variability of Be Stars
NASA Astrophysics Data System (ADS)
Marr, K. C.; Jones, C. E.; Halonen, R. J.
2018-01-01
We investigate variations in the linear polarization as well as in the V-band and B-band color–magnitudes for classical Be star disks. We present two models: disks with enhanced disk density and disks that are tilted or warped from the stellar equatorial plane. In both cases, we predict variation in observable properties of the system as the disk rotates. We use a non-LTE radiative transfer code BEDISK (Sigut & Jones) in combination with a Monte Carlo routine that includes multiple scattering (Halonen et al.) to model classical Be star systems. We find that a disk with an enhanced density region that is one order of magnitude denser than the disk’s base density shows as much as ∼0.2% variability in the polarization while the polarization position angle varies by ∼8°. The ΔV magnitude for the same system shows variations of up to ∼0.4 mag while the Δ(B–V) color varies by at most ∼0.01 mag. We find that disks tilted from the equatorial plane at small angles of ∼30° more strongly reflect the values of polarization and color–magnitudes reported in the literature than disks tilted at larger angles. For this model, the linear polarization varies by ∼0.3%, the polarization position angle varies by ∼60°, the ΔV magnitude varies up to 0.35 mag, and the Δ(B–V) color varies by up to 0.1 mag. We find that the enhanced disk density models show ranges of polarization and color–magnitudes that are commensurate with what is reported in the literature for all sizes of the density-enhanced regions. From this, we cannot determine any preference for small or large density-enhanced regions.
A Representation for Gaining Insight into Clinical Decision Models
Jimison, Holly B.
1988-01-01
For many medical domains uncertainty and patient preferences are important components of decision making. Decision theory is useful as a representation for such medical models in computer decision aids, but the methodology has typically had poor performance in the areas of explanation and user interface. The additional representation of probabilities and utilities as random variables serves to provide a framework for graphical and text insight into complicated decision models. The approach allows for efficient customization of a generic model that describes the general patient population of interest to a patient-specific model. Monte Carlo simulation is used to calculate the expected value of information and sensitivity for each model variable, thus providing a metric for deciding what to emphasize in the graphics and text summary. The computer-generated explanation includes variables that are sensitive with respect to the decision or that deviate significantly from what is typically observed. These techniques serve to keep the assessment and explanation of the patient's decision model concise, allowing the user to focus on the most important aspects for that patient.
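A minimal sketch of how the expected value of (perfect) information can be estimated by Monte Carlo for a toy two-option decision; the utilities and the Beta prior are invented placeholders, not a clinical model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100000

# Uncertain model variable: probability that treatment succeeds (Beta prior placeholder)
p_success = rng.beta(8, 4, n)

# Utilities of the two options for each sampled state of the world
u_treat = p_success * 0.9 + (1 - p_success) * 0.3  # treat: good if success, costly if not
u_wait = np.full(n, 0.6)                            # do not treat: fixed intermediate utility

# Expected value of perfect information:
# EVPI = E[max over actions of U] - max over actions of E[U]
ev_with_info = np.maximum(u_treat, u_wait).mean()
ev_without_info = max(u_treat.mean(), u_wait.mean())
print(f"EVPI = {ev_with_info - ev_without_info:.4f}")
```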
Thermal helium clusters at 3.2 Kelvin in classical and semiclassical simulations
NASA Astrophysics Data System (ADS)
Schulte, J.
1993-03-01
The thermodynamic stability of ⁴He₄₋₁₃ clusters at 3.2 K is investigated with the classical Monte Carlo method, with the semiclassical path-integral Monte Carlo (PIMC) method, and with the semiclassical all-order many-body method. In the all-order many-body simulation the dipole-dipole approximation including a short-range correction is used. The resulting stability plots are discussed and related to recent TOF experiments by Stephens and King. It is found that classical Monte Carlo, as expected, cannot resolve the characteristics of the measured mass spectrum. With PIMC, switching on more and more quantum mechanics by raising the number of virtual time steps results in more structure in the stability plot, but this did not lead to sufficient agreement with the TOF experiment. Only the all-order many-body method resolved the characteristic structures of the measured mass spectrum, including magic numbers. The result shows the influence of quantum statistics and quantum mechanics on the stability of small neutral helium clusters.
Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system
NASA Astrophysics Data System (ADS)
Lehmann, Joerg; Hartmann Siantar, Christine; Wessol, Daniel E.; Wemple, Charles A.; Nigg, David; Cogliati, Josh; Daly, Tom; Descalle, Marie-Anne; Flickinger, Terry; Pletcher, David; DeNardo, Gerald
2005-03-01
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4—2%, MCNP—10%) (Descalle et al 2003 Cancer Biother. Radiopharm. 18 71-9). The code is currently being benchmarked against experimental data. The interpatient variability of the drug pharmacokinetics in MTR can only be properly accounted for by image-based, patient-specific treatment planning, as has been common in external beam radiation therapy for many years. MINERVA offers 3D Monte Carlo-based MTR treatment planning as its first integrated operational capability. The new MINERVA system will ultimately incorporate capabilities for a comprehensive list of radiation therapies. In progress are modules for external beam photon-electron therapy and boron neutron capture therapy (BNCT). Brachytherapy and proton therapy are planned. Through the open application programming interface (API), other groups can add their own modules and share them with the community.
Learning, climate and the evolution of cultural capacity.
Whitehead, Hal
2007-03-21
Patterns of environmental variation influence the utility, and thus evolution, of different learning strategies. I use stochastic, individual-based evolutionary models to assess the relative advantages of 15 different learning strategies (genetic determination, individual learning, vertical social learning, horizontal/oblique social learning, and contingent combinations of these) when competing in variable environments described by 1/f noise. When environmental variation has little effect on fitness, then genetic determinism persists. When environmental variation is large and equal over all time-scales ("white noise") then individual learning is adaptive. Social learning is advantageous in "red noise" environments when variation over long time-scales is large. Climatic variability increases with time-scale, so that short-lived organisms should be able to rely largely on genetic determination. Thermal climates usually are insufficiently red for social learning to be advantageous for species whose fitness is strongly determined by temperature. In contrast, population trajectories of many species, especially large mammals and aquatic carnivores, are sufficiently red to promote social learning in their predators. The ocean environment is generally redder than that on land. Thus, while individual learning should be adaptive for many longer-lived organisms, social learning will often be found in those dependent on the populations of other species, especially if they are marine. This provides a potential explanation for the evolution of a prevalence of social learning, and culture, in humans and cetaceans.
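A minimal sketch of generating environmental series with a 1/f^β power spectrum by shaping white noise in the Fourier domain; β = 0 gives white noise and larger β gives the "redder" variation discussed above. The normalization and the lag-1 autocorrelation check are illustrative choices.

```python
import numpy as np

def colored_noise(n, beta, rng):
    """Generate a length-n series whose power spectrum follows 1/f**beta."""
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid division by zero at f = 0
    spectrum *= freqs ** (-beta / 2.0)  # amplitude scales as f^(-beta/2)
    series = np.fft.irfft(spectrum, n)
    return (series - series.mean()) / series.std()

rng = np.random.default_rng(6)
white = colored_noise(1000, beta=0.0, rng=rng)  # no temporal autocorrelation
red = colored_noise(1000, beta=2.0, rng=rng)    # variance dominated by long time-scales
print(f"lag-1 autocorrelation: white = {np.corrcoef(white[:-1], white[1:])[0, 1]:.2f}, "
      f"red = {np.corrcoef(red[:-1], red[1:])[0, 1]:.2f}")
```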
Efficient calculation of luminance variation of a luminaire that uses LED light sources
NASA Astrophysics Data System (ADS)
Goldstein, Peter
2007-09-01
Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.
The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.
Temleitner, László; Pusztai, László; Schweika, Werner
2007-08-22
The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H₂O and D₂O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
Bayesian models for comparative analysis integrating phylogenetic uncertainty.
de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P
2012-06-28
Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
Thompson, Cynthia L
2016-05-01
Intraspecific variability in social systems is gaining increased recognition in primatology. Many primate species display variability in pair-living social organizations through incorporating extra adults into the group. While numerous models exist to explain primate pair-living, our tools to assess how and why variation in this trait occurs are currently limited. Here I outline an approach which: (i) utilizes conceptual models to identify the selective forces driving pair-living; (ii) outlines novel possible causes for variability in social organization; and (iii) conducts a holistic species-level analysis of social behavior to determine the factors contributing to variation in pair-living. A case study on white-faced sakis (Pithecia pithecia) is used to exemplify this approach. This species lives in either male-female pairs or groups incorporating "extra" adult males and/or females. Various conceptual models of pair-living suggest that high same-sex aggression toward extra-group individuals is a key component of the white-faced saki social system. Variable pair-living in white-faced sakis likely represents alternative strategies to achieve competency in this competition, in which animals experience conflicting selection pressures between achieving successful group defense and maintaining sole reproductive access to mates. Additionally, independent decisions by individuals may generate social variation by preventing other animals from adopting a social organization that maximizes fitness. White-faced saki inter-individual relationships and demographic patterns also lend conciliatory support to this conclusion. By utilizing both model-level and species-level approaches, with a consideration for potential sources of variation, researchers can gain insight into the factors generating variation in pair-living social organizations. © 2014 The Authors. American Journal of Primatology published by Wiley Periodicals, Inc.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
Poças, Maria F; Oliveira, Jorge C; Brandsch, Rainer; Hogg, Timothy
2010-07-01
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest, but the lack of concentration or migration data is often referred to as a limitation. Data accounting for the variability and uncertainty that can be expected in migration, for example, due to heterogeneity in the packaging system, variation of the temperature along the distribution chain, and different time of consumption of each individual package, are required for probabilistic analysis. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was performed based on experimental data of a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled conditions of temperature, the affinity of the migrant to the food can be the major factor determining the variability in the migration values (more than 70% of variance). In situations where both the time of consumption and temperature can vary, these factors can be responsible, respectively, for more than 60% and 20% of the variance in the migration estimates. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
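A rough sketch of the kind of calculation described, assuming a simplified short-time, perfect-sink solution of Fick's law (migrated mass per contact area ≈ 2·c0·√(Dt/π)) with Monte Carlo variability on diffusivity, storage temperature, and time of consumption; the distributions, the Arrhenius-style temperature scaling, and the crude variance apportioning are all illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50000

# Monte Carlo inputs (illustrative distributions)
T = rng.normal(25.0, 5.0, n) + 273.15                   # storage temperature [K]
t = rng.uniform(5, 180, n) * 86400.0                    # time to consumption [s]
D_ref = 10 ** rng.normal(-12.5, 0.3, n)                 # diffusion coeff. at 298 K [m^2/s]
Ea, R = 80e3, 8.314                                     # assumed activation energy [J/mol]
D = D_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))  # Arrhenius-like temperature scaling

c0 = 500.0  # initial migrant concentration in the polymer [mg/m^3], placeholder
# Short-time, perfect-sink solution of Fick's law: migrated mass per contact area
m = 2.0 * c0 * np.sqrt(D * t / np.pi)                   # [mg/m^2]

print(f"migration: median = {np.median(m):.3f} mg/m^2, "
      f"95th pct = {np.percentile(m, 95):.3f} mg/m^2")

# Rough first-order apportioning of variance via squared correlations with log(m)
for name, x in {"time": np.log(t), "temperature": T, "log D_ref": np.log(D_ref)}.items():
    r = np.corrcoef(x, np.log(m))[0, 1]
    print(f"  share of variance ~ {name}: {r**2:.2f}")
```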
Hall, S J G; Lenstra, J A; Deeming, D C
2012-06-01
Conservation of the intraspecific genetic diversity of livestock species requires protocols that assess between-breed genetic variability and also take into account differences among individuals within breeds. Here, we focus on variation between breeds. Conservation of neutral genetic variation has been seen as promoting, through linkage processes, the retention of useful and potentially useful variation. Using public information on beef cattle breeds, with a total of 165 data sets each relating to a breed comparison of a performance variable, we have tested this paradigm by calculating the correlations between pairwise breed differences in performance and pairwise genetic distances deduced from biochemical and immunological polymorphisms, microsatellites and single-nucleotide polymorphisms. As already observed in floral and faunal biodiversity, significant positive correlations (n=54) were found, but many correlations were non-significant (n=100) or significantly negative (n=11). This implies that maximizing conserved neutral genetic variation with current techniques may conserve breed-level genetic variation in some traits but not in others and supports the view that genetic distance measurements based on neutral genetic variation are not sufficient as a determinant of conservation priority among breeds. © 2011 Blackwell Verlag GmbH.
Solar Variability Magnitudes and Timescales
NASA Astrophysics Data System (ADS)
Kopp, Greg
2015-08-01
The Sun’s net radiative output varies on timescales of minutes to many millennia. The former are directly observed as part of the on-going 37-year long total solar irradiance climate data record, while the latter are inferred from solar proxy and stellar evolution models. Since the Sun provides nearly all the energy driving the Earth’s climate system, changes in the sunlight reaching our planet can have - and have had - significant impacts on life and civilizations.Total solar irradiance has been measured from space since 1978 by a series of overlapping instruments. These have shown changes in the spatially- and spectrally-integrated radiant energy at the top of the Earth’s atmosphere from timescales as short as minutes to as long as a solar cycle. The Sun’s ~0.01% variations over a few minutes are caused by the superposition of convection and oscillations, and even occasionally by a large flare. Over days to weeks, changing surface activity affects solar brightness at the ~0.1% level. The 11-year solar cycle has comparable irradiance variations with peaks near solar maxima.Secular variations are harder to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Proxy models of the Sun based on cosmogenic isotope records and inferred from Earth climate signatures indicate solar brightness changes over decades to millennia, although the magnitude of these variations depends on many assumptions. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities.In this talk I will summarize the Sun’s variability magnitudes over different temporal ranges, showing examples relevant for climate studies as well as detections of exo-solar planets transiting Sun-like stars.
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
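A minimal sketch of the OWA aggregation at the core of such an approach, with the order weights perturbed by Monte Carlo (Dirichlet) sampling so that weight uncertainty propagates into the susceptibility score of a single map cell; the criterion values, base weights, and concentration parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def owa(values, order_weights):
    """Ordered weighted averaging: weights apply to the sorted (descending) criteria."""
    return np.sort(values)[::-1] @ order_weights

# Standardized criterion scores for one map cell (e.g. slope, rainfall, drainage density...)
cell = np.array([0.8, 0.4, 0.6, 0.9, 0.2, 0.5])

# Base order weights (uniform weights would be risk-neutral); perturb them with Dirichlet draws
base = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.10])
n = 5000
scores = np.empty(n)
for i in range(n):
    w = rng.dirichlet(base * 100.0)  # concentration controls spread around the base weights
    scores[i] = owa(cell, w)

print(f"susceptibility score: mean = {scores.mean():.3f}, "
      f"5th-95th pct = {np.percentile(scores, 5):.3f}-{np.percentile(scores, 95):.3f}")
```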
NASA Technical Reports Server (NTRS)
Hill, Emma M.; Ponte, Rui M.; Davis, James L.
2007-01-01
Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain a significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
Estimating individual glomerular volume in the human kidney: clinical perspectives.
Puelles, Victor G; Zimanyi, Monika A; Samuel, Terence; Hughson, Michael D; Douglas-Denton, Rebecca N; Bertram, John F; Armitage, James A
2012-05-01
Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin's concordance coefficient (R(C)), coefficient of variation (CV) and coefficient of error (CE) measured reliability. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (R(C) > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution.
On the Use of the Beta Distribution in Probabilistic Resource Assessments
Olea, R.A.
2011-01-01
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
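A minimal sketch contrasting the two sampling choices: a triangular distribution specified by min/mode/max versus a beta distribution rescaled to the same interval, with shape parameters chosen so the mode matches. The "concentration" a + b shown is one plausible way to pin down the remaining degree of freedom, not a prescribed rule.

```python
import numpy as np

rng = np.random.default_rng(9)
lo, mode, hi = 10.0, 25.0, 80.0  # resource assessment bounds (illustrative)
n = 100000

# Triangular sampling: fully determined by the three parameters
tri = rng.triangular(lo, mode, hi, n)

# Beta sampling: pick shape parameters so the mode matches, with a chosen "concentration"
m = (mode - lo) / (hi - lo)      # mode mapped to [0, 1]
k = 6.0                          # a + b; larger values concentrate mass near the mode
a, b = m * (k - 2) + 1, (1 - m) * (k - 2) + 1
beta = lo + (hi - lo) * rng.beta(a, b, n)

for name, x in (("triangular", tri), ("beta", beta)):
    print(f"{name:10s}: mean = {x.mean():.1f}, p10 = {np.percentile(x, 10):.1f}, "
          f"p90 = {np.percentile(x, 90):.1f}")
```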
X-ray spectral variability of Seyfert 2 galaxies
NASA Astrophysics Data System (ADS)
Hernández-García, L.; Masegosa, J.; González-Martín, O.; Márquez, I.
2015-07-01
Context. Variability across the electromagnetic spectrum is a property of active galactic nuclei (AGN) that can help constrain the physical properties of these galaxies. Nonetheless, the way in which the changes happen and whether they occur in the same way in every AGN are still open questions. Aims: This is the third in a series of papers with the aim of studying the X-ray variability of different families of AGN. The main purpose of this work is to investigate the variability pattern(s) in a sample of optically selected Seyfert 2 galaxies. Methods: We use the 26 Seyfert 2s in the Véron-Cetty and Véron catalog with data available from Chandra and/or XMM-Newton public archives at different epochs, with timescales ranging from a few hours to years. All the spectra of the same source were simultaneously fitted, and we let different parameters vary in the model. Whenever possible, short-term variations from the analysis of the light curves and/or long-term UV flux variations were studied. We divided the sample into Compton-thick and Compton-thin candidates to account for the degree of obscuration. When transitions between Compton-thick and thin were obtained for different observations of the same source, we classified it as a changing-look candidate. Results: Short-term variability at X-rays was studied in ten cases, but variations are not found. From the 25 analyzed sources, 11 show long-term variations. Eight (out of 11) are Compton-thin, one (out of 12) is Compton-thick, and the two changing-look candidates are also variable. The main driver for the X-ray changes is related to the nuclear power (nine cases), while variations at soft energies or related to absorbers at hard X-rays are less common, and in many cases these variations are accompanied by variations in the nuclear continuum. At UV frequencies, only NGC 5194 (out of six sources) is variable, but the changes are not related to the nucleus. We report two changing-look candidates, MARK 273 and NGC 7319. Conclusions: A constant reflection component located far away from the nucleus plus a variable nuclear continuum are able to explain most of our results. Within this scenario, the Compton-thick candidates are dominated by reflection, which suppresses their continuum, making them seem fainter, and they do not show variations (except MARK 3), while the Compton-thin and changing-look candidates do. Appendices are available in electronic form at http://www.aanda.org
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
A Monte Carlo model for 3D grain evolution during welding
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-08-04
Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bezier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. Furthermore, the model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
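A minimal 2D Potts Monte Carlo sketch of the solid-state grain-coarsening step only (no melting, weld pool, or SPPARKS-specific machinery): lattice sites carry integer grain orientations, and flips to a neighboring orientation are accepted with a Metropolis rule on the grain-boundary energy. Lattice size, number of orientations, and temperature are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
L, q, kT, sweeps = 64, 32, 0.3, 50          # lattice size, orientations, temperature, MC sweeps
spins = rng.integers(0, q, size=(L, L))     # initial random microstructure
neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def boundary_energy(spins, i, j, s):
    """Number of unlike nearest neighbors if site (i, j) held orientation s."""
    return sum(1 for di, dj in neighbors if spins[(i + di) % L, (j + dj) % L] != s)

for _ in range(sweeps):
    for _ in range(L * L):                  # one Monte Carlo sweep
        i, j = rng.integers(0, L, size=2)
        old = spins[i, j]
        # Propose switching to the orientation of a random neighbor (favours coarsening)
        di, dj = neighbors[rng.integers(0, 4)]
        new = spins[(i + di) % L, (j + dj) % L]
        dE = boundary_energy(spins, i, j, new) - boundary_energy(spins, i, j, old)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            spins[i, j] = new

print("distinct grain orientations remaining:", np.unique(spins).size)
```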
Akkar, Sinan; Boore, David M.
2009-01-01
Most digital accelerograph recordings are plagued by long-period drifts, best seen in the velocity and displacement time series obtained from integration of the acceleration time series. These drifts often result in velocity values that are nonzero near the end of the record. This is clearly unphysical and can lead to inaccurate estimates of peak ground displacement and long-period spectral response. The source of the long-period noise seems to be variations in the acceleration baseline in many cases. These variations could be due to true ground motion (tilting and rotation, as well as local permanent ground deformation), instrumental effects, or analog-to-digital conversion. Very often the trends in velocity are well approximated by a linear trend after the strong shaking subsides. The linearity of the trend in velocity implies that no variations in the baseline could have occurred after the onset of linearity in the velocity time series. This observation, combined with the lack of any trends in the pre-event motion, allows us to compute the time interval in which any baseline variations could occur. We then use several models of the variations in a Monte Carlo procedure to derive a suite of baseline-corrected accelerations for each noise model using records from the 1999 Chi-Chi earthquake and several earthquakes in Turkey. Comparisons of the mean values of the peak ground displacements, spectral displacements, and residual displacements computed from these corrected accelerations for the different noise models can be used as a guide to the accuracy of the baseline corrections. For many of the records considered here the mean values are similar for each noise model, giving confidence in the estimation of the mean values. The dispersion of the ground-motion measures increases with period and is noise-model dependent. The dispersion of inelastic spectra is greater than the elastic spectra at short periods but approaches that of the elastic spectra at longer periods. The elastic spectra from the most basic processing, in which only the pre-event mean is removed from the acceleration time series, do not diverge from the baseline-corrected spectra until periods of 10-20 sec or more for the records studied here, implying that for many engineering purposes elastic spectra can be used from records with no baseline correction or filtering.
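A highly simplified, single-noise-model Python sketch of the Monte Carlo idea described above follows: the unknown time at which a baseline shift occurred is drawn repeatedly from the admissible window, a crude correction is applied, and the spread of the resulting residual displacements indicates the sensitivity to the correction. The synthetic record, window limits, and correction model are all invented for illustration and are much simpler than the procedures used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 6000                       # 60 s synthetic record (illustrative)
t = np.arange(n) * dt
acc = np.zeros(n)
acc[500:1500] = 0.3 * np.sin(2 * np.pi * 1.0 * t[500:1500])   # "strong shaking"
acc += 0.002 * (t > 12.0)                # small baseline step hidden in the record

def residual_displacement(acc, t_corr):
    """Remove a constant baseline from t_corr onward, then integrate twice."""
    a = acc.copy()
    mask = t >= t_corr
    a[mask] -= a[mask].mean()            # crude baseline correction (toy model)
    vel = np.cumsum(a) * dt
    disp = np.cumsum(vel) * dt
    return disp[-1]

# Monte Carlo over the unknown time of the baseline shift, restricted to the window
# between the end of pre-event noise and the onset of the linear velocity trend
# (here simply taken as 5-20 s for illustration).
samples = [residual_displacement(acc, rng.uniform(5.0, 20.0)) for _ in range(500)]
print("residual displacement: mean %.3f, std %.3f" % (np.mean(samples), np.std(samples)))
```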
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
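A deliberately minimal Python sketch of the feedback loop described above follows: at each time step, local absorption coefficients are chosen from the current temperature field, energy is deposited, and the temperature is updated. Beer-Lambert attenuation in a 1D slab stands in for the full Monte Carlo photon transport, and all coefficients are assumed values, not those of the study.

```python
import numpy as np

nz, dz, dt = 100, 1.0e-4, 0.1        # 1 cm slab in 100 voxels, 0.1 s time steps (assumed)
rho_c = 4.0e6                        # volumetric heat capacity, J/(m^3 K) (assumed)
irradiance = 1.0e4                   # incident irradiance, W/m^2 (assumed)
T = np.full(nz, 270.0)               # start partly frozen (K)

def mu_a(temp):
    """Assumed temperature-dependent absorption coefficient (1/m): frozen vs thawed."""
    return np.where(temp < 273.15, 50.0, 120.0)

for step in range(1000):             # 100 s of heating
    mu = mu_a(T)                                     # pick properties from current T
    # Beer-Lambert attenuation (a stand-in for the photon-transport step)
    tau = np.concatenate(([0.0], np.cumsum(mu * dz)[:-1]))
    absorbed = irradiance * np.exp(-tau) * (1.0 - np.exp(-mu * dz))   # W/m^2 per voxel
    T += (absorbed / dz) * dt / rho_c                # convert to W/m^3 and heat the voxel

print("surface temperature after 100 s: %.1f K" % T[0])
```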
Pandey, R B; Farmer, B L
2014-11-07
Multi-scale aggregation to network formation of interacting proteins (H3.1) is examined by a knowledge-based coarse-grained Monte Carlo simulation as a function of temperature and the number of protein chains, i.e., the concentration of the protein. Self-assembly of corresponding homo-polymers of constitutive residues (Cys, Thr, and Glu) with extreme residue-residue interactions, i.e., attractive (Cys-Cys), neutral (Thr-Thr), and repulsive (Glu-Glu), is also studied for comparison with the native protein. Visual inspections show contrast and similarity in the morphological evolution of protein assembly: aggregation of small aggregates into a ramified network from low to high temperature, as with the aggregation of a Cys polymer, and an entangled network of Glu and Thr polymers. Variations in mobility profiles of residues with the concentration of the protein suggest that the segmental characteristics of the proteins are altered considerably by the self-assembly relative to the isolated state. The global motion of proteins and Cys polymer chains is enhanced by their interacting network at low temperature, where isolated chains remain quasi-static. The globule-to-random-coil transition of an isolated protein, evidenced by a sharp variation in the radius of gyration, is smeared by the self-assembly of interacting networks of many proteins. Scaling of the structure factor S(q) with the wave vector q provides estimates of the effective dimension D of the mass distribution at multiple length scales in the self-assembly. A crossover from solid aggregates (D ∼ 3) at low temperature to a ramified fibrous network (D ∼ 2) at high temperature is observed for the protein H3.1 and Cys polymers, in contrast to little change in the mass distribution (D ∼ 1.6) of fibrous Glu- and Thr-chain configurations.
Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo
NASA Astrophysics Data System (ADS)
Qin, Junsong; Liu, Bingyi; Niu, Dongxiao
By analyzing the factors that influence the investment capacity of the power grid, we build an investment capacity analysis model with depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as model variables. After carrying out Kolmogorov-Smirnov tests, we obtain the probability distribution of each influence factor. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
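The following Python sketch shows the generic workflow implied above: assume (or fit) a marginal distribution per influence factor, check it with a Kolmogorov-Smirnov test, then propagate samples through an aggregation model. The distributions and the aggregation formula below are invented for illustration, not the paper's fitted model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 100_000

# Illustrative marginal distributions for the influence factors (assumed values)
depreciation = rng.normal(10.0, 1.5, N)                 # cost units
sales_revenue = rng.lognormal(np.log(80.0), 0.10, N)    # price x quantity
net_profit = rng.normal(12.0, 3.0, N)
financing = rng.normal(20.0, 5.0, N)

# Example Kolmogorov-Smirnov check of one assumed marginal distribution
print(stats.kstest((depreciation - 10.0) / 1.5, "norm"))

# Toy aggregation into investment capacity (an assumed relation, not the paper's model)
capacity = depreciation + 0.1 * sales_revenue + 0.5 * net_profit + financing

lo, hi = np.percentile(capacity, [5, 95])
print("capacity: mean %.1f, 90%% interval [%.1f, %.1f]" % (capacity.mean(), lo, hi))
```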
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
2015-09-30
… hormones and function in elephant seals; 3) determine the impact of baseline variation in aldosterone on electrolyte balance in elephant seals; 4) …
3 – Impact of aldosterone variability on osmolality
Work on the Parent Project and a parallel project on bottlenose dolphins has shown the importance of aldosterone as a stress hormone in marine mammals. Aldosterone covaries with cortisol in many groups (Figure 4), and ACTH challenges in the …
Batterman, Stuart
2015-01-01
Patterns of traffic activity, including changes in the volume and speed of vehicles, vary over time and across urban areas and can substantially affect vehicle emissions of air pollutants. Time-resolved activity at the street scale typically is derived using temporal allocation factors (TAFs) that allow the development of emissions inventories needed to predict concentrations of traffic-related air pollutants. This study examines the spatial and temporal variation of TAFs, and characterizes prediction errors resulting from their use. Methods are presented to estimate TAFs and their spatial and temporal variability and used to analyze total, commercial and non-commercial traffic in the Detroit, Michigan, U.S. metropolitan area. The variability of total volume estimates, quantified by the coefficient of variation (COV) representing the percentage departure from expected hourly volume, was 21, 33, 24 and 33% for weekdays, Saturdays, Sundays and holidays, respectively. Prediction errors mostly resulted from hour-to-hour variability on weekdays and Saturdays, and from day-to-day variability on Sundays and holidays. Spatial variability was limited across the study roads, most of which were large freeways. Commercial traffic had different temporal patterns and greater variability than noncommercial vehicle traffic, e.g., the weekday variability of hourly commercial volume was 28%. The results indicate that TAFs for a metropolitan region can provide reasonably accurate estimates of hourly vehicle volume on major roads. While vehicle volume is only one of many factors that govern on-road emission rates, air quality analyses would be strengthened by incorporating information regarding the uncertainty and variability of traffic activity. PMID:26688671
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
Temperature, Pressure, and Infrared Image Survey of an Axisymmetric Heated Exhaust Plume
NASA Technical Reports Server (NTRS)
Nelson, Edward L.; Mahan, J. Robert; Birckelbaw, Larry D.; Turk, Jeffrey A.; Wardwell, Douglas A.; Hange, Craig E.
1996-01-01
The focus of this research is to numerically predict an infrared image of a jet engine exhaust plume, given field variables such as temperature, pressure, and exhaust plume constituents as a function of spatial position within the plume, and to compare this predicted image directly with measured data. This work is motivated by the need to validate computational fluid dynamic (CFD) codes through infrared imaging. The technique of reducing the three-dimensional field variable domain to a two-dimensional infrared image invokes the use of an inverse Monte Carlo ray trace algorithm and an infrared band model for exhaust gases. This report describes an experiment in which the above-mentioned field variables were carefully measured. Results from this experiment, namely tables of measured temperature and pressure data, as well as measured infrared images, are given. The inverse Monte Carlo ray trace technique is described. Finally, experimentally obtained infrared images are directly compared to infrared images predicted from the measured field variables.
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all the variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Some evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
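The quantile-mapping step described above can be sketched in a few lines of Python: an empirical CDF maps an arbitrarily distributed variable onto standard-normal quantiles, the generator works in normal space, and the inverse empirical transform maps simulated values back. The gamma-distributed toy "precipitation" series is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
precip = rng.gamma(shape=0.8, scale=3.0, size=5000)    # skewed toy "precipitation"

# Forward quantile mapping: empirical CDF -> standard normal quantiles
ranks = stats.rankdata(precip) / (len(precip) + 1.0)   # plotting-position probabilities in (0, 1)
z = stats.norm.ppf(ranks)                              # now approximately N(0, 1)

# The generator can operate on z (auto-regression, Monte Carlo, EOFs); simulated
# normal values are then mapped back with the inverse empirical transform.
def back_transform(z_new, original):
    p = stats.norm.cdf(z_new)
    return np.quantile(original, p)

simulated = back_transform(rng.normal(size=10), precip)
print(simulated)
```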
Phonologically Driven Variability: The Case of Determiners
ERIC Educational Resources Information Center
Bürki, Audrey; Laganaro, Marina; Alario, F.-Xavier
2014-01-01
Speakers usually produce words in connected speech. In such contexts, the form in which many words are uttered is influenced by the phonological properties of neighboring words. The current article examines the representations and processes underlying the production of phonologically constrained word form variations. For this purpose, we consider…
Global evaluation of ammonia bidirectional exchange and livestock diurnal variation schemes
Bidirectional air–surface exchange of ammonia (NH3) has been neglected in many air quality models. In this study, we implement the bidirectional exchange of NH3 in the GEOS-Chem global chemical transport model. We also introduce an updated diurnal variability scheme for NH3...
The Impacts of Climate Variations on Military Operations in the Horn of Africa
2006-03-01
variability in a region. Climate forecasts are predictions of the future state of the climate, much as we think of weather forecasts but at longer ... arrive at accurate characterizations of the future state of the climate. Many of the civilian organizations that generate reanalysis data also
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
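To make the role of such simulations concrete, here is a deliberately minimal Python sketch of slab-geometry photon transport with exponential free paths, isotropic scattering, and absorption. It is a generic caricature, not the ROSI code, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

def transmitted_fraction(n_photons, mu_total, scatter_albedo, thickness):
    """Track photons through a homogeneous slab: exponential free paths,
    isotropic scattering, absorption; return the fraction reaching the far side."""
    transmitted = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                                # start on-axis, moving forward
        while True:
            z += cos_t * rng.exponential(1.0 / mu_total)   # sample a free path length
            if z >= thickness:
                transmitted += 1
                break
            if z < 0.0 or rng.uniform() > scatter_albedo:  # escaped backwards or absorbed
                break
            cos_t = rng.uniform(-1.0, 1.0)                 # isotropic new direction cosine
    return transmitted / n_photons

# Illustrative numbers: 2 cm of material, mu = 1.5 /cm, 60% of interactions scatter
print("transmitted fraction: %.3f" % transmitted_fraction(50_000, 1.5, 0.6, 2.0))
```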
multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2017-11-01
Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
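A one-dimensional caricature of the idea above can be sketched in Python: random nonlinear features of an input variable are generated by Monte Carlo and the one most correlated with the output is kept. The real method handles many variables through canonical correlation; the toy data, feature family, and parameter ranges below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(0.5, 3.0, n)                     # e.g., a processing variable
y = 2.0 * np.log(x) + rng.normal(0.0, 0.1, n)    # toy microstructural/property response

best_r, best_params = 0.0, (0.0, 0.0)
for _ in range(2000):
    a = rng.uniform(-2.0, 2.0)                   # random power-law exponent
    b = rng.uniform(0.0, 1.0)                    # random mix with a logarithmic term
    f = b * np.log(x) + (1.0 - b) * x ** a       # candidate nonlinear feature of x
    r = abs(np.corrcoef(f, y)[0, 1])
    if r > best_r:
        best_r, best_params = r, (a, b)

print("plain linear correlation |r| = %.3f" % abs(np.corrcoef(x, y)[0, 1]))
print("best Monte Carlo nonlinear feature |r| = %.3f at a = %.2f, b = %.2f"
      % (best_r, best_params[0], best_params[1]))
```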
Superfluid-insulator transitions of two-species bosons in an optical lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isacsson, A.; Department of Physics, Yale University, P.O. Box 208120, New Haven, Connecticut 06520-8120; Cha, M.-C.
2005-11-01
We consider the two-species bosonic Hubbard model with variable interspecies interaction and hopping strength in the grand canonical ensemble with a common chemical potential. We analyze the superfluid-insulator (SI) transition for the relevant parameter regimes and compute the ground state phase diagram in the vicinity of odd filling Mott states. We find that the superfluid-insulator transition occurs with (a) simultaneous onset of superfluidity of both species or (b) coexistence of Mott insulating state of one species and superfluidity of the other or, in the case of unit filling, (c) complete depopulation of one species. The superfluid-insulator transition can be first order in a large region of the phase diagram. We develop a variational mean-field method which takes into account the effect of second order quantum fluctuations on the superfluid-insulator transition and corroborate the mean-field phase diagram using a quantum Monte Carlo study.
A method of online quantitative interpretation of diffuse reflection profiles of biological tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-02-01
We have developed a method of combined interpretation of spectral and spatial characteristics of diffuse reflection of biological tissues, which makes it possible to determine biophysical parameters of the tissue with a high accuracy in real time under conditions of their general variability. Using the Monte Carlo method, we have modeled a statistical ensemble of profiles of diffuse reflection coefficients of skin, which corresponds to a wide variation of its biophysical parameters. On its basis, we have estimated the retrieval accuracy of biophysical parameters using the developed method and investigated the stability of the method to errors of optical measurements. We have shown that it is possible to determine online the concentrations of melanin, hemoglobin, bilirubin, the oxygen saturation of blood, and structural parameters of skin from measurements of its diffuse reflection in the spectral range 450-800 nm at three distances between the radiation source and detector.
A model for simulating random atmospheres as a function of latitude, season, and time
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1977-01-01
An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.
Density, Velocity and Ionization Structure in Accretion-Disc Winds
NASA Technical Reports Server (NTRS)
Sonneborn, George (Technical Monitor); Long, Knox
2004-01-01
This was a project to exploit the unique capabilities of FUSE to monitor variations in the wind-formed spectral lines of the luminous, low-inclination cataclysmic variable (CV) RW Sex. (The original proposal contained two additional objects, but these were not approved.) These observations were intended to allow us to determine the relative roles of density and ionization-state changes in the outflow and to search for spectroscopic signatures of stochastic small-scale structure and shocked gas. By monitoring the temporal behavior of blueward-extended absorption lines with a wide range of ionization potentials and excitation energies, we proposed to track the changing physical conditions in the outflow. We planned to use a new Monte Carlo code to calculate the ionization structure of, and radiative transfer through, the CV wind. The analysis was therefore intended to establish the wind geometry, kinematics and ionization state, both in a time-averaged sense and as a function of time.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Tissue oxygenation and haemodynamics measurement with spatially resolved NIRS
NASA Astrophysics Data System (ADS)
Zhang, Y.; Scopesi, F.; Serra, G.; Sun, J. W.; Rolfe, P.
2010-08-01
We describe the use of Near Infrared Spectroscopy (NIRS) for the non-invasive investigation of changes in haemodynamics and oxygenation of human peripheral tissues. The goal was to measure spatial variations of tissue NIRS oxygenation variables, namely deoxy-haemoglobin (HHb), oxy-haemoglobin (HbO2) and total haemoglobin (HbT), and thereby to evaluate the responses of the peripheral circulation to imposed physiological challenges. We present a skin-fat-muscle heterogeneous tissue model with varying fat thickness up to 15 mm and a Monte Carlo simulation of photon transport within this model. The mean partial path length and the mean photon visit depth in the muscle layer were derived for different source-detector spacings. We constructed NIRS instrumentation comprising light-emitting diodes (LEDs) as light sources at four wavelengths, 735 nm, 760 nm, 810 nm and 850 nm, and sensitive photodiodes (PDs) as the detectors. Source-detector spacing was varied to perform measurements at different depths within forearm tissue. Changes in chromophore concentration in response to venous and arterial occlusion were calculated using the modified Lambert-Beer law. Studies in fat and thin volunteers indicated greater sensitivity in the thinner subjects for the tissue oxygenation measurement in the muscle layer. These results were consistent with those found using the Monte Carlo simulation. Overall, the results of this investigation demonstrate the usefulness of the NIRS instrument for deriving spatial information from biological tissues.
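The modified Lambert-Beer step above amounts to a small least-squares problem: attenuation changes at several wavelengths are expressed as extinction coefficients times concentration changes times pathlength, and the concentration changes are solved for. The extinction coefficients, differential pathlength factor, and measured values in this Python sketch are placeholders, not the calibrated values of the study.

```python
import numpy as np

# Modified Lambert-Beer law: dA(lambda) = sum_i eps_i(lambda) * dC_i * d * DPF,
# solved in least squares for the chromophore concentration changes dC.
wavelengths = [735, 760, 810, 850]                     # nm, as in the instrument
eps = np.array([[1.30, 0.40],                          # [HHb, HbO2] at 735 nm (assumed)
                [1.55, 0.55],                          # 760 nm
                [0.80, 0.90],                          # 810 nm (near isosbestic)
                [0.70, 1.15]])                         # 850 nm
d, dpf = 3.0, 4.5                                      # source-detector spacing (cm), DPF (assumed)

dA = np.array([0.012, 0.015, 0.004, -0.002])           # measured attenuation changes (toy values)
dC, *_ = np.linalg.lstsq(eps * d * dpf, dA, rcond=None)
print("dHHb = %.4f, dHbO2 = %.4f (arbitrary units); dHbT = %.4f"
      % (dC[0], dC[1], dC.sum()))
```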
Joya, Daniel Chirivi
2017-04-18
We present the description of Phrynus calypso sp. nov. from Trinidad and Tobago, and Venezuela. This species is very similar to Phrynus pulchripes (Pocock); however, after examining Colombian specimens of P. pulchripes (ca. type locality), many differences were found. Characters commonly used in diagnoses of Phrynus species are variable and make identification difficult. Differences in a few structures, like pedipalpal spines, may not be enough to provide a useful diagnosis. It is necessary to account for variation in similar species in conjunction, and to select non-overlapping groups of characters. Observations on the variation in both species are presented, pointing out sources of confusion and suggesting alternative characters to support diagnoses. At the moment, details about variation in many species of Phrynus, like that of P. pulchripes, are poorly known, and for this reason a redescription is provided.
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
Honkela, Antti; Valpola, Harri
2004-07-01
The bits-back coding first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993 provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. The bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length in addition to the Bayesian view of misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights to the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views to many parts of the problem such as model comparison and pruning and helps explain many phenomena occurring in learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Thomas I.; Chaudhary, Pankaj; Michaelidesová, Anna
2016-05-01
Purpose: To investigate the clinical implications of a variable relative biological effectiveness (RBE) on proton dose fractionation. Using acute exposures, the current clinical adoption of a generic, constant cell killing RBE has been shown to underestimate the effect of the sharp increase in linear energy transfer (LET) in the distal regions of the spread-out Bragg peak (SOBP). However, experimental data for the impact of dose fractionation in such scenarios are still limited. Methods and Materials: Human fibroblasts (AG01522) at 4 key depth positions on a clinical SOBP of maximum energy 219.65 MeV were subjected to various fractionation regimens with an interfraction period of 24 hours at the Proton Therapy Center in Prague, Czech Republic. Cell killing RBE variations were measured using standard clonogenic assays and were further validated using Monte Carlo simulations and parameterized using a linear quadratic formalism. Results: Significant variations in the cell killing RBE for fractionated exposures along the proton dose profile were observed. RBE increased sharply toward the distal position, corresponding to a reduction in cell sparing effectiveness of fractionated proton exposures at higher LET. The effect was more pronounced at smaller doses per fraction. Experimental survival fractions were adequately predicted using a linear quadratic formalism assuming full repair between fractions. Data were also used to validate a parameterized variable RBE model based on linear α parameter response with LET that showed considerable deviations from clinically predicted isoeffective fractionation regimens. Conclusions: The RBE-weighted absorbed dose calculated using the clinically adopted generic RBE of 1.1 significantly underestimates the biological effective dose from variable RBE, particularly in fractionation regimens with low doses per fraction. Coupled with an increase in effective range in fractionated exposures, our study provides an RBE dataset that can be used by the modeling community for the optimization of fractionated proton therapy.
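The linear quadratic bookkeeping used above (full repair between fractions) is easy to reproduce; the Python sketch below evaluates cell survival for different fractionation schemes at a proximal and a distal SOBP position. The α and β values are illustrative stand-ins for LET-dependent parameters, not the fitted values from the study.

```python
import numpy as np

def surviving_fraction(n_frac, d_per_frac, alpha, beta):
    """Linear-quadratic survival after n_frac fractions, full repair between fractions."""
    return np.exp(-n_frac * (alpha * d_per_frac + beta * d_per_frac ** 2))

# Placeholder parameters: alpha rises with LET toward the distal SOBP edge, beta held fixed
beta = 0.05                                 # Gy^-2 (assumed)
alpha_proximal, alpha_distal = 0.15, 0.35   # Gy^-1 (assumed)

for n_frac, d in [(1, 8.0), (4, 2.0), (8, 1.0)]:    # same 8 Gy total physical dose
    s_prox = surviving_fraction(n_frac, d, alpha_proximal, beta)
    s_dist = surviving_fraction(n_frac, d, alpha_distal, beta)
    print("%d x %.1f Gy: proximal SF = %.3e, distal SF = %.3e" % (n_frac, d, s_prox, s_dist))
```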
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vuong, A; Chow, J
Purpose: The aim of this study is to investigate the variation of bone dose with photon beam energy (keV – MeV) in small-animal irradiation. Dosimetry of homogeneous and inhomogeneous phantoms based on the same mouse computed tomography image set was calculated using DOSCTP and DOSXYZnrc based on the EGSnrc Monte Carlo code. Methods: Monte Carlo simulations for the homogeneous and inhomogeneous mouse phantom irradiated by a 360 degree photon arc were carried out. Mean doses of the bone tissue in the irradiated volumes were calculated at various photon beam energies, ranging from 50 keV to 1.25 MeV. The effect of bone inhomogeneity was examined through the Inhomogeneous Correction Factor (ICF), a dose ratio of the inhomogeneous to the homogeneous medium. Results: From our Monte Carlo results, higher mean bone dose and ICF were found when using kilovoltage photon beams compared to megavoltage. For beam energies ranging from 50 keV to 200 keV, the bone dose was found to be maximum at 50 keV, and decreased significantly from 2.6 Gy to 0.55 Gy when 2 Gy was delivered at the center of the phantom (isocenter). Similarly, the ICF was found to decrease from 4.5 to 1 when the photon beam energy was increased from 50 keV to 200 keV. Both mean bone dose and ICF remained at about 0.5 Gy and 1, respectively, from 200 keV to 1.25 MeV with insignificant variation. Conclusion: It is concluded that to avoid high bone dose in small-animal irradiation, a photon beam energy higher than 200 keV should be used, with the ICF close to one and the bone dose comparable to that of the megavoltage beam, where the photoelectric effect is not dominant.
Sierra, C.A.; Loescher, H.W.; Harmon, M.E.; Richardson, A.D.; Hollinger, D.Y.; Perakis, S.S.
2009-01-01
Interannual variation of carbon fluxes can be attributed to a number of biotic and abiotic controls that operate at different spatial and temporal scales. Type and frequency of disturbance, forest dynamics, and climate regimes are important sources of variability. Assessing the variability of carbon fluxes from these specific sources can enhance the interpretation of past and current observations. Being able to separate the variability caused by forest dynamics from that induced by climate will also give us the ability to determine if the current observed carbon fluxes are within an expected range or whether the ecosystem is undergoing unexpected change. Sources of interannual variation in ecosystem carbon fluxes from three evergreen ecosystems, a tropical, a temperate coniferous, and a boreal forest, were explored using the simulation model STANDCARB. We identified key processes that introduced variation in annual fluxes, but their relative importance differed among the ecosystems studied. In the tropical site, intrinsic forest dynamics contributed ~30% of the total variation in annual carbon fluxes. In the temperate and boreal sites, where many forest processes occur over longer temporal scales than those at the tropical site, climate controlled more of the variation among annual fluxes. These results suggest that climate-related variability affects the rates of carbon exchange differently among sites. Simulations in which temperature, precipitation, and radiation varied from year to year (based on historical records of climate variation) had less net carbon stores than simulations in which these variables were held constant (based on historical records of monthly average climate), a result caused by the functional relationship between temperature and respiration. This suggests that, under a more variable temperature regime, large respiratory pulses may become more frequent and high enough to cause a reduction in ecosystem carbon stores. Our results also show that the variation of annual carbon fluxes poses an important challenge in our ability to determine whether an ecosystem is a source, a sink, or is neutral in regard to CO2 at longer timescales. In simulations where climate change negatively affected ecosystem carbon stores, there was a 20% chance of committing Type II error, even with 20 years of sequential data. © 2009 by the Ecological Society of America.
Norris, Peter M; da Silva, Arlindo M
2016-07-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
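A generic, one-variable Python caricature of the Metropolis step described above follows; the toy prior and likelihood stand in for the multi-layer moisture model and MODIS observables and are not from the paper. It illustrates how the chain can jump from a subsaturated background toward the cloudy (high-moisture) region favored by an observation.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(q):
    """Toy posterior for a single moisture-like variable in (0, 1):
    a weak prior (subsaturated background) times a sharp cloudy observation."""
    if not 0.0 < q < 1.0:
        return -np.inf
    log_prior = -0.5 * ((q - 0.4) / 0.2) ** 2
    log_like = -0.5 * ((q - 0.9) / 0.05) ** 2
    return log_prior + log_like

q, samples = 0.4, []
logp = log_posterior(q)
for _ in range(20_000):
    q_new = q + rng.normal(0.0, 0.05)                # random-walk proposal
    logp_new = log_posterior(q_new)
    if np.log(rng.uniform()) < logp_new - logp:      # Metropolis accept/reject
        q, logp = q_new, logp_new
    samples.append(q)

burned = np.array(samples[5000:])                    # discard burn-in
print("posterior mean %.3f, std %.3f" % (burned.mean(), burned.std()))
```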
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
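A minimal Python sketch of the stochastic collocation idea above, for a single standard-normal uncertain parameter, follows: the model is evaluated at a few Gauss-Hermite nodes and its statistical moments are recovered from the quadrature, compared against brute-force Monte Carlo. The model function is an invented stand-in, not the radiation-diffusion solver of the study.

```python
import numpy as np

def model(xi):
    """Toy nonlinear response to one uncertain standard-normal parameter xi,
    standing in for the deposited energy's dependence on agent concentration."""
    return np.exp(0.4 * xi) / (1.0 + 0.1 * xi ** 2)

# Stochastic collocation: Gauss-Hermite quadrature (probabilists' convention)
nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # 7 collocation points
w = weights / np.sqrt(2.0 * np.pi)                       # normalize against the N(0,1) PDF
mean_sc = np.sum(w * model(nodes))
var_sc = np.sum(w * model(nodes) ** 2) - mean_sc ** 2

# Standard Monte Carlo needs far more model evaluations for similar accuracy
xi = np.random.default_rng(5).normal(size=200_000)
y = model(xi)

print("SC (7 evaluations):    mean %.4f, var %.5f" % (mean_sc, var_sc))
print("MC (200k evaluations): mean %.4f, var %.5f" % (y.mean(), y.var()))
```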
NASA Astrophysics Data System (ADS)
Yang, C.; Zhang, Y. K.; Liang, X.
2014-12-01
The damping effect of an unsaturated-saturated system on tempo-spatial variations of pressure head and specific flux was investigated. The variance and covariance of both pressure head and specific flux in such a system due to a white noise infiltration were obtained by solving the moment equations of water flow in the system and verified with Monte Carlo simulations. It was found that both the pressure head and specific flux in this case are temporally non-stationary. The variance is zero at early time due to the deterministic initial condition used, then increases with time, and approaches an asymptotic limit at late time. Both pressure head and specific flux are also non-stationary in space, since the variance decreases from source to sink. The unsaturated-saturated system behaves as a noise filter and it damps both the pressure head and specific flux, i.e., reduces their variations and enhances their correlation. The effect is stronger in the upper unsaturated zone than in the lower unsaturated zone and the saturated zone. As a noise filter, the unsaturated-saturated system is mainly a low-pass filter, filtering out the high-frequency components in the time series of hydrological variables. The damping effect is much stronger in the unsaturated zone than in the saturated zone.
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
Stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes input variables independency which was proved for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and WWTP effluent follow log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
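The core of the approach above, a mixing equation driven by Monte Carlo samples of log-normal inputs, can be sketched in Python. The EQS value of 0.2 mg/l and the concentration variation coefficient of about 0.42 are taken from the abstract; the discharge statistics, effluent variability, and search grid are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50_000

def lognorm(mean, cv, size):
    """Draw a log-normal sample with a given mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

Q_creek = lognorm(0.30, 0.60, N)      # m^3/s, dry-weather creek discharge (assumed)
C_creek = lognorm(0.10, 0.42, N)      # mg/l, upstream total phosphorus (cv from study)
Q_wwtp = lognorm(0.05, 0.20, N)       # m^3/s, effluent discharge (assumed)

def p90_downstream(c_eff_mean, cv_eff=0.3):
    """90th percentile of the downstream concentration from the mixing equation."""
    C_eff = lognorm(c_eff_mean, cv_eff, N)
    C_mix = (Q_creek * C_creek + Q_wwtp * C_eff) / (Q_creek + Q_wwtp)
    return np.percentile(C_mix, 90)

eqs = 0.2                             # mg/l, probabilistically defined standard (C90)
candidates = np.arange(0.1, 3.0, 0.05)
ok = [c for c in candidates if p90_downstream(c) <= eqs]
print("maximum allowable mean effluent P_tot: %s mg/l"
      % ("%.2f" % max(ok) if ok else "none in range"))
```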
Pharmacogenomics in pediatric rheumatology.
Becker, Mara L
2012-09-01
Despite major advancements in therapeutics, variability in drug response remains a challenge in both adults and children diagnosed with rheumatic disease. The genetic contribution to interindividual variability has emerged as a promising avenue of exploration; however, challenges remain in making this knowledge relevant in the clinical realm. New genetic associations in patients with rheumatic disease have been reported for disease modifying antirheumatic drugs, antimetabolites and biologic drugs. However, many of these findings are in need of replication, and few have taken into account the concept of ontogeny, specific to pediatrics. In the current era in which we practice, genetic variation will undoubtedly contribute to variability in therapeutic response and may be a factor that will ultimately impact individualized care. However, preliminary studies have shown that there are many hurdles that need to be overcome as we explore pharmacogenomic associations specifically in the field of pediatric rheumatology.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented that makes it easy to realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
NASA Astrophysics Data System (ADS)
Najafi, E.; Devineni, N.; Pal, I.; Khanbilvardi, R.
2017-12-01
An understanding of the climate factors that influence the space-time variability of crop yields is important for food security purposes and can help us predict global food availability. In this study, we address how the crop yield trends of countries globally were related to each other during the last several decades, and the main climatic variables that triggered high/low crop yields simultaneously across the world. Robust Principal Component Analysis (rPCA) is used to identify the primary modes of variation in wheat, maize, sorghum, rice, soybean, and barley yields. Relations between these modes of variability and important climatic variables, especially anomalous sea surface temperature (SSTa), are examined from 1964 to 2010. rPCA is also used to identify simultaneous outliers in each year, i.e., systematic high/low crop yields across the globe. The results demonstrate the spatiotemporal patterns of these crop yields and the climate-related events that caused them, as well as the connection of outliers with weather extremes. We find that, among climatic variables, SST has had the most impact on creating simultaneous crop yield variability and yield outliers in many countries. An understanding of this phenomenon can benefit global crop trade networks.
Fermions in Two Dimensions: Scattering and Many-Body Properties
Galea, Alexander; Zielinski, Tash; Gandolfi, Stefano; ...
2017-08-10
Ultracold atomic Fermi gases in two dimensions (2D) are an increasingly popular topic of research. The interaction strength between spin-up and spin-down particles in two-component Fermi gases can be tuned in experiments, allowing for a strongly interacting regime where the gas properties are yet to be fully understood. We have probed this regime for 2D Fermi gases by performing T = 0 ab initio diffusion Monte Carlo calculations. The many-body dynamics are largely dependent on the two-body interactions; therefore, we start with an in-depth look at scattering theory in 2D. We show the partial-wave expansion and its relation to the scattering length and effective range. Then, we discuss our numerical methods for determining these scattering parameters. Here, we close out this discussion by illustrating the details of bound states in 2D. Transitioning to the many-body system, we also use variationally optimized wave functions to calculate ground-state properties of the gas over a range of interaction strengths. We show results for the energy per particle and parametrize an equation of state. We then proceed to determine the chemical potential for the strongly interacting gas.
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
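As a minimal illustration of the Variational Monte Carlo idea mentioned above (Metropolis sampling of |psi|^2 and averaging of a local energy, with a statistical error bar), here is a toy one-dimensional harmonic-oscillator example in Python. It is unrelated to the CASINO or NECI calculations; the trial wavefunction is the textbook Gaussian exp(-a x^2), whose variational minimum is at a = 0.5.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_energy(x, a):
    """Local energy of psi(x) = exp(-a x^2) for the 1D harmonic oscillator (hbar = m = omega = 1)."""
    return a + (0.5 - 2.0 * a ** 2) * x ** 2

def vmc_energy(a, n_steps=200_000, step=1.0):
    """Metropolis sampling of |psi|^2 and averaging of the local energy."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
            x = x_new
        samples.append(local_energy(x, a))
    e = np.array(samples[n_steps // 10:])          # discard a short equilibration
    return e.mean(), e.std() / np.sqrt(len(e))

for a in (0.3, 0.5, 0.7):                          # a = 0.5 gives the exact E = 0.5
    print("a = %.1f: E = %.4f +/- %.4f" % (a, *vmc_energy(a)))
```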
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and 3D structure of the nucleon in particular.
A reliability analysis framework with Monte Carlo simulation for weld structure of crane's beam
NASA Astrophysics Data System (ADS)
Wang, Kefei; Xu, Hongwei; Qu, Fuzheng; Wang, Xin; Shi, Yanjun
2018-04-01
The reliability of a crane product is central to its engineering competitiveness. This paper uses the Monte Carlo method to analyze the reliability of the welded metal structure of a bridge crane whose limit state function is expressed mathematically. We then obtain the minimum reliable weld leg (foot) height for the welds between the cover plate and the web plate of the main beam under different coefficients of variation. This paper provides a new approach and reference for improving the inherent reliability of cranes.
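To make the procedure concrete, the Python sketch below samples a weld capacity R and an acting stress S, evaluates a simple limit state g = R − S, and estimates the failure probability by Monte Carlo counting. The normal distributions, mean values, and coefficients of variation are illustrative assumptions for demonstration, not the crane weld model used in the paper.

```python
import numpy as np

def weld_reliability(n_samples=1_000_000, cov_load=0.10, cov_strength=0.08, seed=0):
    """Monte Carlo reliability for a simple limit state g = R - S.

    R: weld capacity, S: acting stress (both illustrative normal variables).
    Failure occurs when g < 0; reliability is 1 minus the failure probability.
    """
    rng = np.random.default_rng(seed)
    R = rng.normal(loc=140.0, scale=140.0 * cov_strength, size=n_samples)  # MPa, assumed
    S = rng.normal(loc=100.0, scale=100.0 * cov_load, size=n_samples)      # MPa, assumed
    g = R - S                      # limit state function
    p_f = np.mean(g < 0.0)         # Monte Carlo failure probability
    return 1.0 - p_f, p_f

reliability, p_fail = weld_reliability()
print(f"reliability = {reliability:.6f}, failure probability = {p_fail:.2e}")
```

In the setting of the paper, a loop of this kind would be repeated for candidate weld leg heights (which shift the capacity distribution) until the estimated reliability meets the target.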
Monte Carlo simulation of edge placement error
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi
2018-03-01
In the discussion of edge placement error (EPE), we proposed the interactive pattern fidelity error (IPFE) as an indicator for judging pass/fail of integrated patterns. IPFE combines the lower- and upper-layer EPEs (critical dimension, CD, and center of gravity, COG) with overlay, and is determined from the combination of their maximum variations. We obtained the IPFE density function by Monte Carlo simulation. The results show that the standard deviation (σ) of each indicator should be controlled to a 4.0σ level at the semiconductor scale, i.e., on the order of 100 billion patterns per die. Moreover, CD, COG, and overlay were analyzed by analysis of variance (ANOVA), which allows all variations, from wafer to wafer (WTW), pattern to pattern (PTP), line width roughness (LWR), and stochastic pattern noise (SPN), to be discussed on an equal footing. From these results we can determine which process and tools each variation belongs to. The measurement length of LWR is also examined with ANOVA; we propose that the measurement length for IPFE analysis should not be fixed at the micrometer order (e.g., >2 μm) but chosen according to what the device actually requires.
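As an illustration of how such an indicator can be sampled, the sketch below combines assumed lower- and upper-layer CD and COG variations with overlay into an IPFE-like quantity and checks it against a 4.0σ limit. All 1σ budgets, the Gaussian component models, and the sample size (far below the ~10^11 patterns per die discussed above) are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000  # simulated pattern pairs (illustrative, not a full-die population)

# Assumed 1-sigma contributions in nm; real budgets would come from measured
# CD, COG, overlay and LWR/SPN components separated by ANOVA.
sigma_cd_lower, sigma_cd_upper = 0.8, 0.9
sigma_cog_lower, sigma_cog_upper = 0.6, 0.7
sigma_overlay = 1.2

edge_lower = rng.normal(0, sigma_cd_lower, n) / 2 + rng.normal(0, sigma_cog_lower, n)
edge_upper = rng.normal(0, sigma_cd_upper, n) / 2 + rng.normal(0, sigma_cog_upper, n)
ipfe = edge_upper - edge_lower + rng.normal(0, sigma_overlay, n)  # IPFE-like indicator

sigma_ipfe = ipfe.std()
limit = 4.0 * sigma_ipfe                       # 4.0-sigma control limit
fail_rate = np.mean(np.abs(ipfe) > limit)      # fraction of patterns beyond the limit
print(f"sigma(IPFE) = {sigma_ipfe:.2f} nm, fraction beyond 4 sigma = {fail_rate:.2e}")
```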
NASA Astrophysics Data System (ADS)
Curotto, E.
2015-12-01
Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1-20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first "magic number" is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.
NASA Astrophysics Data System (ADS)
Palit, S.; Basak, T.; Mondal, S. K.; Pal, S.; Chakrabarti, S. K.
2013-03-01
X-ray photons emitted during solar flares cause ionization in the lower ionosphere (~ 60 to 100 km) in excess of what is expected from a quiet sun. Very Low Frequency (VLF) radio wave signals reflected from the D region are affected by this excess ionization. In this paper, we reproduce the deviation in VLF signal strength during solar flares by numerical modeling. We use the GEANT4 Monte Carlo simulation code to compute the rate of ionization due to an M-class and an X-class flare. The output of the simulation is then used in a simplified ionospheric chemistry model to calculate the time variation of electron density at different altitudes in the lower ionosphere. The resulting electron density variation profile is then self-consistently used in the LWPC code to obtain the time variation of the VLF signal change. We modeled the VLF signal along the NWC (Australia) to IERC/ICSP (India) propagation path and compared the results with observations. The agreement is found to be very satisfactory.
Analysis of Specular Reflections Off Geostationary Satellites
NASA Astrophysics Data System (ADS)
Jolley, A.
2016-09-01
Many photometric studies of artificial satellites have attempted to define procedures that minimise the size of datasets required to infer information about satellites. However, it is unclear whether deliberately limiting the size of datasets significantly reduces the potential for information to be derived from them. In 2013 an experiment was conducted using a 14 inch Celestron CG-14 telescope to gain multiple night-long, high temporal resolution datasets of six geostationary satellites [1]. This experiment produced evidence of complex variations in the spectral energy distribution (SED) of reflections off satellite surface materials, particularly during specular reflections. Importantly, specific features relating to the SED variations could only be detected with high temporal resolution data. An update is provided regarding the nature of SED and colour variations during specular reflections, including how some of the variables involved contribute to these variations. Results show that care must be taken when comparing observed spectra to a spectral library for the purpose of material identification; a spectral library that uses wavelength as the only variable will be unable to capture changes that occur to a material's reflected spectra with changing illumination and observation geometry. Conversely, colour variations with changing illumination and observation geometry might provide an alternative means of determining material types.
CGRO Guest Investigator Program
NASA Technical Reports Server (NTRS)
Begelman, Mitchell C.
1997-01-01
The following are highlights from the research supported by this grant: (1) Theory of gamma-ray blazars: We studied the theory of gamma-ray blazars, being among the first investigators to propose that the GeV emission arises from Comptonization of diffuse radiation surrounding the jet, rather than from the synchrotron-self-Compton mechanism. In related work, we uncovered possible connections between the mechanisms of gamma-ray blazars and those of intraday radio variability, and have conducted a general study of the role of Compton radiation drag on the dynamics of relativistic jets. (2) A Nonlinear Monte Carlo code for gamma-ray spectrum formation: We developed, tested, and applied the first Nonlinear Monte Carlo (NLMC) code for simulating gamma-ray production and transfer under much more general (and realistic) conditions than are accessible with other techniques. The present version of the code is designed to simulate conditions thought to be present in active galactic nuclei and certain types of X-ray binaries, and includes the physics needed to model thermal and nonthermal electron-positron pair cascades. Unlike traditional Monte-Carlo techniques, our method can accurately handle highly non-linear systems in which the radiation and particle backgrounds must be determined self-consistently and in which the particle energies span many orders of magnitude. Unlike models based on kinetic equations, our code can handle arbitrary source geometries and relativistic kinematic effects. In its first important application following testing, we showed that popular semi-analytic accretion disk corona models for Seyfert spectra are seriously in error, and demonstrated how the spectra can be simulated if the disk is sparsely covered by localized 'flares'.
Diurnal lighting patterns and habitat alter opsin expression and colour preferences in a killifish
Johnson, Ashley M.; Stanis, Shannon; Fuller, Rebecca C.
2013-01-01
Spatial variation in lighting environments frequently leads to population variation in colour patterns, colour preferences and visual systems. Yet lighting conditions also vary diurnally, and many aspects of visual systems and behaviour vary over this time scale. Here, we use the bluefin killifish (Lucania goodei) to compare how diurnal variation and habitat variation (clear versus tannin-stained water) affect opsin expression and the preference to peck at different-coloured objects. Opsin expression was generally lowest at midnight and dawn, and highest at midday and dusk, and this diurnal variation was many times greater than variation between habitats. Pecking preference was affected by both diurnal and habitat variation but did not correlate with opsin expression. Rather, pecking preference matched lighting conditions, with higher preferences for blue at noon and for red at dawn/dusk, when these wavelengths are comparatively scarce. Similarly, blue pecking preference was higher in tannin-stained water where blue wavelengths are reduced. In conclusion, L. goodei exhibits strong diurnal cycles of opsin expression, but these are not tightly correlated with light intensity or colour. Temporally variable pecking preferences probably result from lighting environment rather than from opsin production. These results may have implications for the colour pattern diversity observed in these fish. PMID:23698009
Stochastic evaluation of second-order many-body perturbation energies.
Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So
2012-11-28
With the aid of the Laplace transform, the canonical expression of the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mE_h of the correct values after 10^8 Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other more complex electron-correlation theories.
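A one-dimensional toy version of this weighted Metropolis integration idea (not the actual 13-dimensional MP2 integrals): the integrand f(x) = x²e^(−x²) is divided by a weight w(x) = e^(−x²) whose normalization √π is known analytically, a walker is propagated with the Metropolis algorithm, and the integral is recovered as the normalization times the sampled average of f/w. The exact answer is √π/2.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, step = 200_000, 1.5
x = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    x_new = x + rng.uniform(-step, step)
    # Metropolis acceptance with ratio w(x_new)/w(x) = exp(x^2 - x_new^2)
    if rng.random() < np.exp(x * x - x_new * x_new):
        x = x_new
    samples[i] = x

# Integral estimate: N_w * <f/w> = sqrt(pi) * <x^2> under the normalized weight
estimate = np.sqrt(np.pi) * np.mean(samples**2)
print(f"MC estimate = {estimate:.4f}, exact = {np.sqrt(np.pi) / 2:.4f}")
```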
Fluctuations in food supply drive recruitment variation in a marine fish.
Okamoto, Daniel K; Schmitt, Russell J; Holbrook, Sally J; Reed, Daniel C
2012-11-22
Reproductive rates and survival of young in animal populations figure centrally in generating management and conservation strategies. Model systems suggest that food supply can drive these often highly variable properties, yet for many wild species, quantifying such effects and assessing their implications have been challenging. We used spatially explicit time series of a well-studied marine reef fish (black surfperch Embiotoca jacksoni) and its known prey resources to evaluate the extent to which fluctuations in food supply influenced production of young by adults and survival of young to subadulthood. Our analyses reveal: (i) variable food available to both adults and to their offspring directly produced an order of magnitude variation in the number of young-of-year (YOY) produced per adult and (ii) food available to YOY produced a similar magnitude of variation in their subsequent survival. We also show that such large natural variation in vital rates can significantly alter decision thresholds (biological reference points) important for precautionary management. These findings reveal how knowledge of food resources can improve understanding of population dynamics and reduce risk of overharvest by more accurately identifying periods of low recruitment.
ERIC Educational Resources Information Center
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of 4 latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of nonnormality of the observed…
The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.
ERIC Educational Resources Information Center
Rich, Joseph R.; Boudreau, John W.
1987-01-01
Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…
Calculating Potential Energy Curves with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2014-06-01
Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).
Bürgin, Toni; Furrer, Heinz; Stockar, Rudolf
2016-01-01
The new neopterygian genus Ticinolepis, including two new species T. longaeva and T. crassidens is described from Middle Triassic carbonate platform deposits of the Monte San Giorgio. The anatomy of this fish shows a mosaic of halecomorph and ginglymodian characters and, thus, the new taxon probably represents a basal holostean. During the latest Anisian to earliest Ladinian the two new species coexisted in the intraplatform basin represented by the uppermost Besano Formation, but only T. longaeva sp. nov. inhabited the more restricted basin represented by the Ladinian Meride Limestone (except for the Kalkschieferzone). The more widely distributed type species shows interesting patterns of intraspecific variation including ontogenetic changes and morphological variation over time. The second species presents anatomical features that strongly indicate a strictly durophagous diet. The different distribution of the species is interpreted as a result of habitat partitioning and different adaptability to palaeoenvironmental changes. PMID:27547543
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
A general diagnostic model applied to language testing data.
von Davier, Matthias
2008-11-01
Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution
NASA Astrophysics Data System (ADS)
Wirawati, Ika; Iriawan, Nur; Irhamah
2017-06-01
Hierarchical data structures are common throughout many areas of research. In the past, however, this type of structure was often overlooked in analysis. The appropriate statistical analysis for such data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of HLM. This model assumes that the intercepts of the models at the lowest level vary among those models while their slopes are fixed. The differences in intercepts are assumed to be affected by some variables at the upper level; the intercepts are therefore regressed against those upper-level variables as predictors. The purpose of this paper is to demonstrate the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics at the first (household) level and three characteristics of districts/cities at the second level. The per capita household expenditure data at the first level are captured by a three-parameter Gamma distribution. The model is therefore more complex, owing to the interaction of many parameters representing the hierarchical structure and distribution pattern of the data. To simplify parameter estimation, a computational Bayesian method coupled with a Markov Chain Monte Carlo (MCMC) algorithm and its Gibbs sampling is employed.
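As a rough sketch of the data structure such a two-level random intercept model assumes, the snippet below generates synthetic hierarchical data with district-level intercepts driven by upper-level covariates and a shifted (three-parameter) Gamma error term at the household level. The numbers of units, the covariate effects, and the Gamma shape/scale/location are all hypothetical and do not refer to the Maluku Utara data; fitting would then proceed with an MCMC/Gibbs sampler as described above.

```python
import numpy as np

rng = np.random.default_rng(7)

n_districts, n_households = 8, 200
gamma_shape, gamma_scale, gamma_loc = 2.0, 0.5, -1.0     # assumed three-parameter Gamma

district_x = rng.normal(size=(n_districts, 3))           # three level-2 predictors
intercepts = 1.0 + district_x @ np.array([0.4, -0.2, 0.1]) + rng.normal(0, 0.3, n_districts)

responses = []
for j in range(n_districts):
    hh_x = rng.normal(size=(n_households, 5))             # five level-1 predictors
    eps = gamma_loc + rng.gamma(gamma_shape, gamma_scale, n_households)
    y = intercepts[j] + hh_x @ np.array([0.3, 0.1, -0.2, 0.05, 0.15]) + eps
    responses.append(y)

y = np.concatenate(responses)
print(f"simulated households: {y.size}, mean response: {y.mean():.3f}")
```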
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2013-05-01
Many construction activities can put workers at risk of breathing silica-containing dusts, and there is an important body of literature documenting exposure levels using a task-based strategy. In this study, statistical modeling was used to analyze a data set containing 1466 task-based, personal respirable crystalline silica (RCS) measurements gathered from 46 sources to estimate exposure levels during construction tasks and the effects of determinants of exposure. Monte-Carlo simulation was used to recreate individual exposures from summary parameters, and the statistical modeling involved multimodel inference with Tobit models containing combinations of the following exposure variables: sampling year, sampling duration, construction sector, project type, workspace, ventilation, and controls. Exposure levels by task were predicted based on the median reported duration by activity, the year 1998, absence of source control methods, and an equal distribution of the other determinants of exposure. The model containing all the variables explained 60% of the variability and was identified as the best approximating model. Of the 27 tasks contained in the data set, abrasive blasting, masonry chipping, scabbling concrete, tuck pointing, and tunnel boring had estimated geometric means above 0.1 mg m^-3 based on the exposure scenario developed. Water-fed tools and local exhaust ventilation were associated with a reduction of 71 and 69% in exposure levels compared with no controls, respectively. The predictive model developed can be used to estimate RCS concentrations for many construction activities in a wide range of circumstances.
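One building block of such an analysis, recreating individual task-based exposures from published summary parameters before regression modeling, can be sketched as follows; the geometric mean, geometric standard deviation, and sample size are hypothetical and not taken from the 46 sources used in the study.

```python
import numpy as np

def recreate_exposures(gm, gsd, n, seed=0):
    """Sample n lognormal exposures consistent with a reported geometric mean (gm)
    and geometric standard deviation (gsd)."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n)

# Example: a summarized group of 12 tuck-pointing samples (illustrative values)
samples = recreate_exposures(gm=0.15, gsd=2.5, n=12)   # mg/m^3
print(np.round(samples, 3))
```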
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.; Li, D.
2015-12-01
Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.
Turbulent, Extreme Multi-zone Model for Simulating Flux and Polarization Variability in Blazars
NASA Astrophysics Data System (ADS)
Marscher, Alan P.
2014-01-01
The author presents a model for variability of the flux and polarization of blazars in which turbulent plasma flowing at a relativistic speed down a jet crosses a standing conical shock. The shock compresses the plasma and accelerates electrons to energies up to γmax ≳ 10^4 times their rest-mass energy, with the value of γmax determined by the direction of the magnetic field relative to the shock front. The turbulence is approximated in a computer code as many cells, each with a uniform magnetic field whose direction is selected randomly. The density of high-energy electrons in the plasma changes randomly with time in a manner consistent with the power spectral density of flux variations derived from observations of blazars. The variations in flux and polarization are therefore caused by continuous noise processes rather than by singular events such as explosive injection of energy at the base of the jet. Sample simulations illustrate the behavior of flux and linear polarization versus time that such a model produces. The variations in γ-ray flux generated by the code are often, but not always, correlated with those at lower frequencies, and many of the flares are sharply peaked. The mean degree of polarization of synchrotron radiation is higher and its timescale of variability shorter toward higher frequencies, while the polarization electric vector sometimes randomly executes apparent rotations. The slope of the spectral energy distribution exhibits sharper breaks than can arise solely from energy losses. All of these results correspond to properties observed in blazars.
Daniel B. Fagre; David L. Peterson
2000-01-01
An integrated program of ecosystem modeling and extensive field studies at Glacier and Olympic National Parks has quantified many of the ecological processes affected by climatic variability and disturbance. Models have successfully estimated snow distribution, annual watershed discharge, and stream temperature variation based on seven years of monitoring. Various...
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
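A minimal sketch of the two quantities named above, assuming the common tally-based estimator VOV = Σ(xᵢ − x̄)⁴ / [Σ(xᵢ − x̄)²]² − 1/N for the relative variance of the variance; the paper's exact formulation and its examination of the empirical history score pdf are not reproduced here.

```python
import numpy as np

def mc_convergence_checks(scores):
    """Relative error of the mean and relative variance of the variance (VOV)
    for independent Monte Carlo history scores."""
    x = np.asarray(scores, dtype=float)
    n = x.size
    d = x - x.mean()
    rel_err = x.std(ddof=1) / (np.sqrt(n) * x.mean())     # relative error of the mean
    vov = np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / n      # assumed VOV estimator
    return rel_err, vov

rng = np.random.default_rng(3)
scores = rng.exponential(scale=1.0, size=100_000)         # surrogate history scores
rel_err, vov = mc_convergence_checks(scores)
print(f"relative error = {rel_err:.4f}, VOV = {vov:.2e}")
```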
Population extinction under bursty reproduction in a time-modulated environment
NASA Astrophysics Data System (ADS)
Vilk, Ohad; Assaf, Michael
2018-06-01
In recent years nondemographic variability has been shown to greatly affect dynamics of stochastic populations. For example, nondemographic noise in the form of a bursty reproduction process with an a priori unknown burst size, or environmental variability in the form of time-varying reaction rates, have been separately found to dramatically impact the extinction risk of isolated populations. In this work we investigate the extinction risk of an isolated population under the combined influence of these two types of nondemographic variation. Using the so-called momentum-space Wentzel-Kramers-Brillouin (WKB) approach and accounting for the explicit time dependence in the reaction rates, we arrive at a set of time-dependent Hamilton equations. To this end, we evaluate the population's extinction risk by finding the instanton of the time-perturbed Hamiltonian numerically, whereas analytical expressions are presented in particular limits using various perturbation techniques. We focus on two classes of time-varying environments: periodically varying rates corresponding to seasonal effects and a sudden decrease in the birth rate corresponding to a catastrophe. All our theoretical results are tested against numerical Monte Carlo simulations with time-dependent rates and also against a numerical solution of the corresponding time-dependent Hamilton equations.
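A minimal Monte Carlo sketch of the type of simulation used as a numerical benchmark above: a birth-death process with geometrically distributed reproduction bursts and a sinusoidally modulated per-capita birth rate, simulated with a thinning (rejection) scheme to handle the time-dependent rate. All rates, the burst-size law, the modulation, and the initial population are illustrative assumptions, not the model parameters of the paper.

```python
import numpy as np

def extinction_probability(n0=10, B0=0.55, eps=0.5, omega=2 * np.pi,
                           mean_burst=2.0, t_max=30.0, n_runs=500, seed=0):
    """Estimate P(extinction by t_max) for a bursty birth-death process with a
    periodically modulated birth rate, using thinning for time-dependent rates."""
    rng = np.random.default_rng(seed)
    p_burst = 1.0 / mean_burst                  # geometric burst size on {1, 2, ...}
    extinct = 0
    for _ in range(n_runs):
        n, t = n0, 0.0
        while n > 0 and t < t_max:
            rate_max = n * (B0 * (1 + eps) + 1.0)          # upper bound on total rate
            t += rng.exponential(1.0 / rate_max)
            birth_rate = n * B0 * (1 + eps * np.sin(omega * t))
            death_rate = n * 1.0                           # per-capita death rate = 1
            u = rng.random() * rate_max
            if u < birth_rate:
                n += rng.geometric(p_burst)                # bursty reproduction event
            elif u < birth_rate + death_rate:
                n -= 1
            # otherwise: thinning rejection, no event occurs
        extinct += (n == 0)
    return extinct / n_runs

print(f"P(extinction by t_max) = {extinction_probability():.3f}")
```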
A Bayesian framework to estimate diversification rates and their variation through time and space
2011-01-01
Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling. PMID:22013891
Fernandez, José-Luis; Forder, Julien
2015-03-01
In many countries, public responsibility over the funding and provision of long-term care services is held at the local level. In such systems, long-term care provision is often characterised by significant local variability. Using a panel dataset of local authorities over the period 2002-2012, the paper investigates the underlying causes of variation in gross social care expenditure for older people in England. The analysis distinguishes between factors outside the direct control of policy makers, local preferences and local policy spillovers. The results indicate that local demand and supply factors, and to a much lesser extent local political preferences and spatial policy spillovers, explain a large majority of the observed variation in expenditure. Copyright © 2015 John Wiley & Sons, Ltd.
Sun, Wei; Huang, Guo H; Zeng, Guangming; Qin, Xiaosheng; Yu, Hui
2011-03-01
It is widely known that variation of the C/N ratio is dependent on many state variables during composting processes. This study attempted to develop a genetic algorithm aided stepwise cluster analysis (GASCA) method to describe the nonlinear relationships between the selected state variables and the C/N ratio in food waste composting. The experimental data from six bench-scale composting reactors were used to demonstrate the applicability of GASCA. Within the GASCA framework, GA searched optimal sets of both specified state variables and SCA's internal parameters; SCA established statistical nonlinear relationships between state variables and the C/N ratio; to avoid unnecessary and time-consuming calculation, a proxy table was introduced to save around 70% computational efforts. The obtained GASCA cluster trees had smaller sizes and higher prediction accuracy than the conventional SCA trees. Based on the optimal GASCA tree, the effects of the GA-selected state variables on the C/N ratio were ranged in a descending order as: NH₄+-N concentration>Moisture content>Ash Content>Mean Temperature>Mesophilic bacteria biomass. Such a rank implied that the variation of ammonium nitrogen concentration, the associated temperature and the moisture conditions, the total loss of both organic matters and available mineral constituents, and the mesophilic bacteria activity, were critical factors affecting the C/N ratio during the investigated food waste composting. This first application of GASCA to composting modelling indicated that more direct search algorithms could be coupled with SCA or other multivariate analysis methods to analyze complicated relationships during composting and many other environmental processes. Copyright © 2010 Elsevier B.V. All rights reserved.
Purchase, Craig F; Moreau, Darek T R
2012-01-01
Genetic variation for phenotypic plasticity is ubiquitous and important. However, the scale of such variation including the relative variability present in reaction norms among different hierarchies of biological organization (e.g., individuals, populations, and closely related species) is unknown. Complicating interpretation is a trade-off in environmental scale. As plasticity can only be inferred over the range of environments tested, experiments focusing on fine tuned responses to normal or benign conditions may miss cryptic phenotypic variation expressed under novel or stressful environments. Here, we sought to discern the presence and shape of plasticity in the performance of brown trout sperm as a function of optimal to extremely stressful river pH, and demarcate if the reaction norm varies among genotypes. Our overarching goal was to determine if deteriorating environmental quality increases expressed variation among individuals. A more applied aim was to ascertain whether maintaining sperm performance over a wide pH range could help explain how brown trout are able to invade diverse river systems when transplanted outside of their native range. Individuals differed in their reaction norms of phenotypic expression of an important trait in response to environmental change. Cryptic variation was revealed under stressful conditions, evidenced through increasing among-individual variability. Importantly, data on population averages masked this variability in plasticity. In addition, canalized reaction norms in sperm swimming velocities of many individuals over a very large range in water chemistry may help explain why brown trout are able to colonize a wide variety of habitats. PMID:23145341
Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian
2009-12-01
The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.
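A minimal synthetic illustration of the idea (hypothetical confounder U, binary instrument Z, and a true causal effect of 2.0; not data or an analysis from the article): the naive regression of outcome on treatment is biased by the unmeasured confounder, while the instrumental-variable (Wald) ratio recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

U = rng.normal(size=n)                         # unmeasured confounder
Z = rng.binomial(1, 0.5, size=n).astype(float) # instrument: affects X, not Y directly
X = 0.8 * Z + 1.0 * U + rng.normal(size=n)     # treatment, confounded by U
Y = 2.0 * X + 1.5 * U + rng.normal(size=n)     # outcome, true effect of X is 2.0

naive = np.cov(X, Y)[0, 1] / np.var(X)         # biased by the confounder
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]   # Wald / two-stage least squares estimate
print(f"naive OLS = {naive:.3f}, IV estimate = {iv:.3f}, true effect = 2.0")
```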
NASA Astrophysics Data System (ADS)
Dupuy, Nicolas; Casula, Michele
2018-04-01
By means of the Jastrow correlated antisymmetrized geminal power (JAGP) wave function and quantum Monte Carlo (QMC) methods, we study the ground state properties of the oligoacene series, up to the nonacene. The JAGP is the accurate variational realization of the resonating-valence-bond (RVB) ansatz proposed by Pauling and Wheland to describe aromatic compounds. We show that the long-ranged RVB correlations built in the acenes' ground state are detrimental for the occurrence of open-shell diradical or polyradical instabilities, previously found by lower-level theories. We substantiate our outcome by a direct comparison with another wave function, tailored to be an open-shell singlet (OSS) for long-enough acenes. By comparing on the same footing the RVB and OSS wave functions, both optimized at a variational QMC level and further projected by the lattice regularized diffusion Monte Carlo method, we prove that the RVB wave function always has a lower variational energy and better nodes than the OSS, for all molecular species considered in this work. The entangled multi-reference RVB state acts against the electron edge localization implied by the OSS wave function and weakens the diradical tendency for higher oligoacenes. These properties are reflected by several descriptors, including wave function parameters, bond length alternation, aromatic indices, and spin-spin correlation functions. In this context, we propose a new aromatic index estimator suitable for geminal wave functions. For the largest acenes taken into account, the long-range decay of the charge-charge correlation functions is compatible with a quasi-metallic behavior.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008–2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to model the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlation structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous among provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-“V”-shaped and “V”-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature explains most of the variation of HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still showed high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between HFMD incidence and climatic variables was spatially heterogeneous across provinces. Future research should explore the risk factors that cause the spatially correlated structure or the high variation of HFMD incidence that can be explained by temperature. When analyzing associations between HFMD incidence and climatic variables, spatial heterogeneity among provinces should be evaluated. Moreover, the extra-Poisson multilevel model was capable of modeling the association between overdispersion of HFMD incidence and climatic variables. PMID:26808311
Optimal heavy tail estimation - Part 1: Order selection
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel A.
2017-12-01
The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
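For concreteness, a minimal Hill-type estimate of the characteristic exponent α with a naive scan over the order k is sketched below; this crude plateau search is only a stand-in for the data-adaptive, simulation-based order selector introduced in the paper, and the Pareto sample is synthetic.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]     # descending order statistics
    log_spacings = np.log(x[:k]) - np.log(x[k])          # log excesses over x_(k+1)
    return 1.0 / log_spacings.mean()

rng = np.random.default_rng(5)
alpha_true = 1.5
data = rng.pareto(alpha_true, size=5000) + 1.0           # Pareto tail, exponent 1.5

for k in (50, 100, 200, 400, 800):                       # naive order scan
    print(f"k = {k:4d}  alpha_hat = {hill_estimator(data, k):.3f}")
```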
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D; Jee, K; MacDonald, S
Purpose: Increased risk of coronary artery disease has been documented for patients treated with radiation for left-sided breast cancer. Proton therapy (PRT) has been shown to significantly decrease cardiac irradiation; however, variations in relative biological effectiveness (RBE) have been ignored so far. In this study we evaluate the impact of accounting for RBE variations on sensitive structures located within high linear energy transfer (LET) areas (distal end) of the proton treatment fields, for this treatment site. Methods: Three patients treated in our institution with PRT for left-sided breast cancer were selected. All patients underwent reconstructive surgery after mastectomy and were treated to a total dose of 50.4 Gy with beam(s) vertical to the chest wall. Dose and LET distributions were calculated using Monte Carlo (MC-TOPAS - TOol for PArticle Simulation). The LET-based, variable-RBE-weighted dose was compared to the analytical calculation algorithm (ACA) and MC dose distributions for a constant RBE of 1.1, based on volume histograms and mean values for the target, heart and left anterior descending coronary artery (LAD). Results: Assuming a constant RBE and compared to the ACA dose, MC predicted lower mean target and heart doses by 0.5% to 2.7% of the prescription dose. For variable RBE, plan evaluation showed an increased mean target dose by up to 5%. Mean variable-RBE-weighted doses for the LAD ranged from 2.7 to 5.9 Gy(RBE) among patients, an increase of 41%–64.2% compared to the constant-RBE ACA calculation (absolute dose: 1.7–3.9 Gy(RBE)). A smaller increase in mean heart doses was noticed. Conclusion: ACA overestimates the target mean dose by up to 2.7%. However, disregarding variations in RBE may lead to significant underestimation of the dose to sensitive structures at the distal end of the proton treatment field and could thus impact outcome modeling for cardiac toxicities after proton therapy. These results are subject to RBE model and parameter uncertainties.
Miller, John J; Eackles, Michael S.; Stauffer, Jay R; King, Timothy L.
2015-01-01
We characterized variation within the mitochondrial genomes of the invasive silver carp (Hypophthalmichthys molitrix) and bighead carp (H. nobilis) from the Mississippi River drainage by mapping our Next-Generation sequences to their publicly available genomes. Variant detection resulted in 338 single-nucleotide polymorphisms for H. molitrix and 39 for H. nobilis. The much greater genetic variation in H. molitrix mitochondria relative to H. nobilis may be indicative of a greater North American female effective population size of the former. When variation was quantified by gene, many tRNA loci appear to have little or no variability based on our results whereas protein-coding regions were more frequently polymorphic. These results provide biologists with additional regions of DNA to be used as markers to study the invasion dynamics of these species.
NASA Astrophysics Data System (ADS)
Marchesi, Claudio; Jolly, Wayne T.; Lewis, John F.; Garrido, Carlos J.; Proenza, Joaquín. A.; Lidiak, Edward G.
2010-05-01
The Monte del Estado massif is the largest and northernmost serpentinized peridotite belt in southwest Puerto Rico. It is mainly composed of spinel lherzolite and minor harzburgite with variable clinopyroxene modal abundances. Mineral and whole rock major and trace element compositions of peridotites coincide with those of fertile abyssal peridotites from mid ocean ridges. Peridotites lost 2-14 wt% of relative MgO and variable amounts of CaO by serpentinization and seafloor weathering. HREE contents in whole rock indicate that the Monte del Estado peridotites are residues after low to moderate degrees (2-15%) of fractional partial melting in the spinel stability field. However, very low LREE/HREE and MREE/HREE in clinopyroxene cannot be explained by melting models of a spinel lherzolite source and support that the Monte del Estado peridotites experienced initial low fractional melting degrees (~ 4%) in the garnet stability field. The relative enrichment of LREE in whole rock is not due to secondary processes but probably reflects the capture of percolating melt fractions along grain boundaries or as microinclusions in minerals, or the presence of exotic micro-phases in the mineral assemblage. We propose that the Monte del Estado peridotite belt represents a section of ancient Proto-Caribbean (Atlantic) lithospheric mantle originated by seafloor spreading between North and South America in the Late Jurassic-Early Cretaceous. This portion of oceanic lithospheric mantle was subsequently trapped in the forearc region of the Greater Antilles paleo-island arc generated by the northward subduction of the Caribbean plate beneath the Proto-Caribbean ocean. Finally, the Monte del Estado peridotite belt was emplaced in the Early Cretaceous, probably as a result of the change in subduction polarity of the Greater Antilles paleo-island arc, without having been significantly modified by subduction processes.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
NASA Astrophysics Data System (ADS)
Hilburn, Guy Louis
Results from several studies are presented which detail explorations of the physical and spectral properties of low luminosity active galactic nuclei. An initial Sagittarius A* general relativistic magnetohydrodynamic simulation and Monte Carlo radiation transport model suggests accretion rate changes as the dominant flaring mechanism. A similar study on M87 introduces new methods to the Monte Carlo model for increased consistency in highly energetic sources. Again, accretion rate variation seems most appropriate to explain spectral transients. To more closely resolve the mechanisms of particle energization in active galactic nuclei accretion disks, a series of localized shearing box simulations explores the effect of numerical resolution on the development of current sheets. A particular focus on numerically describing converged current sheet formation will provide new methods for consideration of turbulence in accretion disks.
Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2017-01-01
Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…
A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen
2012-01-01
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirlepesov, F.; Shin, J.; Moskvin, V. P.
Purpose: Dose-weighted Linear Energy Transfer (LETd) analysis of critical structures may be useful in understanding the side effects of proton therapy. The objective is to analyze the differences between LETd and dose distributions in brain tumor patients receiving double scattering proton therapy, to quantify LETd variation in critical organs, and to identify beam arrangements contributing to high LETd in critical organs. Methods: Monte Carlo simulations of 9 pediatric brain tumor patients were performed. The treatment plans were reconstructed with the TOPAS Monte Carlo code to calculate LETd and dose. The beam data were reconstructed proximal to the aperture of the double scattering nozzle. The dose and LETd to the target and critical organs including brain stem, optic chiasm, lens, optic nerve, pituitary gland, and hypothalamus were computed for each beam. Results: Greater variability in LETd compared to dose was observed in the brainstem for patients with a variety of tumor types, including 5 patients with tumors located in the posterior fossa. Approximately 20%–44% of the brainstem volume received LETd of 5 keV/µm or greater from beams within gantry angles 180°±30° for 5 patients treated with a 3-beam arrangement. Critical organs received higher LETd when located in the vicinity of the beam distal edge. Conclusion: This study presents a novel strategy for evaluating the impact of proton treatment on critical organs. While the dose to critical organs is confined below the required limits, the LETd may show significant variation. Critical organs in the vicinity of the beam distal edge received higher LETd, which depended on the beam arrangement; e.g., in posterior fossa tumor treatment, the brainstem receives higher LETd from posterior-anterior beams. This study shows the importance of LETd analysis of the radiation impact on critical organs in proton therapy and may be used to explain clinical imaging observations after therapy.
A probabilistic model framework for evaluating year-to-year variation in crop productivity
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Iizumi, T.; Tao, F.
2008-12-01
Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes in crop yield. For maintaining a stable food supply against abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity, rather than the mean changes, is more essential. We propose a new probabilistic model framework based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and is widely used in Japan. The model includes three sub-models describing phenological development, biomass accumulation, and maturing of the rice crop. These processes are formulated to capture the response of the rice plant to weather conditions. The model was originally developed to predict rice growth and yield at the plot (paddy) scale. We applied it to evaluate large-scale rice production while keeping the same model structure; instead, we treated the parameters as stochastic variables. In order to let the model reproduce actual yields at the larger scale, model parameters were determined from the agricultural statistics of each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the model parameters were obtained using Bayesian inference, with an MCMC (Markov Chain Monte Carlo) algorithm used to numerically evaluate Bayes' theorem. For evaluating the year-to-year changes in rice growth and yield under this framework, we first iterate simulations with sets of parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted by the posterior PDFs. We will also present another example for maize productivity in China. The framework proposed here also provides information on uncertainties, possibilities, and limitations regarding future improvements of crop models.
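The workflow described above can be illustrated with a deliberately simplified sketch: a toy linear yield model (not PRYSBI or SIMRIW), a Metropolis sampler standing in for the MCMC step, and an ensemble prediction that propagates the whole posterior rather than a single best-fit parameter. All data and parameter values below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prefecture-level data: a weather covariate and observed yields.
gdd = rng.normal(1500.0, 100.0, size=30)           # growing degree days (toy)
true_a = 3.0e-3
yields = true_a * gdd + rng.normal(0.0, 0.3, 30)   # t/ha (toy)

def log_posterior(a, sigma=0.3):
    if a <= 0.0:
        return -np.inf                              # flat prior restricted to a > 0
    resid = yields - a * gdd
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis sampling of the posterior PDF of the single parameter a.
samples, a = [], 2.0e-3
lp = log_posterior(a)
for _ in range(20000):
    prop = a + rng.normal(0.0, 1.0e-4)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a)
post = np.array(samples[5000:])                     # discard burn-in

# Ensemble prediction for a new year: propagate the posterior, not just its mean.
gdd_new = 1450.0
pred = post * gdd_new
print(f"posterior mean a = {post.mean():.4e}")
print(f"predicted yield: mean = {pred.mean():.2f}, 5-95% = "
      f"{np.percentile(pred, 5):.2f}-{np.percentile(pred, 95):.2f}")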
Valcke, Mathieu; Haddad, Sami
2015-01-01
The objective of this study was to compare the magnitude of interindividual variability in internal dose for inhalation exposure to single versus multiple chemicals. Physiologically based pharmacokinetic models for adults (AD), neonates (NEO), toddlers (TODD), and pregnant women (PW) were used to simulate inhalation exposure to "low" (RfC-like) or "high" (AEGL-like) air concentrations of benzene (Bz) or dichloromethane (DCM), along with various levels of toluene alone or toluene with ethylbenzene and xylene. Monte Carlo simulations were performed and distributions of relevant internal dose metrics of either Bz or DCM were computed. Area under the blood concentration of parent compound versus time curve (AUC)-based variability in AD, TODD, and PW rose for Bz when concomitant "low" exposure to mixtures of increasing complexities occurred (coefficient of variation (CV) = 16-24%, vs. 12-15% for Bz alone), but remained unchanged considering DCM. Conversely, AUC-based CV in NEO fell (15 to 5% for Bz; 12 to 6% for DCM). Comparable trends were observed considering production of metabolites (AMET), except for NEO's CYP2E1-mediated metabolites of Bz, where an increased CV was observed (20 to 71%). For "high" exposure scenarios, Cmax-based variability of Bz and DCM remained unchanged in AD and PW, but decreased in NEO (CV= 11-16% to 2-6%) and TODD (CV= 12-13% to 7-9%). Conversely, AMET-based variability for both substrates rose in every subpopulation. This study analyzed for the first time the impact of multiple exposures on interindividual variability in toxicokinetics. Evidence indicates that this impact depends upon chemical concentrations and biochemical properties, as well as the subpopulation and internal dose metrics considered.
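A toy sketch of the Monte Carlo step (a one-compartment model with invented parameter distributions, not the authors' PBPK models) shows how a coefficient of variation for an internal dose metric such as AUC is obtained from sampled physiological parameters.

import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Hypothetical parameter distributions for an adult population (toy values).
clearance = rng.lognormal(mean=np.log(6.0), sigma=0.25, size=n)   # L/h
volume    = rng.lognormal(mean=np.log(42.0), sigma=0.15, size=n)  # L
dose      = 1.0                                                   # mg, fixed exposure

# For a one-compartment model with first-order elimination, AUC = dose / clearance
# and the initial concentration (a crude Cmax surrogate) is dose / volume.
auc  = dose / clearance
cmax = dose / volume

cv = lambda x: 100.0 * x.std() / x.mean()
print(f"AUC-based CV  = {cv(auc):.1f}%")
print(f"Cmax-based CV = {cv(cmax):.1f}%")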
On the Scatter in the Radius-Luminosity Relationship for Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Kilerci Eser, E.; Vestergaard, M.; Peterson, B. M.; Denney, K. D.; Bentz, M. C.
2015-03-01
We investigate and quantify the observed scatter in the empirical relationship between the broad line region size R and the luminosity of the active galactic nucleus, in order to better understand its origin. This study is motivated by the indispensable role of this relationship in the mass estimation of cosmologically distant black holes, but may also be relevant to the recently proposed application of this relationship for measuring cosmic distances. We study six nearby reverberation-mapped active galactic nuclei (AGNs) for which simultaneous UV and optical monitoring data exist. We also examine the long-term optical luminosity variations of the Seyfert 1 galaxy NGC 5548 and employ Monte Carlo simulations to study the effects of the intrinsic variability of individual objects on the scatter in the global relationship for a sample of ~40 AGNs. We find the scatter in this relationship has a correctable dependence on color. For individual AGNs, the size of the Hβ emitting region has a steeper dependence on the nuclear optical luminosity than on the UV luminosity, which can introduce a scatter of ~0.08 dex into the global relationship, due to the nonlinear relationship between the variations in the ionizing continuum and those in the optical continuum. Also, our analysis highlights the importance of understanding and minimizing the scatter in the relationship traced by the intrinsic variability of individual AGNs since it propagates directly into the global relationship. We find that using the UV luminosity as a substitute for the ionizing luminosity can reduce a sizable fraction of the current observed scatter of ~0.13 dex.
Role of the North Atlantic Ocean in Low Frequency Climate Variability
NASA Astrophysics Data System (ADS)
Danabasoglu, G.; Yeager, S. G.; Kim, W. M.; Castruccio, F. S.
2017-12-01
The Atlantic Ocean is a unique basin with its extensive north-south overturning circulation, referred to as the Atlantic meridional overturning circulation (AMOC). AMOC is thought to represent the dynamical memory of the climate system, playing an important role in decadal and longer time scale climate variability as well as prediction of the Earth's future climate on these time scales via its large heat and salt transports. This oceanic memory is communicated to the atmosphere primarily through the influence of persistent sea surface temperature (SST) variations. Indeed, many modeling studies suggest that ocean circulation, i.e., AMOC, is largely responsible for the creation of coherent SST variability in the North Atlantic, referred to as Atlantic Multidecadal Variability (AMV). AMV has been linked to many (multi)decadal climate variations in, e.g., Sahel and Brazilian rainfall, Atlantic hurricane activity, and Arctic sea-ice extent. In the absence of long, continuous observations, much of the evidence for the ocean's role in (multi)decadal variability comes from model simulations. Although models tend to agree on the role of the North Atlantic Oscillation in creating the density anomalies that precede the changes in ocean circulation, model fidelity in representing variability characteristics, mechanisms, and air-sea interactions remains a serious concern. In particular, there is increasing evidence that models significantly underestimate low frequency variability in the North Atlantic compared to available observations. Such model deficiencies can amplify the relative influence of external or stochastic atmospheric forcing in generating (multi)decadal variability, i.e., AMV, at the expense of ocean dynamics. Here, a succinct overview of the current understanding of the (North) Atlantic Ocean's role in regional and global climate, including some outstanding questions, will be presented. In addition, a few examples of the climate impacts of the AMV via atmospheric teleconnections from a set of coupled simulations, also considering the relative roles of its tropical and extratropical components, will be highlighted.
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
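A self-contained toy sketch of the self-learning idea (our construction, not the authors' code): the target is a small 2D Ising model with an added plaquette coupling, the effective model is a plain nearest-neighbor Ising model learned by linear regression on trial-simulation data, and proposals evolved under the effective model are accepted with the correction that restores detailed balance for the true model.

import numpy as np

rng = np.random.default_rng(2)
L, beta = 12, 0.35

def nn_sum(s):
    return float((np.roll(s, 1, 0) * s + np.roll(s, 1, 1) * s).sum())

def plaq_sum(s):
    return float((s * np.roll(s, 1, 0) * np.roll(s, 1, 1)
                  * np.roll(np.roll(s, 1, 0), 1, 1)).sum())

def e_true(s, J=1.0, K=0.2):          # toy target: Ising plus a plaquette term
    return -J * nn_sum(s) - K * plaq_sum(s)

def sweep(s, efn):
    """One lattice sweep of single-spin Metropolis updates under energy efn."""
    E = efn(s)
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        s[i, j] *= -1
        E_new = efn(s)
        if np.log(rng.random()) < -beta * (E_new - E):
            E = E_new
        else:
            s[i, j] *= -1             # revert the rejected flip
    return s

# 1) Trial simulation with local updates generates training data.
s = rng.choice([-1, 1], size=(L, L))
feats, energies = [], []
for _ in range(200):
    s = sweep(s, e_true)
    feats.append(nn_sum(s))
    energies.append(e_true(s))

# 2) Learn the effective model E_eff = E0 - J_eff * sum_nn by linear regression.
A = np.column_stack([np.ones(len(feats)), -np.array(feats)])
E0, J_eff = np.linalg.lstsq(A, np.array(energies), rcond=None)[0]
e_eff = lambda cfg: E0 - J_eff * nn_sum(cfg)

# 3) Self-learning update: evolve a proposal cheaply under the effective model,
#    then accept/reject with the true-vs-effective energy difference.
def slmc_move(s, n_eff_sweeps=5):
    prop = s.copy()
    for _ in range(n_eff_sweeps):
        prop = sweep(prop, e_eff)
    d_true = e_true(prop) - e_true(s)
    d_eff = e_eff(prop) - e_eff(s)
    if np.log(rng.random()) < -beta * (d_true - d_eff):
        return prop
    return s

for _ in range(20):
    s = slmc_move(s)
print("learned effective coupling J_eff =", round(float(J_eff), 3))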
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized tools that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. It can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.
Effects of a large wildfire on vegetation structure in a variable fire mosaic.
Foster, C N; Barton, P S; Robinson, N M; MacGregor, C I; Lindenmayer, D B
2017-12-01
Management guidelines for many fire-prone ecosystems highlight the importance of maintaining a variable mosaic of fire histories for biodiversity conservation. Managers are encouraged to aim for fire mosaics that are temporally and spatially dynamic, include all successional states of vegetation, and also include variation in the underlying "invisible mosaic" of past fire frequencies, severities, and fire return intervals. However, establishing and maintaining variable mosaics in contemporary landscapes is subject to many challenges, one of which is deciding how the fire mosaic should be managed following the occurrence of large, unplanned wildfires. A key consideration for this decision is the extent to which the effects of previous fire history on vegetation and habitats persist after major wildfires, but this topic has rarely been investigated empirically. In this study, we tested to what extent a large wildfire interacted with previous fire history to affect the structure of forest, woodland, and heath vegetation in Booderee National Park in southeastern Australia. In 2003, a summer wildfire burned 49.5% of the park, increasing the extent of recently burned vegetation (<10 yr post-fire) to more than 72% of the park area. We tracked the recovery of vegetation structure for nine years following the wildfire and found that the strength and persistence of fire effects differed substantially between vegetation types. Vegetation structure was modified by wildfire in forest, woodland, and heath vegetation, but among-site variability in vegetation structure was reduced only by severe fire in woodland vegetation. There also were persistent legacy effects of the previous fire regime on some attributes of vegetation structure including forest ground and understorey cover, and woodland midstorey and overstorey cover. For example, woodland midstorey cover was greater on sites with higher fire frequency, irrespective of the severity of the 2003 wildfire. Our results show that even after a large, severe wildfire, underlying fire histories can contribute substantially to variation in vegetation structure. This highlights the importance of ensuring that efforts to reinstate variation in vegetation fire age after large wildfires do not inadvertently reduce variation in vegetation structure generated by the underlying invisible mosaic. © 2017 by the Ecological Society of America.
Mignon, C.; Tobin, D. J.; Zeitouny, M.; Uzunbajakava, N. E.
2018-01-01
Finding a path towards a more accurate prediction of light propagation in human skin remains an aspiration of biomedical scientists working on cutaneous applications both for diagnostic and therapeutic reasons. The objective of this study was to investigate the variability of the optical properties of human skin compartments reported in the literature, to explore the underlying rationale of this variability, to propose a dataset of values that better represents the in vivo case, and to recommend a solution towards a more accurate prediction of light propagation through cutaneous compartments. To achieve this, we undertook a novel, logical yet simple approach. We first reviewed scientific articles published between 1981 and 2013 that reported on skin optical properties, to reveal the spread in the reported quantitative values. We found variations of up to 100-fold. Then we extracted the most trustworthy datasets guided by the rule that the spectral properties should reflect the specific biochemical composition of each of the skin layers. This resulted in the narrowing of the spread in the calculated photon densities to 6-fold. We conclude with a recommendation to use the identified most robust datasets when estimating light propagation in human skin using Monte Carlo simulations. Alternatively, our proposed strategy can be followed to screen any new datasets and determine their biological relevance. PMID:29552418
NASA Astrophysics Data System (ADS)
Liu, Meixian; Xu, Xianli; Sun, Alex
2015-07-01
Climate extremes can cause devastating damage to human society and ecosystems. Recent studies have drawn many conclusions about trends in climate extremes, but few have focused on quantitative analysis of their spatial variability and underlying mechanisms. By using the techniques of overlapping moving windows, the Mann-Kendall trend test, correlation, and stepwise regression, this study examined the spatial-temporal variation of precipitation extremes and investigated the potential key factors influencing this variation in southwestern (SW) China, a globally important biodiversity hot spot and climate-sensitive region. Results showed that the changing trends of precipitation extremes were not spatially uniform, but the spatial variability of these precipitation extremes decreased from 1959 to 2012. Further analysis found that atmospheric circulations rather than local factors (land cover, topographic conditions, etc.) were the main cause of such precipitation extremes. This study suggests that droughts or floods may become more homogenously widespread throughout SW China. Hence, region-wide assessments and coordination are needed to help mitigate the economic and ecological impacts.
Monte Carlo Study of Cosmic-Ray Propagation in the Galaxy and Diffuse Gamma-Ray Production
NASA Astrophysics Data System (ADS)
Huang, C.-Y.; Pohl, M.
This talk presents preliminary results on time-dependent cosmic-ray propagation in the Galaxy from a fully 3-dimensional Monte Carlo simulation. The distribution of cosmic rays (both protons and helium nuclei) in the Galaxy is studied on various spatial scales for both constant and variable cosmic-ray sources. The continuous diffuse gamma-ray emission produced by cosmic rays during propagation is evaluated. The results will be compared with calculations made with other propagation models.
Pluriannual variability of sedimentation on mudflats in a macrotidal estuary
NASA Astrophysics Data System (ADS)
Cuvilliez, A.; Lafite, R.; Deloffre, J.; Massei, N.; Langlois, E.; Sakho, I.
2010-12-01
The Importance of Rotational Time-scales in Accretion Variability
NASA Astrophysics Data System (ADS)
Costigan, Gráinne; Vink, Jorick; Scholz, Aleks; Testi, Leonardo; Ray, Tom
2013-07-01
For the first few million years, one of the dominant sources of emission from a low-mass young stellar object is accretion. This process regulates the flow of material and angular momentum from the surroundings to the central object, and is thought to play an important role in setting the long-term stellar properties. Variability is a well-documented attribute of accretion, and has been observed on time-scales from days to years. However, where these variations come from is not clear. The current model for accretion is magnetospheric accretion, where the stellar magnetic field truncates the disc, allowing the matter to flow from the disc onto the surface of the star. This model allows variations in the accretion rate to come from many different sources, such as the magnetic field, the circumstellar disc, and the interaction of the different parts of the system. We have been studying unbiased samples of accretors in order to identify the dominant time-scales and typical magnitudes of variations. In this way different sources of variations can be excluded and any missing physics in these systems identified. Through our previous work with the Long-term Accretion Monitoring Program (LAMP), we found 10 accretors in the ChaI region whose variability is dominated by short-term variations of 2 weeks. This was the shortest time period between the spectroscopic observations, which spanned 15 months, and it rules out large-scale processes in the disc as the origin of this variability. On the basis of this study we have gone further, studying the accretion signature H-alpha over time-scales of minutes and days in a set of Herbig Ae and T Tauri stars. Using the same methods as in LAMP, we found the dominant time-scales of variations to be days. These samples both point towards the rotation period of these objects as being an important time-scale for accretion variations. This allows us to indicate the most likely sources of these variations.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance-reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
On analyzing ordinal data when responses and covariates are both missing at random.
Rana, Subrata; Roy, Surupa; Das, Kalyan
2016-08-01
On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference in the analysis when a substantial proportion of responses or a covariate or both are missing. Except in a few situations, methods for missing data have previously been considered either for missing responses or for missing covariates, but comparatively little attention has been directed to accounting for both missing responses and missing covariates, which is partly attributable to complexity in modeling and computation. This seems to be important as the precise impact of substantial missing data depends on the association between the two missing-data processes as well. The real difficulty arises when the responses are ordinal by nature. We develop a joint model to take into account simultaneously the association between the ordinal response variable and covariates and also that between the missing data indicators. Such a complex model has been analyzed here by using the Markov chain Monte Carlo approach and also by the Monte Carlo relative likelihood approach. Their performance in estimating the model parameters in finite samples has been examined. We illustrate the application of these two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits. © The Author(s) 2013.
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets are repeatedly drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
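The Monte Carlo assessment of calibration variability can be sketched as follows, with ordinary least squares standing in for the full maximum-likelihood fit and with entirely hypothetical surface specifications: each synthetic data set draws an age and replicate soil values for every calibration surface, the calibration line is refit, and the spread of the refitted parameters is reported.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration surfaces: (mean log-age, sd log-age,
#                                     mean soil index, sd soil index, n replicates)
surfaces = [
    (np.log(1.0e3), 0.15, 0.8, 0.10, 4),
    (np.log(1.0e4), 0.20, 1.6, 0.15, 5),
    (np.log(5.0e4), 0.25, 2.3, 0.20, 3),
    (np.log(1.2e5), 0.30, 2.9, 0.25, 4),
]

slopes, intercepts = [], []
for _ in range(2000):
    x, y = [], []
    for mu_age, sd_age, mu_soil, sd_soil, n in surfaces:
        age = rng.normal(mu_age, sd_age)             # uncertain age of the surface
        soil = rng.normal(mu_soil, sd_soil, size=n)  # replicate soil measurements
        x.extend([age] * n)
        y.extend(soil)
    b, a = np.polyfit(x, y, 1)                       # soil index = a + b * log(age)
    slopes.append(b)
    intercepts.append(a)

print(f"slope     = {np.mean(slopes):.3f} +/- {np.std(slopes):.3f}")
print(f"intercept = {np.mean(intercepts):.3f} +/- {np.std(intercepts):.3f}")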
Polynomial complexity despite the fermionic sign
NASA Astrophysics Data System (ADS)
Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.
2017-04-01
It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.
Use of isoenzyme techniques in forest genetics research
M. Thompson Conkle; W. T. Adams
1977-01-01
Genetic variation among loblolly pine (Pinus taeda L.) samples from a natural stand and among clones in seed orchards was analyzed using simply inherited isozyme markers. Alleles for eleven enzyme loci were found useful for genotyping trees in a natural stand in North Carolina. The pines were highly variable with as many as seven alleles per isozyme...
Discriminant Analysis of Time Series in the Presence of Within-Group Spectral Variability.
Krafty, Robert T
2016-07-01
Many studies record replicated time series epochs from different groups with the goal of using frequency domain properties to discriminate between the groups. In many applications, there exists variation in cyclical patterns from time series in the same group. Although a number of frequency domain methods for the discriminant analysis of time series have been explored, there is a dearth of models and methods that account for within-group spectral variability. This article proposes a model for groups of time series in which transfer functions are modeled as stochastic variables that can account for both between-group and within-group differences in spectra that are identified from individual replicates. An ensuing discriminant analysis of stochastic cepstra under this model is developed to obtain parsimonious measures of relative power that optimally separate groups in the presence of within-group spectral variability. The approach possesses favorable properties in classifying new observations and can be consistently estimated through a simple discriminant analysis of a finite number of estimated cepstral coefficients. Benefits in accounting for within-group spectral variability are empirically illustrated in a simulation study and through an analysis of gait variability.
Müller, Eike H.; Scheichl, Rob; Shardlow, Tony
2015-01-01
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
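A generic MLMC sketch (Euler-Maruyama on geometric Brownian motion with Gaussian increments) illustrates the telescoping-level estimator; the operator-splitting integrators, extrapolation, and discrete-increment refinements described in the abstract are not implemented here, and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
a, b, T, X0, M = 0.05, 0.2, 1.0, 1.0, 4

def level_estimator(level, n_paths):
    """Sample mean of P_l - P_{l-1} from coupled fine/coarse Euler paths."""
    nf = M ** level
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
    Xf = np.full(n_paths, X0)
    for k in range(nf):                       # fine path
        Xf += a * Xf * hf + b * Xf * dW[:, k]
    if level == 0:
        return Xf.mean()
    nc, hc = nf // M, hf * M
    Xc = np.full(n_paths, X0)
    for k in range(nc):                       # coarse path driven by the same noise
        dWc = dW[:, k * M:(k + 1) * M].sum(axis=1)
        Xc += a * Xc * hc + b * Xc * dWc
    return (Xf - Xc).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fewer paths on
# the expensive fine levels.
estimate = sum(level_estimator(l, n_paths=40000 // M ** l + 2000) for l in range(5))
print(f"MLMC estimate of E[X(T)] = {estimate:.4f} (exact: {X0 * np.exp(a * T):.4f})")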
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
NASA Technical Reports Server (NTRS)
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation problems and provide a measure of model performance which can be used in attempts to improve such models.
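The Latin Hypercube step can be sketched with a toy example: stratified uniforms are generated, mapped to hypothetical lognormal rate-coefficient multipliers, and fed to a stand-in "model"; the real study would replace the last line with a full 26-year integration of the 2D model per input set. None of the numbers below are taken from the GSFC model.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_params):
    # One sample per equal-probability stratum, with independently permuted columns.
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u                                   # stratified uniform(0, 1) samples

n, k = 419, 3
u = latin_hypercube(n, k)
log_sigmas = np.array([0.10, 0.20, 0.30])      # hypothetical 1-sigma uncertainties (log units)
multipliers = np.exp(norm.ppf(u) * log_sigmas) # lognormal rate-coefficient multipliers

# Each row would drive one full model integration; here a toy linear response.
trend = -3.0 * multipliers[:, 0] + 1.0 * multipliers[:, 1] - 0.5 * multipliers[:, 2]
print(f"ensemble trend = {trend.mean():.2f} +/- {trend.std():.2f} (1 sigma)")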
The influence of landscape features on road development in a loess region, China.
Bi, Xiaoli; Wang, Hui; Zhou, Rui
2011-10-01
Many ecologists focus on the effects of roads on landscapes, yet few consider how landscapes affect road systems. In this study, therefore, we quantitatively evaluated how land cover, topography, and building density affected the length density, node density, spatial pattern, and location of roads in Dongzhi Yuan, a typical loess region in China. Landscape factors and roads were mapped using images from the SPOT satellite (Système Probatoire d'Observation de la Terre, initiated by the French space agency) and a digital elevation model (DEM). Detrended canonical correspondence analysis (DCCA), a useful ordination technique to explain species-environment relations in community ecology, was applied to evaluate the ways in which landscapes may influence roads. The results showed that both farmland area and building density were positively correlated with road variables, whereas gully density and the coefficient of variation (CV of DEM) showed negative correlations. The CV of DEM, farmland area, grassland area, and building density explained variation in node density, length density, and the spatial pattern of roads, whereas gully density and building density explained variation in variables representing road location. In addition, node density, rather than length density, was the primary road variable affected by landscape variables. The results showed that the DCCA was effective in explaining road-landscape relations. Understanding these relations can provide information for landscape managers and transportation planners.
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
Good News for Borehole Climatology
NASA Astrophysics Data System (ADS)
Rath, Volker; Fidel Gonzalez-Rouco, J.; Goosse, Hugues
2010-05-01
Though the investigation of observed borehole temperatures has proved to be a valuable tool for the reconstruction of ground surface temperature histories, there are many open questions concerning the significance and accuracy of the reconstructions from these data. In particular, the temperature signal of the warming after the Last Glacial Maximum (LGM) is still present in borehole temperature profiles. It influences the relatively shallow boreholes used in current paleoclimate inversions to estimate temperature changes in the last centuries. This is shown using Monte Carlo experiments on past surface temperature change, using plausible distributions for the most important parameters, i.e., amplitude and timing of the glacial-interglacial transition, the prior average temperature, and petrophysical properties. It has been argued that the signature of the last glacial-interglacial transition could be responsible for the high amplitudes of millennial temperature reconstructions. However, in shallow boreholes the additional effect of past climate can be reasonably approximated by a linear variation of temperature with depth, and thus be accommodated by a "biased" background heat flow. This is good news for borehole climatology, but it implies that the geological heat flow values have to be interpreted accordingly. Borehole climate reconstructions from these shallow boreholes most probably underestimate past variability due to the diffusive character of the heat conduction process, and the smoothness constraints necessary for obtaining stable solutions of this ill-posed inverse problem. A simple correction based on subtracting an appropriate prior surface temperature history shows promising results, reducing these errors considerably, also with deeper boreholes, where the heat flow signal cannot be approximated linearly, and improves the comparisons with AOGCM modeling results.
Estimating individual glomerular volume in the human kidney: clinical perspectives
Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.
2012-01-01
Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554
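The resampling idea behind the profile-number question can be sketched with synthetic IGV values (lognormal, with invented parameters, not the study's data): subsample k profiles many times and watch the spread of the subsample mean shrink as k grows.

import numpy as np

rng = np.random.default_rng(6)
igv = rng.lognormal(mean=np.log(3.0e6), sigma=0.35, size=30)  # hypothetical IGV, um^3

full_mean = igv.mean()
for k in (3, 6, 9, 15):
    means = np.array([rng.choice(igv, size=k, replace=False).mean()
                      for _ in range(5000)])
    cv = 100.0 * means.std() / means.mean()
    bias = 100.0 * abs(means.mean() - full_mean) / full_mean
    print(f"k = {k:2d}: CV of subsample mean = {cv:4.1f}%, bias vs full mean = {bias:.2f}%")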
An estimator for the standard deviation of a natural frequency. II.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
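A minimal sketch of the estimator's spirit, using a two-degree-of-freedom spring-mass system and finite-difference sensitivities in place of the paper's modal expressions (all values invented): first-order propagation of member variances is compared against a brute-force Monte Carlo estimate.

import numpy as np

rng = np.random.default_rng(7)

def first_freq(k1, k2, m1=1.0, m2=1.5):
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag([m1, m2])
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(lam[0])                 # first natural frequency, rad/s

k_mean, k_sd = np.array([1000.0, 800.0]), np.array([50.0, 40.0])

# First-order propagation: sigma_w^2 ~ sum_i (dw/dk_i)^2 * sigma_{k_i}^2
eps = 1.0
grads = np.array([
    (first_freq(k_mean[0] + eps, k_mean[1]) - first_freq(k_mean[0] - eps, k_mean[1])) / (2 * eps),
    (first_freq(k_mean[0], k_mean[1] + eps) - first_freq(k_mean[0], k_mean[1] - eps)) / (2 * eps),
])
sigma_first_order = np.sqrt(np.sum((grads * k_sd) ** 2))

# Monte Carlo reference estimate.
samples = np.array([first_freq(*rng.normal(k_mean, k_sd)) for _ in range(5000)])
print(f"first-order sigma = {sigma_first_order:.3f} rad/s, Monte Carlo sigma = {samples.std():.3f} rad/s")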
Chemical accuracy from quantum Monte Carlo for the benzene dimer.
Azadi, Sam; Cohen, R E
2015-09-14
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete-basis-set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Performance of quantum Monte Carlo for calculating molecular bond lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10^−3 Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10^−3 Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
On the simulation of indistinguishable fermions in the many-body Wigner formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2015-01-01
The simulation of quantum systems consisting of interacting, indistinguishable fermions is an extraordinarily difficult mathematical problem which poses formidable numerical challenges. Many sophisticated methods addressing this problem are available which are based on the many-body Schrödinger formalism. Recently a Monte Carlo technique for the resolution of the many-body Wigner equation has been introduced and successfully applied to the simulation of distinguishable, spinless particles. This numerical approach presents several advantages over other methods. Indeed, it is based on an intuitive formalism in which quantum systems are described in terms of a quasi-distribution function, and is highly scalable due to its Monte Carlo nature. In this work, we extend the many-body Wigner Monte Carlo method to the simulation of indistinguishable fermions. To this end, we first show how fermions are incorporated into the Wigner formalism. Then we demonstrate that the Pauli exclusion principle is intrinsic to the formalism. As a matter of fact, a numerical simulation of two strongly interacting fermions (electrons) is performed which clearly shows the appearance of a Fermi (or exchange–correlation) hole in phase space, a clear signature of the presence of the Pauli principle. To conclude, we simulate 4, 8 and 16 non-interacting fermions, isolated in a closed box, and show that, as the number of fermions increases, we gradually recover the Fermi–Dirac statistics, a clear proof of the reliability of our proposed method for the treatment of indistinguishable particles.
Probabilistic Thermal Analysis During Mars Reconnaissance Orbiter Aerobraking
NASA Technical Reports Server (NTRS)
Dec, John A.
2007-01-01
A method for performing a probabilistic thermal analysis during aerobraking has been developed. The analysis is performed on the Mars Reconnaissance Orbiter solar array during aerobraking. The methodology makes use of a response surface model derived from a more complex finite element thermal model of the solar array. The response surface is a quadratic equation which calculates the peak temperature for a given orbit drag pass at a specific location on the solar panel. Five different response surface equations are used, one of which predicts the overall maximum solar panel temperature, and the remaining four predict the temperatures of the solar panel thermal sensors. The variables used to define the response surface can be characterized as either environmental, material property, or modeling variables. Response surface variables are statistically varied in a Monte Carlo simulation. The Monte Carlo simulation produces mean temperatures and 3 sigma bounds as well as the probability of exceeding the designated flight allowable temperature for a given orbit. Response surface temperature predictions are compared with the Mars Reconnaissance Orbiter flight temperature data.
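The response-surface Monte Carlo loop can be sketched with a hypothetical quadratic surrogate and invented input distributions (none of the values below are MRO data): sample the inputs, evaluate the surrogate, and report the mean, 3-sigma bounds, and exceedance probability.

import numpy as np

rng = np.random.default_rng(8)

def peak_temp(q_heat, alpha_abs, eps_ir):
    """Hypothetical quadratic response surface standing in for the finite-element model."""
    return (55.0 + 0.042 * q_heat + 60.0 * alpha_abs - 35.0 * eps_ir
            + 1.2e-5 * q_heat ** 2 + 0.015 * q_heat * alpha_abs)

n = 100000
q     = rng.normal(1200.0, 120.0, n)       # free-stream heating, W/m^2 (toy values)
alpha = rng.normal(0.92, 0.02, n)          # solar absorptivity (toy values)
eps   = rng.normal(0.82, 0.03, n)          # IR emissivity (toy values)

T = peak_temp(q, alpha, eps)
allowable = 175.0                          # hypothetical flight-allowable limit, deg C
print(f"mean = {T.mean():.1f} C, 3-sigma bounds = "
      f"[{T.mean() - 3 * T.std():.1f}, {T.mean() + 3 * T.std():.1f}] C")
print(f"P(T > allowable) = {np.mean(T > allowable):.4f}")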
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curotto, E., E-mail: curotto@arcadia.edu
2015-12-07
Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1–20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first “magic number” is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.
NASA Astrophysics Data System (ADS)
Palit, S.; Basak, T.; Mondal, S. K.; Pal, S.; Chakrabarti, S. K.
2013-09-01
X-ray photons emitted during solar flares cause ionization in the lower ionosphere (~60 to 100 km) in excess of what is expected to occur due to the quiet Sun. Very low frequency (VLF) radio wave signals reflected from the D-region of the ionosphere are affected by this excess ionization. In this paper, we reproduce the deviation in VLF signal strength during solar flares by numerical modeling. We use the GEANT4 Monte Carlo simulation code to compute the rate of ionization due to an M-class flare and an X-class flare. The output of the simulation is then used in a simplified ionospheric chemistry model to calculate the time variation of electron density at different altitudes in the D-region of the ionosphere. The resulting electron density variation profile is then self-consistently used in the LWPC code to obtain the time variation of the change in VLF signal. We modeled the VLF signal along the NWC (Australia) to IERC/ICSP (India) propagation path and compared the results with observations. The agreement is found to be very satisfactory.
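The middle step of the pipeline can be illustrated with a textbook single-species continuity equation, dNe/dt = q(t) - alpha_eff * Ne^2, driven by a flare-shaped ionization rate; this is a deliberately simplified stand-in for the authors' ionospheric chemistry model, with invented coefficients for a single altitude.

import numpy as np

alpha_eff = 3.0e-13          # effective recombination coefficient, m^3/s (toy value)
q_quiet   = 1.0e7            # quiet-time ion-pair production rate, m^-3 s^-1 (toy value)

def q_flare(t, peak=4.0e8, t0=600.0, rise=120.0, decay=600.0):
    """Hypothetical flare-shaped ionization-rate enhancement at one altitude."""
    pulse = np.exp(-((t - t0) / rise) ** 2) if t < t0 else np.exp(-(t - t0) / decay)
    return q_quiet + peak * pulse

dt, t_end = 1.0, 3600.0
Ne = np.sqrt(q_quiet / alpha_eff)          # quiet-time equilibrium electron density
history = []
for step in range(int(t_end / dt)):
    t = step * dt
    Ne += dt * (q_flare(t) - alpha_eff * Ne ** 2)   # forward-Euler time integration
    history.append(Ne)

print(f"quiet Ne ~ {np.sqrt(q_quiet / alpha_eff):.2e} m^-3, peak Ne ~ {max(history):.2e} m^-3")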
Polymer translocation through a nanopore: a showcase of anomalous diffusion.
Milchev, A; Dubbeldam, Johan L A; Rostiashvili, Vakhtang G; Vilgis, Thomas A
2009-04-01
We investigate the translocation dynamics of a polymer chain threaded through a membrane nanopore by a chemical potential gradient that acts on the chain segments inside the pore. By means of diverse methods (scaling theory, fractional calculus, and Monte Carlo and molecular dynamics simulations), we demonstrate that the relevant dynamic variable, the transported number of polymer segments, s(t), displays an anomalous diffusive behavior, both with and without an external driving force being present. We show that in the absence of a drag force the time tau, needed for a macromolecule of length N to thread from the cis to the trans side of a cell membrane, scales with the chain length as tau ~ N^(2/alpha). The anomalous dynamics of the translocation process is governed by a universal exponent alpha = 2/(2nu + 2 - gamma_1), which contains the basic universal exponents of polymer physics, nu (the Flory exponent) and gamma_1 (the surface entropic exponent). A closed analytic expression for the probability of finding s translocated segments at time t, in terms of chain length N and applied drag force f, is derived from the fractional Fokker-Planck equation and shown to provide analytic results for the time variation of the statistical moments <s(t)> and <s^2(t)>. It turns out that the average translocation time scales as tau ~ f^(-1) N^(2/alpha - 1). These results are tested and found to be in perfect agreement with extensive Monte Carlo and molecular dynamics computer simulations.
Yang, Y; Pan, L; Lightstone, F C; Merz, K M
2016-01-01
Potential of mean force simulations, widely applied in Monte Carlo or molecular dynamics simulations, are useful tools for examining the free energy variation as a function of one or more specific reaction coordinates for a given system. Implementing the potential of mean force in simulations of biological processes, such as enzyme catalysis, can help overcome the difficulty of sampling specific regions of the energy landscape and provide useful insights for understanding the catalytic mechanism. Potential of mean force simulations usually require many, possibly parallelizable, short simulations instead of a few extremely long simulations and are therefore fairly manageable for most research facilities. In this chapter, we provide detailed protocols for applying potential of mean force simulations to investigate enzymatic mechanisms for several different enzyme systems. © 2016 Elsevier Inc. All rights reserved.
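A generic sketch of a potential of mean force along one reaction coordinate (a toy double well, not one of the enzyme protocols in the chapter): Metropolis Monte Carlo samples the coordinate, and F(s) = -kT ln P(s) is read from the histogram. Real applications would add umbrella windows along s and reweight (e.g., with WHAM) to cross high barriers.

import numpy as np

rng = np.random.default_rng(9)
kT = 0.6

def U(s):                                   # toy double-well energy along the coordinate
    return (s ** 2 - 1.0) ** 2

s, samples = -1.0, []
E = U(s)
for _ in range(400000):                     # plain Metropolis sampling of s
    trial = s + rng.normal(0.0, 0.3)
    E_trial = U(trial)
    if np.log(rng.random()) < -(E_trial - E) / kT:
        s, E = trial, E_trial
    samples.append(s)

hist, edges = np.histogram(samples, bins=60, range=(-2.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
pmf = -kT * np.log(hist[mask])
pmf -= pmf.min()                            # set the PMF minimum to zero
barrier = pmf[np.abs(centers[mask]).argmin()]
print(f"barrier height from the PMF ~ {barrier:.2f} (energy units; exact double-well barrier = 1.0)")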
X-ray Emission Line Anisotropy Effects on the Isoelectronic Temperature Measurement Method
NASA Astrophysics Data System (ADS)
Liedahl, Duane; Barrios, Maria; Brown, Greg; Foord, Mark; Gray, William; Hansen, Stephanie; Heeter, Robert; Jarrott, Leonard; Mauche, Christopher; Moody, John; Schneider, Marilyn; Widmann, Klaus
2016-10-01
Measurements of the ratio of analogous emission lines from isoelectronic ions of two elements form the basis of the isoelectronic method of inferring electron temperatures in laser-produced plasmas, with the expectation that atomic modeling errors cancel to first order. Helium-like ions are a common choice in many experiments. Obtaining sufficiently bright signals often requires sample sizes with non-trivial line optical depths. For lines with small destruction probabilities per scatter, such as the 1s2p-1s2 He-like resonance line, repeated scattering can cause a marked angular dependence in the escaping radiation. Isoelectronic lines from near-Z equimolar dopants have similar optical depths and similar angular variations, which leads to a near angular-invariance for their line ratios. Using Monte Carlo simulations, we show that possible ambiguities associated with anisotropy in deriving electron temperatures from X-ray line ratios are minimized by exploiting this isoelectronic invariance.
MCNP capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
Salazar, Daniela A; Fontúrbel, Francisco E
2016-09-01
Habitat structure determines species occurrence and behavior. However, human activities are altering natural habitat structure, potentially hampering native species due to the loss of nesting cavities, shelter or movement pathways. The South American temperate rainforest is experiencing an accelerated loss and degradation, compromising the persistence of many native species, and particularly of the monito del monte (Dromiciops gliroides Thomas, 1894), an arboreal marsupial that plays a key role as seed disperser. Aiming to compare 2 contrasting habitats (a native forest and a transformed habitat composed of abandoned Eucalyptus plantations and native understory vegetation), we assessed D. gliroides' occurrence using camera traps and measured several structural features (e.g. shrub and bamboo cover, deadwood presence, moss abundance) at 100 camera locations. Complementarily, we used radio telemetry to assess its spatial ecology, aiming to depict a more complete scenario. Moss abundance was the only significant variable explaining D. gliroides occurrence between habitats, and no structural variable explained its occurrence at the transformed habitat. There were no differences in home range, core area or inter-individual overlapping. In the transformed habitats, tracked individuals used native and Eucalyptus-associated vegetation types according to their abundance. Diurnal locations (and, hence, nesting sites) were located exclusively in native vegetation. The landscape heterogeneity resulting from the vicinity of native and Eucalyptus-associated vegetation likely explains D. gliroides occurrence better than the habitat structure itself, as it may use Eucalyptus-associated vegetation for feeding purposes but depend on native vegetation for nesting. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
ERIC Educational Resources Information Center
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.
Franz, E; Tromp, S O; Rijgersberg, H; van der Fels-Klerx, H J
2010-02-01
Fresh vegetables are increasingly recognized as a source of foodborne outbreaks in many parts of the world. The purpose of this study was to conduct a quantitative microbial risk assessment for Escherichia coli O157:H7, Salmonella, and Listeria monocytogenes infection from consumption of leafy green vegetables in salad from salad bars in The Netherlands. Pathogen growth was modeled in Aladin (Agro Logistics Analysis and Design Instrument) using time-temperature profiles in the chilled supply chain and one particular restaurant with a salad bar. A second-order Monte Carlo risk assessment model was constructed (using @Risk) to estimate the public health effects. The temperature in the studied cold chain was well controlled below 5 degrees C. Growth of E. coli O157:H7 and Salmonella was minimal (17 and 15%, respectively). Growth of L. monocytogenes was considerably greater (194%). Based on first-order Monte Carlo simulations, the average number of cases per year in The Netherlands associated with the consumption of leafy greens in salads from salad bars was 166, 187, and 0.3 for E. coli O157:H7, Salmonella, and L. monocytogenes, respectively. The ranges of the average number of annual cases as estimated by second-order Monte Carlo simulation (with prevalence and number of visitors as uncertain variables) were 42 to 551 for E. coli O157:H7, 81 to 281 for Salmonella, and 0.1 to 0.9 for L. monocytogenes. This study included an integration of modeling pathogen growth in the supply chain of fresh leafy vegetables destined for restaurant salad bars using software designed to model and design logistics and modeling the public health effects using probabilistic risk assessment software.
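A hedged sketch of the two-dimensional (second-order) Monte Carlo structure described above is given below: uncertain inputs (prevalence, number of servings) are sampled in an outer loop, while per-serving variability (contamination, dose, dose-response) is sampled inside it. All distributions and parameter values are invented placeholders, not the study's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_cases(prevalence, servings):
    """Variability dimension: per-serving contamination, dose and dose-response."""
    contaminated = rng.binomial(int(servings), prevalence)
    dose = rng.lognormal(mean=1.0, sigma=1.0, size=contaminated)  # CFU per serving (placeholder)
    p_ill = 1.0 - np.exp(-1e-3 * dose)                            # placeholder exponential dose-response
    return rng.binomial(1, p_ill).sum()

# Uncertainty dimension: prevalence and number of servings are uncertain inputs.
cases = []
for _ in range(500):
    prevalence = rng.beta(2, 200)        # uncertain contamination prevalence
    servings = rng.uniform(5e4, 2e5)     # uncertain annual servings from salad bars
    cases.append(annual_cases(prevalence, servings))

cases = np.array(cases)
print(f"mean annual cases: {cases.mean():.1f}")
print(f"90% uncertainty interval: {np.percentile(cases, [5, 95])}")
```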
Latitudinal variation in population structure of wintering Pacific Black Brant
Schamber, J.L.; Sedinger, J.S.; Ward, D.H.; Hagmeier, K.R.
2007-01-01
Latitudinal variation in population structure during the winter has been reported in many migratory birds, but has been documented in few species of waterfowl. Variation in environmental and social conditions at wintering sites can potentially influence the population dynamics of differential migrants. We examined latitudinal variation in sex and age classes of wintering Pacific Black Brant (Branta bernicla nigricans). Brant are distributed along a wide latitudinal gradient from Alaska to Mexico during the winter. Accordingly, migration distances for brant using different wintering locations are highly variable and winter settlement patterns are likely associated with a spatially variable food resource. We used resightings of brant banded in southwestern Alaska to examine sex and age ratios of birds wintering at Boundary Bay in British Columbia, and at San Quintin Bay, Ojo de Liebre Lagoon, and San Ignacio Lagoon in Baja California from 1998 to 2000. Sex ratios were similar among wintering locations for adults and were consistent with the mating strategy of geese. The distribution of juveniles varied among wintering areas, with greater proportions of juveniles observed at northern (San Quintin Bay and Ojo de Liebre Lagoon) than at southern (San Ignacio Lagoon) locations in Baja California. We suggest that age-related variation in the winter distribution of Pacific Black Brant is mediated by variation in productivity among individuals at different wintering locations and by social interactions among wintering family groups.
Insulin aspart pharmacokinetics: an assessment of its variability and underlying mechanisms.
Rasmussen, Christian Hove; Røge, Rikke Meldgaard; Ma, Zhulin; Thomsen, Maria; Thorisdottir, Rannveig Linda; Chen, Jian-Wen; Mosekilde, Erik; Colding-Jørgensen, Morten
2014-10-01
Insulin aspart (IAsp) is used by many diabetics as a meal-time insulin to control post-prandial glucose levels. As is the case with many other insulin types, the pharmacokinetics (PK), and consequently the pharmacodynamics (PD), is associated with clinical variability, both between and within individuals. The present article identifies the main physiological mechanisms that govern the PK of IAsp following subcutaneous administration and quantifies them in terms of their contribution to the overall variability. CT scanning data from Thomsen et al. (2012) are used to investigate and quantify the properties of the subcutaneous depot. Data from Brange et al. (1990) are used to determine the effects of insulin chemistry in subcutis on the absorption rate. Intravenous (i.v.) bolus and infusion PK data for human insulin are used to understand and quantify the systemic distribution and elimination (Pørksen et al., 1997; Sjöstrand et al., 2002). PK and PD profiles for type 1 diabetics from Chen et al. (2005) are analyzed to demonstrate the effects of IAsp antibodies in terms of bound and unbound insulin. PK profiles from Thorisdottir et al. (2009) and Ma et al. (2012b) are analyzed in the nonlinear mixed effects software Monolix® to determine the presence and effects of the mechanisms described in this article. The distribution of IAsp in the subcutaneous depot shows an initial dilution of approximately a factor of two in a single experiment. Injected insulin hexamers exist in a chemical equilibrium with monomers and dimers, which depends strongly on the degree of dilution in subcutis, the presence of auxiliary substances, and a variety of other factors. Sensitivity to the initial dilution in subcutis can thus be a cause of some of the variability. Temporal variations in the PK are explained by variations in the subcutaneous blood flow. IAsp antibodies are found to be a large contributor to the variability of total insulin PK in a study by Chen et al. (2005), since only the free fraction is eliminated via the receptors. The contribution of these and other sources of variability to the total variability is quantified via a population PK analysis and two recent clinical studies (Thorisdottir et al., 2009; Ma et al., 2012b), which support the presence and significance of the identified mechanisms. IAsp antibody binding, oligomeric transitions in subcutis, and blood flow dependent variations in absorption rate seem to dominate the PK variability of IAsp. It may be possible via e.g. formulation design to reduce some of these variability factors. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander
2015-02-28
Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method applicable to a bounded medium, supplementing analytical solution methods that are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for estimating acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
Pe’eri, Shachak; Thein, May-Win; Rzhanov, Yuri; Celikkol, Barbaros; Swift, M. Robinson
2017-01-01
This paper presents a proof-of-concept optical detector array sensor system to be used in Unmanned Underwater Vehicle (UUV) navigation. The performance of the developed optical detector array was evaluated for its capability to estimate the position, orientation and forward velocity of UUVs with respect to a light source fixed underwater. The evaluations were conducted through Monte Carlo simulations and empirical tests under a variety of motion configurations. Monte Carlo simulations also evaluated the system total propagated uncertainty (TPU) by taking into account variations in the water column turbidity, temperature and hardware noise that may degrade the system performance. Empirical tests were conducted to estimate UUV position and velocity during its navigation to a light beacon. Monte Carlo simulation and empirical results support the use of the detector array system for optics-based position feedback for UUV positioning applications. PMID:28758936
Changes of arthropod diversity across an altitudinal ecoregional zonation in Northwestern Argentina
González-Reyes, Andrea X.; Rodriguez-Artigas, Sandra M.
2017-01-01
This study examined arthropod community patterns over an altitudinal ecoregional zonation that extended through three ecoregions (Yungas, Monte de Sierras y Bolsones, and Puna) and two ecotones (Yungas-Monte and Prepuna) of Northwestern Argentina (altitudinal range of 2,500 m), and evaluated the abiotic and biotic factors and the geographical distance that could influence them. Pitfall trap and suction samples were taken seasonally in 15 sampling sites (1,500–4,000 m a.s.l) during one year. In addition to climatic variables, several soil and vegetation variables were measured in the field. Values obtained for species richness between ecoregions and ecotones and by sampling sites were compared statistically and by interpolation–extrapolation analysis based on individuals at the same sample coverage level. Effects of predictor variables and the similarity of arthropods were shown using non-metric multidimensional scaling, and the resulting groups were evaluated using a multi-response permutation procedure. Polynomial regression was used to evaluate the relationship between altitude with total species richness and those of hyperdiverse/abundant higher taxa and the latter taxa with each predictor variable. The species richness pattern displayed a decrease in species diversity as the elevation increased at the bottom wet part (Yungas) of our altitudinal zonation until the Monte, and a unimodal pattern of diversity in the top dry part (Monte, Puna). Each ecoregion and ecotonal zone evidenced a particular species richness and assemblage of arthropods, but the latter ones displayed a high percentage of species shared with the adjacent ecoregions. The arthropod elevational pattern and the changes of the assemblages were explained by the environmental gradient (especially the climate) in addition to a geographic gradient (the distance of decay of similarity), demonstrating that the species turnover is important to explain the beta diversity along the elevational gradient. This suggests that patterns of diversity and distribution of arthropods are regulated by the dissimilarity of ecoregional environments that establish a wide range of geographic and environmental barriers, coupled with a limitation of species dispersal. Therefore, the arthropods of higher taxa respond differently to the altitudinal ecoregional zonation. PMID:29230361
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Maloney, K.O.; Feminella, J.W.; Mitchell, R.M.; Miller, S.A.; Mulholland, P.J.; Houser, J.N.
2008-01-01
The concept of landscape legacies has been examined extensively in terrestrial ecosystems and has led to a greater understanding of contemporary ecosystem processes. However, although stream ecosystems are tightly coupled with their catchments and, thus, probably are affected strongly by historical catchment conditions, few studies have directly examined the importance of landuse legacies on streams. We examined relationships between historical land use (1944) and contemporary (2000-2003) stream physical, chemical, and biological conditions after accounting for the influences of contemporary land use (1999) and natural landscape (catchment size) variation in 12 small streams at Fort Benning, Georgia, USA. Most stream variables showed strong relationships with contemporary land use and catchment size; however, after accounting for these factors, residual variation in many variables remained significantly related to historical land use. Residual variation in benthic particulate organic matter, diatom density, % of diatoms in Eunotia spp., fish density in runs, and whole-stream gross primary productivity correlated negatively, whereas streamwater pH correlated positively, with residual variation in fraction of disturbed land in catchments in 1944 (i.e., bare ground and unpaved road cover). Residual variation in % recovering land (i.e., early successional vegetation) in 1944 was correlated positively with residual variation in streambed instability, a macroinvertebrate biotic index, and fish richness, but correlated negatively with residual variation in most benthic macroinvertebrate metrics examined (e.g., Chironomidae and total richness, Shannon diversity). In contrast, residual variation in whole-stream respiration rates was not explained by historical land use. Our results suggest that historical land use continues to influence important physical and chemical variables in these streams, and in turn, probably influences associated biota. Beyond providing insight into biotic interactions and their associations with environmental conditions, identification of landuse legacies also will improve understanding of stream impairment in contemporary minimally disturbed catchments, enabling more accurate assessment of reference conditions in studies of biotic integrity and restoration. ?? 2008 by The North American Benthological Society.
Water quality modeling in the dead end sections of drinking water (Supplement)
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variation
Water Quality Modeling in the Dead End Sections of Drinking ...
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of a distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations
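A minimal sketch of a stochastic residential demand generator of the kind described above, assuming a non-homogeneous Poisson pulse-arrival process simulated by thinning; the diurnal rate function, pulse durations, and pulse intensities are illustrative assumptions rather than the calibrated values used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def diurnal_rate(t_hours):
    """Placeholder pulse-arrival rate (pulses/hour) with morning and evening peaks."""
    t = t_hours % 24
    return 2.0 + 4.0 * np.exp(-0.5 * ((t - 7) / 1.5) ** 2) \
               + 3.0 * np.exp(-0.5 * ((t - 19) / 2.0) ** 2)

def generate_pulses(horizon_hours=24.0):
    """Thinning algorithm for a non-homogeneous Poisson process of demand pulses."""
    lam_max = 9.0                          # upper bound on diurnal_rate
    t, pulses = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon_hours:
            break
        if rng.random() < diurnal_rate(t) / lam_max:      # accept with ratio lambda(t)/lam_max
            duration = rng.exponential(60.0)              # seconds, placeholder
            intensity = rng.lognormal(np.log(0.1), 0.3)   # L/s, placeholder
            pulses.append((t, duration, intensity))
    return pulses

pulses = generate_pulses()
print(f"{len(pulses)} pulses; total volume ~ "
      f"{sum(d * q for _, d, q in pulses):.0f} L over 24 h")
```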
NASA Astrophysics Data System (ADS)
Snow, Michael G.; Bajaj, Anil K.
2015-08-01
This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
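A compact, hedged sketch of the sampling-plus-surrogate workflow described above: Latin Hypercube samples of two placeholder parameters drive a toy pull-in-voltage function, a quadratic response surface stands in for the MARS fit, and dense Monte Carlo sampling of the surface yields an output distribution. The model function, parameter ranges, and input PDFs are invented for illustration only.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(3)

def pull_in_voltage(gap, youngs):
    """Toy stand-in for the detailed MEMS switch model (not the authors' physics)."""
    return 25.0 * np.sqrt(youngs / 160e9) * (gap / 2e-6) ** 1.5

# 1) Latin Hypercube design over the two inputs.
sampler = qmc.LatinHypercube(d=2, seed=3)
unit = sampler.random(n=60)
lows, highs = np.array([1.5e-6, 140e9]), np.array([2.5e-6, 180e9])
X = qmc.scale(unit, lows, highs)
y = pull_in_voltage(X[:, 0], X[:, 1])

# 2) Quadratic response surface as a simple surrogate (stand-in for MARS).
def features(X):
    g, e = X[:, 0] / 2e-6, X[:, 1] / 160e9   # scaled inputs
    return np.column_stack([np.ones(len(X)), g, e, g * e, g ** 2, e ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3) Dense Monte Carlo on the surrogate with assumed input PDFs.
mc = np.column_stack([rng.normal(2e-6, 0.1e-6, 100000),
                      rng.normal(160e9, 8e9, 100000)])
v = features(mc) @ coef
print(f"pull-in voltage: mean {v.mean():.2f} V, std {v.std():.2f} V")
```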
Nonlinear probabilistic finite element models of laminated composite shells
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Reddy, J. N.
1993-01-01
A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation with the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data is compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
Mean centering helps alleviate "micro" but not "macro" multicollinearity.
Iacobucci, Dawn; Schneider, Matthew J; Popovich, Deidre L; Bakamitsos, Georgios A
2016-12-01
There seems to be confusion among researchers regarding whether it is good practice to center variables at their means prior to calculating a product term to estimate an interaction in a multiple regression model. Many researchers use mean centered variables because they believe it's the thing to do or because reviewers ask them to, without quite understanding why. Adding to the confusion is the fact that there is also a perspective in the literature that mean centering does not reduce multicollinearity. In this article, we clarify the issues and reconcile the discrepancy. We distinguish between "micro" and "macro" definitions of multicollinearity and show how both sides of such a debate can be correct. To do so, we use proofs, an illustrative dataset, and a Monte Carlo simulation to show the precise effects of mean centering on both individual correlation coefficients as well as overall model indices. We hope to contribute to the literature by clarifying the issues, reconciling the two perspectives, and quelling the current confusion regarding whether and how mean centering can be a useful practice.
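The "micro" effect of mean centering, namely the reduction of the correlation between a predictor and its own product term, can be reproduced in a few lines of simulation; the sketch below uses arbitrary distributions and is not the authors' Monte Carlo design.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000

x1 = rng.normal(5.0, 1.0, n)                # predictors with non-zero means
x2 = 0.4 * x1 + rng.normal(3.0, 1.0, n)

raw_corr = np.corrcoef(x1, x1 * x2)[0, 1]   # predictor vs. raw product term

x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
centered_corr = np.corrcoef(x1c, x1c * x2c)[0, 1]

print(f"corr(x1, x1*x2), raw:        {raw_corr:.3f}")       # typically near 1
print(f"corr(x1c, x1c*x2c), centered: {centered_corr:.3f}")  # much smaller
```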
Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Eisenstein, Daniel J.
2017-07-01
We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.
NASA Astrophysics Data System (ADS)
Najafi, Amin
2014-05-01
Using Monte Carlo simulations, we have calculated mean-square fluctuations in statistical mechanics, such as those of the energy configuration of colloids placed on square 2D periodic substrates and interacting via a long-range screened Coulomb potential on a specific, fixed substrate. Random fluctuations with small deviations from thermodynamic equilibrium arise from the granular structure of the colloids and appear as thermal diffusion with a Gaussian distribution. The variations display the linear form of the Fluctuation-Dissipation Theorem for the energy of particles constituting a canonical ensemble undergoing a continuous diffusion process of colloidal particles. The noise-like variation of the energy per particle and of the order parameter versus the Brownian displacement (the sum of a large number of random particle steps) in the low-temperature phase likewise indicates a Markovian process in the colloidal particle configuration.
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to the aggressiveness of the environmental factor, the variation of the dynamic load, the degeneration of the material property and the wear of the machine surface, parameters related to the structure are distinctly time-variant. A typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties with limited information. Two methods are then presented for the dynamic response analysis of the structure under the time-variant interval process model. The first one is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second one is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by the Chebyshev polynomials which can be efficiently calculated, and then the variational range of the dynamic response is estimated according to the samples yielded by the Monte Carlo method. To solve the dependency phenomenon of the interval operation, the affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE is verified by two numerical examples, including a spring-mass-damper system and a shell structure.
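A hedged, one-dimensional sketch of the MCM-CPE idea follows: an assumed "expensive" dynamic response is approximated by a Chebyshev expansion over the interval variable, and the cheap surrogate is then sampled by Monte Carlo to estimate the response's variational range. The response function and interval bounds are placeholders, and the affine-arithmetic handling of dependency is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k):
    """Placeholder 'expensive' dynamic response, e.g. peak displacement vs. stiffness."""
    omega_sq = k
    return 1.0 / np.sqrt((k - 4.0) ** 2 + 0.2 * omega_sq)

k_lo, k_hi = 3.0, 6.0                      # interval variable bounds (assumed)

# Fit a low-order Chebyshev expansion on the interval using a few model runs.
nodes = 0.5 * (k_lo + k_hi) + 0.5 * (k_hi - k_lo) * np.cos(
    np.pi * (np.arange(9) + 0.5) / 9)      # Chebyshev nodes mapped to [k_lo, k_hi]
cheb = C.Chebyshev.fit(nodes, response(nodes), deg=8, domain=[k_lo, k_hi])

# Monte Carlo on the surrogate to estimate the variational range of the response.
rng = np.random.default_rng(0)
samples = rng.uniform(k_lo, k_hi, 200000)
approx = cheb(samples)
print(f"estimated response range: [{approx.min():.3f}, {approx.max():.3f}]")
print(f"direct model values at the fit nodes: [{response(nodes).min():.3f}, "
      f"{response(nodes).max():.3f}]")
```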
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
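The budgeting logic, random draws of per-maneuver uncertainties summed over the mission lifetime with a high percentile taken as the budget, can be sketched generically as below; the maneuver cadence, magnitudes, and error terms are invented placeholders and are not JWST values.

```python
import numpy as np

rng = np.random.default_rng(2018)

N_TRIALS = 5000
N_MANEUVERS = int(10.5 * 365.25 / 21.0)   # ~21-day SK cadence over 10.5 years (assumed)

budgets = []
for _ in range(N_TRIALS):
    nominal = rng.gamma(shape=2.0, scale=0.05, size=N_MANEUVERS)   # m/s per maneuver, placeholder
    srp_error = np.abs(rng.normal(0.0, 0.02, size=N_MANEUVERS))    # unmodeled SRP variation
    exec_error = np.abs(rng.normal(0.0, 0.01, size=N_MANEUVERS))   # thruster execution error
    budgets.append(np.sum(nominal + srp_error + exec_error))

budgets = np.array(budgets)
print(f"mean SK delta-V: {budgets.mean():.1f} m/s")
print(f"99th-percentile budget: {np.percentile(budgets, 99):.1f} m/s")
```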
USDA-ARS?s Scientific Manuscript database
Seasonal variation of vitamin C in fresh fruits and vegetables is not reflected in food composition database average values, yet many factors influence content and retention. Fresh fruits and vegetables were sampled on three occasions in each season, from the same local retail outlets, for one or tw...
Added value from 576 years of tree-ring records in the prediction of the Great Salt Lake level
Robert R. Gillies; Oi-Yu Chung; S.-Y. Simon Wang; R. Justin DeRose; Yan Sun
2015-01-01
Predicting lake level fluctuations of the Great Salt Lake (GSL) in Utah - the largest terminal salt-water lake in the Western Hemisphere - is critical from many perspectives. The GSL integrates both climate and hydrological variations within the region and is particularly sensitive to low-frequency climate cycles. Since most hydroclimate variable records cover...
Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Zachery A. Holden; Michael A. Crimmins; Samuel A. Cushman; Jeremy S. Littell
2010-01-01
Accurate, fine spatial resolution predictions of surface air temperatures are critical for understanding many hydrologic and ecological processes. This study examines the spatial and temporal variability in nocturnal air temperatures across a mountainous region of Northern Idaho. Principal components analysis (PCA) was applied to a network of 70 Hobo temperature...
Monitoring of oceanographic properties of Glacier Bay, Alaska 2004
Madison, Erica N.; Etherington, Lisa L.
2005-01-01
Glacier Bay is a recently (300 years ago) deglaciated fjord estuarine system that has multiple sills, very deep basins, tidewater glaciers, and many streams. Glacier Bay experiences a large amount of runoff, high sedimentation, and large tidal variations. High freshwater discharge due to snow and ice melt and the presence of the tidewater glaciers makes the bay extremely cold. There are many small- and large-scale mixing and upwelling zones at sills, glacial faces, and streams. The complex topography and strong currents lead to highly variable salinity, temperature, sediment, primary productivity, light penetration, stratification levels, and current patterns within a small area. The oceanographic patterns within Glacier Bay drive a large portion of the spatial and temporal variability of the ecosystem. It has been widely recognized by scientists and resource managers in Glacier Bay that a program to monitor oceanographic patterns is essential for understanding the marine ecosystem and to differentiate between anthropogenic disturbance and natural variation. This year’s sampling marks the 12th continuous year of monitoring the oceanographic conditions at 23 stations along the primary axes within Glacier Bay, AK, making this a very unique and valuable data set in terms of its spatial and temporal coverage.
The Impact of Variable Wind Shear Coefficients on Risk Reduction of Wind Energy Projects
Thomson, Allan; Yoonesi, Behrang; McNutt, Josiah
2016-01-01
Estimation of wind speed at proposed hub heights is typically achieved using a wind shear exponent or wind shear coefficient (WSC), which describes the variation in wind speed as a function of height. The WSC is subject to temporal variation at low and high frequencies, ranging from diurnal and seasonal variations to disturbance caused by weather patterns; however, in many cases, it is assumed that the WSC remains constant. This assumption creates significant error in resource assessment, increasing uncertainty in projects and potentially significantly impacting the ability to control grid-connected wind generators. This paper contributes to the body of knowledge relating to the evaluation and assessment of wind speed, with particular emphasis on the development of techniques to improve the accuracy of estimated wind speed above measurement height. It presents an evaluation of the use of a variable wind shear coefficient methodology based on a distribution of wind shear coefficients which have been implemented in real time. The results indicate that a VWSC provides a more accurate estimate of wind at hub height, ranging from 41% to 4% reduction in root mean squared error (RMSE) between predicted and actual wind speeds when using a variable wind shear coefficient at heights ranging from 33% to 100% above the highest actual wind measurement. PMID:27872898
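The contrast between a constant and a variable wind shear coefficient can be illustrated with the standard power-law extrapolation v(h2) = v(h1) * (h2/h1)^alpha; in the sketch below the "true" time-varying exponent, the measurement heights, and the hour-of-day conditioning of the variable coefficient are assumed for illustration and do not reproduce the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(11)

h_meas, h_hub = 50.0, 100.0                    # measurement and hub heights (assumed, m)
hours = np.arange(8760)
hour_of_day = hours % 24

# "True" shear exponent with a diurnal cycle plus noise (illustrative only).
alpha_true = 0.20 + 0.10 * np.cos(2 * np.pi * (hour_of_day - 3) / 24) \
             + rng.normal(0.0, 0.03, hours.size)
v_meas = rng.weibull(2.0, hours.size) * 8.0    # measured wind speed at h_meas
v_hub_true = v_meas * (h_hub / h_meas) ** alpha_true

# Constant WSC: one annual-mean exponent for all hours.
v_const = v_meas * (h_hub / h_meas) ** alpha_true.mean()

# Variable WSC: an hour-of-day conditioned exponent estimated from the same record.
alpha_by_hour = np.array([alpha_true[hour_of_day == h].mean() for h in range(24)])
v_var = v_meas * (h_hub / h_meas) ** alpha_by_hour[hour_of_day]

rmse = lambda est: np.sqrt(np.mean((est - v_hub_true) ** 2))
print(f"RMSE with constant WSC: {rmse(v_const):.3f} m/s")
print(f"RMSE with variable WSC: {rmse(v_var):.3f} m/s")
```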
Benchmarking child and adolescent mental health organizations.
Brann, Peter; Walter, Garry; Coombs, Tim
2011-04-01
This paper describes aspects of the child and adolescent benchmarking forums that were part of the National Mental Health Benchmarking Project (NMHBP). These forums enabled participating child and adolescent mental health organizations to benchmark themselves against each other, with a view to understanding variability in performance against a range of key performance indicators (KPIs). Six child and adolescent mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant KPIs. They also undertook two special projects designed to help them understand the variation in performance on given KPIs. There was considerable inter-organization variability on many of the KPIs. Even within organizations, there was often substantial variability over time. The variability in indicator data raised many questions for participants. This challenged participants to better understand and describe their local processes, prompted them to collect additional data, and stimulated them to make organizational comparisons. These activities fed into a process of reflection about their performance. Benchmarking has the potential to illuminate intra- and inter-organizational performance in the child and adolescent context.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Suarez, M. J.; Heiser, M.
1998-01-01
In an earlier GCM study, we showed that interactive land surface processes generally contribute more to continental precipitation variance than do variable sea surface temperatures (SSTs). A new study extends this result through an analysis of 16-member ensembles of multi-decade GCM simulations. We can now show that in many regions, although land processes determine the amplitude of the interannual precipitation anomalies, variable SSTs nevertheless control their timing. The GCM data can be processed into indices that describe geographical variations in (1) the potential for seasonal-to-interannual prediction, and (2) the extent to which the predictability relies on the proper representation of land-atmosphere feedback.
Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Alexander, Steven; Coldwell, R. L.
2015-03-01
The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.
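As a non-relativistic, hydrogen-only illustration of the variational Monte Carlo machinery referred to in the poster (the relativistic matrix elements themselves are not reproduced here), the sketch below runs a Metropolis walk on |psi|^2 for the trial wavefunction psi = exp(-a*r) and averages the local energy; the step size, walk length, and parameter grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def local_energy(r, a):
    """Local energy of psi = exp(-a*r) for the hydrogen atom (Hartree units)."""
    return -0.5 * a * a + (a - 1.0) / r

def vmc_energy(a, n_steps=20000, step=0.5):
    """Metropolis sampling of |psi|^2 and averaging of the local energy."""
    pos = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(pos)
    energies = []
    for i in range(n_steps):
        trial = pos + rng.uniform(-step, step, 3)
        r_trial = np.linalg.norm(trial)
        if rng.random() < np.exp(-2.0 * a * (r_trial - r)):   # |psi_trial / psi|^2
            pos, r = trial, r_trial
        if i > 1000:                                          # discard equilibration
            energies.append(local_energy(r, a))
    return np.mean(energies), np.std(energies) / np.sqrt(len(energies))

for a in (0.8, 1.0, 1.2):
    e, err = vmc_energy(a)
    print(f"a = {a:.1f}: E = {e:.4f} +/- {err:.4f} Hartree (exact minimum -0.5 at a = 1)")
```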
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified with original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as, elastic modulus, shear modulus, Poisson ratio, and mass density are considered. Power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and computationally found efficient as compared to conventional Monte Carlo simulation.
Status of the EDDA experiment at COSY
NASA Astrophysics Data System (ADS)
Scobel, W.; EDDA Collaboration; Bisplinghoff, J.; Bollmann, R.; Cloth, P.; Dohrmann, F.; Dorner, G.; Drüke, V.; Ernst, J.; Eversheim, P. D.; Filges, D.; Gasthuber, M.; Gebel, R.; Groß, A.; Groß-Hardt, R.; Hinterberger, F.; Jahn, R.; Lahr, U.; Langkau, R.; Lippert, G.; Mayer-Kuckuk, T.; Maschuw, R.; Mertler, G.; Metsch, B.; Mosel, F.; Paetz gen Schieck, H.; Petry, H. R.; Prasuhn, D.; von Przewoski, B.; Rohdjeß, H.; Rosendaal, D.; von Rossen, P.; Scheid, H.; Schirm, N.; Schwandt, F.; Stein, H.; Theis, D.; Weber, J.; Wiedmann, W.; Woller, K.; Ziegler, R.
1993-07-01
The EDDA experiment is designed to study p + p excitation functions with high energy resolution and narrow step size in the kinetic energy range from 250 MeV to 2500 MeV at the Cooler Synchrotron COSY. Measurements during the accelertion phase in conjunction with internal targets will allow to achieve a fast and precise energy variation. Prototypes of the detector elements and the fiber target have been extensively tested with proton and electron beams; the detector performance and trigger efficiency have been studied in Monte Carlo simulations. In this contribution, results concerning detector design, prototype studies, Monte Carlo simulations and the anticipated detector resolutions will be reported.
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.
2015-01-01
Constituent properties, along with volume fraction, have a first order effect on the microscale fields within a composite material and influence the macroscopic response. Therefore, there is a need to assess the significance of stochastic variation in the constituent properties of composites at the higher scales. The effect of variability in the parameters controlling the time-dependent behavior, in a unidirectional SCS-6 SiC fiber-reinforced RBSN matrix composite lamina, on the residual stresses induced during processing is investigated numerically. The generalized method of cells micromechanics theory is utilized to model the ceramic matrix composite lamina using a repeating unit cell. The primary creep phases of the constituents are approximated using a Norton-Bailey, steady state, power law creep model. The effect of residual stresses on the proportional limit stress and strain to failure of the composite is demonstrated. Monte Carlo simulations were conducted using a normal distribution for the power law parameters and the resulting residual stress distributions were predicted.
Online quantitative analysis of multispectral images of human body tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.
2013-08-01
A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, C. F.; Zhao, T. Z.; Behm, K.
Here, bright and ultrashort duration x-ray pulses can be produced by through betatron oscillations of electrons during laser wakefield acceleration (LWFA). Our experimental measurements using the Hercules laser system demonstrate a dramatic increase in x-ray flux for interaction distances beyond the depletion/dephasing lengths, where the initial electron bunch injected into the first wake bucket catches up with the laser pulse front and the laser pulse depletes. A transition from an LWFA regime to a beam-driven plasma wakefield acceleration regime consequently occurs. The drive electron bunch is susceptible to the electron-hose instability and rapidly develops large amplitude oscillations in its tail,more » which leads to greatly enhanced x-ray radiation emission. We measure the x-ray flux as a function of acceleration length using a variable length gas cell. 3D particle-in-cell simulations using a Monte Carlo synchrotron x-ray emission algorithm elucidate the time-dependent variations in the radiation emission processes.« less
NASA Astrophysics Data System (ADS)
Dong, C. F.; Zhao, T. Z.; Behm, K.; Cummings, P. G.; Nees, J.; Maksimchuk, A.; Yanovsky, V.; Krushelnick, K.; Thomas, A. G. R.
2018-04-01
Bright and ultrashort duration x-ray pulses can be produced by through betatron oscillations of electrons during laser wakefield acceleration (LWFA). Our experimental measurements using the Hercules laser system demonstrate a dramatic increase in x-ray flux for interaction distances beyond the depletion/dephasing lengths, where the initial electron bunch injected into the first wake bucket catches up with the laser pulse front and the laser pulse depletes. A transition from an LWFA regime to a beam-driven plasma wakefield acceleration regime consequently occurs. The drive electron bunch is susceptible to the electron-hose instability and rapidly develops large amplitude oscillations in its tail, which leads to greatly enhanced x-ray radiation emission. We measure the x-ray flux as a function of acceleration length using a variable length gas cell. 3D particle-in-cell simulations using a Monte Carlo synchrotron x-ray emission algorithm elucidate the time-dependent variations in the radiation emission processes.
Automated design evolution of stereochemically randomized protein foldamers
NASA Astrophysics Data System (ADS)
Ranbhor, Ranjit; Kumar, Anil; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel
2018-05-01
Diversification of chain stereochemistry opens up the possibilities of an ‘in principle’ increase in the design space of proteins. This huge increase in the sequence and consequent structural variation is aimed at the generation of smart materials. To diversify protein structure stereochemically, we introduced L- and D-α-amino acids as the design alphabet. With a sequence design algorithm, we explored the usage of specific variables such as chirality and the sequence of this alphabet in independent steps. With molecular dynamics, we folded stereochemically diverse homopolypeptides and evaluated their ‘fitness’ for possible design as protein-like foldamers. We propose a fitness function to prune the most optimal fold among 1000 structures simulated with an automated repetitive simulated annealing molecular dynamics (AR-SAMD) approach. The highly scored poly-leucine fold with sequence lengths of 24 and 30 amino acids were later sequence-optimized using a Dead End Elimination cum Monte Carlo based optimization tool. This paper demonstrates a novel approach for the de novo design of protein-like foldamers.
USING LEAKED POWER TO MEASURE INTRINSIC AGN POWER SPECTRA OF RED-NOISE TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, S. F.; Xue, Y. Q., E-mail: zshifu@mail.ustc.edu.cn, E-mail: xuey@ustc.edu.cn
Fluxes emitted at different wavebands from active galactic nuclei (AGNs) fluctuate at both long and short timescales. The variation can typically be characterized by a broadband power spectrum, which exhibits a red-noise process at high frequencies. The standard method of estimating the power spectral density (PSD) of AGN variability is easily affected by systematic biases such as red-noise leakage and aliasing, in particular when the observation spans a relatively short period and is gapped. Focusing on the high-frequency PSD that is strongly distorted due to red-noise leakage and usually not significantly affected by aliasing, we develop a novel and observable normalized leakage spectrum (NLS), which sensitively describes the effects of leaked red-noise power on the PSD at different temporal frequencies. Using Monte Carlo simulations, we demonstrate how an AGN underlying PSD sensitively determines the NLS when there is severe red-noise leakage, and thereby how the NLS can be used to effectively constrain the underlying PSD.
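A small numerical experiment along these lines is sketched below: generate a long light curve with a steep power-law PSD (Gaussian Fourier amplitudes in the style of Timmer and Koenig), then compare the periodogram slope of a short segment with the input slope to expose leakage-induced flattening. The segment length, slope, and normalization are placeholders, and the sketch does not compute the authors' NLS itself.

```python
import numpy as np

rng = np.random.default_rng(13)

def rednoise(n, slope=-2.5):
    """Gaussian red-noise series with power-law PSD ~ f**slope (Timmer & Koenig style)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (slope / 2.0)
    spec = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    return np.fft.irfft(spec, n)

lc = rednoise(2 ** 18)

# Periodogram of a short, gapless segment: leakage from low-frequency power
# flattens the measured high-frequency slope relative to the input PSD.
n_short = 256
seg = lc[:n_short] - lc[:n_short].mean()
freqs = np.fft.rfftfreq(n_short)[1:]
pgram = np.abs(np.fft.rfft(seg))[1:] ** 2

slope_fit = np.polyfit(np.log10(freqs), np.log10(pgram), 1)[0]
print(f"input PSD slope: -2.5, short-segment periodogram slope: {slope_fit:.2f}")
```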
Quantum Phase Transitions in the Bose Hubbard Model and in a Bose-Fermi Mixture
NASA Astrophysics Data System (ADS)
Duchon, Eric Nicholas
Ultracold atomic gases may be the ultimate quantum simulator. These isolated systems have the lowest temperatures in the observable universe, and their properties and interactions can be precisely and accurately tuned across a full spectrum of behaviors, from few-body physics to highly-correlated many-body effects. The ability to impose potentials on and tune interactions within ultracold gases to mimic complex systems means they could become a theorist's playground. One of their great strengths, however, is also one of the largest obstacles to this dream: isolation. This thesis touches on both of these themes. First, methods to characterize phases and quantum critical points, and to construct finite temperature phase diagrams using experimentally accessible observables in the Bose Hubbard model are discussed. Then, the transition from a weakly to a strongly interacting Bose-Fermi mixture in the continuum is analyzed using zero temperature numerical techniques. Real materials can be emulated by ultracold atomic gases loaded into optical lattice potentials. We discuss the characteristics of a single boson species trapped in an optical lattice (described by the Bose Hubbard model) and the hallmarks of the quantum critical region that separates the superfluid and the Mott insulator ground states. We propose a method to map the quantum critical region using the single, experimentally accessible, local quantity R, the ratio of compressibility to local number fluctuations. The procedure to map a phase diagram with R is easily generalized to inhomogeneous systems and generic many-body Hamiltonians. We illustrate it here using quantum Monte Carlo simulations of the 2D Bose Hubbard model. Secondly, we investigate the transition from a degenerate Fermi gas weakly coupled to a Bose Einstein condensate to the strong coupling limit of composite boson-fermion molecules. We propose a variational wave function to investigate the ground state properties of such a Bose-Fermi mixture with equal population, as a function of increasing attraction between bosons and fermions. The variational wave function captures the weak and the strong coupling limits and at intermediate coupling we make two predictions using zero temperature quantum Monte Carlo methods: (I) a complete destruction of the atomic Fermi surface and emergence of a molecular Fermi sea that coexists with a remnant of the Bose-Einstein condensate, and (II) evidence for enhanced short-ranged fermion-fermion correlations mediated by bosons.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following the forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analysis were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
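A hedged sketch of the forward-selection loop for choosing ANN inputs is given below, using scikit-learn's MLPRegressor on synthetic data; the candidate variable names, sample sizes, and network size are placeholders, not the Jinan dataset or the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 400
# Synthetic candidate predictors standing in for routinely monitored inputs.
names = ["Tmax", "pressure", "sunshine", "wind_max", "humidity", "Tmin", "day_type"]
X = rng.normal(size=(n, len(names)))
ozone = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.5 * X[:, 3] \
        + rng.normal(0, 0.5, n)                      # only four inputs truly matter here

X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)

selected, remaining, best_score = [], list(range(len(names))), -np.inf
while remaining:
    scores = {}
    for j in remaining:
        cols = selected + [j]
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        model.fit(X_tr[:, cols], y_tr)
        scores[j] = r2_score(y_te, model.predict(X_te[:, cols]))
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-3:           # stop when no real improvement
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
    print(f"added {names[j_best]:9s}  hold-out R^2 = {best_score:.3f}")
```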
ERIC Educational Resources Information Center
Spearing, Debra; Woehlke, Paula
To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
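A minimal sketch of the kind of Monte Carlo experiment described above; the sample sizes, group separation, and covariance structure here are illustrative choices, not those of the study:

# Hypothetical sketch: Monte Carlo study of correct classification rates
# for two-group linear discriminant analysis under varied sample sizes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def trial(n1, n2, p=4, delta=1.0):
    X1 = rng.normal(0.0, 1.0, size=(n1, p))
    X2 = rng.normal(delta, 1.0, size=(n2, p))            # shifted second group
    X = np.vstack([X1, X2])
    y = np.r_[np.zeros(n1), np.ones(n2)]
    lda = LinearDiscriminantAnalysis().fit(X, y)
    return lda.score(X, y)                               # apparent correct-classification rate

for n1, n2 in [(20, 20), (50, 50), (100, 25)]:
    rates = [trial(n1, n2) for _ in range(200)]
    print(n1, n2, round(float(np.mean(rates)), 3))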
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
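A toy sketch of propagating the three uncertainty sources named above through a growth model; the model form, error magnitudes, and parameter covariance are invented for illustration:

# Hypothetical sketch: Monte Carlo propagation of (1) predictor measurement
# error, (2) parameter uncertainty, and (3) residual variability through a
# simple diameter-growth model  growth = b0 + b1 * dbh.
import numpy as np

rng = np.random.default_rng(2)
dbh_obs = 25.0                      # observed diameter (cm)
beta_hat = np.array([0.5, 0.02])    # fitted coefficients (assumed)
beta_cov = np.diag([0.01, 1e-5])    # parameter covariance (assumed)
sd_meas, sd_resid = 0.5, 0.3        # measurement and residual SDs (assumed)

draws = []
for _ in range(10000):
    dbh = dbh_obs + rng.normal(0, sd_meas)                 # source 1
    b0, b1 = rng.multivariate_normal(beta_hat, beta_cov)   # source 2
    growth = b0 + b1 * dbh + rng.normal(0, sd_resid)       # source 3
    draws.append(growth)

draws = np.array(draws)
print(f"predicted growth: {draws.mean():.3f} cm/yr, SD {draws.std():.3f}")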
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for the critical exponents β, β/ν⊥, …
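To make the model concrete, here is one plausible (simplified) implementation of sleepy/activated random walkers on a ring, with the sleeping rate as the continuously variable control parameter; the precise sleeping and waking rules below are assumptions, not taken from the paper:

# Hypothetical sketch: activated ("sleepy") random walkers on a ring.
# Active walkers hop symmetrically at rate 1 and fall asleep at rate lam;
# a walker arriving on a site wakes any sleepers there (assumed rule).
import numpy as np

rng = np.random.default_rng(3)
L, N, lam, t_max = 200, 100, 0.3, 500.0

site = rng.integers(0, L, size=N)        # walker positions
awake = np.ones(N, dtype=bool)
t = 0.0
while t < t_max and awake.any():
    active = np.flatnonzero(awake)
    rate = active.size * (1.0 + lam)     # total event rate
    t += rng.exponential(1.0 / rate)
    i = rng.choice(active)
    if rng.random() < lam / (1.0 + lam):
        awake[i] = False                                   # falls asleep
    else:
        site[i] = (site[i] + rng.choice([-1, 1])) % L      # symmetric hop
        awake[site == site[i]] = True                      # wake everyone on the target site
print("surviving activity fraction:", awake.mean())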
LP-search and its use in analysis of the accuracy of control systems with acoustical models
NASA Technical Reports Server (NTRS)
Sergeyev, V. I.; Sobol, I. M.; Statnikov, R. B.; Statnikov, I. N.
1973-01-01
The LP-search is proposed as an analog of the Monte Carlo method for finding parameter values in nonlinear statistical systems. It is concluded that, to attain the required accuracy in solving the control problem for a statistical system, the LP-search requires a considerably smaller number of tests than the Monte Carlo method, and that it allows multiple repetitions of tests under identical conditions as well as observability of the system's output variables.
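LP-search is built on LPτ (Sobol') quasi-random sequences; the following small sketch compares quasi-random and plain pseudo-random Monte Carlo sampling on a toy integration task (the test function is arbitrary and only illustrative):

# Hypothetical sketch: LP_tau (Sobol') quasi-random points vs. pseudo-random
# Monte Carlo points for estimating a 2-D integral on [0,1]^2.
import numpy as np
from scipy.stats import qmc

f = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]     # toy integrand
exact = (2.0 / np.pi) * 0.5

rng = np.random.default_rng(4)
n = 1024
mc = f(rng.random((n, 2))).mean()
sobol = f(qmc.Sobol(d=2, scramble=True, seed=4).random(n)).mean()
print(f"MC error:    {abs(mc - exact):.2e}")
print(f"Sobol error: {abs(sobol - exact):.2e}")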
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
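A heavily simplified sketch of the sampling and likelihood step described above; the forward model, prior bounds, and misfit weighting below are placeholders, not the actual GPR physics or MRE-derived priors:

# Hypothetical sketch: quasi-Monte Carlo sampling of pilot-point parameters,
# toy forward model, Gaussian likelihood from travel-time misfit, and a
# weighted posterior summary.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(5)
n_pilot, n_samp, sigma_t = 5, 2048, 0.5

def forward(perm):                       # placeholder "travel time" model
    return perm.cumsum(axis=1)

truth = np.full(n_pilot, 5.0)
data = forward(truth[None, :])[0] + rng.normal(0, sigma_t, n_pilot)

# Prior pdfs for pilot-point permittivity, sampled with a Sobol' sequence
u = qmc.Sobol(d=n_pilot, scramble=True, seed=5).random(n_samp)
perm = qmc.scale(u, l_bounds=[3.0] * n_pilot, u_bounds=[9.0] * n_pilot)

misfit = ((forward(perm) - data) ** 2).sum(axis=1)
w = np.exp(-0.5 * misfit / sigma_t**2)   # Gaussian likelihood weights
w /= w.sum()
post_mean = (w[:, None] * perm).sum(axis=0)
print("posterior mean permittivity at pilot points:", np.round(post_mean, 2))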
Data re-arranging techniques leading to proper variable selections in high energy physics
NASA Astrophysics Data System (ADS)
Kůs, Václav; Bouř, Petr
2017-12-01
We introduce a new data-based approach to homogeneity testing and variable selection in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, typically Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique, called ‘data re-arranging’, enables variable selection by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson chi-square divergence test. P-values of our variants of the homogeneity tests are investigated, and empirical verification on 46-dimensional high energy particle physics data sets is carried out under a newly proposed (equiprobable) quantile binning. In particular, the homogeneity testing procedure is applied to re-arranged Monte Carlo samples and real data sets measured at the DØ experiment at the Fermilab Tevatron accelerator, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.
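A small illustration of homogeneity testing between a weighted Monte Carlo sample and unweighted data in a single variable, using equiprobable quantile bins and a Pearson chi-square statistic; the bin count and synthetic data are placeholders, and this is not the authors' exact procedure:

# Hypothetical sketch: chi-square homogeneity test of weighted MC vs. real
# data, with equiprobable quantile binning derived from the data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
data = rng.normal(0.0, 1.0, 5000)                  # unweighted "real" events
mc = rng.normal(0.05, 1.0, 20000)                  # simulated events
w = rng.uniform(0.2, 0.3, mc.size)                 # MC event weights

k = 20
edges = np.quantile(data, np.linspace(0, 1, k + 1))   # equiprobable bins from data
edges[0] = min(data.min(), mc.min()) - 1e-9
edges[-1] = max(data.max(), mc.max()) + 1e-9

obs, _ = np.histogram(data, bins=edges)
exp_w, _ = np.histogram(mc, bins=edges, weights=w)
exp_w *= obs.sum() / exp_w.sum()                   # normalise MC to the data yield

stat = np.sum((obs - exp_w) ** 2 / exp_w)
p = chi2.sf(stat, df=k - 1)
print(f"chi2 = {stat:.1f}, p-value = {p:.3f}")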
Scribner, Kim T.; Garner, G.W.; Amstrup, Steven C.; Cronin, M.A.; Dizon, Andrew E.; Chivers, Susan J.; Perrin, William F.
1997-01-01
A summary of existing population genetics literature is presented for polar bears (Ursus maritimus) and interpreted in the context of the species' life-history characteristics and regional heterogeneity in environmental regimes and movement patterns. Several nongenetic data sets, including morphology, contaminant levels, geographic variation in reproductive characteristics, and the location and distribution of open-water foraging habitat, suggest some degree of spatial structuring. Eleven populations are recognized by the IUCN Polar Bear Specialist Group. Few genetics studies exist for polar bears. Interpretation and generalizations of regional variation in intra- and interpopulation levels of genetic variability are confounded by the paucity of data from many regions and by the fact that no single informative genetic marker has been employed in multiple regions. Early allozyme studies revealed comparatively low levels of genetic variability and no compelling evidence of spatial structuring. Studies employing mitochondrial DNA (mtDNA) also found low levels of genetic variation, a lack of phylogenetic structure, and no significant evidence for spatial variation in haplotype frequency. In contrast, microsatellite variable number of tandem repeat (VNTR) loci have revealed significant heterogeneity in allele frequency among populations in the Canadian Arctic. These regions are characterized by archipelagic patterns of sea-ice movements. Further studies using highly polymorphic loci are needed in regions characterized by greater polar bear dependency on pelagic sea-ice movements and in regions for which no data currently exist (i.e., Laptev and Novaya Zemlya/Franz Josef).
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Use of Monte-Carlo Simulations in Polyurethane Polymerization Processes
1995-11-01
…situations, the mechanisms of molecular species diffusion must be considered. Gupta et al. (Ref. 10) have demonstrated the use of Monte Carlo simulations in…
Design of experiments on 135 cloned poplar trees to map environmental influence in greenhouse.
Pinto, Rui Climaco; Stenlund, Hans; Hertzberg, Magnus; Lundstedt, Torbjörn; Johansson, Erik; Trygg, Johan
2011-01-31
To find and ascertain phenotypic differences, minimal variation between biological replicates is always desired. Variation between the replicates can originate from genetic transformation but also from environmental effects in the greenhouse. Design of experiments (DoE) has been used in field trials for many years and has proven its value, but it is underused within functional genomics, including greenhouse experiments. We propose a strategy, based on DoE, to estimate the effect of environmental factors with the ultimate goal of minimizing variation between biological replicates. DoE can be analyzed in many ways. We present a graphical solution together with solutions based on classical statistics as well as the newly developed OPLS methodology. In this study, we used DoE to evaluate the influence of plant-specific factors (plant size, shoot type, plant quality, and amount of fertilizer) and rotation of plant positions on height and section area of 135 cloned wild-type poplar trees grown in the greenhouse. Statistical analysis revealed that plant position was the main contributor to variability among biological replicates and that applying a plant rotation scheme could reduce this variation. Copyright © 2010 Elsevier B.V. All rights reserved.
Effects of multiple predator species on green treefrog (Hyla cinerea) tadpoles
Gunzburger, M.S.; Travis, J.
2005-01-01
Prey species that occur across a range of habitats may be exposed to variable communities of multiple predator species across habitats. Predicting the combined effects of multiple predators can be complex. Many experiments evaluating the effects of multiple predators on prey confound either variation in predator density with predator identity or variation in relative predator frequency with overall predation rates. We develop a new experimental design of factorial predator combinations that maintains a constant expected predation rate, under the null hypothesis of additive predator effects. We implement this design to evaluate the combined effects of three predator species (bass, aeshnid and libellulid odonate naiads) on mortality rate of a prey species, Hyla cinerea (Schneider, 1799) tadpoles, that occurs across a range of aquatic habitats. Two predator treatments (libellulid and aeshnid + libellulid) resulted in lower tadpole mortality than any of the other predator treatments. Variation in tadpole mortality across treatments was not related to coarse variation in microhabitat use, but was likely due to intraguild predation, which occurred in all predator treatments. Hyla cinerea tadpoles have constant, low survival values when exposed to many different combinations of predator species, and predation rate probably increases linearly with predator density.
Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China.
Shi, Benyun; Tan, Qi; Zhou, Xiao-Nong; Liu, Jiming
2015-05-27
Geographic variations of an infectious disease characterize the spatial differentiation of disease incidences caused by various impact factors, such as environmental, demographic, and socioeconomic factors. Some factors may directly determine the force of infection of the disease (namely, explicit factors), while many other factors may indirectly affect the number of disease incidences via certain unmeasurable processes (namely, implicit factors). In this study, the impact of heterogeneous factors on geographic variations of Plasmodium vivax incidences is systematically investigated in Tengchong, Yunnan province, China. A space-time model, combining a P. vivax transmission model with a hidden time-dependent process, is presented that takes both explicit and implicit factors into consideration. Specifically, the transmission model is built upon relevant demographic, environmental, and biophysical factors to describe the local infections of P. vivax, while the hidden time-dependent process is assessed from several socioeconomic factors to account for the imported cases of P. vivax. To quantitatively assess the impact of heterogeneous factors on geographic variations of P. vivax infections, a Markov chain Monte Carlo (MCMC) simulation method is developed to estimate the model parameters by fitting the space-time model to the reported spatial-temporal disease incidences. Since there is no ground-truth information available, the performance of the MCMC method is first evaluated against a synthetic dataset. The results show that the model parameters can be well estimated using the proposed MCMC method. Then, the proposed model is applied to investigate the geographic variations of P. vivax incidences among all 18 towns in Tengchong, Yunnan province, China. Based on the geographic variations, the 18 towns can be further classified into five groups with similar socioeconomic causality for P. vivax incidences. Although this study focuses mainly on the transmission of P. vivax, the proposed space-time model is general and can readily be extended to investigate geographic variations of other diseases. Practically, such a computational model will offer new insights into active surveillance and strategic planning for disease surveillance and control.
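In the same spirit as the MCMC estimation described above, but with a drastically simplified model (Poisson counts with a single transmission parameter and a known importation term; all names, priors, and data are assumed), a Metropolis sampler might look like this:

# Hypothetical sketch: Metropolis MCMC fit of a toy incidence model
#   cases_t ~ Poisson(beta * local_t + import_t)
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
local = rng.uniform(5, 20, 52)                 # weekly local exposure proxy
imports = rng.uniform(0, 3, 52)                # imported-case term (assumed known)
true_beta = 0.8
cases = rng.poisson(true_beta * local + imports)

def log_post(beta):
    if beta <= 0:
        return -np.inf                         # flat prior on beta > 0
    lam = beta * local + imports
    return poisson.logpmf(cases, lam).sum()

beta, chain = 1.0, []
lp = log_post(beta)
for _ in range(20000):
    prop = beta + rng.normal(0, 0.05)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance
        beta, lp = prop, lp_prop
    chain.append(beta)
post = np.array(chain[5000:])
print(f"posterior beta: {post.mean():.3f} +/- {post.std():.3f}")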
Geometry and Dynamics for Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark
2018-03-01
Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
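A compact sketch of the Hamiltonian Monte Carlo update the review discusses, applied to a simple Gaussian target; the step size, trajectory length, and target density are illustrative choices only:

# Hypothetical sketch: Hamiltonian Monte Carlo with leapfrog integration
# targeting a 2-D Gaussian, using unit-mass momenta.
import numpy as np

rng = np.random.default_rng(8)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)

logp = lambda q: -0.5 * q @ prec @ q          # unnormalised log target
grad = lambda q: -prec @ q

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.normal(size=q.size)               # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad(q_new)          # leapfrog: initial half step in momentum
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad(q_new)          # final half step in momentum
    dH = (logp(q_new) - 0.5 * p_new @ p_new) - (logp(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < dH else q   # Metropolis correction

q, samples = np.zeros(2), []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample covariance:\n", np.cov(np.array(samples).T))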
Natural variability in Drosophila larval and pupal NaCl tolerance.
Riedl, Craig A L; Oster, Sara; Busto, Macarena; Mackay, Trudy F C; Sokolowski, Marla B
2016-05-01
The regulation of NaCl is essential for the maintenance of cellular tonicity and functionality, and excessive salt exposure has many adverse effects. The fruit fly, Drosophila melanogaster, is a good osmoregulator and some strains can survive on media with very low or high NaCl content. Previous analyses of mutant alleles have implicated various stress signaling cascades in NaCl sensitivity or tolerance; however, the genes influencing natural variability of NaCl tolerance remain for the most part unknown. Here, we use two approaches to investigate natural variation in D. melanogaster NaCl tolerance. We describe four D. melanogaster lines that were selected for different degrees of NaCl tolerance, and present data on their survival, development, and pupation position when raised on varying NaCl concentrations. After finding evidence for natural variation in salt tolerance, we present the results of Quantitative Trait Loci (QTL) mapping of natural variation in larval and pupal NaCl tolerance, and identify different genomic regions associated with NaCl tolerance during larval and pupal development. Copyright © 2016 Elsevier Ltd. All rights reserved.
Scale-dependent temporal variations in stream water geochemistry.
Nagorski, Sonia A; Moore, Johnnie N; McKinnon, Temple E; Smith, David B
2003-03-01
A year-long study of four western Montana streams (two impacted by mining and two "pristine") evaluated surface water geochemical dynamics on various time scales (monthly, daily, and bi-hourly). Monthly changes were dominated by snowmelt and precipitation dynamics. On the daily scale, post-rain surges in some solute and particulate concentrations were similar to those of early spring runoff flushing characteristics on the monthly scale. On the bi-hourly scale, we observed diel (diurnal-nocturnal) cycling for pH, dissolved oxygen, water temperature, dissolved inorganic carbon, total suspended sediment, and some total recoverable metals at some or all sites. A comparison of the cumulative geochemical variability within each of the temporal groups reveals that for many water quality parameters there were large overlaps of concentration ranges among groups. We found that short-term (daily and bi-hourly) variations of some geochemical parameters covered large proportions of the variations found on a much longer term (monthly) time scale. These results show the importance of nesting short-term studies within long-term geochemical study designs to separate signals of environmental change from natural variability.
Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J
2012-01-01
Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated how test conditions are specified in FNAC DTA studies, to determine which parameters are reported most often, how frequently they are reported, and whether reporting practice varies significantly. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. A more complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.
Understanding Space Weather: The Sun as a Variable Star
NASA Technical Reports Server (NTRS)
Strong, Keith; Saba, Julia; Kucera, Therese
2012-01-01
The Sun is a complex system of systems and until recently, less than half of its surface was observable at any given time and then only from afar. New observational techniques and modeling capabilities are giving us a fresh perspective of the solar interior and how our Sun works as a variable star. This revolution in solar observations and modeling provides us with the exciting prospect of being able to use a vastly increased stream of solar data taken simultaneously from several different vantage points to produce more reliable and prompt space weather forecasts. Solar variations that cause identifiable space weather effects do not happen only on solar-cycle timescales from decades to centuries; there are also many shorter-term events that have their own unique space weather effects and a different set of challenges to understand and predict, such as flares, coronal mass ejections, and solar wind variations.