Sample records for parallel tempering Monte Carlo

  1. Hyper-Parallel Tempering Monte Carlo Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; de Pablo, Juan

    2000-03-01

    A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for the study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor effort.
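
    The replica-exchange machinery this method builds on can be illustrated with a short sketch. The Python snippet below implements plain canonical parallel tempering for a generic one-dimensional energy function; the double-well energy, the temperature ladder, and the move size are illustrative assumptions, not the authors' grand canonical, configurational-bias implementation.

    ```python
    import math, random

    def parallel_tempering(energy, x0, betas, n_sweeps, step=0.1):
        """Minimal parallel tempering sketch for a one-dimensional configuration.

        energy : callable returning the potential energy of a configuration
        x0     : initial configuration (a float here, for simplicity)
        betas  : inverse temperatures, ordered from hottest to coldest
        """
        replicas = [x0 for _ in betas]
        for _ in range(n_sweeps):
            # Ordinary Metropolis moves within each replica.
            for i, beta in enumerate(betas):
                trial = replicas[i] + random.uniform(-step, step)
                dE = energy(trial) - energy(replicas[i])
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    replicas[i] = trial
            # Attempt swaps between neighbouring temperatures.
            for i in range(len(betas) - 1):
                delta = (betas[i + 1] - betas[i]) * (energy(replicas[i + 1]) - energy(replicas[i]))
                if random.random() < math.exp(min(0.0, delta)):
                    replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
        return replicas

    # Example: a double-well potential whose barrier traps a single Metropolis chain.
    coldest = parallel_tempering(lambda x: (x * x - 1.0) ** 2, 1.0,
                                 betas=[0.2, 1.0, 5.0, 25.0], n_sweeps=2000)[-1]
    ```

    The hyper-parallel variant summarized above additionally works in open (grand canonical) ensembles and post-processes the replicas with histogram reweighting, which is what gives access to coexistence properties.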

  2. Application of the DMRG in two dimensions: a parallel tempering algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian

    The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable and "metastable states" typically appear, which are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. Supported by SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).

  3. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time, which makes an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
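
    The idea behind multispin coding, which CAMSC refines for the GPU, is to pack one spin from each of many independent replicas into the bits of a single machine word so that bitwise operations act on all replicas at once. The sketch below is a CPU-side NumPy illustration for a 1D Ising chain with 64 bit-packed replicas; the packing layout, the random acceptance mask, and the chain geometry are illustrative assumptions, not the authors' CUDA kernels.

    ```python
    import numpy as np

    L = 32                                           # chain length
    rng = np.random.default_rng(7)
    weights = np.uint64(1) << np.arange(64, dtype=np.uint64)

    # Bit k of spins[i] holds site i of replica k (0 -> spin -1, 1 -> spin +1).
    bits = rng.integers(0, 2, size=(L, 64), dtype=np.uint64)
    spins = (bits * weights).sum(axis=1, dtype=np.uint64)

    def energies(packed):
        """Ising chain energy (J = 1, periodic) of each of the 64 replicas."""
        anti = packed ^ np.roll(packed, -1)          # bit set where a bond is unsatisfied
        cols = (anti[:, None] >> np.arange(64, dtype=np.uint64)) & np.uint64(1)
        n_anti = cols.sum(axis=0).astype(int)
        return 2 * n_anti - L                        # E = (#unsatisfied) - (#satisfied)

    # One bitwise XOR flips site i in every replica whose acceptance bit is set;
    # in a real code the mask comes from per-replica Metropolis tests, here it is random.
    i = 5
    accept_mask = rng.integers(0, 2, size=64, dtype=np.uint64)
    spins[i] ^= (accept_mask * weights).sum(dtype=np.uint64)
    print(energies(spins))
    ```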

  4. A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.

    PubMed

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, to replace the full-data log-likelihood with a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset across iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
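
    The central substitution in BMH, replacing the full-data log-likelihood inside the Metropolis-Hastings ratio with an average of log-likelihoods evaluated on bootstrap subsamples, can be sketched as follows. The Gaussian toy model, the subsample size, the flat prior, and the rescaling of the averaged per-observation log-likelihood to full-data scale are illustrative assumptions rather than the paper's exact construction; the per-subsample terms are the pieces that would be computed in parallel.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(1.5, 1.0, size=100_000)          # toy "big" dataset, known sigma = 1

    k, m = 20, 2_000                                   # k bootstrap subsamples of size m
    subsamples = [rng.choice(data, size=m, replace=True) for _ in range(k)]

    def bmh_loglike(mu):
        # Average of bootstrap log-likelihoods (per observation), rescaled to full size.
        # Each term is independent, so in practice the k terms are evaluated in parallel.
        per_obs = np.mean([np.mean(-0.5 * (s - mu) ** 2) for s in subsamples])
        return len(data) * per_obs

    def bmh_chain(n_iter, mu0=0.0, step=0.01):
        mu, ll, chain = mu0, bmh_loglike(mu0), []
        for _ in range(n_iter):
            prop = mu + rng.normal(0.0, step)
            ll_prop = bmh_loglike(prop)                # never rescans the full dataset
            if np.log(rng.random()) < ll_prop - ll:    # Metropolis test, flat prior
                mu, ll = prop, ll_prop
            chain.append(mu)
        return np.array(chain)

    draws = bmh_chain(2_000)[500:]                     # posterior mean lands near 1.5
    ```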

  5. A Bootstrap Metropolis–Hastings Algorithm for Bayesian Analysis of Big Data

    PubMed Central

    Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, to replace the full-data log-likelihood with a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset across iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively. PMID:29033469

  6. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Clifford (School of Physics, Astronomy, and Computational Sciences, George Mason University, Fairfax, VA); Ji, Weixiao

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets.

    Highlights:
    • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
    • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
    • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
    • The testbed involves a polymeric system of oligopyrroles in the condensed phase.
    • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

  7. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  8. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  9. Population annealing with weighted averages: A Monte Carlo method for rough free-energy landscapes

    NASA Astrophysics Data System (ADS)

    Machta, J.

    2010-08-01

    The population annealing algorithm introduced by Hukushima and Iba is described. Population annealing combines simulated annealing and Boltzmann weighted differential reproduction within a population of replicas to sample equilibrium states. Population annealing gives direct access to the free energy. It is shown that unbiased measurements of observables can be obtained by weighted averages over many runs with weight factors related to the free-energy estimate from the run. Population annealing is well suited to parallelization and may be a useful alternative to parallel tempering for systems with rough free-energy landscapes such as spin glasses. The method is demonstrated for spin glasses.
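
    A minimal sketch of the algorithm as described above, with a toy one-dimensional double-well energy standing in for a spin glass; the population size, annealing schedule, and Metropolis move are illustrative choices. The accumulated logarithm of the mean resampling weight is the free-energy estimate that the abstract uses to weight observables across independent runs.

    ```python
    import math, random

    def population_annealing(energy, betas, R=500, sweeps=10, step=0.5):
        """Toy population annealing on a 1D energy landscape.

        Returns the final population and an estimate of ln[Z(beta_final)/Z(beta_0)],
        the quantity used to weight averages over independent runs.
        """
        pop = [random.uniform(-2.0, 2.0) for _ in range(R)]
        log_z_ratio = 0.0
        for b_old, b_new in zip(betas[:-1], betas[1:]):
            # Boltzmann-weighted differential reproduction (resampling step).
            w = [math.exp(-(b_new - b_old) * energy(x)) for x in pop]
            log_z_ratio += math.log(sum(w) / len(pop))
            pop = random.choices(pop, weights=w, k=R)
            # Re-equilibrate the resampled population at the new temperature.
            for _ in range(sweeps):
                for i in range(R):
                    trial = pop[i] + random.uniform(-step, step)
                    dE = energy(trial) - energy(pop[i])
                    if dE <= 0 or random.random() < math.exp(-b_new * dE):
                        pop[i] = trial
        return pop, log_z_ratio

    # exp(log_z_ratio) is the per-run weight used for unbiased weighted averages.
    pop, lnZ = population_annealing(lambda x: (x * x - 1.0) ** 2,
                                    betas=[0.1 * k for k in range(1, 31)], R=500)
    ```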

  10. Monte Carlo simulation of Hamaker nanospheres coated with dipolar particles

    NASA Astrophysics Data System (ADS)

    Meyra, Ariel G.; Zarragoicoechea, Guillermo J.; Kuz, Victor A.

    2012-01-01

    Parallel tempering Monte Carlo simulation is carried out in systems of N attractive Hamaker spheres dressed with n dipolar particles, able to move on the surface of the spheres. Different cluster configurations emerge for given values of the control parameters. The energy per sphere and the pair distribution functions of spheres and dipoles, as functions of temperature, density, external electric field, and/or the angular orientation of dipoles, are used to analyse the state of aggregation of the system. As a consequence of the non-central interaction, the model predicts complex structures like self-assembly of spheres by a double crown of dipoles. This interesting result could be of help in understanding some recent experiments in colloidal science and biology.

  11. Parallel tempering Monte Carlo simulations of lysozyme orientation on charged surfaces

    NASA Astrophysics Data System (ADS)

    Xie, Yun; Zhou, Jian; Jiang, Shaoyi

    2010-02-01

    In this work, the parallel tempering Monte Carlo (PTMC) algorithm is applied to accurately and efficiently identify the global-minimum-energy orientation of a protein adsorbed on a surface in a single simulation. When applying the PTMC method to simulate lysozyme orientation on charged surfaces, it is found that lysozyme could easily be adsorbed on negatively charged surfaces with "side-on" and "back-on" orientations. When driven by dominant electrostatic interactions, lysozyme tends to be adsorbed on negatively charged surfaces with the side-on orientation, for which the active site of lysozyme faces sideways. The side-on orientation agrees well with the experimental results where the adsorbed orientation of lysozyme is determined by electrostatic interactions. As the contribution from van der Waals interactions gradually dominates, the back-on orientation becomes the preferred one. For this orientation, the active site of lysozyme faces outward, which conforms to the experimental results where the orientation of adsorbed lysozyme is co-determined by electrostatic interactions and van der Waals interactions. It is also found that despite its net positive charge, lysozyme could be adsorbed on positively charged surfaces with both "end-on" and back-on orientations, owing to the nonuniform charge distribution over the lysozyme surface and the screening effect from ions in solution. The PTMC simulation method provides a way to determine the preferred orientation of proteins on surfaces for biosensor and biomaterial applications.

  12. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
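
    The fix-and-reduce step described above can be sketched for Ising-formulated problems as follows. The elite fraction, the agreement threshold, and the clamping helper are illustrative assumptions, not the parameters or code used in the paper.

    ```python
    import numpy as np

    def persistent_variables(samples, energies, elite_frac=0.2, agree=0.9):
        """Pick spins to fix: variables that take the same value in nearly all
        low-energy samples returned by the heuristic solver.

        samples  : array of shape (n_samples, n_vars) with entries +/-1
        energies : energy of each sample
        Returns a {variable index: fixed value} dictionary.
        """
        order = np.argsort(energies)
        elite = samples[order[: max(1, int(elite_frac * len(samples)))]]
        mean = elite.mean(axis=0)
        # A mean of +/-1 values >= 2*agree - 1 means at least `agree` of the elite agree.
        return {i: (1 if m > 0 else -1)
                for i, m in enumerate(mean) if abs(m) >= 2 * agree - 1}

    def clamp(h, J, fixed):
        """Fold clamped spins of an Ising problem (fields h, couplings J) into the
        remaining variables' fields; the smaller, less connected problem is re-sampled."""
        h_new, J_new = dict(h), {}
        for (i, j), Jij in J.items():
            if i in fixed and j in fixed:
                continue                                  # contributes only a constant
            elif i in fixed:
                h_new[j] = h_new.get(j, 0.0) + Jij * fixed[i]
            elif j in fixed:
                h_new[i] = h_new.get(i, 0.0) + Jij * fixed[j]
            else:
                J_new[(i, j)] = Jij
        return {i: v for i, v in h_new.items() if i not in fixed}, J_new
    ```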

  13. A Hundred-Year-Old Experiment Re-evaluated: Accurate Ab-Initio Monte-Carlo Simulations of the Melting of Radon.

    PubMed

    Schwerdtfeger, Peter; Smits, Odile; Pahl, Elke; Jerabek, Paul

    2018-06-12

    State-of-the-art relativistic coupled-cluster theory is used to construct many-body potentials for the rare gas element radon in order to determine its bulk properties, including the solid-to-liquid phase transition, from parallel tempering Monte Carlo simulations through either direct sampling of the bulk or from a finite cluster approach. The calculated melting temperatures are 201(3) K and 201(6) K from bulk simulations and from extrapolation of finite cluster values, respectively. This is in excellent agreement with the often debated (but widely cited) and only available value of 202 K, dating back to measurements by Gray and Ramsay in 1909. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
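
    A bare-bones version of the underlying idea, parallel tempering over Boolean assignments with the number of unsatisfied clauses as the energy, is sketched below. The temperature ladder, the single-variable flip move, and the stopping rule are illustrative; this is not the TemperSAT implementation.

    ```python
    import math, random

    def unsat_count(assignment, clauses):
        """Energy of an assignment = number of unsatisfied clauses.
        A clause is a list of non-zero ints: literal v is satisfied when
        variable |v| is True for v > 0, or False for v < 0 (DIMACS style)."""
        return sum(not any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    def pt_sat(clauses, n_vars, betas=(0.2, 0.5, 1.0, 2.0, 4.0), sweeps=2000):
        reps = [{v: random.random() < 0.5 for v in range(1, n_vars + 1)} for _ in betas]
        E = [unsat_count(r, clauses) for r in reps]
        for _ in range(sweeps):
            for k, beta in enumerate(betas):          # single-flip Metropolis per replica
                v = random.randint(1, n_vars)
                reps[k][v] = not reps[k][v]
                E_new = unsat_count(reps[k], clauses)
                if E_new <= E[k] or random.random() < math.exp(-beta * (E_new - E[k])):
                    E[k] = E_new
                else:
                    reps[k][v] = not reps[k][v]       # reject: undo the flip
            for k in range(len(betas) - 1):           # replica-exchange moves
                if random.random() < math.exp(min(0.0, (betas[k + 1] - betas[k]) * (E[k + 1] - E[k]))):
                    reps[k], reps[k + 1] = reps[k + 1], reps[k]
                    E[k], E[k + 1] = E[k + 1], E[k]
            if E[-1] == 0:
                return reps[-1]                       # found a satisfying assignment
        return None

    # Example: pt_sat([[1, 2], [-1, 3], [-2, -3]], n_vars=3)
    ```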

  15. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
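
    A straightforward implementation of the kind compared above against simulated annealing might look like the sketch below; the 2-opt move, the temperature ladder, and the sweep count are illustrative choices, not the parameters studied in the paper.

    ```python
    import math, random

    def tour_length(tour, dist):
        return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

    def pt_tsp(dist, betas=(0.05, 0.2, 1.0, 5.0), sweeps=5000):
        """Parallel tempering for the TSP with 2-opt moves (illustrative sketch)."""
        n = len(dist)
        reps = [random.sample(range(n), n) for _ in betas]
        L = [tour_length(t, dist) for t in reps]
        for _ in range(sweeps):
            for k, beta in enumerate(betas):
                i, j = sorted(random.sample(range(n), 2))
                cand = reps[k][:i] + reps[k][i:j + 1][::-1] + reps[k][j + 1:]  # 2-opt reversal
                dL = tour_length(cand, dist) - L[k]
                if dL <= 0 or random.random() < math.exp(-beta * dL):
                    reps[k], L[k] = cand, L[k] + dL
            for k in range(len(betas) - 1):            # exchange neighbouring temperatures
                if random.random() < math.exp(min(0.0, (betas[k + 1] - betas[k]) * (L[k + 1] - L[k]))):
                    reps[k], reps[k + 1] = reps[k + 1], reps[k]
                    L[k], L[k + 1] = L[k + 1], L[k]
        return min(zip(L, reps))                       # best length and tour found

    # Usage: dist is an n x n symmetric matrix (list of lists) of inter-city distances.
    ```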

  16. Population Annealing Monte Carlo for Frustrated Systems

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented. NSF Grant DMR-1507506.

  17. Ion-Stockmayer clusters: Minima, classical thermodynamics, and variational ground state estimates of Li+(CH3NO2)n (n = 1-20)

    NASA Astrophysics Data System (ADS)

    Curotto, E.

    2015-12-01

    Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1-20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first "magic number" is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.

  18. Phase and vortex correlations in superconducting Josephson-junction arrays at irrational magnetic frustration.

    PubMed

    Granato, Enzo

    2008-07-11

    Phase coherence and vortex order in a Josephson-junction array at irrational frustration are studied by extensive Monte Carlo simulations using the parallel-tempering method. A scaling analysis of the correlation length of phase variables in the fully equilibrated system shows that the critical temperature vanishes with a power-law divergent correlation length and critical exponent ν_ph, in agreement with recent results from resistivity scaling analysis. A similar scaling analysis for vortex variables reveals a different critical exponent ν_v, suggesting that there are two distinct correlation lengths associated with a decoupled zero-temperature phase transition.

  19. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.

  20. Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank

    2018-03-01

    We use large scale quantum Monte Carlo simulations to study an extended Hubbard model of hard core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP^1 gauge theory describing the phase transition at 1/3 filling. Utilizing large scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.

  1. Ion-Stockmayer clusters: Minima, classical thermodynamics, and variational ground state estimates of Li+(CH3NO2)n (n = 1–20)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curotto, E., E-mail: curotto@arcadia.edu

    2015-12-07

    Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1–20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first “magic number” is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.

  2. Entropic stabilization of isolated beta-sheets.

    PubMed

    Dugourd, Philippe; Antoine, Rodolphe; Breaux, Gary; Broyer, Michel; Jarrold, Martin F

    2005-04-06

    Temperature-dependent electric deflection measurements have been performed for a series of unsolvated alanine-based peptides (Ac-WA(n)-NH(2), where Ac = acetyl, W = tryptophan, A = alanine, and n = 3, 5, 10, 13, and 15). The measurements are interpreted using Monte Carlo simulations performed with a parallel tempering algorithm. Despite alanine's high helix propensity in solution, the results suggest that unsolvated Ac-WA(n)-NH(2) peptides with n > 10 adopt beta-sheet conformations at room temperature. Previous studies have shown that protonated alanine-based peptides adopt helical or globular conformations in the gas phase, depending on the location of the charge. Thus, the charge more than anything else controls the structure.

  3. Intensive measurements of gas, water, and energy exchange between vegetation and troposphere during the MONTES Campaign in a vegetation gradient from short semi-desertic shrublands to tall wet temperate forests in the NW Mediterranean basin

    EPA Science Inventory

    MONTES (“Woodlands”) was a multidisciplinary international field campaign aimed at measuring energy, water and especially gas exchange between vegetation and atmosphere in a gradient from short semi-desertic shrublands to tall wet temperate forests in NE Spain in the North Wester...

  4. Merging parallel tempering with sequential geostatistical resampling for improved posterior exploration of high-dimensional subsurface categorical fields

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire

    2016-04-01

    The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).

  5. Finite-size polyelectrolyte bundles at thermodynamic equilibrium

    NASA Astrophysics Data System (ADS)

    Sayar, M.; Holm, C.

    2007-01-01

    We present the results of extensive computer simulations performed on solutions of monodisperse charged rod-like polyelectrolytes in the presence of trivalent counterions. To overcome energy barriers we used a combination of parallel tempering and hybrid Monte Carlo techniques. Our results show that for small values of the electrostatic interaction the solution mostly consists of dispersed single rods. The potential of mean force between the polyelectrolyte monomers yields an attractive interaction at short distances. For a range of larger values of the Bjerrum length, we find finite-size polyelectrolyte bundles at thermodynamic equilibrium. Further increase of the Bjerrum length eventually leads to phase separation and precipitation. We discuss the origin of the observed thermodynamic stability of the finite-size aggregates.

  6. Vapor-liquid equilibrium and critical asymmetry of square well and short square well chain fluids.

    PubMed

    Li, Liyan; Sun, Fangfang; Chen, Zhitong; Wang, Long; Cai, Jun

    2014-08-07

    The critical behavior of square well fluids with variable interaction ranges and of short square well chain fluids has been investigated by grand canonical ensemble Monte Carlo simulations. The critical temperatures and densities were estimated by a finite-size scaling analysis with the help of the histogram reweighting technique. The vapor-liquid coexistence curve in the near-critical region was determined using hyper-parallel tempering Monte Carlo simulations. The simulation results for coexistence diameters show that the contribution of |t|^(1-α) to the coexistence diameter dominates the singular behavior in all systems investigated. The contribution of |t|^(2β) to the coexistence diameter is larger for the system with a smaller interaction range λ, while for short square well chain fluids, the longer the chain length, the larger the contribution of |t|^(2β). The molecular configuration greatly influences the critical asymmetry: a short soft chain fluid shows weaker critical asymmetry than a stiff chain fluid with the same chain length.
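
    The single-histogram reweighting step on which this kind of finite-size scaling analysis relies can be written in a few lines; the snippet below is a generic canonical-ensemble version and does not reproduce the grand canonical, multi-histogram machinery used in the study.

    ```python
    import numpy as np

    def reweight(energies, observable, beta0, beta):
        """Estimate <observable> at inverse temperature beta from samples that were
        generated at beta0. Reliable only for beta close enough to beta0 that the
        sampled energy histograms overlap."""
        logw = -(beta - beta0) * np.asarray(energies)
        logw -= logw.max()                    # avoid overflow before exponentiating
        w = np.exp(logw)
        return np.sum(w * np.asarray(observable)) / np.sum(w)

    # Usage sketch: energies and order-parameter samples recorded at beta0
    # can be extrapolated over a fine grid of nearby beta values to locate a peak
    # (e.g. of the susceptibility) without running new simulations.
    ```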

  7. Predicting stability limits for pure and doped dicationic noble gas clusters undergoing coulomb explosion: A parallel tempering based study.

    PubMed

    Ghorai, Sankar; Chaudhury, Pinaki

    2018-05-30

    We have used a replica exchange Monte-Carlo procedure, popularly known as parallel tempering, to study the problem of Coulomb explosion in homogeneous Ar and Xe dicationic clusters as well as mixed Ar-Xe dicationic clusters of varying sizes with different degrees of relative composition. All the clusters studied have two units of positive charge. The simulations reveal that in all the cases there is a cutoff size below which the clusters fragment. It is seen that for the case of pure Ar the value is around 95, while that for Xe is 55. For the mixed clusters with increasing Xe content, the cutoff limit for suppression of Coulomb explosion gradually decreases from 95 for a pure Ar to 55 for a pure Xe cluster. The hallmark of this study is this smooth progression. All the clusters are simulated using the reliable potential energy surface developed by Gay and Berne (Gay and Berne, Phys. Rev. Lett. 1982, 49, 194). For the hetero clusters, we have also discussed two different ways of distributing the charge, that is, one in which both positive charges are on two Xe atoms and the other where the two charges are on a Xe atom and an Ar atom. The fragmentation patterns observed by us are such that single ionic ejections are the favored dissociation pattern. © 2017 Wiley Periodicals, Inc.

  8. Free energy landscape from path-sampling: application to the structural transition in LJ38

    NASA Astrophysics Data System (ADS)

    Adjanor, G.; Athènes, M.; Calvo, F.

    2006-09-01

    We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.

  9. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009); doi:10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  10. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  11. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  12. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
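
    The "multiple chains" strategy described above reduces, in its simplest form, to running independent Metropolis chains on separate workers and pooling their draws after burn-in. The sketch below uses a standard normal target as a stand-in for a genomic-selection posterior; the proposal width, chain length, and use of a process pool are illustrative assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def run_chain(seed, n_iter=5000, step=0.5):
        """One independent random-walk Metropolis chain targeting a standard
        normal density (a stand-in for a real Bayesian posterior)."""
        rng = np.random.default_rng(seed)
        x, out = 0.0, np.empty(n_iter)
        for t in range(n_iter):
            prop = x + rng.normal(0.0, step)
            if np.log(rng.random()) < 0.5 * (x * x - prop * prop):  # log target ratio
                x = prop
            out[t] = x
        return out[n_iter // 2:]                                    # discard burn-in

    if __name__ == "__main__":
        with Pool(4) as pool:                       # one worker per chain
            chains = pool.map(run_chain, range(4))  # distinct seeds -> independent chains
        samples = np.concatenate(chains)            # pooled posterior draws
    ```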

  13. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  14. Ab initio powder X-ray diffraction and PIXEL energy calculations on a thiophene-derived 1,4-dihydropyridine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, N., E-mail: karthin10@gmail.com; Sivakumar, K.; Pachamuthu, M. P.

    We focus on the application of powder diffraction data to the ab initio crystal structure determination of a thiophene-derived 1,4-DHP prepared by a cyclocondensation method using a solid catalyst. The crystal structure of the compound has been solved by a direct-space approach based on a Monte Carlo search in parallel tempering mode using the FOX program. Initial atomic coordinates were derived using the Gaussian 09W quantum chemistry software in a semi-empirical approach, and Rietveld refinement was carried out using the GSAS program. The crystal structure of the compound is stabilized by one N-H…O and three C-H…O hydrogen bonds. A PIXEL lattice energy calculation was carried out to understand the physical nature of the intermolecular interactions in the crystal packing, with the total lattice energy partitioned into Coulombic, polarization, dispersion, and repulsion energies.

  15. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)} ≃ 1.9% and p_{3DCC}^{(2)} ≃ 27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  16. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  17. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  18. Fast-NPS-A Markov Chain Monte Carlo-based analysis tool to obtain structural information from single-molecule FRET measurements

    NASA Astrophysics Data System (ADS)

    Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens

    2017-10-01

    The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail, leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided.

    Program files doi: http://dx.doi.org/10.17632/7ztzj63r68.1
    Licensing provisions: Apache-2.0
    Programming language: GUI in MATLAB (The MathWorks); core sampling engine in C++
    Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data.
    Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.

  19. Dynamic and thermodynamic crossover scenarios in the Kob-Andersen mixture: Insights from multi-CPU and multi-GPU simulations.

    PubMed

    Coslovich, Daniele; Ozawa, Misaki; Kob, Walter

    2018-05-17

    The physical behavior of glass-forming liquids presents complex features of both dynamic and thermodynamic nature. Some studies indicate the presence of thermodynamic anomalies and of crossovers in the dynamic properties, but their origin and degree of universality is difficult to assess. Moreover, conventional simulations are barely able to cover the range of temperatures at which these crossovers usually occur. To address these issues, we simulate the Kob-Andersen Lennard-Jones mixture using efficient protocols based on multi-CPU and multi-GPU parallel tempering. Our setup enables us to probe the thermodynamics and dynamics of the liquid at equilibrium well below the critical temperature of the mode-coupling theory, T_MCT. We find that below [Formula: see text] the analysis is hampered by partial crystallization of the metastable liquid, which nucleates extended regions populated by large particles arranged in an fcc structure. By filtering out crystalline samples, we reveal that the specific heat grows in a regular manner down to [Formula: see text]. Possible thermodynamic anomalies suggested by previous studies can thus occur only in a region of the phase diagram where the system is highly metastable. Using the equilibrium configurations obtained from the parallel tempering simulations, we perform molecular dynamics and Monte Carlo simulations to probe the equilibrium dynamics down to [Formula: see text]. A temperature-derivative analysis of the relaxation time and diffusion data allows us to assess different dynamic scenarios around [Formula: see text]. Hints of a dynamic crossover come from analysis of the four-point dynamic susceptibility. Finally, we discuss possible future numerical strategies to clarify the nature of crossover phenomena in glass-forming liquids.

  20. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
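
    The requirement that the paper addresses in hardware, giving every parallel Monte Carlo worker its own statistically independent stream, has a simple software analogue that can serve as a reference point. The sketch below uses NumPy's SeedSequence spawning with the Philox counter-based generator; the FPGA Parallel ALFG itself is not reproduced here.

    ```python
    import numpy as np

    # Spawn non-overlapping child seeds, one per parallel Monte Carlo worker.
    root = np.random.SeedSequence(20180501)
    streams = [np.random.Generator(np.random.Philox(s)) for s in root.spawn(1024)]

    # Each worker draws from its own stream with no coordination, e.g. a toy
    # Monte Carlo estimate of pi per stream (here only the first 8 for brevity).
    pi_estimates = [4.0 * ((g.random((10_000, 2)) ** 2).sum(axis=1) < 1.0).mean()
                    for g in streams[:8]]
    ```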

  1. Replica exchange and expanded ensemble simulations as Gibbs sampling: simple improvements for enhanced mixing.

    PubMed

    Chodera, John D; Shirts, Michael R

    2011-11-21

    The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
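
    One concrete instance of the "judicious updating of the thermodynamic state indices" discussed above is to draw the state index from its full conditional distribution rather than proposing only a neighbouring temperature. The sketch below shows such an independence-style update for a temperature ladder; the weight vector and the surrounding configuration-update loop are assumed, and this is a schematic of the idea rather than the authors' code.

    ```python
    import numpy as np

    def gibbs_update_state(E, betas, log_weights, rng):
        """Sample a new thermodynamic state index k from its full conditional
        p(k | x) proportional to exp(-betas[k] * E + log_weights[k]), where E is
        the energy of the current configuration and log_weights[k] would normally
        be a running estimate of beta_k * F_k that keeps all states visited."""
        logp = -np.asarray(betas, dtype=float) * E + np.asarray(log_weights, dtype=float)
        logp -= logp.max()                       # stabilise before exponentiating
        p = np.exp(logp)
        return rng.choice(len(betas), p=p / p.sum())

    # Usage: after each batch of MD or MC moves at the current state, plug in the
    # configuration's energy and redraw the state index; such an independence update
    # typically mixes the state index better than a nearest-neighbour proposal.
    rng = np.random.default_rng(1)
    k = gibbs_update_state(E=-12.3, betas=[0.5, 1.0, 2.0],
                           log_weights=[0.0, 0.0, 0.0], rng=rng)
    ```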

  2. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  3. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419

  4. Aggregation of peptides in the tube model with correlated sidechain orientations

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Ba; Hoang, Trinh Xuan

    2015-06-01

    The ability of proteins and peptides to aggregate and form toxic amyloid fibrils is associated with a range of diseases including BSE (or mad cow), Alzheimer's and Parkinson's diseases. In this study, we investigate the role of amino acid sequence in the aggregation propensity by using a modified tube model with a new procedure for the hydrophobic interaction. In this model, the amino acid sidechains are not considered explicitly, but their orientations are taken into account in the formation of hydrophobic contacts. Extensive Monte Carlo simulations for systems of short peptides are carried out with the use of the parallel tempering technique. Our results show that the propensity to form aggregates and the structures of the aggregates strongly depend on the amino acid sequence and the number of peptides. Some sequences may not aggregate at all at a presumable physiological temperature, while others can easily form fibril-like, β-sheet structures. Our study provides an insight into the principles of how the formation of amyloid can be governed by the amino acid sequence.

  5. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  6. Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School

    NASA Astrophysics Data System (ADS)

    Lafage, Vincent

    2017-11-01

    Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies in use today: multicore (OpenMP, HPX) and manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code illustrates the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation is also addressed. We give a short report on the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
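
    On the parallel random number generation problem mentioned in this record, one common remedy is to derive a statistically independent stream per worker from a single root seed. The sketch below uses NumPy's SeedSequence spawning purely as an illustration; the seed value, worker count, and workload are invented and have no connection to the Fortran 77 code discussed above.

    ```python
    import numpy as np

    # One independent stream per worker, all derived from a single root seed.
    root = np.random.SeedSequence(20160501)           # seed value is illustrative
    streams = [np.random.default_rng(s) for s in root.spawn(8)]

    # Each worker draws only from its own stream, so results are reproducible
    # and the streams do not overlap, regardless of scheduling order.
    partial_means = [rng.random(100_000).mean() for rng in streams]
    print(partial_means)
    ```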

  7. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
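
    The equal-partitioning scheme described here can be sketched generically as follows. This is not SIMIND code: the per-history score is a stand-in, and the history count and seeds are invented; the sketch only shows histories split across MPI ranks, per-rank random streams, and a final reduction of the tallies.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_total = 1_000_000                                   # photon histories (made up)
    n_local = n_total // size + (1 if rank < n_total % size else 0)

    rng = np.random.default_rng([rank, 2002])             # distinct stream per rank

    local_tally = 0.0
    for _ in range(n_local):
        local_tally += rng.random()                       # stand-in for one history's score

    total = comm.reduce(local_tally, op=MPI.SUM, root=0)  # combine per-rank tallies
    if rank == 0:
        print("mean score per history:", total / n_total)
    ```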

  8. Multisystem altruistic metadynamics—Well-tempered variant

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Kříž, Pavel; Toulcová, Daniela; Spiwok, Vojtěch

    2017-03-01

    The metadynamics method has been widely used to enhance sampling in molecular simulations. Its original form suffers from two major drawbacks: poor convergence in complex (especially biomolecular) systems and its serial nature. The first drawback has been addressed by the introduction of a convergent variant known as well-tempered metadynamics. The second was addressed by the introduction of a parallel multisystem metadynamics referred to as altruistic metadynamics. Here, we combine both approaches into well-tempered altruistic metadynamics. We provide mathematical arguments and trial simulations to show that it accurately predicts free energy surfaces.
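
    For reference, the altruistic variants above build on the standard well-tempered metadynamics relations, in which the height of each deposited Gaussian hill is damped by the bias already accumulated and the bias converges to a scaled free energy. The equations below restate that standard scheme (they are not specific to the altruistic variant); T is the system temperature, ΔT the bias temperature, and F(s) the free energy along the collective variable s.

    ```latex
    % Standard well-tempered metadynamics: damped hill height and converged bias.
    w(t) = w_0 \exp\!\left[-\frac{V\bigl(s(t),t\bigr)}{k_B\,\Delta T}\right],
    \qquad
    V(s,\,t\to\infty) = -\frac{\Delta T}{T+\Delta T}\,F(s) + \mathrm{const.}
    ```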

  9. Multisystem altruistic metadynamics-Well-tempered variant.

    PubMed

    Hošek, Petr; Kříž, Pavel; Toulcová, Daniela; Spiwok, Vojtěch

    2017-03-28

    The metadynamics method has been widely used to enhance sampling in molecular simulations. Its original form suffers from two major drawbacks: poor convergence in complex (especially biomolecular) systems and its serial nature. The first drawback has been addressed by the introduction of a convergent variant known as well-tempered metadynamics. The second was addressed by the introduction of a parallel multisystem metadynamics referred to as altruistic metadynamics. Here, we combine both approaches into well-tempered altruistic metadynamics. We provide mathematical arguments and trial simulations to show that it accurately predicts free energy surfaces.

  10. A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport

    NASA Astrophysics Data System (ADS)

    Robinson, P. B.; Peterson, J. D. L.

    2005-12-01

    The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  11. Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.

    PubMed

    Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen

    2015-03-01

    Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. The interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with respect to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited for modelling small detector geometries. The Monte Carlo model developed is a useful tool to investigate the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.

  12. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide the lattice into noninteracting, interpenetrating sublattices. This subdivision enables us to parallelize the Monte Carlo calculation in the SCO method. Such a subdivision is found by numerically solving a vertex-coloring problem on a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. The method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The results showed that, in the case of L = 2304, the speed of computation increased about 102 times with parallel computation on 288 processors.
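
    The record uses the distributed Kuhn-Wattenhofer coloring algorithm; the sketch below instead applies a plain serial greedy coloring to a made-up interaction graph, only to illustrate how a vertex coloring yields sublattices of mutually non-interacting sites that can be updated concurrently.

    ```python
    def greedy_coloring(adjacency):
        """Assign each site the smallest color not used by any of its neighbors.
        Sites with the same color share no bond and can be updated concurrently."""
        colors = {}
        for site in sorted(adjacency, key=lambda s: -len(adjacency[s])):
            taken = {colors[n] for n in adjacency[site] if n in colors}
            c = 0
            while c in taken:
                c += 1
            colors[site] = c
        return colors

    # toy interaction graph left after the stochastic cutoff (entirely made up)
    adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: {5}, 5: {4}}
    colors = greedy_coloring(adjacency)

    sublattices = {}
    for site, c in colors.items():
        sublattices.setdefault(c, []).append(site)
    print(sublattices)   # each list is a set of mutually non-interacting sites
    ```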

  13. Study of the temperature configuration of parallel tempering for the traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu

    The effective temperature configuration of parallel tempering (PT) in finite-time optimization is studied for the solution of the traveling salesman problem. An experimental analysis is conducted to determine the relative importance of two characteristic temperatures: the specific-heat-peak temperature referred to in the general guidelines, and the effective intermediate temperature identified in a recent study on simulated annealing (SA). The results show that, contrary to conventional belief, operation near the former has no notable significance, whereas operation near the latter plays a crucial role in fulfilling the optimization function of PT. The method shares the same origin of effectiveness with SA and SA-related algorithms.

  14. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to developments in computer technology. However, recent gains are due to the emergence of multi-core high-performance computers, so parallel computing has become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the Message Passing Interface (MPI) and shared-memory parallelization using Open Multi-Processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.

  15. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed-source mode, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.

  16. Recent advances and future prospects for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  17. Replica exchange with solute tempering: A method for sampling biological systems in explicit water

    NASA Astrophysics Data System (ADS)

    Liu, Pu; Kim, Byungchan; Friesner, Richard A.; Berne, B. J.

    2005-09-01

    An innovative replica exchange (parallel tempering) method called replica exchange with solute tempering (REST) for the efficient sampling of aqueous protein solutions is presented here. The method bypasses the poor scaling with system size of standard replica exchange and thus reduces the number of replicas (parallel processes) that must be used. This reduction is accomplished by deforming the Hamiltonian function for each replica in such a way that the acceptance probability for the exchange of replica configurations does not depend on the number of explicit water molecules in the system. For proof of concept, REST is compared with standard replica exchange for an alanine dipeptide molecule in water. The comparisons confirm that REST greatly reduces the number of CPUs required by regular replica exchange and increases the sampling efficiency. This method reduces the CPU time required for calculating thermodynamic averages and for the ab initio folding of proteins in explicit water.
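
    One commonly quoted statement of the REST construction (the prefactors should be checked against the original paper) is the scaled replica Hamiltonian below, with pp, pw, and ww denoting protein-protein, protein-water, and water-water energies and β_0 the target inverse temperature. The point is that the water-water energy enters β_m E_m only through a replica-independent term, so it cancels in the exchange acceptance probability.

    ```latex
    % Scaled replica Hamiltonian commonly quoted for REST.
    E_m(X) = E_{pp}(X) + \frac{\beta_0 + \beta_m}{2\beta_m}\,E_{pw}(X)
           + \frac{\beta_0}{\beta_m}\,E_{ww}(X),
    \qquad
    \beta_m E_m(X) = \beta_m E_{pp}(X) + \frac{\beta_0 + \beta_m}{2}\,E_{pw}(X)
                   + \beta_0\,E_{ww}(X),
    % so the water-water contribution is identical for every replica and drops
    % out of the exchange acceptance ratio.
    ```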

  18. Molecular Simulation of the Phase Diagram of Methane Hydrate: Free Energy Calculations, Direct Coexistence Method, and Hyperparallel Tempering.

    PubMed

    Jin, Dongliang; Coasne, Benoit

    2017-10-24

    Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.

  19. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  20. Efficient Simulation of Explicitly Solvated Proteins in the Well-Tempered Ensemble.

    PubMed

    Deighan, Michael; Bonomi, Massimiliano; Pfaendtner, Jim

    2012-07-10

    Herein, we report a significant reduction in the cost of combined parallel tempering and metadynamics simulations (PTMetaD). The efficiency boost is achieved using the recently proposed well-tempered ensemble (WTE) algorithm. We studied the convergence of PTMetaD-WTE conformational sampling and free energy reconstruction for an explicitly solvated 20-residue tryptophan-cage protein (trp-cage). A set of PTMetaD-WTE simulations was compared to a corresponding standard PTMetaD simulation. The properties of PTMetaD-WTE and the convergence of the calculations were compared, and the roles of the number of replicas, the total simulation time, and the adjustable WTE parameter γ were studied.

  1. Bayesian tomography by interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chains, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival traveltime tomography.
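
    For context, the basic nearest-neighbor parallel tempering exchange step that the proposed interacting-chain schemes generalize can be sketched as follows; the function name, data layout, and the dummy ladder at the end are hypothetical.

    ```python
    import numpy as np

    def attempt_neighbor_swaps(energies, betas, configs, rng):
        """One sweep of nearest-neighbor replica-exchange attempts.
        configs[k] is the configuration currently held at inverse temperature
        betas[k]; energies[k] is its potential energy (or misfit)."""
        for k in range(len(betas) - 1):
            # Metropolis criterion for exchanging the two configurations
            log_acc = (betas[k + 1] - betas[k]) * (energies[k + 1] - energies[k])
            if np.log(rng.random()) < log_acc:
                configs[k], configs[k + 1] = configs[k + 1], configs[k]
                energies[k], energies[k + 1] = energies[k + 1], energies[k]
        return configs, energies

    # hypothetical ladder: 6 temperatures, dummy configurations and energies
    rng = np.random.default_rng(1)
    betas = 1.0 / np.linspace(1.0, 3.0, 6)
    configs = [f"x{k}" for k in range(6)]
    energies = list(rng.normal(-100.0, 5.0, 6))
    configs, energies = attempt_neighbor_swaps(energies, betas, configs, rng)
    ```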

  2. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    NASA Astrophysics Data System (ADS)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  3. Evaluating long-term cumulative hydrologic effects of forest management: a conceptual approach

    Treesearch

    Robert R. Ziemer

    1992-01-01

    It is impractical to address experimentally many aspects of cumulative hydrologic effects, since to do so would require studying large watersheds for a century or more. Monte Carlo simulations were conducted using three hypothetical 10,000-ha fifth-order forested watersheds. Most of the physical processes expressed by the model are transferable from temperate to...

  4. SUPREM-DSMC: A New Scalable, Parallel, Reacting, Multidimensional Direct Simulation Monte Carlo Flow Code

    NASA Technical Reports Server (NTRS)

    Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas

    2000-01-01

    An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.

  5. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.

  6. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  7. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Based on the working principle of the digital calibration instrument for the optical axis parallelism of binocular photoelectric instruments, and considering all components of the instrument, the various factors affecting system precision are analyzed and a precision analysis model is established. Based on the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error budget, help optimize and control the factors with the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
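
    The Monte Carlo error-propagation idea can be sketched generically as follows. The error sources, their distributions, and the toy measurement model are invented for illustration and are not the instrument model analyzed in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical error sources, each drawn from its assumed distribution:
    detector_noise = rng.normal(0.0, 0.5, n)        # pixels
    mounting_tilt  = rng.uniform(-0.2, 0.2, n)      # pixels, uniform tolerance
    thermal_drift  = rng.normal(0.0, 0.1, n)        # pixels

    # Toy measurement model: shift of the circular-target image centre coordinate.
    centre_shift = detector_noise + mounting_tilt + thermal_drift

    print("mean shift  :", centre_shift.mean())
    print("1-sigma     :", centre_shift.std())
    print("95% interval:", np.percentile(centre_shift, [2.5, 97.5]))
    ```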

  8. Portable multi-node LQCD Monte Carlo simulations using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization on multiple computing nodes using OpenACC to manage parallelism within the node, and OpenMPI to manage parallelism among the nodes. We first discuss the available strategies to be adopted to maximize performances, we then describe selected relevant details of the code, and finally measure the level of performance and scaling-performance that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly high level of performances for this application, but also compares with results measured on other processors.

  9. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.

  10. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the platform of the Microsoft® cloud), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
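
    As a back-of-envelope check of the reported figures, Amdahl's law can be inverted for the implied parallelizable fraction. The snippet below uses only the numbers quoted above (64 instances, roughly a 37× speedup) and ignores the growth of the non-parallelizable fraction that the authors note.

    ```python
    # Amdahl's law: S(N) = 1 / ((1 - p) + p / N), p = parallelizable fraction.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    def parallel_fraction(speedup, n):
        # invert Amdahl's law for the implied parallel fraction p
        return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

    n, observed = 64, 37.0                 # 64 instances, ~37x speedup (from the record)
    p = parallel_fraction(observed, n)     # ~0.988, i.e. ~1.2% non-parallelizable time
    print(f"implied parallel fraction: {p:.3f}")
    print(f"predicted speedup on 128 instances: {amdahl_speedup(p, 128):.1f}x")
    ```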

  11. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures and, at a given swap rate, exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous-time jump Markov processes. Using the large deviations rate function and associated stochastic control problems, we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.

  12. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report describes the design of the architecture and a performance study of a parallel computing environment for Monte Carlo simulation for particle therapy planning, using a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than that of a single-threaded architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost.

  13. Electrostatics-mediated α-chymotrypsin inhibition by functionalized single-walled carbon nanotubes.

    PubMed

    Zhao, Daohui; Zhou, Jian

    2017-01-04

    The α-chymotrypsin (α-ChT) enzyme is extensively used for studying nanomaterial-induced enzymatic activity inhibition. A recent experimental study reported that carboxylized carbon nanotubes (CNTs) played an important role in regulating α-ChT activity. In this study, parallel tempering Monte Carlo and molecular dynamics simulations were combined to elucidate the interactions between α-ChT and CNTs in relation to the CNT functional group density. The simulation results indicate that the adsorption and the driving force of α-ChT on different CNTs are contingent on the carboxyl density. Meanwhile, minor secondary structural changes are observed in the adsorption processes. It is revealed that α-ChT interacts with pristine CNTs through hydrophobic forces and exhibits a non-competitive characteristic with the active site facing towards the solution, while it binds to carboxylized CNTs with the active pocket through a dominant electrostatic association, which causes enzymatic activity inhibition in a competitive-like mode. These findings are in line with experimental results and interpret the activity inhibition of α-ChT at the molecular level. Moreover, this study sheds light on the detailed mechanism of specific recognition and regulation of α-ChT by other functionalized nanomaterials.

  14. Bayesian inference on EMRI signals using low frequency approximations

    NASA Astrophysics Data System (ADS)

    Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian

    2012-07-01

    Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes the detection and parameter estimation of such sources is a challenging task. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high and medium mass EMRI systems that fall well inside the low frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and Einstein Telescope with their respective response functions.

  15. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu

    2012-10-01

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional time-step window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work-load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
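
    The splitting at the heart of such fractional-step schemes is the Trotter (Lie) product formula; written for just two pieces of the generator for simplicity, it reads as below. Each factor can then be evolved independently over a short time window, which is what makes the decomposition parallelizable at a controlled splitting error.

    ```latex
    % Lie-Trotter product formula for a generator split into two pieces,
    % L = L_1 + L_2 (e.g. two groups of subdomains):
    e^{t(L_1 + L_2)} = \lim_{n\to\infty}\Bigl(e^{\frac{t}{n}L_1}\,e^{\frac{t}{n}L_2}\Bigr)^{\!n},
    \qquad
    e^{\Delta t\,(L_1 + L_2)} = e^{\Delta t\,L_1}\,e^{\Delta t\,L_2} + O(\Delta t^2).
    ```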

  16. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain replicated and domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
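
    The underlying floating-point issue is easy to demonstrate: the snippet below shows that double-precision addition is order dependent, and that an exactly rounded sum (Python's math.fsum, used here only as a stand-in for the extended-precision accumulations discussed in the record) removes the dependence on summation order.

    ```python
    import math

    # Double-precision addition is not associative, so reordering a parallel
    # reduction changes the last bits of the result:
    print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))   # 0.6

    # An exactly rounded sum is independent of summation order; math.fsum is
    # used only as a stand-in for the extended-precision tallies above.
    tallies = [0.1, 0.2, 0.3] * 1000
    print(math.fsum(tallies) == math.fsum(reversed(tallies)))   # True
    ```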

  17. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

  18. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.

  19. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on a shared-memory SMP system using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  20. Performance analysis of a parallel Monte Carlo code for simulating solar radiative transfer in cloudy atmospheres using CUDA-enabled NVIDIA GPU

    NASA Astrophysics Data System (ADS)

    Russkova, Tatiana V.

    2017-11-01

    One tool to improve the performance of Monte Carlo methods for the numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on a CUDA-enabled NVIDIA graphics processor is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capabilities are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.

  1. Massively parallel multicanonical simulations

    NASA Astrophysics Data System (ADS)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.

  2. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • domain decomposition of constructive solid geometry, which enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node; • load balancing, which keeps the workload per processor as even as possible so the calculation runs efficiently; • global particle find, which, if particles are on the wrong processor, globally resolves their locations to the correct processor based on particle coordinates and the background domain; • visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.

  3. Monte Carlo: in the beginning and some great expectations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metropolis, N.

    1985-01-01

    The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  4. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization, making them an attractive consideration in anticipation of next-generation computing architectures, which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US [1]. The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: [1] M. W. Schmidt et al., J. Comput. Chem. 14, 1347 (1993). [2] R. J. Needs et al., J. Phys.: Condens. Matter 22, 023201 (2010).

  5. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    PubMed

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.

  6. Effect of Aspergillus niger xylanase on dough characteristics and bread quality attributes.

    PubMed

    Ahmad, Zulfiqar; Butt, Masood Sadiq; Ahmed, Anwaar; Riaz, Muhammad; Sabir, Syed Mubashar; Farooq, Umar; Rehman, Fazal Ur

    2014-10-01

    The present study was conducted to investigate the impact of xylanase produced by Aspergillus niger, applied at different stages of the bread-making process (during tempering of wheat kernels and during dough mixing), on dough quality characteristics (dryness, stiffness, elasticity, extensibility and coherency) and bread quality parameters (volume, specific volume, density, moisture retention and sensory attributes). Different doses (200, 400, 600, 800 and 1,000 IU) of purified enzyme were applied in parallel to 1 kg of wheat grains during tempering and to 1 kg of flour (straight grade flour) during dough mixing. The samples of wheat kernels were agitated at different intervals for uniformity in tempering. After milling and dough making, both types of flour (enzyme treatment during tempering or during flour mixing) showed improved dough characteristics, but the improvement was more prominent in the samples receiving enzyme treatment during tempering. Moreover, xylanase decreased the dryness and stiffness of the dough, increased its elasticity, extensibility and coherency, increased loaf volume and decreased bread density. Xylanase treatments also resulted in higher moisture retention and improved sensory attributes of the bread. From the results, it is concluded that dough characteristics and bread quality improved significantly in response to enzyme treatment during tempering as compared to application during mixing.

  7. A Proposed Solution to the Problem with Using Completely Random Data to Assess the Number of Factors with Parallel Analysis

    ERIC Educational Resources Information Center

    Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo

    2012-01-01

    A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…

  8. A Gene-Oriented Haplotype Comparison Reveals Recently Selected Genomic Regions in Temperate and Tropical Maize Germplasm

    PubMed Central

    Zhang, Jie; Li, Yongxiang; Zheng, Jun; Zhang, Hongwei; Yang, Xiaohong; Wang, Jianhua; Wang, Guoying

    2017-01-01

    The extensive genetic variation present in maize (Zea mays) germplasm makes it possible to detect signatures of positive artificial selection that occurred during temperate and tropical maize improvement. Here we report an analysis of 532,815 polymorphisms from a maize association panel consisting of 368 diverse temperate and tropical inbred lines. We developed a gene-oriented approach adapting exonic polymorphisms to identify recently selected alleles by comparing haplotypes across the maize genome. This analysis revealed evidence of selection for more than 1100 genomic regions during recent improvement, and included regulatory genes and key genes with visible mutant phenotypes. We find that selected candidate target genes in temperate maize are enriched in biosynthetic processes, and further examination of these candidates highlights two cases, sucrose flux and oil storage, in which multiple genes in a common pathway can be cooperatively selected. Finally, based on available parallel gene expression data, we hypothesize that some genes were selected for regulatory variations, resulting in altered gene expression. PMID:28099470

  9. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    NASA Astrophysics Data System (ADS)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase-space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel on CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.

  10. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
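
    As a generic illustration of the grand canonical Monte Carlo sampling that such a code performs (this is not the ParaGrandMC implementation or its FORTRAN interface), the sketch below runs single-site insertion/deletion moves on a toy 2D lattice gas; all parameters are illustrative.

```python
import numpy as np

def gcmc_lattice_gas(L=20, beta=1.0, mu=-1.0, eps=-1.0, sweeps=500, seed=0):
    """Toy grand canonical Monte Carlo for a 2D lattice gas: single-site
    insertion/deletion moves accepted with the Metropolis criterion
    min(1, exp(-beta*(dU - mu*dN)))."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=int)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        neighbors = occ[(i + 1) % L, j] + occ[(i - 1) % L, j] \
                  + occ[i, (j + 1) % L] + occ[i, (j - 1) % L]
        dN = 1 - 2 * occ[i, j]          # +1 for insertion, -1 for deletion
        dU = eps * neighbors * dN       # nearest-neighbor interaction energy change
        if rng.random() < np.exp(-beta * (dU - mu * dN)):
            occ[i, j] ^= 1
        # (averages of N or U could be accumulated here to map out a phase diagram)
    return occ.mean()                   # average site occupancy

print("mean occupancy:", gcmc_lattice_gas())
```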

  11. Distance between configurations in Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya

    2017-12-01

    For a given Markov chain Monte Carlo algorithm we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms which generate local moves in the configuration space. We explicitly calculate the distance for the Langevin algorithm and show that it indeed has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution is dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.

  12. Uncertainty quantification of seabed parameters for large data volumes along survey tracks with a tempered particle filter

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Quijano, J. E.; Dosso, S. E.; Holland, C. W.; Mandolesi, E.

    2016-12-01

    Geophysical seabed properties are important for the detection and classification of unexploded ordnance. However, current surveying methods such as vertical seismic profiling, coring, or inversion are of limited use when surveying large areas with high spatial sampling density. We consider surveys based on a source and receiver array towed by an autonomous vehicle which produce large volumes of seabed reflectivity data that contain unprecedented and detailed seabed information. The data are analyzed with a particle filter, which requires efficient reflection-coefficient computation, efficient inversion algorithms and efficient use of computer resources. The filter quantifies information content of multiple sequential data sets by considering results from previous data along the survey track to inform the importance sampling at the current point. Challenges arise from environmental changes along the track where the number of sediment layers and their properties change. This is addressed by a trans-dimensional model in the filter which allows layering complexity to change along a track. Efficiency is improved by likelihood tempering of various particle subsets and including exchange moves (parallel tempering). The filter is implemented on a hybrid computer that combines central processing units (CPUs) and graphics processing units (GPUs) to exploit three levels of parallelism: (1) fine-grained parallel computation of spherical reflection coefficients with a GPU implementation of Levin integration; (2) updating particles by concurrent CPU processes which exchange information using automatic load balancing (coarse-grained parallelism); (3) overlapping CPU-GPU communication (a major bottleneck) with GPU computation by staggering CPU access to the multiple GPUs. The algorithm is applied to spherical reflection coefficients for data sets along a 14-km track on the Malta Plateau, Mediterranean Sea. We demonstrate substantial efficiency gains over previous methods. [This research was supported in part by the U.S. Dept. of Defense, through the Strategic Environmental Research and Development Program (SERDP).]
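
    The sketch below illustrates only the likelihood-tempering ingredient of such a filter: the likelihood is switched on gradually through a sequence of exponents and the particles are resampled at each stage. It is a generic, assumed formulation; the paper's filter additionally uses trans-dimensional moves, parallel tempering exchanges and CPU/GPU load balancing.

```python
import numpy as np

def tempered_update(particles, loglike, n_steps=5, seed=0):
    """One tempered update: the likelihood is introduced gradually via
    exponents 0 < phi_1 < ... < phi_n = 1, resampling at each stage."""
    rng = np.random.default_rng(seed)
    n = len(particles)
    phis = np.linspace(0.0, 1.0, n_steps + 1)
    for phi_prev, phi in zip(phis[:-1], phis[1:]):
        logw = (phi - phi_prev) * loglike          # incremental tempered weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)           # multinomial resampling
        particles, loglike = particles[idx], loglike[idx]
        # (an MCMC rejuvenation move on each particle would normally follow here)
    return particles

# Example: scalar parameter with Gaussian prior samples and a Gaussian likelihood near 2.0
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 3.0, size=5000)
loglike = -0.5 * ((theta - 2.0) / 0.5) ** 2
post = tempered_update(theta, loglike)
print("posterior mean ~", post.mean())
```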

  13. Microstructure and Mechanical Properties of Laser Clad and Post-cladding Tempered AISI H13 Tool Steel

    NASA Astrophysics Data System (ADS)

    Telasang, Gururaj; Dutta Majumdar, Jyotsna; Wasekar, Nitin; Padmanabham, G.; Manna, Indranil

    2015-05-01

    This study reports a detailed investigation of the microstructure and mechanical properties (wear resistance and tensile strength) of hardened and tempered AISI H13 tool steel substrate following laser cladding with AISI H13 tool steel powder in as-clad and after post-cladding conventional bulk isothermal tempering [at 823 K (550 °C) for 2 hours] heat treatment. Laser cladding was carried out on AISI H13 tool steel substrate using a 6 kW continuous wave diode laser coupled with fiber delivering an energy density of 133 J/mm2 and equipped with a co-axial powder feeding nozzle capable of feeding powder at the rate of 13.3 × 10-3 g/mm2. Laser clad zone comprises martensite, retained austenite, and carbides, and measures an average hardness of 600 to 650 VHN. Subsequent isothermal tempering converted the microstructure into one with tempered martensite and uniform dispersion of carbides with a hardness of 550 to 650 VHN. Interestingly, laser cladding introduced residual compressive stress of 670 ± 15 MPa, which reduces to 580 ± 20 MPa following isothermal tempering. Micro-tensile testing with specimens machined from the clad zone across or transverse to cladding direction showed high strength but failure in brittle mode. On the other hand, similar testing with samples sectioned from the clad zone parallel or longitudinal to the direction of laser cladding prior to and after post-cladding tempering recorded lower strength but ductile failure with 4.7 and 8 pct elongation, respectively. Wear resistance of the laser surface clad and post-cladding tempered samples (evaluated by fretting wear testing) registered superior performance as compared to that of conventional hardened and tempered AISI H13 tool steel.

  14. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  15. Linking Well-Tempered Metadynamics Simulations with Experiments

    PubMed Central

    Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele

    2010-01-01

    Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data. PMID:20441734
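
    A minimal sketch of the well-tempered bias deposition rule on a toy one-dimensional double well is given below, assuming overdamped Langevin dynamics and illustrative parameters; it is not the simulation protocol used in the paper.

```python
import numpy as np

def well_tempered_metadynamics(n_steps=100_000, seed=0):
    """Well-tempered metadynamics on a toy 1D double well U(x) = (x^2 - 1)^2,
    sampled with overdamped Langevin dynamics.  Gaussian hills are deposited
    on the collective variable s = x with a height that decays as
    exp(-V_bias(s)/dT), the defining rule of the well-tempered variant."""
    rng = np.random.default_rng(seed)
    kT, dT = 0.2, 2.0              # thermal energy and well-tempered "Delta T" (energy units)
    w0, sigma, stride, dt = 0.05, 0.1, 500, 1e-3
    centers, heights = [], []
    x = -1.0

    def bias(s):                   # V_bias(s) = sum of deposited Gaussians
        if not centers:
            return 0.0
        c, h = np.array(centers), np.array(heights)
        return np.sum(h * np.exp(-0.5 * ((s - c) / sigma) ** 2))

    def bias_force(s):             # -dV_bias/ds
        if not centers:
            return 0.0
        c, h = np.array(centers), np.array(heights)
        g = h * np.exp(-0.5 * ((s - c) / sigma) ** 2)
        return np.sum(g * (s - c) / sigma ** 2)

    for step in range(n_steps):
        force = -4.0 * x * (x ** 2 - 1.0) + bias_force(x)
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        if step % stride == 0:
            heights.append(w0 * np.exp(-bias(x) / dT))   # well-tempered hill height
            centers.append(x)
    return np.array(centers), np.array(heights)

centers, heights = well_tempered_metadynamics()
print("hills deposited:", len(centers),
      " first/last hill height:", heights[0], heights[-1])
```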

  16. Low frequency full waveform seismic inversion within a tree based Bayesian framework

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe

    2018-01-01

    Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree based Bayesian inversion scheme which attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples are initially started from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree based approach together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.
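
    The following is a generic sketch of the parallel tempering (replica exchange) ingredient mentioned above, applied to a deliberately bimodal one-dimensional log-posterior; it is not the trans-dimensional tree-based sampler of the paper, and all tuning parameters are illustrative.

```python
import numpy as np

def parallel_tempering(logpost, x0, betas, n_steps=50_000, step=0.5, seed=0):
    """Minimal parallel tempering: each replica runs a Metropolis random walk
    at inverse temperature beta_k, and adjacent replicas periodically attempt
    to swap configurations with probability
    min(1, exp((beta_i - beta_j) * (logpost_j - logpost_i)))."""
    rng = np.random.default_rng(seed)
    K = len(betas)
    x = np.full(K, float(x0))
    lp = np.array([logpost(xi) for xi in x])
    cold = np.empty(n_steps)
    for t in range(n_steps):
        for k in range(K):                      # within-replica Metropolis updates
            prop = x[k] + step * rng.standard_normal()
            lp_prop = logpost(prop)
            if np.log(rng.random()) < betas[k] * (lp_prop - lp[k]):
                x[k], lp[k] = prop, lp_prop
        if t % 10 == 0:                         # swap move between a random adjacent pair
            k = rng.integers(K - 1)
            if np.log(rng.random()) < (betas[k] - betas[k + 1]) * (lp[k + 1] - lp[k]):
                x[k], x[k + 1] = x[k + 1], x[k]
                lp[k], lp[k + 1] = lp[k + 1], lp[k]
        cold[t] = x[0]                          # record the beta = 1 (cold) chain
    return cold

# Bimodal target: two narrow modes at -3 and +3 separated by a deep barrier.
logpost = lambda z: np.logaddexp(-0.5 * ((z + 3) / 0.3) ** 2,
                                 -0.5 * ((z - 3) / 0.3) ** 2)
samples = parallel_tempering(logpost, x0=-3.0, betas=[1.0, 0.5, 0.25, 0.1])
print("fraction of cold-chain samples in the +3 mode:", np.mean(samples > 0))
```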

  17. Molecular simulation study of feruloyl esterase adsorption on charged surfaces: effects of surface charge density and ionic strength.

    PubMed

    Liu, Jie; Peng, Chunwang; Yu, Gaobo; Zhou, Jian

    2015-10-06

    The surrounding conditions, such as surface charge density and ionic strength, play an important role in enzyme adsorption. The adsorption of a nonmodular type-A feruloyl esterase from Aspergillus niger (AnFaeA) on charged surfaces was investigated by parallel tempering Monte Carlo (PTMC) and all-atom molecular dynamics (AAMD) simulations at different surface charge densities (±0.05 and ±0.16 C·m⁻²) and ionic strengths (0.007 and 0.154 M). The adsorption energy, orientation, and conformational changes were analyzed. Simulation results show that whether AnFaeA can adsorb onto a charged surface is mainly controlled by electrostatic interactions between AnFaeA and the charged surface. The electrostatic interactions between AnFaeA and charged surfaces are weakened when the ionic strength increases. The positively charged surface at low surface charge density and high ionic strength conditions can maximize the utilization of the immobilized AnFaeA. The counterion layer plays a key role in the adsorption of AnFaeA on the negatively charged COOH-SAM. The native conformation of AnFaeA is well preserved under all of these conditions. The results of this work can be used for the controlled immobilization of AnFaeA.

  18. Free energy landscape of protein-like chains with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Movahed, Hanif Bayat; van Zon, Ramses; Schofield, Jeremy

    2012-06-01

    In this article the configurational space of two simple protein models consisting of polymers composed of a periodic sequence of four different kinds of monomers is studied as a function of temperature. In the protein models, hydrogen bond interactions, electrostatic repulsion, and covalent bond vibrations are modeled by discontinuous step, shoulder, and square-well potentials, respectively. The protein-like chains exhibit a secondary alpha helix structure in their folded states at low temperatures, and allow a natural definition of a configuration by considering which beads are bonded. Free energies and entropies of configurations are computed using the parallel tempering method in combination with hybrid Monte Carlo sampling of the canonical ensemble of the discontinuous potential system. The probability of observing the most common configuration is used to analyze the nature of the free energy landscape, and it is found that the model with the least number of possible bonds exhibits a funnel-like free energy landscape at low enough temperature for chains with fewer than 30 beads. For longer proteins, the free energy landscape consists of several minima, where the configuration with the lowest free energy changes significantly as the temperature is lowered, and the probability of observing the most common configuration never approaches one due to the degeneracy of the lowest accessible potential energy.

  19. Morphology and properties of low-carbon bainite

    NASA Astrophysics Data System (ADS)

    Ohtani, H.; Okaguchi, S.; Fujishiro, Y.; Ohmori, Y.

    1990-03-01

    The morphology of low-carbon bainite in commercial-grade high-tensile-strength steels, in both isothermal transformation and continuous cooling transformation, is lathlike ferrite elongated in the <111>b direction. Based on carbide distribution, three types of bainite are classified: Type I is carbide-free, Type II has fine carbide platelets lying between laths, and Type III has carbides parallel to a specific ferrite plane. At the initial stage of transformation, upper bainitic ferrite forms a subunit elongated in the [-101]f direction, which is nearly parallel to the [111]b direction, with a parallelogram-shaped cross section. Coalescence of the subunits yields the lathlike bainite with the [-101]f growth direction and a habit plane between (232)f and (111)f. Cementite particles precipitate on the sidewise growth tips of the Type II bainitic ferrite subunit. This results in the cementite platelets aligning parallel to a specific ferrite plane in the laths after coalescence. These morphologies of bainite are the same in various kinds of low-carbon high-strength steels. The lowest brittle-ductile transition temperature and the highest strength were obtained either by Type III bainite or by a bainite/martensite duplex structure, because the crack path is limited by the fine unit microstructure. It should also be noted that the tempered duplex structure has higher strength than tempered martensite in the tempering temperature range between 200 °C and 500 °C. In the case of controlled rolling, the subsequent accelerated cooling produces a complex structure comprised of ferrite, cementite, and martensite as well as Type I bainite. The Type I bainite in this structure is refined by controlled rolling and plays a very important role in improving the strength and toughness of low-carbon steels.

  20. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
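
    A toy sketch of the same map/reduce decomposition is shown below using Python's multiprocessing in place of Hadoop: each "map" task simulates a batch of photon histories with simplified physics and returns a partial absorption tally, and the "reduce" step sums the partial tallies. The optical coefficients and batch sizes are illustrative, and the physics is only a stand-in for the MC321 package.

```python
import numpy as np
from multiprocessing import Pool

MU_A, MU_S = 0.1, 1.0        # toy absorption / scattering coefficients (1/cm)
BINS, DZ = 50, 0.2           # depth histogram: 50 bins of 0.2 cm

def map_task(args):
    """'Map' task: simulate one batch of photon histories in a semi-infinite
    medium and return a partial depth-resolved absorption tally."""
    n_photons, seed = args
    rng = np.random.default_rng(seed)
    albedo = MU_S / (MU_A + MU_S)
    tally = np.zeros(BINS)
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0                       # depth, direction cosine, weight
        while w > 1e-3:
            z += uz * rng.exponential(1.0 / (MU_A + MU_S))
            if z < 0.0:                                # photon escaped through the surface
                break
            b = min(int(z / DZ), BINS - 1)
            tally[b] += w * (1.0 - albedo)             # score absorbed weight
            w *= albedo
            uz = rng.uniform(-1.0, 1.0)                # isotropic re-scattering
    return tally

if __name__ == "__main__":
    batches = [(5_000, seed) for seed in range(8)]     # 8 independent "map" tasks
    with Pool() as pool:
        partials = pool.map(map_task, batches)         # map phase
    absorption = np.sum(partials, axis=0)              # reduce phase: sum partial tallies
    print("absorbed fraction:", absorption.sum() / (8 * 5_000))
```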

  1. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    DOE PAGES

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  2. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with a discrete diffusion Monte Carlo (DDMC) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  3. Essential slow degrees of freedom in protein-surface simulations: A metadynamics investigation.

    PubMed

    Prakash, Arushi; Sprenger, K G; Pfaendtner, Jim

    2018-03-29

    Many proteins exhibit strong binding affinities to surfaces, with binding energies much greater than thermal fluctuations. When modelling these protein-surface systems with classical molecular dynamics (MD) simulations, the large forces that exist at the protein/surface interface generally confine the system to a single free energy minimum. Exploring the full conformational space of the protein, especially finding other stable structures, becomes prohibitively expensive. Coupling MD simulations with metadynamics (enhanced sampling) has fast become a common method for sampling the adsorption of such proteins. In this paper, we compare three different flavors of metadynamics, specifically well-tempered, parallel-bias, and parallel-tempering in the well-tempered ensemble, to exhaustively sample the conformational surface-binding landscape of model peptide GGKGG. We investigate the effect of mobile ions and ion charge, as well as the choice of collective variable (CV), on the binding free energy of the peptide. We make the case for explicitly biasing ions to sample the true binding free energy of biomolecules when the ion concentration is high and the binding free energies of the solute and ions are similar. We also make the case for choosing CVs that apply bias to all atoms of the solute to speed up calculations and obtain the maximum possible amount of information about the system. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.

  5. Linking well-tempered metadynamics simulations with experiments.

    PubMed

    Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele

    2010-05-19

    Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data. Copyright (c) 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  6. Parallel distributed, reciprocal Monte Carlo radiation in coupled, large eddy combustion simulations

    NASA Astrophysics Data System (ADS)

    Hunsaker, Isaac L.

    Radiation is the dominant mode of heat transfer in high temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bi-directional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent, large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain. The radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the message passing interface. Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU-radiation model and GPU-radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.

  7. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a low response time.
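
    The sketch below illustrates the same pattern of reproducible, statistically independent per-process random number streams, using numpy's SeedSequence spawning in place of SPRNG/DCMT and a toy energy-loss tally in place of the MC4 physics; all numbers are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def worker(seed_seq):
    """One parallel task with its own statistically independent random number
    stream.  The tally (mean number of collisions before a particle's energy
    falls below a cutoff) is a toy stand-in for the real transport physics."""
    rng = np.random.default_rng(seed_seq)
    n_particles, collisions = 50_000, 0
    for _ in range(n_particles):
        energy = 1.0
        while energy > 0.01:
            energy *= rng.uniform(0.5, 1.0)   # toy fractional energy loss per collision
            collisions += 1
    return collisions / n_particles

if __name__ == "__main__":
    n_workers = 8
    streams = np.random.SeedSequence(2024).spawn(n_workers)   # reproducible, independent streams
    with Pool(n_workers) as pool:
        results = pool.map(worker, streams)
    print("mean collisions per particle:", np.mean(results))
```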

  8. Parallel Monte Carlo Search for Hough Transform

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transforms the problem of line detection into one of optimization of the peak in a vote counting process for cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
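
    For reference, the sketch below shows the standard (exhaustive) Hough vote accumulation that the Monte Carlo and hierarchical variants aim to accelerate; a Monte Carlo version would sample points or cells instead of voting with every point. The bin counts and synthetic data are illustrative.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Standard Hough transform vote accumulation for line detection: each
    point (x, y) votes for all (theta, rho) cells with
    rho = x*cos(theta) + y*sin(theta); accumulator peaks are candidate lines."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.abs(points).max() * np.sqrt(2.0)
    rhos = np.linspace(-max_rho, max_rho, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.digitize(rho, rhos) - 1
        acc[np.arange(n_theta), np.clip(idx, 0, n_rho - 1)] += 1
    it, ir = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[it], rhos[ir]

# Noisy points near the line y = 2x + 1.
rng = np.random.default_rng(0)
xs = rng.uniform(-5, 5, 300)
pts = np.column_stack([xs, 2 * xs + 1 + 0.05 * rng.standard_normal(300)])
theta, rho = hough_lines(pts)
print("detected theta (rad):", theta, " rho:", rho)
```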

  9. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2017-01-01

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  10. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.

  11. OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu

    The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.

  12. The Bossons glacier protects Europe's summit from erosion

    NASA Astrophysics Data System (ADS)

    Godon, C.; Mugnier, J. L.; Fallourd, R.; Paquette, J. L.; Pohl, A.; Buoncristiani, J. F.

    2013-08-01

    The contrasting efficiency of erosion beneath cold glacier ice, beneath temperate glacier ice, and on ice-free mountain slopes is one of the key parameters in the development of relief during glacial periods. Detrital geochronology has been applied to the subglacial streams of the north face of the Mont-Blanc massif in order to estimate the efficiency of erosional processes there. Lithologically this area is composed of granite intruded at ~303 Ma within an older polymetamorphic complex. We use macroscopic features (on ~10,000 clasts) and U-Pb dating of zircon (~500 grains) to establish the provenance of the sediment transported by the glacier and its subglacial streams. The lithology of sediment collected from the surface and the base of the glacier is compared with the distribution of bedrock sources. The analysis of this distribution takes into account the glacier's surface flow lines, the surface areas beneath temperate and cold ice above and below the Equilibrium Line Altitude (ELA), and the extent of the watersheds of the three subglacial meltwater stream outlets located at altitudes of 2300 m, 1760 m and 1450 m. Comparison of the proportions of granite and metamorphics in these samples indicates that (1) glacial transport does not mix the clasts derived from subglacial erosion with the clasts derived from supraglacial deposition, except in the lower part of the ice tongue where supraglacial streams and moulins transfer the supraglacial load to the base of the glacier; (2) the glacial erosion rate beneath the tongue is lower than the erosion rate in adjacent non-glaciated areas; and (3) glacial erosion beneath cold ice is at least 16 times less efficient than erosion beneath temperate ice. The low rates of subglacial erosion on the north face of the Mont-Blanc massif mean that its glaciers are protecting "the roof of Europe" from erosion. A long-term effect of this might be a rise in the maximum altitude of the Alps.

  13. Monte Carlo MP2 on Many Graphical Processing Units.

    PubMed

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n³) or better with system size n, which may be compared with the O(n⁵) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  14. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to a parallel system comprised of up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely, a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
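
    A minimal sketch of the first ingredient described above, a direct Monte Carlo estimate of the failure probability of an equal-load-sharing parallel system with statistically independent element strengths, is given below; the lognormal strength distribution, load level and sample size are assumptions for illustration only.

```python
import numpy as np

def failure_probability(n_elements, total_load, n_trials=200_000, seed=0):
    """Direct Monte Carlo estimate of the failure probability of an
    equal-load-sharing parallel system with active redundancy: element
    strengths are i.i.d. (lognormal here, an illustrative assumption) and
    the load is redistributed equally after each sequential element failure."""
    rng = np.random.default_rng(seed)
    strengths = np.sort(rng.lognormal(0.0, 0.25, size=(n_trials, n_elements)), axis=1)
    # With strengths sorted ascending, if the k weakest elements have failed the
    # remaining (n - k) elements each carry load/(n - k); the system capacity is
    # therefore max over k of (n - k) times the (k+1)-th weakest strength.
    survivors = np.arange(n_elements, 0, -1)
    capacity = np.max(strengths * survivors, axis=1)
    return float(np.mean(capacity < total_load))

print("estimated P(failure):", failure_probability(n_elements=6, total_load=3.5))
```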

  15. An Overview of the NCC Spray/Monte-Carlo-PDF Computations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2000-01-01

    This paper advances the state-of-the-art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function) methods, unstructured grids and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment. However, the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we end the paper by demonstrating the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) has been found to be good when the computations were performed on a 24-processor SGI Origin workstation.

  16. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  17. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE PAGES

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...

    2016-09-29

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  18. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  19. EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure as well as the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.

  20. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  1. TEACHING COMPOSITION. WHAT RESEARCH SAYS TO THE TEACHER, NUMBER 18.

    ERIC Educational Resources Information Center

    BURROWS, ALVINA T.

    ALTHOUGH CHILDREN'S NEEDS FOR WRITTEN EXPRESSION PROBABLY PARALLEL THOSE OF ADULTS, THE REASON BEHIND CHILDREN'S CHOICE OF WRITING OVER SPEAKING IN GIVEN INSTANCES IS OPEN TO CONJECTURE. MOREOVER, THE COMMON ASSUMPTION BY TEACHERS THAT CHILDREN CAN AND SHOULD WRITE ABOUT PERSONAL INTERESTS OUGHT TO BE TEMPERED BY THE IDEA THAT MANY INTERESTS ARE…

  2. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.

  3. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
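
    As an illustration of the kind of sampling and convergence checking such a tool provides (this is not MC3's API), the sketch below runs several independent Metropolis chains on a toy one-dimensional posterior and computes the Gelman-Rubin potential scale reduction factor.

```python
import numpy as np

def metropolis_chain(logpost, x0, n_steps, step, rng):
    """Plain Metropolis random-walk sampler for a 1D posterior (illustrative)."""
    x = np.empty(n_steps)
    x[0], lp = x0, logpost(x0)
    for t in range(1, n_steps):
        prop = x[t - 1] + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x[t], lp = prop, lp_prop
        else:
            x[t] = x[t - 1]
    return x

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor for a set of chains."""
    chains = np.asarray(chains)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)       # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()         # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
logpost = lambda z: -0.5 * (z - 1.0) ** 2         # unit-variance Gaussian centered at 1
chains = [metropolis_chain(logpost, x0, 5000, 1.0, rng) for x0 in (-5.0, 0.0, 5.0, 10.0)]
print("R-hat:", gelman_rubin(chains))
```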

  4. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Previous research demonstrated that GPUs can effectively speed up the computations for this problem. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized and stored in shared memory, which aims to accelerate the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by more than 3X in the best case.

  5. Multilevel Sequential² Monte Carlo for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth

    2018-09-01

    The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
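
    The sketch below isolates one ingredient of such samplers: the adaptive choice of the next tempering exponent so that the effective sample size of the incremental weights stays near a target fraction. It is a generic SMC device under assumed conventions, not the paper's combined tempering/bridging estimator, and the synthetic log-likelihoods are illustrative.

```python
import numpy as np

def next_temperature(loglike, phi_current, ess_target=0.5):
    """Choose the next tempering exponent by bisection so that the effective
    sample size (ESS) of the incremental weights stays near ess_target * N."""
    n = len(loglike)

    def ess(phi):
        logw = (phi - phi_current) * loglike
        w = np.exp(logw - logw.max())
        w /= w.sum()
        return 1.0 / np.sum(w ** 2)

    if ess(1.0) >= ess_target * n:      # can jump straight to the posterior
        return 1.0
    lo, hi = phi_current, 1.0
    for _ in range(50):                 # bisection on the ESS criterion
        mid = 0.5 * (lo + hi)
        if ess(mid) >= ess_target * n:
            lo = mid
        else:
            hi = mid
    return lo

rng = np.random.default_rng(0)
loglike = -0.5 * rng.chisquare(3, size=2000) * 5.0   # synthetic log-likelihoods
print("next exponent from phi = 0:", next_temperature(loglike, 0.0))
```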

  6. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for the parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library-SPPARKS. On 32-node Xeon E5-2680 cluster (total 640 cores), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.

  7. Exploiting molecular dynamics in Nested Sampling simulations of small peptides

    NASA Astrophysics Data System (ADS)

    Burkoff, Nikolas S.; Baldock, Robert J. N.; Várnai, Csilla; Wild, David L.; Csányi, Gábor

    2016-04-01

    Nested Sampling (NS) is a parameter space sampling algorithm which can be used for sampling the equilibrium thermodynamics of atomistic systems. NS has previously been used to explore the potential energy surface of a coarse-grained protein model and has significantly outperformed parallel tempering when calculating heat capacity curves of Lennard-Jones clusters. The original NS algorithm uses Monte Carlo (MC) moves; however, a variant, Galilean NS, has recently been introduced which allows NS to be incorporated into a molecular dynamics framework, so NS can be used for systems which lack efficient prescribed MC moves. In this work we demonstrate the applicability of Galilean NS to atomistic systems. We present an implementation of Galilean NS using the Amber molecular dynamics package and demonstrate its viability by sampling alanine dipeptide, both in vacuo and implicit solvent. Unlike previous studies of this system, we present the heat capacity curves of alanine dipeptide, whose calculation provides a stringent test for sampling algorithms. We also compare our results with those calculated using replica exchange molecular dynamics (REMD) and find good agreement. We show the computational effort required for accurate heat capacity estimation for small peptides. We also calculate the alanine dipeptide Ramachandran free energy surface for a range of temperatures and use it to compare the results using the latest Amber force field with previous theoretical and experimental results.
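
    A minimal sketch of the basic Nested Sampling loop is given below, using a short constrained random walk to replace the worst live point (Galilean NS would instead use reflective dynamics) and the usual deterministic prior-shrinkage estimate; the two-dimensional Gaussian test problem and all parameters are illustrative.

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=200, n_iter=2000, seed=0):
    """Minimal Nested Sampling evidence estimate: the worst live point is
    repeatedly recorded with its prior-shrinkage weight and replaced by a new
    point drawn from the prior subject to a hard likelihood constraint."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    live_ll = np.array([loglike(p) for p in live])
    log_z, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        log_x = -i / n_live                                   # deterministic shrinkage
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))    # prior mass of the shell
        log_z = np.logaddexp(log_z, log_w + live_ll[worst])   # accumulate evidence
        threshold = live_ll[worst]
        j = rng.integers(n_live)
        x, ll = live[j].copy(), live_ll[j]
        for _ in range(20):                                   # constrained random walk
            prop = x + 0.1 * rng.standard_normal(x.shape)
            ll_prop = loglike(prop)
            if np.all(np.abs(prop) <= 5.0) and ll_prop > threshold:
                x, ll = prop, ll_prop
        live[worst], live_ll[worst] = x, ll
        log_x_prev = log_x
    return log_z

# Test: 2D Gaussian likelihood (sigma = 0.5) under a uniform prior on [-5, 5]^2;
# the analytic evidence is Z = 2*pi*sigma^2 / 10^2.
loglike = lambda p: -0.5 * np.sum((p / 0.5) ** 2)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0, size=2)
print("log Z estimate:", nested_sampling(loglike, prior_sample),
      " analytic:", np.log(2 * np.pi * 0.25 / 100.0))
```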

  8. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.

  9. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.

  10. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  11. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.

  12. Persistent random walk of cells involving anomalous effects and random death

    NASA Astrophysics Data System (ADS)

    Fedotov, Sergei; Tan, Abby; Zubarev, Andrey

    2015-04-01

    The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
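
    A toy Python Monte Carlo illustrating the ingredients discussed above: a two-velocity walk with heavy-tailed running times (giving superdiffusive persistence) and a constant death rate that removes walkers at random. The distributions and parameter values are illustrative assumptions, not the paper's model as such.

      import numpy as np

      rng = np.random.default_rng(1)

      # Two-velocity persistent walk with Lomax (power-law) running times and a
      # constant death rate; all parameter values are illustrative assumptions.
      v, mu, t0 = 1.0, 1.5, 1.0          # speed, tail exponent (<2), time scale
      death_rate, T, N = 0.01, 100.0, 20000

      def run_walker():
          t, x, direction = 0.0, 0.0, rng.choice([-1.0, 1.0])
          t_death = rng.exponential(1.0 / death_rate)
          t_end = min(T, t_death)
          while t < t_end:
              # heavy-tailed running time: Pr(tau > s) = (t0 / (t0 + s))**mu
              tau = t0 * (rng.random() ** (-1.0 / mu) - 1.0)
              dt = min(tau, t_end - t)
              x += direction * v * dt
              t += dt
              direction = -direction          # switch velocity at the end of each run
          return x if t_death > T else None   # keep survivors only

      final = [x for x in (run_walker() for _ in range(N)) if x is not None]
      print("surviving fraction:", len(final) / N)
      print("mean squared displacement of survivors:", np.mean(np.square(final)))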

  13. Massively parallelized Monte Carlo software to calculate the light propagation in arbitrarily shaped 3D turbid media

    NASA Astrophysics Data System (ADS)

    Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin

    2017-07-01

    The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented for simulating light propagation in complex-shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. The software includes an illumination concept that makes it easy to realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers, or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
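
    A minimal Python sketch of the kind of photon random walk such codes are built on: exponential free paths, absorption handled as weight attenuation, and isotropic scattering in a homogeneous slab. The published software handles arbitrary 3D geometries on OpenCL devices; the optical properties and slab thickness below are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      mu_a, mu_s, d, n_photons = 0.1, 10.0, 1.0, 20000   # 1/mm, 1/mm, mm, photons
      mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

      transmitted = 0.0
      for _ in range(n_photons):
          pos = np.zeros(3)
          direction = np.array([0.0, 0.0, 1.0])   # launched normally into the slab
          weight = 1.0
          while weight > 1e-4:
              step = -np.log(rng.random()) / mu_t  # sample the free path length
              pos = pos + step * direction
              if pos[2] >= d:                      # photon exits through the back face
                  transmitted += weight
                  break
              if pos[2] < 0.0:                     # photon escapes out of the front face
                  break
              weight *= albedo                     # absorption taken as weight loss
              cos_t = 2.0 * rng.random() - 1.0     # isotropic new direction
              phi = 2.0 * np.pi * rng.random()
              sin_t = np.sqrt(1.0 - cos_t**2)
              direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

      print("transmittance of the slab:", transmitted / n_photons)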

  14. Red spruce (Picea rubens Sarg.) cold hardiness and freezing injury susceptibility. Chapter 18

    Treesearch

    Donald H. DeHayes; Paul G. Schaberg; G. Richard Strimbeck

    2001-01-01

    To survive subfreezing winter temperatures, perennial plant species have evolved tissue-specific mechanisms to undergo changes in freezing tolerance that parallel seasonal variations in climate. As such, most northern temperate tree species, including conifers, are adapted to the habitat and climatic conditions within their natural ranges and suffer little or no...

  15. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of the quantum Monte Carlo method on graphics processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090 cards, and the Kepler-architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in their register space. This Kepler-specific optimization is discussed.

  16. Convergence, divergence, and parallelism in marine biodiversity trends: Integrating present-day and fossil data.

    PubMed

    Huang, Shan; Roy, Kaustuv; Valentine, James W; Jablonski, David

    2015-04-21

    Paleontological data provide essential insights into the processes shaping the spatial distribution of present-day biodiversity. Here, we combine biogeographic data with the fossil record to investigate the roles of parallelism (similar diversities reached via changes from similar starting points), convergence (similar diversities reached from different starting points), and divergence in shaping the present-day latitudinal diversity gradients of marine bivalves along the two North American coasts. Although both faunas show the expected overall poleward decline in species richness, the trends differ between the coasts, and the discrepancies are not explained simply by present-day temperature differences. Instead, the fossil record indicates that both coasts have declined in overall diversity over the past 3 My, but the western Atlantic fauna suffered more severe Pliocene-Pleistocene extinction than did the eastern Pacific. Tropical western Atlantic diversity remains lower than the eastern Pacific, but warm temperate western Atlantic diversity recovered to exceed that of the temperate eastern Pacific, either through immigration or in situ origination. At the clade level, bivalve families shared by the two coasts followed a variety of paths toward today's diversities. The drivers of these lineage-level differences remain unclear, but species with broad geographic ranges during the Pliocene were more likely than geographically restricted species to persist in the temperate zone, suggesting that past differences in geographic range sizes among clades may underlie between-coast contrasts. More detailed comparative work on regional extinction intensities and selectivities, and subsequent recoveries (by in situ speciation or immigration), is needed to better understand present-day diversity patterns and model future changes.

  17. Preference for ethanol in feeding and oviposition in temperate and tropical populations of Drosophila melanogaster

    PubMed Central

    Zhu, Jing; Fry, James D.

    2018-01-01

    The natural habitat of Drosophila melanogaster Meigen (Diptera: Drosophilidae) is fermenting fruits, which can be rich in ethanol. For unknown reasons, temperate populations of this cosmopolitan species have higher ethanol resistance than tropical populations. To determine whether this difference is accompanied by a parallel difference in preference for ethanol, we compared two European and two tropical African populations in feeding and oviposition preference for ethanol-supplemented medium. Although females of all populations laid significantly more eggs on medium with ethanol than on control medium, preference of European females for ethanol increased as ethanol concentration increased from 2 to 6%, whereas that of African females decreased. In feeding tests, African females preferred control medium over medium with 4% ethanol, whereas European females showed no preference. Males of all populations strongly preferred control medium. The combination of preference for ethanol in oviposition, and avoidance or neutrality in feeding, gives evidence that adults choose breeding sites with ethanol for the benefit of larvae, rather than for their own benefit. The stronger oviposition preference for ethanol of temperate than tropical females suggests that this benefit may be more important in temperate populations. Two possible benefits of ethanol for which there is some experimental evidence are cryoprotection and protection against natural enemies. PMID:29398715

  18. GPU accelerated population annealing algorithm

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
    Program files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
    Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
    Programming language: C, CUDA
    External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
    Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
    Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting.
    Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and the size of the population of replicas are limited by the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single-spin-coded (SSC) and 17 seconds for the multi-spin-coded (MSC) program (see Section 2 for a description of these parameters).
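
    A serial Python sketch of the population annealing loop itself (reweight, resample, equilibrate), applied to a small 2D Ising model; the GPU implementation described above parallelizes exactly these steps over the replica population. Lattice size, population size, temperature schedule and sweep count are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      L, R, d_beta, beta_f, sweeps = 8, 200, 0.05, 1.0, 2

      def energy(s):
          return -np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1))

      def metropolis_sweep(s, beta):
          for _ in range(L * L):
              i, j = rng.integers(L), rng.integers(L)
              dE = 2.0 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                    + s[i, (j + 1) % L] + s[i, (j - 1) % L])
              if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                  s[i, j] = -s[i, j]

      pop = [rng.choice([-1, 1], size=(L, L)) for _ in range(R)]   # beta = 0 start
      beta = 0.0
      while beta < beta_f:
          E = np.array([energy(s) for s in pop])
          w = np.exp(-d_beta * (E - E.min()))        # reweight to beta + d_beta
          counts = rng.multinomial(R, w / w.sum())   # resample, keeping R replicas
          pop = [pop[k].copy() for k, c in enumerate(counts) for _ in range(c)]
          beta += d_beta
          for s in pop:                              # short equilibration at new beta
              for _ in range(sweeps):
                  metropolis_sweep(s, beta)

      print("mean energy per spin at beta = %.2f:" % beta,
            np.mean([energy(s) for s in pop]) / (L * L))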

  19. Uptake of allochthonous dissolved organic matter from soil and salmon in coastal temperate rainforest streams

    Treesearch

    Jason B. Fellman; Eran Hood; Richard T. Edwards; Jeremy B. Jones

    2009-01-01

    Dissolved organic matter (DOM) is an important component of aquatic food webs. We compare the uptake kinetics for NH4-N and different fractions of DOM during soil and salmon leachate additions by evaluating the uptake of organic forms of carbon (DOC) and nitrogen (DON), and proteinaceous DOM, as measured by parallel factor (PARAFAC) modeling of...

  20. LSPRAY-IV: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2012-01-01

    LSPRAY-IV is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray. Some important research areas covered as part of the code development are: (1) the extension of the combined CFD/scalar-Monte-Carlo-PDF method to spray modeling, (2) multi-component liquid spray modeling, and (3) the assessment of various atomization models used in spray calculations. The current version contains an extension to the modeling of superheated sprays. The manual provides the user with an understanding of the various models involved in the spray formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers.

  1. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.

  2. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales very well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
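
    The adaptive temperature-set idea can be sketched independently of the GPU machinery: measure the exchange (swap) acceptance rate of each adjacent replica pair and insert a midpoint temperature into the worst gap. A hedged Python sketch, with a made-up acceptance threshold and example numbers:

      def refine_temperature_set(temps, swap_rates, min_rate=0.2):
          """temps: sorted temperatures; swap_rates[i] is the measured exchange
          acceptance rate between temps[i] and temps[i + 1]."""
          worst = min(range(len(swap_rates)), key=lambda i: swap_rates[i])
          if swap_rates[worst] >= min_rate:
              return temps                          # no bottleneck: keep the set
          midpoint = 0.5 * (temps[worst] + temps[worst + 1])
          return temps[:worst + 1] + [midpoint] + temps[worst + 1:]

      temps = [1.0, 1.5, 2.0, 2.5, 3.0]
      swap_rates = [0.45, 0.05, 0.30, 0.40]          # bottleneck between 1.5 and 2.0
      print(refine_temperature_set(temps, swap_rates))   # inserts 1.75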

  3. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation addresses the fundamental problem of isolating the real roots of nonlinear systems of equations using the Monte Carlo approach published by Bush Jones. The algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, making it unfeasible for large systems of equations. A computational technique was also needed to prevent the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and overall performance of the parallel implementation in comparison with sequential processing are discussed. The message passing model was used for the parallelization, and it is presented and implemented on an Intel/860 MIMD architecture. The parallel processing proposed in this research has been applied in an ongoing high energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.

  4. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
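
    A hedged Python sketch in the spirit of the hybrid schemes analyzed above (not the authors' code): a preconditioned Richardson iteration whose error equation is estimated by forward random walks sampling the Neumann series. The toy matrix, walk counts and truncation length are assumptions; the scheme requires the spectral radius of the iteration matrix to be below one.

      import numpy as np

      rng = np.random.default_rng(4)

      def mc_neumann(H, r, n_walks=200, max_steps=30):
          """Estimate (I - H)^(-1) r by forward random walks over the Neumann series."""
          n = H.shape[0]
          row_abs = np.abs(H)
          row_sum = row_abs.sum(axis=1)
          est = np.zeros(n)
          for i in range(n):                     # one batch of walks per component
              total = 0.0
              for _ in range(n_walks):
                  state, weight, contrib = i, 1.0, r[i]
                  for _ in range(max_steps):
                      if row_sum[state] == 0.0:
                          break
                      probs = row_abs[state] / row_sum[state]
                      nxt = rng.choice(n, p=probs)
                      weight *= H[state, nxt] / probs[nxt]
                      contrib += weight * r[nxt]
                      state = nxt
                  total += contrib
              est[i] = total / n_walks
          return est

      # Diagonally dominant toy system A x = b with a Jacobi (diagonal) splitting.
      A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([1.0, 2.0, 3.0])
      D_inv = np.diag(1.0 / np.diag(A))
      H = np.eye(3) - D_inv @ A                  # iteration matrix, spectral radius < 1
      x = np.zeros(3)
      for _ in range(5):                         # Richardson sweeps with MC correction
          r = D_inv @ (b - A @ x)                # preconditioned residual
          x = x + mc_neumann(H, r)               # stochastic estimate of the error
      print("MC-accelerated solution:", x)
      print("direct solution:        ", np.linalg.solve(A, b))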

  5. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 x 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 x 10^8 histories. For a smaller number of histories (1 x 10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel, monoenergetic 20 MeV electron pencil beam. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
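
    The parallelization pattern described above — independent history streams per process with a final reduction of the tallies — can be sketched with mpi4py (assuming mpi4py and an MPI runtime are available; the "dose tally" below is a trivial placeholder, not the DPM physics):

      # Run with e.g.:  mpiexec -n 4 python dose_mc_mpi.py   (file name is arbitrary)
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_total = 10**6
      n_local = n_total // size + (1 if rank < n_total % size else 0)

      # Independent random stream per rank (a stand-in for a parallel RNG library
      # such as the SPRNG library mentioned above).
      rng = np.random.default_rng(12345 + rank)

      # Toy "history": score exp(-depth) at a random depth, a placeholder tally only.
      depths = 5.0 * rng.random(n_local)
      local_tally = float(np.exp(-depths).sum())

      total = comm.reduce(local_tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("mean score per history:", total / n_total)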

  6. Monte Carlo and Molecular Dynamics in the Multicanonical Ensemble: Connections between Wang-Landau Sampling and Metadynamics

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Perez, Danny; Junghans, Christoph

    2014-03-01

    We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim to help consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.
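
    For reference, the Wang-Landau iteration named above can be sketched in a few lines of Python for a small 2D Ising model. For brevity a fixed number of attempts per modification-factor stage replaces the usual histogram-flatness test; the lattice size and stage counts are illustrative assumptions.

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(5)

      L = 8
      s = rng.choice([-1, 1], size=(L, L))
      log_g = defaultdict(float)               # running estimate of ln g(E)
      ln_f = 1.0                               # modification factor, halved per stage

      def local_field(i, j):
          return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

      E = -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

      for stage in range(10):
          for _ in range(20000):               # single-spin-flip attempts
              i, j = rng.integers(L), rng.integers(L)
              dE = 2 * int(s[i, j]) * int(local_field(i, j))
              # accept with probability min(1, g(E) / g(E + dE))
              if np.log(rng.random()) < log_g[E] - log_g[E + dE]:
                  s[i, j] = -s[i, j]
                  E += dE
              log_g[E] += ln_f                 # always update the current energy level
          ln_f *= 0.5

      E0 = min(log_g)
      print("ln g(E) relative to the lowest energy visited (first five levels):")
      print({e: round(log_g[e] - log_g[E0], 2) for e in sorted(log_g)[:5]})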

  7. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
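
    A toy Python sketch of straight-line tracking with continuous-slowing-down energy deposition on a 1D cell mesh (the code described above works on a cylindrical Lagrangian mesh; the constant stopping power and the 3.5 MeV isotropic point source below are assumptions):

      import numpy as np

      rng = np.random.default_rng(6)

      n_cells, cell_dx = 20, 0.05              # mesh: 20 cells of 0.05 cm
      stopping_power = 40.0                    # MeV/cm, taken constant everywhere
      edep = np.zeros(n_cells)

      def track(x, mu, E):
          cell = int(x / cell_dx)
          while 0 <= cell < n_cells and E > 0.0:
              boundary = (cell + (1 if mu > 0 else 0)) * cell_dx
              d_boundary = (boundary - x) / mu          # distance to next cell face
              d_stop = E / stopping_power               # residual CSDA range
              d = min(d_boundary, d_stop)
              deposit = d * stopping_power
              edep[cell] += deposit                     # tally in the current cell
              E -= deposit
              x += d * mu
              if d == d_stop:                           # particle has ranged out
                  break
              cell += 1 if mu > 0 else -1

      n_src = 10000
      for _ in range(n_src):                            # isotropic source at x = 0.52 cm
          mu = rng.uniform(-1.0, 1.0)
          if abs(mu) > 1e-6:
              track(0.52, mu, 3.5)

      print("energy deposited per source particle (MeV per cell):")
      print(np.round(edep / n_src, 4))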

  8. Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweezy, Jeremy Ed

    2016-01-21

    The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron & gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.

  9. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.

  10. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    NASA Astrophysics Data System (ADS)

    Doerner, Edgardo

    2016-05-01

    In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the developing tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.

  11. Computational statistics using the Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-09-01

    This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of the Bayesian inference workflow. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
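
    The tempered MCMC idea highlighted above can be illustrated with a small Python sketch: several chains sample the posterior raised to different inverse temperatures and periodically propose state swaps, which lets the cold chain move between well-separated modes. The bimodal toy posterior and all tuning constants are assumptions, not part of the BIE.

      import numpy as np

      rng = np.random.default_rng(7)

      def log_post(x):
          # Bimodal toy posterior: equal mixture of unit Gaussians at -4 and +4.
          return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

      betas = [1.0, 0.5, 0.25, 0.1]            # inverse temperatures, beta = 1 is "cold"
      x = np.zeros(len(betas))
      cold_samples = []

      for sweep in range(20000):
          for k, beta in enumerate(betas):     # within-chain Metropolis moves
              prop = x[k] + rng.normal(scale=1.0)
              if np.log(rng.random()) < beta * (log_post(prop) - log_post(x[k])):
                  x[k] = prop
          k = rng.integers(len(betas) - 1)     # propose one neighbour swap per sweep
          log_alpha = (betas[k] - betas[k + 1]) * (log_post(x[k + 1]) - log_post(x[k]))
          if np.log(rng.random()) < log_alpha:
              x[k], x[k + 1] = x[k + 1], x[k]
          cold_samples.append(x[0])

      cold = np.array(cold_samples[2000:])     # drop burn-in
      print("cold-chain mass in the two modes:", np.mean(cold < 0), np.mean(cold > 0))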

  12. Probing Protein Fold Space with a Simplified Model

    PubMed Central

    Minary, Peter; Levitt, Michael

    2008-01-01

    We probe the stability and near-native energy landscape of protein fold space using powerful conformational sampling methods together with simple reduced models and statistical potentials. Fold space is represented by a set of 280 protein domains spanning all topological classes and having a wide range of lengths (0-300 residues), amino acid compositions, and numbers of secondary structural elements. The degrees of freedom are taken as the loop torsion angles. This choice preserves the native secondary structure but allows the tertiary structure to change. The proteins are represented by three-dimensional, three-point-per-residue models with statistical potentials derived from a knowledge-based study of known protein structures. When this space is sampled by a combination of Parallel Tempering and Equi-Energy Monte Carlo, we find that the three-point model captures the known stability of protein native structures with stable energy basins that are near-native (all-α: 4.77 Å, all-β: 2.93 Å, α/β: 3.09 Å, α+β: 4.89 Å on average, and within 6 Å for 71.41%, 92.85%, 94.29% and 64.28% of the all-α, all-β, α/β and α+β classes, respectively). Denatured structures also occur, and these have interesting structural properties that shed light on the different landscape characteristics of α and β folds. We find that α/β proteins with alternating α and β segments (such as the beta-barrel) are more stable than proteins in other fold classes. PMID:18054792

  13. Intensive measurements of gas, water, and energy exchange between vegetation and troposphere during the MONTES campaign in a vegetation gradient from short semi-desertic shrublands to tall wet temperate forests in the NW Mediterranean Basin

    NASA Astrophysics Data System (ADS)

    Peñuelas, J.; Guenther, A.; Rapparini, F.; Llusia, J.; Filella, I.; Seco, R.; Estiarte, M.; Mejia-Chang, M.; Ogaya, R.; Ibañez, J.; Sardans, J.; Castaño, L. M.; Turnipseed, A.; Duhl, T.; Harley, P.; Vila, J.; Estavillo, J. M.; Menéndez, S.; Facini, O.; Baraldi, R.; Geron, C.; Mak, J.; Patton, E. G.; Jiang, X.; Greenberg, J.

    2013-08-01

    MONTES (“Woodlands”) was a multidisciplinary international field campaign aimed at measuring energy, water and especially gas exchange between vegetation and atmosphere in a gradient from short semi-desertic shrublands to tall wet temperate forests in NE Spain in the North Western Mediterranean Basin (WMB). The measurements were performed at a semidesertic area (Monegros), at a coastal Mediterranean shrubland area (Garraf), at a typical Mediterranean holm oak forest area (Prades) and at a wet temperate beech forest (Montseny) during spring (April 2010) under optimal plant physiological conditions in driest-warmest sites and during summer (July 2010) with drought and heat stresses in the driest-warmest sites and optimal conditions in the wettest-coolest site. The objective of this campaign was to study the differences in gas, water and energy exchange occurring at different vegetation coverages and biomasses. Particular attention was devoted to quantitatively understand the exchange of biogenic volatile organic compounds (BVOCs) because of their biological and environmental effects in the WMB. A wide range of instruments (GC-MS, PTR-MS, meteorological sensors, O3 monitors,…) and vertical platforms such as masts, tethered balloons and aircraft were used to characterize the gas, water and energy exchange at increasing footprint areas by measuring vertical profiles. In this paper we provide an overview of the MONTES campaign: the objectives, the characterization of the biomass and gas, water and energy exchange in the 4 sites-areas using satellite data, the estimation of isoprene and monoterpene emissions using MEGAN model, the measurements performed and the first results. The isoprene and monoterpene emission rates estimated with MEGAN and emission factors measured at the foliar level for the dominant species ranged from about 0 to 0.2 mg m-2 h-1 in April. The warmer temperature in July resulted in higher model estimates from about 0 to ca. 1.6 mg m-2 h-1 for isoprene and ca. 4.5 mg m-2 h-1 for monoterpenes, depending on the site vegetation and footprint area considered. There were clear daily and seasonal patterns with higher emission rates and mixing ratios at midday and summer relative to early morning and early spring. There was a significant trend in CO2 fixation (from 1 to 10 mg C m-2 d-1), transpiration (from 1-5 kg C m-2 d-1), and sensible and latent heat from the warmest-driest to the coolest-wettest site. The results showed the strong land-cover-specific influence on emissions of BVOCs, gas, energy and water exchange, and therefore demonstrate the potential for feed-back to atmospheric chemistry and climate.

  14. Tectonic interpretations of Central Ishtar Terra (Venus) from Venera 15/16 and Magellan full-resolution radar images

    NASA Astrophysics Data System (ADS)

    Ansan, V.; Vergely, P.; Masson, P.

    1994-03-01

    For more than a decade, the mapping of Venus has revealed a surface that has had a complex volcanic and tectonic history, especially in the northern latitudes. Detailed morphostructural analysis and tectonic interpretations of Central Ishtar Terra, based both on Venera 15/16 and Magellan full-resolution radar images, have provided additional insight to the formation and evolution of Venusian terrains. Ishtar Terra, centered at 0 deg E longitude and 62 deg N latitude, consists of a broad high plateau, Lakshmi Planum, partly surrounded by two highlands, Freyja and Maxwell Montes, which have been interpreted as orogenic belts based on Venera 15 and 16 data. Lakshmi Planum, the oldest part of Ishtar Terra, is an extensive and complexly fractured plateau that can be compared to a terrestrial craton. The plateau is partially covered by fluid lava flows similar to the Deccan traps in India, which underwent a late stage of extensional fracturing. After the extensional deformation of Lakshmi Planum, Freyja and Maxwell Montes were created by regional E-W horizontal shortening that produced a series of N-S folds and thrusts. However, this regional arrangement of folds and thrusts is disturbed locally, e.g., the compressive deformation of Freyja Montes was closely controlled by parallel WNW-ESE-trending left-lateral shear zones and the northwestern part of Maxwell Montes seems to be extruded laterally to the southwest, which implies a second oblique thrust front overlapping Lakshmi Planum. These mountain belts also show evidence of a late volcanic stage and a subsequent period of relaxation that created grabens parallel to the highland trends, especially in Maxwell Montes.

  15. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  16. Thomson scattering in magnetic fields. [of white dwarf stars

    NASA Technical Reports Server (NTRS)

    Whitney, Barbara

    1989-01-01

    The equation of transfer in Thomson-scattering atmospheres with magnetic fields is solved using Monte Carlo methods. Two cases are investigated: a plane-parallel atmosphere with a magnetic field perpendicular to the atmosphere, and a dipole star. The wavelength dependence of polarization from the plane-parallel atmosphere is qualitatively similar to that observed in the magnetic white dwarf Grw+70 deg 8247, and the field strength determined by the calculation, 320 MG, is quantitatively similar to that determined from the line spectrum. The dipole model does not resemble the data as well as the single plane-parallel atmosphere.

  17. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of time because of finite processor speeds. We report simulations on an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~ 144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software will be discussed.

  18. Monte Carlo simulations of particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.

    1994-01-01

    The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta_B1 to the shock normal. Spectral results from test-particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.

  19. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  20. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
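
    A small Python illustration of the motivating observation: a bootstrap particle filter degenerates when the assumed measurement noise is much smaller than the particle spread, and adding artificial observation noise (to be shrunk in stages) restores a usable effective sample size. The linear-Gaussian toy model and noise schedule are assumptions, not the authors' benchmark.

      import numpy as np

      rng = np.random.default_rng(8)

      def simulate(T=100, q=1.0, r_true=1e-3):
          x, ys = 0.0, []
          for _ in range(T):
              x = 0.9 * x + q * rng.standard_normal()
              ys.append(x + r_true * rng.standard_normal())
          return np.array(ys)

      def min_ess(ys, r_artificial, N=500, q=1.0):
          """Bootstrap particle filter; return the minimum effective sample size."""
          particles = np.zeros(N)
          worst = float(N)
          for y in ys:
              particles = 0.9 * particles + q * rng.standard_normal(N)
              log_w = -0.5 * ((y - particles) / r_artificial) ** 2
              w = np.exp(log_w - log_w.max())
              w /= w.sum()
              worst = min(worst, 1.0 / np.sum(w ** 2))
              particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling
          return worst

      ys = simulate()
      for r_art in [1.0, 0.3, 0.1, 0.03, 0.01]:   # shrinking artificial obs. noise
          print("artificial obs. std %.2f -> minimum ESS %6.1f" % (r_art, min_ess(ys, r_art)))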

  1. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-12-31

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron-photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.

  2. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-02-01

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load balancing schemes for homogeneous and heterogeneous networks.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S_n) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S_n codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increase the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally the determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S_n solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated by the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A³MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S_n adjoint function. A³MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S_n calculations using TORT.
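
    The weight-window game that A³MCNP parameterizes from the adjoint function can be sketched generically in Python: particles above the window are split, particles below it play Russian roulette toward a survival weight, and the expected total weight is preserved. The window bounds below are toy values, not adjoint-derived ones.

      import numpy as np

      rng = np.random.default_rng(9)

      def apply_weight_window(weight, w_low, w_up):
          """Return the list of weights that continue after the weight-window game."""
          if weight > w_up:                     # split into n lower-weight copies
              n = min(int(np.ceil(weight / w_up)), 10)
              return [weight / n] * n
          if weight < w_low:                    # Russian roulette toward w_survive
              w_survive = 0.5 * (w_low + w_up)
              return [w_survive] if rng.random() < weight / w_survive else []
          return [weight]                       # inside the window: unchanged

      # Fairness check: the expected total weight is preserved on average.
      total_in = total_out = 0.0
      for _ in range(100000):
          w = rng.lognormal(mean=-1.0, sigma=1.5)
          total_in += w
          total_out += sum(apply_weight_window(w, 0.25, 1.0))
      print("mean weight in: %.4f   mean weight out: %.4f" % (total_in / 1e5, total_out / 1e5))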

  4. Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  5. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware, as a proof of principle.
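
    The cost observation can be reproduced with a toy Python calculation, assuming per-machine billing in whole hours (an assumption of this sketch): splitting a T-hour serial simulation over n machines takes T/n hours but is billed n*ceil(T/n) machine-hours, which is minimal when n divides T.

      import math

      def completion_time(total_hours, n):
          return total_hours / n

      def billed_machine_hours(total_hours, n):
          return n * math.ceil(total_hours / n)    # per-machine billing in whole hours

      total_hours = 12                             # assumed serial simulation length
      for n in (1, 4, 5, 6, 8, 12, 16):
          print("n = %2d  time = %5.2f h  billed = %3d machine-hours"
                % (n, completion_time(total_hours, n), billed_machine_hours(total_hours, n)))
      # n = 4, 6 or 12 (factors of 12) are billed 12 machine-hours; n = 5, 8 or 16 cost more.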

  6. Baseline and stress-induced levels of corticosterone in male and female Afrotropical and European temperate stonechats during breeding.

    PubMed

    Apfelbeck, Beate; Helm, Barbara; Illera, Juan Carlos; Mortega, Kim G; Smiddy, Patrick; Evans, Neil P

    2017-05-22

    Latitudinal variation in avian life histories falls along a slow-fast pace of life continuum: tropical species produce small clutches, but have a high survival probability, while in temperate species the opposite pattern is found. This study investigated whether differential investment into reproduction and survival of tropical and temperate species is paralleled by differences in the secretion of the vertebrate hormone corticosterone (CORT). Depending on circulating concentrations, CORT can act both as a metabolic hormone (low to medium levels) and as a stress hormone (high levels) and, thereby, influence reproductive decisions. Baseline and stress-induced CORT was measured across sequential stages of the breeding season in males and females of closely related taxa of stonechats (Saxicola spp.) from a wide distribution area. We compared stonechats from 13 sites, representing Canary Islands, European temperate and East African tropical areas. Stonechats are highly seasonal breeders at all these sites, but vary between tropical and temperate regions with regard to reproductive investment and presumably also survival. In accordance with life-history theory, during parental stages, post-capture (baseline) CORT was overall lower in tropical than in temperate stonechats. However, during mating stages, tropical males had elevated post-capture (baseline) CORT concentrations, which did not differ from those of temperate males. Female and male mates of a pair showed correlated levels of post-capture CORT when sampled after simulated territorial intrusions. In contrast to the hypothesis that species with low reproduction and high annual survival should be more risk-sensitive, tropical stonechats had lower stress-induced CORT concentrations than temperate stonechats. We also found relatively high post-capture (baseline) and stress-induced CORT concentrations in slow-paced Canary Islands stonechats. Our data support and refine the view that baseline CORT facilitates energetically demanding activities in males and females and reflects investment into reproduction. Low parental workload was associated with lower post-capture (baseline) CORT, as expected for a slow pace of life in tropical species. On a finer resolution, however, this tropical-temperate contrast did not generally hold. Post-capture (baseline) CORT was higher during mating stages, in particular in tropical males, possibly to support the energetic needs of mate-guarding. Counter to predictions based on life history theory, our data do not confirm the hypothesis that long-lived tropical populations have higher stress-induced CORT concentrations than short-lived temperate populations. Instead, in the predator-rich tropical environments of African stonechats, a dampened stress response during parental stages may increase survival probabilities of young. Overall our data further support an association between life history and baseline CORT, but challenge the role of stress-induced CORT as a mediator of tropical-temperate variation in life history.

  7. Electron heating in a Monte Carlo model of a high Mach number, supercritical, collisionless shock

    NASA Technical Reports Server (NTRS)

    Ellison, Donald C.; Jones, Frank C.

    1987-01-01

    Preliminary work in the investigation of electron injection and acceleration at parallel shocks is presented. A simple model of electron heating that is derived from a unified shock model which includes the effects of an electrostatic potential jump is described. The unified shock model provides a kinetic description of the injection and acceleration of ions and a fluid description of electron heating at high Mach number, supercritical, and parallel shocks.

  8. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.

  9. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, the combination of the middle-square method and a chaotic map along with the Xorshift PRNG has been employed. Implementation of our developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm by employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
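
    A minimal sketch of the independent-sequence idea is given below: one xorshift32 stream per (virtual) GPU thread, each distinguished only by its seed. This plain-Python illustration covers only the Xorshift component, not the combined middle-square/chaotic-map/Xorshift generator or the CUDA Fortran memory modes described above; the per-thread seeding constants are arbitrary assumptions.

        MASK32 = 0xFFFFFFFF

        def xorshift32(state):
            """One step of Marsaglia's xorshift32; returns (new_state, uniform in [0,1))."""
            state ^= (state << 13) & MASK32
            state ^= state >> 17
            state ^= (state << 5) & MASK32
            return state, state / 2**32

        class ThreadStream:
            """Independent sequence for one 'thread', distinguished only by its seed."""
            def __init__(self, thread_id, base_seed=2463534242):
                # Simple per-thread seeding (illustrative; real GPU codes use
                # better-decorrelated seeding schemes).
                self.state = (base_seed ^ (thread_id * 0x9E3779B9)) & MASK32 or 1

            def uniform(self):
                self.state, u = xorshift32(self.state)
                return u

        streams = [ThreadStream(tid) for tid in range(4)]
        print([round(s.uniform(), 6) for s in streams])   # one draw from each "thread"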

  10. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2+1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlations in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
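
    One common domain-decomposition idea for lattice Monte Carlo is the checkerboard (two-sublattice) update, in which all sites of one colour can be updated simultaneously because they are never nearest neighbours. The Python/NumPy sketch below applies it to a small 2-D Ising model purely as an illustration of the principle; it is not the octahedron/KPZ model or the GPU scheme compared in the paper.

        import numpy as np

        def checkerboard_sweep(spins, beta, rng):
            L = spins.shape[0]
            ii, jj = np.indices((L, L))
            for colour in (0, 1):
                mask = (ii + jj) % 2 == colour
                neighbours = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * spins * neighbours            # energy change if the spin flips
                accept = rng.random((L, L)) < np.exp(-beta * dE)
                spins = np.where(mask & accept, -spins, spins)   # update one colour at a time
            return spins

        rng = np.random.default_rng(0)
        L, beta = 64, 0.5
        spins = rng.choice([-1, 1], size=(L, L))
        for _ in range(200):
            spins = checkerboard_sweep(spins, beta, rng)
        print("magnetisation per site:", spins.mean())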

  11. Light requirements of Australian tropical vs. cool-temperate rainforest tree species show different relationships with seedling growth and functional traits.

    PubMed

    Lusk, Christopher H; Kelly, Jeff W G; Gleason, Sean M

    2013-03-01

    A trade-off between shade tolerance and growth in high light is thought to underlie the temporal dynamics of humid forests. On the other hand, it has been suggested that tree species sorting on temperature gradients involves a trade-off between growth rate and cold resistance. Little is known about how these two major trade-offs interact. Seedlings of Australian tropical and cool-temperate rainforest trees were grown in glasshouse environments to compare growth versus shade-tolerance trade-offs in these two assemblages. Biomass distribution, photosynthetic capacity and vessel diameters were measured in order to examine the functional correlates of species differences in light requirements and growth rate. Species light requirements were assessed by field estimation of the light compensation point for stem growth. Light-demanding and shade-tolerant tropical species differed markedly in relative growth rates (RGR), but this trend was less evident among temperate species. This pattern was paralleled by biomass distribution data: specific leaf area (SLA) and leaf area ratio (LAR) of tropical species were significantly positively correlated with compensation points, but not those of cool-temperate species. The relatively slow growth and small SLA and LAR of Tasmanian light-demanders were associated with narrow vessels and low potential sapwood conductivity. The conservative xylem traits, small LAR and modest RGR of Tasmanian light-demanders are consistent with selection for resistance to freeze-thaw embolism, at the expense of growth rate. Whereas competition for light favours rapid growth in light-demanding trees native to environments with warm, frost-free growing seasons, frost resistance may be an equally important determinant of the fitness of light-demanders in cool-temperate rainforest, as seedlings establishing in large openings are exposed to sub-zero temperatures that can occur throughout most of the year.

  12. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10{sup 6} to 3 × 10{sup 7}); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10{sup 6} was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases time necessary for each kernel dose calculation. Particle counts lower than 1 × 10{sup 6} have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  13. The eco-epidemiology of Triatoma infestans in the temperate Monte Desert ecoregion of mid-western Argentina

    PubMed Central

    Carbajal-de-la-Fuente, Ana Laura; Provecho, Yael Mariana; Fernández, María del Pilar; Cardinal, Marta Victoria; Lencina, Patricia; Spillmann, Cynthia; Gürtler, Ricardo Esteban

    2017-01-01

    BACKGROUND The eco-epidemiological status of Chagas disease in the Monte Desert ecoregion of western Argentina is largely unknown. We investigated the environmental and socio-demographic determinants of house infestation with Triatoma infestans, bug abundance, vector infection with Trypanosoma cruzi and host-feeding sources in a well-defined rural area of Lavalle Department in the Mendoza province. METHODS Technical personnel inspected 198 houses for evidence of infestation with T. infestans, and the 76 houses included in the current study were re-inspected. In parallel with the vector survey, an environmental and socio-demographic survey was also conducted. Univariate risk factor analysis for domiciliary infestation was carried out using Firth penalised logistic regression. We fitted generalised linear models for house infestation and bug abundance. Blood meals were tested with a direct ELISA assay, and T. cruzi infection was determined using a hot-start polymerase chain reaction (PCR) targeting the kinetoplast minicircle (kDNA-PCR). FINDINGS The households studied included an aged population living in precarious houses whose main economic activities included goat husbandry. T. infestans was found in 21.2% of 198 houses and in 55.3% of the 76 re-inspected houses. Peridomestic habitats exhibited higher infestation rates and bug abundances than did domiciles, and goat corrals showed high levels of infestation. The main host-feeding sources were goats. Vector infection was present in 10.2% of domiciles and 3.2% of peridomiciles. Generalised linear models showed that peridomestic infestation was positively and significantly associated with the presence of mud walls and the abundance of chickens and goats, and bug abundance increased with the number of all hosts except rabbits. MAIN CONCLUSIONS We highlight the relative importance of specific peridomestic structures (i.e., goat corrals and chicken coops) associated with construction materials and host abundance as sources of persistent bug infestation driving domestic colonisation. Environmental management strategies framed in a community-based programme combined with improved insecticide spraying and sustained vector surveillance are needed to effectively suppress local T. infestans populations. PMID:28953998

  14. A numerical analysis of plasma non-uniformity in the parallel plate VHF-CCP and the comparison among various model

    NASA Astrophysics Data System (ADS)

    Sawada, Ikuo

    2012-10-01

    We measured the radial distribution of electron density in a 200 mm parallel plate CCP and compared it with results from numerical simulations. The experiments were conducted with pure Ar gas with pressures ranging from 15 to 100 mTorr and 60 MHz applied at the top electrode with powers from 500 to 2000 W. The measured electron profile is peaked in the center, and the relative non-uniformity is higher at 100 mTorr than at 15 mTorr. We compare the experimental results with simulations with both HPEM and Monte-Carlo/PIC codes. In HPEM simulations, we used either fluid or electron Monte-Carlo module, and the Poisson or the Electromagnetic solver. None of the models were able to duplicate the experimental results quantitatively. However, HPEM with the electron Monte-Carlo module and PIC qualitatively matched the experimental results. We will discuss the results from these models and how they illuminate the mechanism of enhanced electron central peak. [1] T. Oshita, M. Matsukuma, S.Y. Kang, I. Sawada: The effect of non-uniform RF voltage in a CCP discharge, The 57th JSAP Spring Meeting 2010. [2] I. Sawada, K. Matsuzaki, S.Y. Kang, T. Ohshita, M. Kawakami, S. Segawa: 1-st IC-PLANTS, 2008

  15. Model of formation of Ishtar Terra, Venus

    NASA Astrophysics Data System (ADS)

    Ansan, V.; Vergely, P.; Masson, Ph.

    1996-08-01

    For more than a decade, the radar mapping of Venus' surface has revealed that it results from a complex volcanic and tectonic history, especially in the northern latitudes. Ishtar Terra (0°E-62°E) consists of a high plateau, Lakshmi Planum, surrounded by highlands, Freyja Montes to the north and Maxwell Montes to the east. The latter is the highest relief of Venus, standing more than 10 km in elevation. The high resolution of Magellan radar images (120-300 m) allows us to interpret them in terms of tectonics and propose a model of formation for the central part of Ishtar Terra. The detailed tectonic interpretations are based on detailed structural and geologic cartography. The geologic history of Ishtar Terra resulted from two distinct, opposite tectonic stages with an important, transitional volcanic activity. First, Lakshmi Planum, the oldest part of Ishtar Terra is an extensive and complexly fractured plateau that can be compared to a terrestrial craton. Then the plateau is partially covered by fluid lava flows that may be similar to Deccan traps, in India. Second, after the extensional deformation of Lakshmi Planum and its volcanic activity, Freyja and Maxwell Montes formed by WSW-ENE horizontal crustal shortening. The latter produced a series of NNW-SSE parallel, sinuous, folds and imbricated structures that overlapped Lakshmi Planum westward. So these mountain belts have the same structural characteristics as terrestrial fold-and-thrust belts. These mountain belts also display evidence of a late volcanic stage and a subsequent period of relaxation that created grabens parallel to the highland trend, especially in Maxwell Montes.

  16. Many-integrated core (MIC) technology for accelerating Monte Carlo simulation of radiation transport: A study based on the code DPM

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Brualla, L.

    2018-04-01

    Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated core architecture (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU. These tests were intended to establish a baseline comparison between both devices. Secondly, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained in the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU stems from the low performance of a single core of the former. A single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.

  17. Bayesian approach to analyzing holograms of colloidal particles.

    PubMed

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
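
    A minimal Python sketch of the Bayesian ingredients described above (prior, likelihood, MCMC sampling of the posterior, credible intervals) is given below for a toy one-parameter model. The Lorenz-Mie forward model and the tempering ladder of the actual method are omitted; the synthetic "radius" data are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        true_radius, noise = 0.52, 0.05
        data = true_radius + noise * rng.normal(size=50)     # synthetic "measurements"

        def log_prior(r):                                    # uniform prior on a plausible range
            return 0.0 if 0.1 < r < 2.0 else -np.inf

        def log_likelihood(r):                               # Gaussian noise model
            return -0.5 * np.sum((data - r) ** 2) / noise ** 2

        def log_posterior(r):
            return log_prior(r) + log_likelihood(r)

        samples, r = [], 1.0
        lp = log_posterior(r)
        for _ in range(20000):                               # random-walk Metropolis
            r_new = r + 0.02 * rng.normal()
            lp_new = log_posterior(r_new)
            if np.log(rng.random()) < lp_new - lp:
                r, lp = r_new, lp_new
            samples.append(r)
        post = np.array(samples[5000:])                      # discard burn-in
        lo, hi = np.percentile(post, [2.5, 97.5])
        print(f"radius = {post.mean():.3f}  95% credible interval [{lo:.3f}, {hi:.3f}]")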

  18. Monte Carlo Analysis as a Trajectory Design Driver for the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.

  19. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary: Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.

  20. Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel

    1994-01-01

    This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.

  1. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
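
    The basic machinery of such Metropolis-based Monte Carlo integration can be sketched as follows: a high-dimensional integral is written as the expectation of f/w under an analytically normalised weight function w, which is sampled with random-walk Metropolis moves. The 12-dimensional toy integrand in the Python sketch below is an assumption for illustration only, not an MP3 diagram, and the redundant-walker acceleration is omitted.

        import numpy as np

        DIM = 12
        rng = np.random.default_rng(7)

        def f(x):                                    # toy integrand over R^DIM
            return np.exp(-np.sum(x ** 2)) * (1.0 + np.sum(x ** 2))

        def log_w(x):                                # normalised weight: product of unit Gaussians
            return -0.5 * np.sum(x ** 2) - 0.5 * DIM * np.log(2.0 * np.pi)

        x = np.zeros(DIM)
        lw = log_w(x)
        acc, n_kept = 0.0, 0
        for step in range(200000):
            x_new = x + 0.5 * rng.normal(size=DIM)   # random-walk Metropolis move
            lw_new = log_w(x_new)
            if np.log(rng.random()) < lw_new - lw:
                x, lw = x_new, lw_new
            if step >= 20000:                        # after burn-in, accumulate f/w
                acc += f(x) / np.exp(lw)
                n_kept += 1
        exact = np.pi ** (DIM / 2) * (1.0 + DIM / 2.0)   # analytic value of the toy integral
        print("Monte Carlo estimate:", acc / n_kept, " exact:", exact)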

  2. Non-Thermal Spectra from Pulsar Magnetospheres in the Full Electromagnetic Cascade Scenario

    NASA Astrophysics Data System (ADS)

    Peng, Qi-Yong; Zhang, Li

    2008-08-01

    We simulated non-thermal emission from a pulsar magnetosphere within the framework of a full polar-cap cascade scenario by taking the acceleration gap into account, using the Monte Carlo method. For a given electric field parallel to open field lines located at some height above the surface of a neutron star, primary electrons were accelerated by parallel electric fields and lost their energies by curvature radiation; these photons were converted to electron-positron pairs, which emitted photons through subsequent quantum synchrotron radiation and inverse Compton scattering, leading to a cascade. In our calculations, the acceleration gap was assumed to be high above the stellar surface (about several stellar radii); the primary and secondary particles and photons emitted during the journey of those particles in the magnetosphere were traced using the Monte Carlo method. In such a scenario, we calculated the non-thermal photon spectra for different pulsar parameters and compared the model results for two normal pulsars and one millisecond pulsar with the observed data.

  3. Evaluation of Hamaker coefficients using Diffusion Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Maezono, Ryo; Hongo, Kenta

    We evaluated the Hamaker constant of cyclohexasilane, which is used as a 'liquid silicon' ink in printed electronics, to investigate its wettability. Taking three representative geometries of dimer coalescence (parallel, lined, and T-shaped), we evaluated the binding curves using the diffusion Monte Carlo method. The parallel geometry gave the most long-ranged exponent, ~1/r^6, in its asymptotic behavior. The evaluated binding lengths are fairly consistent with the experimental density of the molecule. Fitting the asymptotic curve gave an estimate of the Hamaker constant of around 100 zJ. We also performed a CCSD(T) evaluation and obtained a similar result. To check the validity of this approach, we applied the same scheme to benzene and compared the estimate with those from other established methods, Lifshitz theory and SAPT (symmetry-adapted perturbation theory). The result from the fitting scheme turned out to be about twice as large as those from Lifshitz theory and SAPT, which coincide with each other. This implies that the present evaluation for cyclohexasilane may be an overestimate.

  4. Liquid-liquid transition in the ST2 model of water

    NASA Astrophysics Data System (ADS)

    Debenedetti, Pablo

    2013-03-01

    We present clear evidence of the existence of a metastable liquid-liquid phase transition in the ST2 model of water. Using four different techniques (the weighted histogram analysis method with single-particle moves, well-tempered metadynamics with single-particle moves, weighted histograms with parallel tempering and collective particle moves, and conventional molecular dynamics), we calculate the free energy surface over a range of thermodynamic conditions, we perform a finite size scaling analysis for the free energy barrier between the coexisting liquid phases, we demonstrate the attainment of diffusive behavior, and we perform stringent thermodynamic consistency checks. The results provide conclusive evidence of a first-order liquid-liquid transition. We also show that structural equilibration in the sluggish low-density phase is attained over the time scale of our simulations, and that crystallization times are significantly longer than structural equilibration, even under deeply supercooled conditions. We place our results in the context of the theory of metastability.

  5. Forest turnover rates follow global and regional patterns of productivity

    USGS Publications Warehouse

    Stephenson, N.L.; van Mantgem, P.J.

    2005-01-01

    Using a global database, we found that forest turnover rates (the average of tree mortality and recruitment rates) parallel broad-scale patterns of net primary productivity. First, forest turnover was higher in tropical than in temperate forests. Second, as recently demonstrated by others, Amazonian forest turnover was higher on fertile than infertile soils. Third, within temperate latitudes, turnover was highest in angiosperm forests, intermediate in mixed forests, and lowest in gymnosperm forests. Finally, within a single forest physiognomic type, turnover declined sharply with elevation (hence with temperature). These patterns of turnover in populations of trees are broadly similar to the patterns of turnover in populations of plant organs (leaves and roots) found in other studies. Our findings suggest a link between forest mass balance and the population dynamics of trees, and have implications for understanding and predicting the effects of environmental changes on forest structure and terrestrial carbon dynamics.

  6. Replica-exchange Wang Landau sampling: pushing the limits of Monte Carlo simulations in materials sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus

    We describe the study of thermodynamics of materials using replica-exchange Wang Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.

  7. Multicanonical hybrid Monte Carlo algorithm: Boosting simulations of compact QED

    NASA Astrophysics Data System (ADS)

    Arnold, G.; Schilling, K.; Lippert, Th.

    1999-03-01

    We demonstrate that substantial progress can be achieved in the study of the phase structure of four-dimensional compact QED by a joint use of hybrid Monte Carlo and multicanonical algorithms through an efficient parallel implementation. This is borne out by the observation of considerable speedup of tunnelling between the metastable states, close to the phase transition, on the Wilson line. We estimate that the creation of adequate samples (with order 100 flip-flops) becomes a matter of half a year's run time at 2 Gflops sustained performance for lattices of size up to 24^4.

  8. Driven-dissipative quantum Monte Carlo method for open quantum systems

    NASA Astrophysics Data System (ADS)

    Nagy, Alexandra; Savona, Vincenzo

    2018-05-01

    We develop a real-time full configuration-interaction quantum Monte Carlo approach to model driven-dissipative open quantum systems with Markovian system-bath coupling. The method enables stochastic sampling of the Liouville-von Neumann time evolution of the density matrix thanks to a massively parallel algorithm, thus providing estimates of observables on the nonequilibrium steady state. We present the underlying theory and introduce an initiator technique and importance sampling to reduce the statistical error. Finally, we demonstrate the efficiency of our approach by applying it to the driven-dissipative two-dimensional XYZ spin-1/2 model on a lattice.

  9. Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.

    PubMed

    Florin, Charles; Paragios, Nikos; Williams, Jim

    2005-01-01

    In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
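
    The predict/weight/resample cycle of a generic sequential importance resampling (particle) filter is sketched below in Python on a toy 1-D random-walk state with noisy observations; the vessel position, shape, and appearance model of the actual method is replaced by this scalar state purely for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        T, N = 50, 1000                                   # time steps, particles
        process_sd, obs_sd = 0.3, 0.5

        # Simulate a hidden trajectory and noisy observations of it.
        truth = np.cumsum(process_sd * rng.normal(size=T))
        obs = truth + obs_sd * rng.normal(size=T)

        particles = np.zeros(N)
        estimates = []
        for t in range(T):
            particles += process_sd * rng.normal(size=N)            # predict
            logw = -0.5 * ((obs[t] - particles) / obs_sd) ** 2       # weight by likelihood
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(np.sum(w * particles))                  # posterior mean
            idx = rng.choice(N, size=N, p=w)                         # resample
            particles = particles[idx]

        rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
        print(f"filter RMSE: {rmse:.3f} (observation noise sd = {obs_sd})")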

  10. A Bayesian re-analysis of HD 11964 extrasolar planet data

    NASA Astrophysics Data System (ADS)

    Gregory, Philip C.

    2007-05-01

    A Bayesian multi-planet Kepler periodogram has been developed for the analysis of precision radial velocity data (Gregory, ApJ, 631, 1198, 2005 and Astro-ph/0609229). The periodogram employs a parallel tempering Markov chain Monte Carlo algorithm. The HD 11964 data (Butler et al. ApJ, 646, 505, 2006) has been re-analyzed using 1, 2, 3 and 4 planet models. Each model incorporates an extra noise parameter which can allow for additional independent Gaussian noise beyond the known measurement uncertainties. The most probable model exhibits three periods of 38.02 (+0.06/-0.05), 360 (+4/-4) and 1924 (+44/-43) d, and eccentricities of 0.22 (+0.11/-0.22), 0.63 (+0.34/-0.17) and 0.05 (+0.03/-0.05), respectively. Assuming the three signals (each one consistent with a Keplerian orbit) are caused by planets, the corresponding limits on planetary mass (M sin i) and semi-major axis are (0.090 (+0.15/-0.14) MJ, 0.253 (+0.009/-0.009) au), (0.21 (+0.05/-0.02) MJ, 1.13 (+0.04/-0.04) au) and (0.77 (+0.08/-0.08) MJ, 3.46 (+0.13/-0.13) au), respectively. The small difference (1.3 sigma) between the 360 day period and one year suggests that it might be worth investigating the barycentric correction for the HD 11964 data. This research was supported in part by a grant from the Canadian Natural Sciences and Engineering Research Council of Canada at the University of British Columbia.
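
    The parallel tempering idea used by such samplers can be illustrated with a minimal Python sketch: a ladder of chains samples the target density raised to the power 1/T, and neighbouring chains occasionally swap states so that the cold (T = 1) chain can cross barriers between well-separated modes. The bimodal toy density below is an assumption standing in for the real radial-velocity likelihood.

        import numpy as np

        rng = np.random.default_rng(11)

        def log_target(x):                            # two well-separated modes at +/-4
            return np.logaddexp(-0.5 * ((x + 4.0) / 0.5) ** 2,
                                -0.5 * ((x - 4.0) / 0.5) ** 2)

        temps = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # temperature ladder
        x = np.zeros(len(temps))
        logp = np.array([log_target(xi) for xi in x])
        cold_samples = []
        for step in range(50000):
            for k, T in enumerate(temps):             # within-chain Metropolis moves
                prop = x[k] + np.sqrt(T) * rng.normal()
                lp = log_target(prop)
                if np.log(rng.random()) < (lp - logp[k]) / T:
                    x[k], logp[k] = prop, lp
            k = rng.integers(len(temps) - 1)          # attempt one neighbour swap
            dbeta = 1.0 / temps[k] - 1.0 / temps[k + 1]
            if np.log(rng.random()) < dbeta * (logp[k + 1] - logp[k]):
                x[[k, k + 1]] = x[[k + 1, k]]
                logp[[k, k + 1]] = logp[[k + 1, k]]
            cold_samples.append(x[0])
        cold = np.array(cold_samples[5000:])
        print("fraction of cold-chain samples in the x > 0 mode:", np.mean(cold > 0))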

  11. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  12. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
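
    A toy realisation of the constant-event-time assumption discussed above is sketched below in Python: each particle in the bank needs a geometrically distributed number of events, every event-iteration processes the still-active particles in vector chunks of width V, and vector efficiency is the fraction of lanes doing useful work. The distribution and numbers are illustrative assumptions, not the OpenMC data or the authors' exact model, but they show the qualitative dependence on the bank-size-to-vector-width ratio.

        import math
        import numpy as np

        rng = np.random.default_rng(0)

        def vector_efficiency(bank_size, vector_width, p_terminate=0.1):
            # Each particle survives a geometric number of events before termination.
            events_needed = rng.geometric(p_terminate, size=bank_size)
            useful_lanes, total_lanes = 0, 0
            for it in range(events_needed.max()):
                active = int(np.sum(events_needed > it))
                if active == 0:
                    break
                chunks = math.ceil(active / vector_width)   # vector operations this iteration
                useful_lanes += active
                total_lanes += chunks * vector_width
            return useful_lanes / total_lanes

        V = 16
        for ratio in (1, 5, 10, 20, 50):
            eff = vector_efficiency(bank_size=ratio * V, vector_width=V)
            print(f"bank/vector ratio {ratio:3d}: efficiency ~ {eff:.2f}")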

  13. al3c: high-performance software for parameter inference using Approximate Bayesian Computation.

    PubMed

    Stram, Alexander H; Marjoram, Paul; Chen, Gary K

    2015-11-01

    The development of Approximate Bayesian Computation (ABC) algorithms for parameter inference which are both computationally efficient and scalable in parallel computing environments is an important area of research. Monte Carlo rejection sampling, a fundamental component of ABC algorithms, is trivial to distribute over multiple processors but is inherently inefficient. While development of algorithms such as ABC Sequential Monte Carlo (ABC-SMC) help address the inherent inefficiencies of rejection sampling, such approaches are not as easily scaled on multiple processors. As a result, current Bayesian inference software offerings that use ABC-SMC lack the ability to scale in parallel computing environments. We present al3c, a C++ framework for implementing ABC-SMC in parallel. By requiring only that users define essential functions such as the simulation model and prior distribution function, al3c abstracts the user from both the complexities of parallel programming and the details of the ABC-SMC algorithm. By using the al3c framework, the user is able to scale the ABC-SMC algorithm in parallel computing environments for his or her specific application, with minimal programming overhead. al3c is offered as a static binary for Linux and OS-X computing environments. The user completes an XML configuration file and C++ plug-in template for the specific application, which are used by al3c to obtain the desired results. Users can download the static binaries, source code, reference documentation and examples (including those in this article) by visiting https://github.com/ahstram/al3c. astram@usc.edu Supplementary data are available at Bioinformatics online.

  14. Recent advances in PDF modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Leonard, Andrew D.; Dai, F.

    1995-01-01

    This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite volume code; PDF solution method applied to turbulent reacting flows shows good agreement with data; and PDF methods must be run on parallel machines for practical use.

  15. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, Patrick; Dawson, Shawn; McKinley, Scott

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and more generally, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.

  16. Investigation of radiative interaction in laminar flows using Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is employed to study the radiative interactions in fully developed laminar flow between two parallel plates. Taking advantage of the characteristics of easy mathematical treatment of the MCM, a general numerical procedure is developed for nongray radiative interaction. The nongray model is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. To validate the Monte Carlo simulation for nongray radiation problems, the results of radiative dissipation from the MCM are compared with two available solutions for a given temperature profile between two plates. After this validation, the MCM is employed to solve the present physical problem and results for the bulk temperature are compared with available solutions. In general, good agreement is noted and reasons for some discrepancies in certain ranges of parameters are explained.

  17. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.

  18. The Wang Landau parallel algorithm for the simple grids. Optimizing OpenMPI parallel implementation

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.

    2017-12-01

    The Wang-Landau Monte Carlo algorithm to calculate the density of states of different simple spin lattices was implemented. The energy space was split between the individual threads and balanced according to the expected runtime of the individual processes. A custom spin-clustering mechanism, necessary for overcoming the critical slowdown in certain energy subspaces, was devised. Stable reconstruction of the density of states was of primary importance. Some data post-processing techniques were employed to produce the expected smooth density of states.
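
    The core Wang-Landau iteration can be sketched in a few dozen lines of Python for a small 2-D Ising lattice: a random walk in energy space accepts moves with probability g(E_old)/g(E_new), accumulates ln g(E) and a visit histogram, and halves the modification factor whenever the histogram is roughly flat. The OpenMPI energy-window splitting and the custom spin-clustering moves of the work above are omitted; the lattice size, sweep length, and flatness criterion are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        L = 6
        spins = rng.choice([-1, 1], size=(L, L))

        def total_energy(s):
            # Each nearest-neighbour bond counted once via right/down neighbours.
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

        def flip_delta(s, i, j):
            nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            return 2 * s[i, j] * nb

        energies = np.arange(-2 * L * L, 2 * L * L + 1, 4)    # allowed Ising energy levels
        index = {int(E): k for k, E in enumerate(energies)}
        ln_g = np.zeros(len(energies))
        hist = np.zeros(len(energies))
        E = int(total_energy(spins))
        ln_f = 1.0
        while ln_f > 1e-2:                                    # Wang-Landau outer loop
            for _ in range(100000):
                i, j = rng.integers(L), rng.integers(L)
                dE = int(flip_delta(spins, i, j))
                k_old, k_new = index[E], index[E + dE]
                if rng.random() < np.exp(min(0.0, ln_g[k_old] - ln_g[k_new])):
                    spins[i, j] *= -1
                    E += dE
                k = index[E]
                ln_g[k] += ln_f
                hist[k] += 1
            visited = hist > 0
            if hist[visited].min() > 0.8 * hist[visited].mean():   # crude flatness check
                hist[:] = 0
                ln_f /= 2.0
        ln_g_vis = ln_g[ln_g > 0]
        ln_g_abs = ln_g_vis - ln_g_vis[0] + np.log(2.0)       # normalise: g(ground state) = 2
        ln_total = np.log(np.sum(np.exp(ln_g_abs - ln_g_abs.max()))) + ln_g_abs.max()
        print("estimated ln(number of states) =", round(float(ln_total), 2),
              " exact =", round(L * L * np.log(2.0), 2))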

  19. A Massively Parallel Code for Polarization Calculations

    NASA Astrophysics Data System (ADS)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  20. SU-E-T-455: Characterization of 3D Printed Materials for Proton Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, W; Siderits, R; McKenna, M

    2014-06-01

    Purpose: The widespread availability of low cost 3D printing technologies provides an alternative fabrication method for customized proton range modifying accessories such as compensators and boluses. However, the material properties of the printed object are dependent on the printing technology used. In order to facilitate the application of 3D printing in proton therapy, this study investigated the stopping power of several printed materials using both proton pencil beam measurements and Monte Carlo simulations. Methods: Five 3–4 cm cubes fabricated using three 3D printing technologies (selective laser sintering, fused-deposition modeling and stereolithography) from five printers were investigated. The cubes were scanned on a CT scanner and the depth dose curves for a mono-energetic pencil beam passing through the material were measured using a large parallel plate ion chamber in a water tank. Each cube was measured from two directions (perpendicular and parallel to printing plane) to evaluate the effects of the anisotropic material layout. The results were compared with GEANT4 Monte Carlo simulation using the manufacturer specified material density and chemical composition data. Results: Compared with water, the differences from the range pull back by the printed blocks varied and corresponded well with the material CT Hounsfield unit. The measurement results were in agreement with Monte Carlo simulation. However, depending on the technology, inhomogeneity existed in the printed cubes evidenced from CT images. The effect of such inhomogeneity on the proton beam is to be investigated. Conclusion: Printed blocks by three different 3D printing technologies were characterized for proton beam with measurements and Monte Carlo simulation. The effects of the printing technologies in proton range and stopping power were studied. The derived results can be applied when specific devices are used in proton radiotherapy.

  1. Present-day subglacial erosion efficiency inferred from sources and transport of glacial clasts in the North face of Mont Blanc

    NASA Astrophysics Data System (ADS)

    Mugnier, J. L.; Godon, C.; Buoncristiani, J. F.; Paquette, J. L.; Trouvé, E.

    2012-04-01

    The efficiency of erosional processes is classically inferred from the detrital composition at the outlet of a watershed, which reflects the rocks eroded within the watershed. We adapt fluvial detrital thermochronology (DeCelles et al., 2004) and lithology (Attal and Lavé, 2006) methods to the subglacial streams of the north face of the Mont Blanc. The lithology of this area is composed of a ~303 Ma old granite intruded within an older polymetamorphic complex (orthogneisses). In this study, we use macroscopic criteria (~10 000 clasts) and U/Pb dating of zircons (~500 dated sand grains) to determine the provenance of the sediment transported by the glacier and by the sub-glacial streams. Samples come from sediments collected around the glacier (above, below or laterally), from different bedrock sources according to the surface flow lines and glacier characteristics (above or below the ELA; temperate or cold), and from different subglacial streams. A comparison between the proportions of granite and orthogneisses in these samples indicates that: 1) the supraglacial load follows the flow lines of the glacier deduced from SAR image correlation, and the displacement pattern excludes mixing of the supraglacial load from the different sources; 2) transport by the glacier does not mix the clasts issued from sub-glacial erosion with the clasts issued from supraglacial deposition, except in the lower tongue where supraglacial streams and moulins move the supraglacial load from top to bottom; 3) the erosion rate beneath the glacier is very small: null beneath the cold ice and also very weak beneath the greatest part of the temperate glacier; the erosion increases significantly beneath the tongue, where supraglacial load incorporated at the base favors abrasion; 4) the glacial erosion rate beneath the tongue remains at least five times smaller than the erosion rate coming from non-glacial areas. According to these results, we demonstrate that the glaciers of the Mont Blanc north face protect the top of Europe from erosion. DeCelles et al., 2004, Earth and Planetary Science Letters, v. 227, p. 313-330. Attal and Lavé, 2006, Geol. Soc. Am. Spec. Publ. (S.D. Willett, N. Hovius, M.T. Brandon and D. Fisher, eds.), 398, p. 143-171.

  2. SKIRT: Hybrid parallelization of radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.

  3. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.

  4. Monte Carlo simulation of collisionless shocks showing preferential acceleration of high A/Z particles. [in cosmic rays

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Jones, F. C.; Eichler, D.

    1981-01-01

    A collisionless quasi-parallel shock is simulated by Monte Carlo techniques. Particles of all velocities, from thermal to high energy, are assumed to scatter such that the mean free path is directly proportional to velocity times the mass-to-charge ratio, and inversely proportional to the plasma density. The shock profile and velocity spectra are obtained, showing preferential acceleration of high A/Z particles relative to protons. The inclusion of the back pressure of the scattering particles on the inflowing plasma produces a smoothing of the shock profile, which implies that the spectra are steeper than for a discontinuous shock.

  5. A Monte Carlo study of the spin-1 Blume-Emery-Griffiths phase diagrams within biquadratic exchange anisotropy

    NASA Astrophysics Data System (ADS)

    Dani, Ibtissam; Tahiri, Najim; Ez-Zahraouy, Hamid; Benyoussef, Abdelilah

    2014-08-01

    The effect of biquadratic exchange coupling anisotropy on the phase diagram of the spin-1 Blume-Emery-Griffiths model on a simple-cubic lattice is investigated using mean field theory (MFT) and Monte Carlo (MC) simulation. It is found that the anisotropy of the biquadratic coupling favors the stability of the ferromagnetic phase. By decreasing the parallel and/or perpendicular biquadratic coupling, the ferrimagnetic and antiquadrupolar phases broaden; in contrast, the ferromagnetic and disordered phases become narrower. The behavior of the magnetization and quadrupolar moment as a function of temperature is also computed, especially in the ferrimagnetic phase.

  6. Monte Carlo simulation of a noisy quantum channel with memory.

    PubMed

    Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco

    2015-10-01

    The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.

  7. Parallel Tempering of Dark Matter from the Ebola Virus Proteome: Comparison of CHARMM36m and CHARMM22 Force Fields with Implicit Solvent and Coarse Grained Model

    DTIC Science & Technology

    2017-08-10

    simulation models the conformational plasticity along the helix-forming reaction coordinate was limited by free-energy barriers. By comparison the coarse...revealed. The latter becomes evident in comparing the energy Z-score landscapes, where CHARMM22 simulation shows a manifold of shuttling...solvent simulations of calculating the charging free energy of protein conformations.33 Deviation to the protocol by modification of Born radii

  8. Optimal estimates of free energies from multistate nonequilibrium work data.

    PubMed

    Maragakis, Paul; Spichty, Martin; Karplus, Martin

    2006-03-17

    We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
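
    The multistate estimator itself is not reproduced in the record. As an illustration of its best-known special case, the sketch below applies the two-state Bennett acceptance ratio (the maximum-likelihood estimate for forward and reverse work data obeying the Crooks fluctuation theorem) to synthetic Gaussian work values. The sample sizes, the dissipation parameter, and the true free-energy difference of 2 kT are assumptions chosen so the answer can be checked.

      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(2)

      # Synthetic work values (in kT) consistent with the Crooks fluctuation theorem:
      # W_F ~ N(dF + s, 2s) and W_R ~ N(-dF + s, 2s) for dissipation s.
      dF_true, s = 2.0, 3.0
      W_F = rng.normal(dF_true + s, np.sqrt(2.0 * s), 5000)
      W_R = rng.normal(-dF_true + s, np.sqrt(2.0 * s), 5000)

      def bar(W_F, W_R):
          """Two-state Bennett acceptance ratio estimate of dF (kT units)."""
          M = np.log(len(W_F) / len(W_R))
          def residual(dF):
              forward = np.sum(1.0 / (1.0 + np.exp(M + W_F - dF)))
              reverse = np.sum(1.0 / (1.0 + np.exp(-M + W_R + dF)))
              return forward - reverse
          return brentq(residual, -50.0, 50.0)   # the residual is monotonic in dF

      print(bar(W_F, W_R))   # close to 2.0 for this synthetic data set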

  9. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
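
    The record names Functional Expansion Tallies (FETs) as the transfer mechanism but gives no formulas. Purely as an illustration of the idea, the sketch below reconstructs a one-dimensional axial power shape from Legendre expansion coefficients of the kind such a tally estimates; the coefficient values, the axial extent, and the normalization convention are assumptions, not the OpenMC/MOOSE implementation.

      import numpy as np
      from numpy.polynomial import legendre

      z_min, z_max = 0.0, 366.0                      # assumed active fuel height in cm
      coeffs = np.array([1.00, 0.05, -0.35, 0.02])   # assumed tallied coefficients a_n

      def power_shape(z):
          # Map z onto [-1, 1] and evaluate sum_n a_n (2n + 1)/2 P_n(x),
          # assuming a_n was tallied as the integral of the shape against P_n.
          x = 2.0 * (z - z_min) / (z_max - z_min) - 1.0
          weighted = coeffs * (2.0 * np.arange(len(coeffs)) + 1.0) / 2.0
          return legendre.legval(x, weighted)

      z = np.linspace(z_min, z_max, 5)
      print(power_shape(z))   # values of the kind passed to the finite element mesh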

  10. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. New technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach for porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation and yields a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.

  11. Monte Carlo simulation of biomolecular systems with BIOMCSIM

    NASA Astrophysics Data System (ADS)

    Kamberaj, H.; Helms, V.

    2001-12-01

    A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high-density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to address this challenge, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
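
    The record highlights grand canonical ensemble simulations without stating the acceptance rules. As a generic sketch (not the BIOMCSIM code), the example below applies the standard grand canonical Monte Carlo insertion and deletion criteria to a toy Lennard-Jones fluid in reduced units; the state point, the cutoff, the absorption of the thermal de Broglie wavelength into the chemical potential, and the omission of displacement moves and of the biasing techniques mentioned above are simplifying assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      beta, mu, L = 0.5, -6.0, 10.0            # assumed reduced state point, cubic box of side L
      V = L ** 3

      def pair_energy(r2):
          # Truncated Lennard-Jones pair energy as a stand-in interaction (cutoff 3 sigma).
          inv6 = (1.0 / r2) ** 3
          return 4.0 * (inv6 * inv6 - inv6) if r2 < 9.0 else 0.0

      def interaction(pos, others):
          if len(others) == 0:
              return 0.0
          d = others - pos
          d -= L * np.round(d / L)             # minimum image convention
          return sum(pair_energy(r2) for r2 in np.sum(d * d, axis=1))

      particles = rng.uniform(0.0, L, size=(20, 3))
      for step in range(20000):
          N = len(particles)
          if rng.random() < 0.5:               # attempted insertion
              new = rng.uniform(0.0, L, size=3)
              dU = interaction(new, particles)
              if rng.random() < V / (N + 1) * np.exp(beta * mu - beta * dU):
                  particles = np.vstack([particles, new])
          elif N > 0:                          # attempted deletion
              i = rng.integers(N)
              U_i = interaction(particles[i], np.delete(particles, i, axis=0))
              if rng.random() < N / V * np.exp(-beta * mu + beta * U_i):
                  particles = np.delete(particles, i, axis=0)

      print(len(particles))                    # in production, <N> and its fluctuations are tallied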

  12. Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.

    PubMed

    Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S

    2017-10-01

    An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphic processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of propagation of photons in tissue. It models an arbitrary shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, and to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart. Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT, or to design OCT systems with improved performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  14. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  15. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triples/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  16. A novel radiation detector for removing scattered radiation in chest radiography: Monte Carlo simulation-based performance evaluation

    NASA Astrophysics Data System (ADS)

    Roh, Y. H.; Yoon, Y.; Kim, K.; Kim, J.; Kim, J.; Morishita, J.

    2016-10-01

    Scattered radiation is the main reason for the degradation of image quality and the increased patient exposure dose in diagnostic radiology. In an effort to reduce scattered radiation, a novel structure of an indirect flat panel detector has been proposed. In this study, a performance evaluation of the novel system in terms of image contrast as well as an estimation of the number of photons incident on the detector and the grid exposure factor were conducted using Monte Carlo simulations. The image contrast of the proposed system was superior to that of the no-grid system but slightly inferior to that of the parallel-grid system. The number of photons incident on the detector and the grid exposure factor of the novel system were higher than those of the parallel-grid system but lower than those of the no-grid system. The proposed system exhibited the potential for reduced exposure dose without image quality degradation; additionally, it can be further improved by a structural optimization that takes the manufacturer's specifications of its lead content into account.

  17. Resonance line transfer calculations by doubling thin layers. I - Comparison with other techniques. II - The use of the R-parallel redistribution function. [planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.; Wallace, Lloyd

    1989-01-01

    A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.

  18. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
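
    The statistical criteria and rescaling procedure themselves are not reproduced in the record. As a toy illustration of the basic idea, the Gillespie-style sketch below scales down a fast, quasi-equilibrated adsorption/desorption pair by a common factor so that the slow conversion step is no longer swamped; the reaction set, rate constants, and scaling factor are assumptions, not the authors' scheme.

      import numpy as np

      rng = np.random.default_rng(4)

      k_ads, k_des, k_rxn = 1.0e6, 2.0e4, 1.0   # assumed rate constants (arbitrary units)
      scale_fast = 1.0e-3                       # common rescaling of the fast reversible pair

      A, B, t = 0, 0, 0.0
      for _ in range(100000):
          rates = np.array([k_ads * scale_fast,        # adsorption  (fast)
                            k_des * scale_fast * A,    # desorption  (fast)
                            k_rxn * A])                # A -> B      (slow, rate-determining)
          total = rates.sum()
          t += rng.exponential(1.0 / total)            # Gillespie time increment
          event = rng.choice(3, p=rates / total)
          if event == 0:
              A += 1
          elif event == 1:
              A -= 1
          else:
              A -= 1
              B += 1

      print(A, B, t)   # scaling ads/des equally preserves their equilibrium, hence the B production rate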

  19. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kusoglu Sarikaya, C.; Rafatov, I., E-mail: rafatov@metu.edu.tr; Kudryavtsev, A. A.

    2016-06-15

    The work deals with the Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, 1d3v PIC/MCC code for numerical simulation of glow discharge with nonlocal electron energy distribution function is developed. The elastic, excitation, and ionization collisions between electron-neutral pairs and isotropic scattering and charge exchange collisions between ion-neutral pairs and Penning ionizations are taken into account. Applicability of the numerical code is verified under the Radio-Frequency capacitively coupled discharge conditions. The efficiency of the code is increased by its parallelization using Open Message Passing Interface. As a demonstration of the PLES method, parallel PIC/MCC code is applied to the direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of formation of nonlocal EEDF and existing experimental data.

  20. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator

    NASA Astrophysics Data System (ADS)

    Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît

    2018-01-01

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biasing force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
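
    The three-step cycle described above can be sketched on a toy one-dimensional double well. For simplicity the sketch below uses overdamped Brownian dynamics, so there are no momenta and the momentum-reversal prescription from the record does not appear; it scales the barrier down and back up as the time-dependent "boost" and accepts or rejects on the accumulated switching work. The potential, schedule, and all parameters are illustrative assumptions, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(5)
      beta, dt = 1.0, 0.01

      def U(x, lam):                                  # double well whose barrier scales with lam
          return 5.0 * lam * (x ** 2 - 1.0) ** 2

      def grad_U(x, lam):
          return 20.0 * lam * x * (x ** 2 - 1.0)

      def bd_step(x, lam):
          # One Brownian dynamics step at fixed lam; approximately preserves exp(-beta*U(., lam)).
          return x - dt * grad_U(x, lam) + np.sqrt(2.0 * dt / beta) * rng.normal()

      lam_schedule = np.concatenate([np.linspace(1.0, 0.1, 100), np.linspace(0.1, 1.0, 100)])
      x, samples = 1.0, []
      for cycle in range(2000):
          for _ in range(100):                        # (i) equilibrium propagation on the true surface
              x = bd_step(x, 1.0)
          y, work = x, 0.0
          for k in range(len(lam_schedule) - 1):      # (ii) "boosting phase" on a time-dependent potential
              work += U(y, lam_schedule[k + 1]) - U(y, lam_schedule[k])
              y = bd_step(y, lam_schedule[k + 1])
          if rng.random() < np.exp(-beta * work):     # (iii) Metropolis test on the switching work
              x = y
          samples.append(x)

      print(np.mean(np.array(samples) > 0.0))   # fraction in the right-hand well; tends to 0.5 at equilibrium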

  2. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator.

    PubMed

    Suh, Donghyuk; Radak, Brian K; Chipot, Christophe; Roux, Benoît

    2018-01-07

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biasing force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.

  3. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Kausik, E-mail: kausik.chatterjee@aggiemail.usu.edu; Center for Atmospheric and Space Sciences, Utah State University, Logan, UT 84322; Roadcap, John R., E-mail: john.roadcap@us.af.mil

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  4. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.

  5. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  6. Venus - Lakshmi Planum and Maxwell Montes

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This Magellan full resolution radar image is centered at 65 degrees north latitude, zero degrees east longitude, along the eastern edge of Lakshmi Planum and the western edge of Maxwell Montes. The plains of Lakshmi are made up of radar-dark, homogeneous, smooth lava flows. Located near the center of the image is a feature previously mapped as tessera made up of intersecting 1- to 2-km (0.6- to 1.2-mile) wide graben. The abrupt termination of dark plains against this feature indicates that it has been partially covered by lava. Additional blocks of tessera are located along the left hand edge of the image. A series of linear parallel troughs are located along the southern edge of the image. These features, 60- to 120-km (36- to 72-mile) long and 10- to 40-km (6- to 24-mile) wide, are interpreted as graben. Located along the right hand part of the image is Maxwell Montes, the highest mountain on the planet, rising to an elevation of 11.5 km (7 miles); it is part of a series of mountain belts surrounding Lakshmi Planum. The western edge of Maxwell shown in this image rises sharply, 5.0 km (3.0 miles), above the adjacent plains in Lakshmi Planum. Maxwell is made up of parallel ridges 2- to 7-km (1.2- to 4.2-mile) apart and is interpreted to have formed by compressional tectonics. The image is 300 km (180 miles) wide.

  7. Monte Carlo study of the impact of a magnetic field on the dose distribution in MRI-guided HDR brachytherapy using Ir-192

    NASA Astrophysics Data System (ADS)

    Beld, E.; Seevinck, P. R.; Lagendijk, J. J. W.; Viergever, M. A.; Moerland, M. A.

    2016-09-01

    In the process of developing a robotic MRI-guided high-dose-rate (HDR) prostate brachytherapy treatment, the influence of the MRI scanner’s magnetic field on the dose distribution needs to be investigated. A magnetic field causes a deflection of electrons in the plane perpendicular to the magnetic field, and it leads to less lateral scattering along the direction parallel with the magnetic field. Monte Carlo simulations were carried out to determine the influence of the magnetic field on the electron behavior and on the total dose distribution around an Ir-192 source. Furthermore, the influence of air pockets being present near the source was studied. The Monte Carlo package Geant4 was utilized for the simulations. The simulated geometries consisted of a simplified point source inside a water phantom. Magnetic field strengths of 0 T, 1.5 T, 3 T, and 7 T were considered. The simulation results demonstrated that the dose distribution was nearly unaffected by the magnetic field for all investigated magnetic field strengths. Evidence was found that, from a dose perspective, the HDR prostate brachytherapy treatment using Ir-192 can be performed safely inside the MRI scanner. No need was found to account for the magnetic field during treatment planning. Nevertheless, the presence of air pockets in close vicinity to the source, particularly along the direction parallel with the magnetic field, appeared to be an important point for consideration.

  8. Monte Carlo study of the impact of a magnetic field on the dose distribution in MRI-guided HDR brachytherapy using Ir-192.

    PubMed

    Beld, E; Seevinck, P R; Lagendijk, J J W; Viergever, M A; Moerland, M A

    2016-09-21

    In the process of developing a robotic MRI-guided high-dose-rate (HDR) prostate brachytherapy treatment, the influence of the MRI scanner's magnetic field on the dose distribution needs to be investigated. A magnetic field causes a deflection of electrons in the plane perpendicular to the magnetic field, and it leads to less lateral scattering along the direction parallel with the magnetic field. Monte Carlo simulations were carried out to determine the influence of the magnetic field on the electron behavior and on the total dose distribution around an Ir-192 source. Furthermore, the influence of air pockets being present near the source was studied. The Monte Carlo package Geant4 was utilized for the simulations. The simulated geometries consisted of a simplified point source inside a water phantom. Magnetic field strengths of 0 T, 1.5 T, 3 T, and 7 T were considered. The simulation results demonstrated that the dose distribution was nearly unaffected by the magnetic field for all investigated magnetic field strengths. Evidence was found that, from a dose perspective, the HDR prostate brachytherapy treatment using Ir-192 can be performed safely inside the MRI scanner. No need was found to account for the magnetic field during treatment planning. Nevertheless, the presence of air pockets in close vicinity to the source, particularly along the direction parallel with the magnetic field, appeared to be an important point for consideration.

  9. Free Energy Landscape of GAGA and UUCG RNA Tetraloops.

    PubMed

    Bottaro, Sandro; Banáš, Pavel; Šponer, Jiří; Bussi, Giovanni

    2016-10-20

    We report the folding thermodynamics of ccUUCGgg and ccGAGAgg RNA tetraloops using atomistic molecular dynamics simulations. We obtain a previously unreported estimation of the folding free energy using parallel tempering in combination with well-tempered metadynamics. A key ingredient is the use of a recently developed metric distance, eRMSD, as a biased collective variable. We find that the native fold of both tetraloops is not the global free energy minimum using the Amber χOL3 force field. The estimated folding free energies are 30.2 ± 0.5 kJ/mol for UUCG and 7.5 ± 0.6 kJ/mol for GAGA, in striking disagreement with experimental data. We evaluate the viability of all possible one-dimensional backbone force field corrections. We find that disfavoring the gauche+ region of α and ζ angles consistently improves the existing force field. The level of accuracy achieved with these corrections, however, cannot be considered sufficient by judging on the basis of available thermodynamic data and solution experiments.

  10. SU-E-T-391: Assessment and Elimination of the Angular Dependence of the Response of the NanoDot OSLD System in MV Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, J; University of Sydney, Sydney; RMIT University, Melbourne

    2014-06-01

    Purpose: Assess the angular dependence of the nanoDot OSLD system in MV X-ray beams at depths and mitigate this dependence for measurements in phantoms. Methods: Measurements for 6 MV photons at 3 cm and 10 cm depth and Monte Carlo simulations were performed. Two special holders were designed which allow a nanoDot dosimeter to be rotated around the center of its sensitive volume (5 mm diameter disk). The first holder positions the dosimeter disk perpendicular to the beam (en-face). It then rotates until the disk is parallel with the beam (edge on). This is referred to as Setup 1. The second holder positions the disk parallel to the beam (edge on) for all angles (Setup 2). Monte Carlo simulations using GEANT4 considered detector and housing in detail based on microCT data. Results: An average drop in response by 1.4±0.7% (measurement) and 2.1±0.3% (Monte Carlo) for the 90° orientation compared to 0° was found for Setup 1. Monte Carlo simulations also showed a strong dependence of the effect on the composition of the sensitive layer. Assuming 100% active material (Al2O3) results in a 7% drop in response for 90° compared to 0°. Assuming the layer to be completely water results in a flat response (within simulation uncertainty of about 1%). For Setup 2, measurements and Monte Carlo simulations found the angular dependence of the dosimeter to be below 1% and within the measurement uncertainty. Conclusion: The nanoDot dosimeter system exhibits a small angular dependence of approximately 2%. Changing the orientation of the dosimeter so that a coplanar beam arrangement always hits the detector material edge on reduces the angular dependence to within the measurement uncertainty of about 1%. This makes the dosimeter more attractive for phantom based clinical measurements and audits with multiple coplanar beams. The Australian Clinical Dosimetry Service is a joint initiative between the Australian Department of Health and the Australian Radiation Protection and Nuclear Safety Agency.

  11. Immune indexes of larks from desert and temperate regions show weak associations with life history but stronger links to environmental variation in microbial abundance.

    PubMed

    Horrocks, Nicholas P C; Hegemann, Arne; Matson, Kevin D; Hine, Kathryn; Jaquier, Sophie; Shobrak, Mohammed; Williams, Joseph B; Tinbergen, Joost M; Tieleman, B Irene

    2012-01-01

    Immune defense may vary as a result of trade-offs with other life-history traits or in parallel with variation in antigen levels in the environment. We studied lark species (Alaudidae) in the Arabian Desert and temperate Netherlands to test opposing predictions from these two hypotheses. Based on their slower pace of life, the trade-off hypothesis predicts relatively stronger immune defenses in desert larks compared with temperate larks. However, as predicted by the antigen exposure hypothesis, reduced microbial abundances in deserts should result in desert-living larks having relatively weaker immune defenses. We quantified host-independent and host-dependent microbial abundances of culturable microbes in ambient air and from the surfaces of birds. We measured components of immunity by quantifying concentrations of the acute-phase protein haptoglobin, natural antibody-mediated agglutination titers, complement-mediated lysis titers, and the microbicidal ability of whole blood. Desert-living larks were exposed to significantly lower concentrations of airborne microbes than temperate larks, and densities of some bird-associated microbes were also lower in desert species. Haptoglobin concentrations and lysis titers were also significantly lower in desert-living larks, but other immune indexes did not differ. Thus, contrary to the trade-off hypothesis, we found little evidence that a slow pace of life predicted increased immunological investment. In contrast, and in support of the antigen exposure hypothesis, associations between microbial exposure and some immune indexes were apparent. Measures of antigen exposure, including assessment of host-independent and host-dependent microbial assemblages, can provide novel insights into the mechanisms underlying immunological variation.

  12. Adsorption of hydrophobin on different self-assembled monolayers: the role of the hydrophobic dipole and the electric dipole.

    PubMed

    Peng, Chunwang; Liu, Jie; Zhao, Daohui; Zhou, Jian

    2014-09-30

    In this work, the adsorptions of hydrophobin (HFBI) on four different self-assembled monolayers (SAMs) (i.e., CH3-SAM, OH-SAM, COOH-SAM, and NH2-SAM) were investigated by parallel tempering Monte Carlo and molecular dynamics simulations. Simulation results indicate that the orientation of HFBI adsorbed on neutral surfaces is dominated by a hydrophobic dipole. HFBI adsorbs on the hydrophobic CH3-SAM through its hydrophobic patch and adopts a nearly vertical hydrophobic dipole relative to the surface, while it is nearly horizontal when adsorbed on the hydrophilic OH-SAM. For charged SAM surfaces, HFBI adopts a nearly vertical electric dipole relative to the surface. HFBI has the narrowest orientation distribution on the CH3-SAM, and thus can form an ordered monolayer and reverse the wettability of the surface. For HFBI adsorption on charged SAMs, the adsorption strength weakens as the surface charge density increases. Compared with those on other SAMs, a larger area of the hydrophobic patch is exposed to the solution when HFBI adsorbs on the NH2-SAM. This leads to an increase of the hydrophobicity of the surface, which is consistent with the experimental results. The binding of HFBI to the CH3-SAM is mainly through hydrophobic interactions, while it is mediated through a hydration water layer near the surface for the OH-SAM. For the charged SAM surfaces, the adsorption is mainly induced by electrostatic interactions between the charged surfaces and the oppositely charged residues. The effect of a hydrophobic dipole on protein adsorption onto hydrophobic surfaces is similar to that of an electric dipole for charged surfaces. Therefore, the hydrophobic dipole may be applied to predict the probable orientations of protein adsorbed on hydrophobic surfaces.

  13. Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System

    PubMed Central

    Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens

    2017-01-01

    Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promoter complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution. PMID:28287526

  14. Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System.

    PubMed

    Dörfler, Thilo; Eilert, Tobias; Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens

    2017-02-09

    Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promoter complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution.
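
    Parallel tempering itself is not spelled out in the record. As a generic, self-contained illustration of that ingredient (not the Fast-NPS code), the sketch below runs a small replica-exchange Metropolis sampler on a deliberately bimodal one-dimensional "posterior"; the target density, temperature ladder, proposal widths, and chain length are all assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      def log_post(x):
          # Toy bimodal log-density standing in for a multimodal dye-position posterior.
          return np.logaddexp(-0.5 * ((x - 3.0) / 0.5) ** 2,
                              -0.5 * ((x + 3.0) / 0.5) ** 2)

      temps = np.array([1.0, 2.5, 6.3, 16.0])          # assumed temperature ladder
      x = rng.normal(size=len(temps))                  # one replica per temperature
      cold = []

      for sweep in range(20000):
          for i, T in enumerate(temps):                # local Metropolis move in every replica
              prop = x[i] + rng.normal(0.0, 0.5 * np.sqrt(T))
              if np.log(rng.random()) < (log_post(prop) - log_post(x[i])) / T:
                  x[i] = prop
          j = rng.integers(len(temps) - 1)             # propose swapping a neighbouring pair
          dlog = (1.0 / temps[j] - 1.0 / temps[j + 1]) * (log_post(x[j + 1]) - log_post(x[j]))
          if np.log(rng.random()) < dlog:
              x[j], x[j + 1] = x[j + 1], x[j]
          cold.append(x[0])

      print(np.mean(np.array(cold) > 0.0))   # both modes visited; approaches 0.5 as the chain converges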

  15. EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2004-01-01

    EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for PDF (Probability Density Function) and it is developed mainly for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. The source code of EUPDF-II will be available with National Combustion Code (NCC) as a complete package.

  16. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  17. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
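
    The approximating functions themselves are not reproduced in the record. As an illustration only, the sketch below replaces the exact logarithm used in exponential step-length sampling with a short mantissa/exponent series and checks its relative error; the particular series, the attenuation coefficient, and the uniform deviates are assumptions, not the approximants from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      LN2 = 0.6931471805599453

      def fast_log(x):
          # Approximate ln(x) for x in (0, 1]: split off the power of two with frexp,
          # then use a short odd-power series for the mantissa in [0.5, 1).
          m, e = np.frexp(x)
          y = (m - 1.0) / (m + 1.0)
          return 2.0 * y * (1.0 + y * y / 3.0 + y ** 4 / 5.0) + e * LN2

      mu_t = 10.0                                  # assumed total attenuation coefficient (1/cm)
      xi = rng.uniform(1.0e-6, 1.0, 100000)        # uniform deviates used for step sampling
      steps_exact = -np.log(xi) / mu_t             # exact exponential step lengths
      steps_fast = -fast_log(xi) / mu_t

      print(np.max(np.abs(steps_fast - steps_exact) / steps_exact))   # well below the 1% level quoted above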

  18. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine, and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8–10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10–15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.

  19. The liquid-liquid transition in supercooled ST2 water: a comparison between umbrella sampling and well-tempered metadynamics.

    PubMed

    Palmer, Jeremy C; Car, Roberto; Debenedetti, Pablo G

    2013-01-01

    We investigate the metastable phase behaviour of the ST2 water model under deeply supercooled conditions. The phase behaviour is examined using umbrella sampling (US) and well-tempered metadynamics (WT-MetaD) simulations to compute the reversible free energy surface parameterized by density and bond-orientation order. We find that free energy surfaces computed with both techniques clearly show two liquid phases in coexistence, in agreement with our earlier US and grand canonical Monte Carlo calculations [Y. Liu, J. C. Palmer, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2012, 137, 214505; Y. Liu, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2009, 131, 104508]. While we demonstrate that US and WT-MetaD produce consistent results, the latter technique is estimated to be more computationally efficient by an order of magnitude. As a result, we show that WT-MetaD can be used to study the finite-size scaling behaviour of the free energy barrier separating the two liquids for systems containing 192, 300 and 400 ST2 molecules. Although our results are consistent with the expected N^(2/3) scaling law, we conclude that larger systems must be examined to provide conclusive evidence of a first-order phase transition and associated second critical point.
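
    The well-tempered recipe compared in the record (Gaussian hills whose height decays with the bias already deposited, followed by a rescaling of the bias to estimate the free energy) can be illustrated on a one-dimensional toy double well. The potential, hill parameters, bias factor, and Brownian dynamics below are assumptions, not the ST2 setup.

      import numpy as np

      rng = np.random.default_rng(8)
      beta, dt = 1.0, 0.01
      w0, sigma, dT, stride = 0.5, 0.1, 9.0, 200   # initial hill height, width, Delta T, deposition stride

      def F(s):                                    # toy free-energy profile along the collective variable
          return 3.0 * (s ** 2 - 1.0) ** 2

      def dF(s):
          return 12.0 * s * (s ** 2 - 1.0)

      centers, heights = [], []

      def bias(s):
          if not centers:
              return 0.0
          c, h = np.array(centers), np.array(heights)
          return float(np.sum(h * np.exp(-0.5 * ((s - c) / sigma) ** 2)))

      def dbias(s):
          if not centers:
              return 0.0
          c, h = np.array(centers), np.array(heights)
          g = h * np.exp(-0.5 * ((s - c) / sigma) ** 2)
          return float(np.sum(-g * (s - c) / sigma ** 2))

      s = 1.0
      for step in range(50000):                    # overdamped dynamics on F(s) + bias
          s += -dt * (dF(s) + dbias(s)) + np.sqrt(2.0 * dt / beta) * rng.normal()
          if step % stride == 0:
              heights.append(w0 * np.exp(-bias(s) / dT))   # well-tempered height decay
              centers.append(s)

      grid = np.linspace(-1.5, 1.5, 7)
      F_est = -(1.0 + 1.0 / (beta * dT)) * np.array([bias(g) for g in grid])
      print(np.round(F_est - F_est.min(), 2))      # rough estimate; compare with F(grid) - F(grid).min()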

  20. Coherence, causation, and the future of cognitive neuroscience research.

    PubMed

    Ramey, Christopher H; Chrysikou, Evangelia G

    2014-01-01

    Nachev and Hacker's conceptual analysis of the neural antecedents of voluntary action underscores the real danger of ignoring the meta-theoretical apparatus of cognitive neuroscience research. In this response, we temper certain claims (e.g., whether or not certain research questions are incoherent), consider a more extreme consequence of their argument against cognitive neuroscience (i.e., whether or not one can speak about causation with neural antecedents at all), and, finally, highlight recent methodological developments that exemplify cognitive neuroscientists' focus on studying the brain as a parallel, dynamic, and highly complex biological system.

  1. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    NASA Astrophysics Data System (ADS)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for any kind of application, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes to deal with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering sciences such as finite element analyses (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massive parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now capable of delivering much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  2. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  3. Wake Encounter Analysis for a Closely Spaced Parallel Runway Paired Approach Simulation

    NASA Technical Reports Server (NTRS)

    Mckissick, Burnell T.; Rico-Cusi, Fernando J.; Murdoch, Jennifer; Oseguera-Lohr, Rosa M.; Stough, Harry P., III; O'Connor, Cornelius J.; Syed, Hazari I.

    2009-01-01

    A Monte Carlo simulation of simultaneous approaches performed by two transport category aircraft from the final approach fix to a pair of closely spaced parallel runways was conducted to explore the aft boundary of the safe zone in which separation assurance and wake avoidance are provided. The simulation included variations in runway centerline separation, initial longitudinal spacing of the aircraft, crosswind speed, and aircraft speed during the approach. The data from the simulation showed that the majority of the wake encounters occurred near or over the runway and the aft boundaries of the safe zones were identified for all simulation conditions.

  4. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^(-4) chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
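
    A hedged sketch of the Bayesian idea behind this record: rather than demonstrating the 10^(-4) requirement purely by brute-force frequency, treat the impact probability as a Beta-distributed parameter and ask how many impact-free propagations are needed for a given posterior confidence. The flat Beta(1,1) prior, the 99% confidence target and the sample counts below are illustrative assumptions, not the analysis actually used in the record.

```python
from scipy.stats import beta

# Illustrative Beta-Binomial calculation (not the record's actual analysis):
# after n independent 50-year propagations with k observed Mars impacts, the
# posterior on the impact probability p under a Beta(a0, b0) prior is
# Beta(a0 + k, b0 + n - k).  We ask how many impact-free simulations are
# needed before we are 99% sure that p < 1e-4.

REQUIREMENT = 1e-4
CONFIDENCE = 0.99
a0, b0 = 1.0, 1.0          # flat prior (an assumption of this sketch)

def posterior_prob_compliant(n, k, a0=a0, b0=b0):
    """P(p < REQUIREMENT | k impacts observed in n propagations)."""
    return beta.cdf(REQUIREMENT, a0 + k, b0 + n - k)

for n in (10_000, 30_000, 50_000, 100_000):
    print(n, "propagations, zero impacts ->",
          f"{posterior_prob_compliant(n, 0):.3f}", "posterior confidence")

# Smallest power-of-two sample count (zero impacts) reaching the target confidence.
n = 1
while posterior_prob_compliant(n, 0) < CONFIDENCE:
    n *= 2
print("roughly", n, "impact-free propagations give",
      f"{posterior_prob_compliant(n, 0):.3f}", "posterior confidence")
```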

  5. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete-basis-set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  7. Global-view coefficients: a data management solution for parallel quantum Monte Carlo applications: A DATA MANAGEMENT SOLUTION FOR QMC APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Qingpeng; Dinan, James; Tirukkovalur, Sravya

    2016-01-28

    Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of the Monte Carlo simulation. Current QMC applications, such as QWalk and QMCPACK, replicate this table at every process or node, which limits scalability because increasing the number of processors does not enable larger systems to be run. We present a partitioned global address space approach to transparently managing this data using Global Arrays in a manner that allows the memory of multiple nodes to be aggregated. We develop an automated data management system that significantly reduces communication overheads, enabling new capabilities for QMC codes. Experimental results with QWalk and QMCPACK demonstrate the effectiveness of the data management system.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton

    This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
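
    A minimal sketch of the three building blocks named in the lecture outline (estimating π, inverse-transform sampling, rejection sampling). The distributions chosen below are standard textbook examples, not taken from the slides themselves.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) Estimating pi: fraction of uniform points in the unit square that fall
#    inside the quarter circle, times 4.
n = 1_000_000
xy = rng.random((n, 2))
p_hat = np.mean(np.sum(xy**2, axis=1) <= 1.0)
print("pi ~", 4 * p_hat, " (standard error ~", 4 * np.sqrt(p_hat * (1 - p_hat) / n), ")")

# 2) Inverse-transform sampling: for an exponential distribution with rate lam,
#    the CDF is F(x) = 1 - exp(-lam*x), so x = -ln(1 - u)/lam with u ~ U(0, 1).
lam = 2.0
u = rng.random(n)
x_exp = -np.log(1.0 - u) / lam
print("exponential mean ~", x_exp.mean(), " (expected", 1 / lam, ")")

# 3) Rejection sampling: draw from p(x) proportional to sin(x) on [0, pi]
#    using a uniform proposal and envelope M = 1 >= sin(x).
def rejection_sample(size):
    out = []
    while len(out) < size:
        x = rng.random(size) * np.pi              # proposal ~ U(0, pi)
        keep = rng.random(size) <= np.sin(x)      # accept with prob sin(x)/M, M = 1
        out.extend(x[keep].tolist())
    return np.array(out[:size])

samples = rejection_sample(100_000)
print("mean of sin-distributed samples ~", samples.mean(),
      " (expected pi/2 =", np.pi / 2, ")")
```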

  9. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
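
    The record couples stochastic model generation with MODFLOW and a Java-based distributed framework; the sketch below only illustrates the generic pattern of farming independent realizations out to worker processes. The forward model, worker count and statistics are stand-ins, not the system described in the record.

```python
import numpy as np
from multiprocessing import Pool

# Generic sketch of batching independent stochastic realizations across workers.
# `run_realization` is a stand-in for a real groundwater forward model (the
# record drives MODFLOW through the Java Parallel Processing Framework).

def run_realization(seed):
    rng = np.random.default_rng(seed)
    # stand-in "model": a random log-conductivity field reduced to one scalar
    logK = rng.normal(loc=-4.0, scale=1.0, size=(50, 50))
    return float(np.exp(logK).mean())

if __name__ == "__main__":
    n_realizations = 500
    with Pool(processes=10) as pool:              # adjust workers to the machine
        results = pool.map(run_realization, range(n_realizations), chunksize=10)
    results = np.asarray(results)
    print("mean:", results.mean(),
          " 5th-95th percentiles:", np.percentile(results, [5, 95]))
```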

  10. A determination of relativistic shock jump conditions using Monte Carlo techniques

    NASA Technical Reports Server (NTRS)

    Ellison, Donald C.; Reynolds, Stephen P.

    1991-01-01

    Monte Carlo techniques are used, assuming isotropic elastic scattering of all particles, to calculate jump conditions in parallel relativistic collisionless shocks in the absence of Fermi acceleration. The shock velocity and compression ratios are shown for arbitrary flow velocities and for any upstream temperature. Both single-component electron-positron plasma and two-component proton-electron plasmas are considered. It is shown that protons and electrons must share energy, directly or through the mediation of plasma waves, in order to satisfy the basic conservation conditions, and the electron and proton temperatures are determined for a particular microscopic, kinetic-theory model, namely, that protons always scatter elastically. The results are directly applicable to shocks in which waves of scattering superthermal particles are absent.

  11. Effect of the surface charge discretization on electric double layers: a Monte Carlo simulation study.

    PubMed

    Madurga, Sergio; Martín-Molina, Alberto; Vilaseca, Eudald; Mas, Francesc; Quesada-Pérez, Manuel

    2007-06-21

    The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are exclusively analyzed as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.

  12. Portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  13. Monte Carlo study of electron relaxation in graphene with spin polarized, degenerate electron gas in presence of electron-electron scattering

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-12-01

    The Monte Carlo simulation method is applied to study the relaxation of excited electrons in monolayer graphene. The presence of a spin-polarized background electron population, with a density corresponding to highly degenerate conditions, is assumed. Formulas for the electron-electron scattering rates, which properly account for the presence of electrons in the two energetically degenerate, inequivalent valleys of this material, are presented. The electron relaxation process can be divided into two phases, thermalization and cooling, which can be clearly distinguished by examining the standard deviation of the electron energy distribution. The influence of the exchange effect in interactions between electrons with parallel spins is shown to be important only in transient conditions, especially during the thermalization phase.

  14. When the lowest energy does not induce native structures: parallel minimization of multi-energy values by hybridizing searching intelligences.

    PubMed

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy is luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise.
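
    A toy illustration of the core idea in this record: several searchers, each guided by a different energy function, run in parallel and report their own best solution. The synthetic 2-D "energies", the plain Metropolis moves and all parameters are assumptions made to keep the sketch self-contained; the paper's actual method hybridizes ant colony optimization with Metropolis Monte Carlo on real protein models.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Toy parallel search guided by multiple energy functions (not the paper's
# ant-colony/Metropolis hybrid): each worker minimizes a different synthetic
# 2-D energy with a plain Metropolis walk and returns its best point.

def energy_a(x):
    return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 0.5) ** 2

def energy_b(x):
    return np.sin(3 * x[0]) + x[0] ** 2 + (x[1] - 1.0) ** 2

def metropolis_search(args):
    energy, seed, steps, temp = args
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)
    e = energy(x)
    best_x, best_e = x.copy(), e
    for _ in range(steps):
        x_new = x + 0.1 * rng.normal(size=2)
        e_new = energy(x_new)
        # Metropolis criterion: always accept downhill, sometimes accept uphill
        if e_new < e or rng.random() < np.exp(-(e_new - e) / temp):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

if __name__ == "__main__":
    tasks = [(energy_a, 0, 20_000, 0.1), (energy_b, 1, 20_000, 0.1)]
    with ProcessPoolExecutor(max_workers=2) as ex:
        for task, (x, e) in zip(tasks, ex.map(metropolis_search, tasks)):
            print(task[0].__name__, "best energy", round(float(e), 4),
                  "at", np.round(x, 3))
```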

  15. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

    PubMed Central

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy is luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise. PMID:23028708

  16. Acoustical numerology and lucky equal temperaments

    NASA Astrophysics Data System (ADS)

    Hall, Donald E.

    1988-04-01

    Equally tempered musical scales with N steps per octave are known to work especially well in approximating justly tuned intervals for such values as N=12, 19, 31, and 53. A quantitative measure of the closeness of such fits is suggested, in terms of the probabilities of coming as close to randomly chosen intervals as to the justly tuned targets. When two or more harmonic intervals are considered simultaneously, this involves a Monte Carlo evaluation of the probabilities. The results can be used to gauge how much advantage the special values of N mentioned above have over others. This article presents the rationale and method of computation, together with illustrative results in a few of the most interesting cases. References are provided to help relate these results to earlier works by music theorists.
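
    A small Monte Carlo in the spirit of this record: for an N-step equal temperament, measure how closely the nearest scale steps approximate a few just intervals, and estimate the probability that randomly chosen target intervals would be approximated at least as well. The choice of just intervals and the exact statistic are assumptions; Hall's paper may define the measure differently.

```python
import numpy as np

# For an N-step equal temperament (ET), each just interval is missed by some
# number of cents.  Estimate by Monte Carlo the probability that randomly
# chosen target intervals would all be matched at least as well; a small
# probability means the just intervals fit "luckily" well for that N.

rng = np.random.default_rng(1)
JUST = [3 / 2, 5 / 4, 6 / 5]                 # fifth, major third, minor third
targets = 1200.0 * np.log2(JUST)             # just intervals in cents

def et_error(intervals_cents, n_steps):
    """Distance (cents) from each interval to the nearest N-ET scale step."""
    step = 1200.0 / n_steps
    return np.abs(intervals_cents - step * np.round(intervals_cents / step))

def lucky_probability(n_steps, samples=500_000):
    just_err = et_error(targets, n_steps)
    random_targets = rng.random((samples, len(targets))) * 1200.0
    rand_err = et_error(random_targets, n_steps)
    # success: every random target is matched at least as well as its just counterpart
    return just_err.max(), np.mean(np.all(rand_err <= just_err, axis=1))

for n in (12, 19, 31, 53, 17, 23):
    worst, p = lucky_probability(n)
    print(f"N={n:2d}: worst just-interval miss {worst:5.1f} cents, "
          f"P(random targets fit as well) = {p:.2e}")
```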

  17. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(sqrt(N)) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  18. Binding Modes of Teixobactin to Lipid II: Molecular Dynamics Study.

    PubMed

    Liu, Yang; Liu, Yaxin; Chan-Park, Mary B; Mu, Yuguang

    2017-12-08

    Teixobactin (TXB) is a newly discovered antibiotic targeting the bacterial cell wall precursor Lipid II (LII). In the present work, four binding modes of TXB on LII were identified by a contact-map based clustering method. The highly flexible binary complex ensemble was generated by parallel tempering metadynamics simulation in a well-tempered ensemble (PTMetaD-WTE). In agreement with experimental findings, the pyrophosphate group and the attached first sugar subunit of LII are found to be the minimal motif for stable TXB binding. Three of the four binding modes involve the ring structure of TXB and have relatively higher binding affinities, indicating the importance of the ring motif of TXB in LII recognition. TXB-LII complexes with a 2:1 ratio are also predicted, with configurations in which the ring motifs of two TXB molecules bind to the pyrophosphate-MurNAc moiety and the glutamic acid residue of one LII, respectively. Our findings disclose that the ring motif of TXB is critical to LII binding and that novel antibiotics can be designed based on its mimetics.

  19. Applying Massively Parallel Kinetic Monte Carlo Methods to Simulate Grain Growth and Sintering in Powdered Metals

    DTIC Science & Technology

    2011-09-01

    [List of figures (fragment): Structure Evolution During Sintering; Ising Model Configuration With Eight Nearest Neighbors.] The ability to fabricate structural components from metals with a fine (micron-sized), controlled grain size is one of the hallmarks of modern structural metallurgy. Powder metallurgy, in particular, consists of powder manufacture, powder blending, compacting, and sintering.

  20. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
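
    A toy, serial simulation of the iterated processor-pair-wise balancing idea: processors are paired along each dimension of a hypercube for log2(N) rounds and each pair averages its load. The record's algorithm exchanges real Monte Carlo particles over MPI; this sketch only tracks abstract per-processor "load" and assumes a power-of-two processor count.

```python
import numpy as np

# Toy illustration of O(log N) iterated pairwise load balancing.
# Pair processor i with i XOR 2^d in round d; each pair splits its work evenly.

rng = np.random.default_rng(0)
n_proc = 1 << 10                                     # 1024 processors
load = rng.exponential(scale=1000.0, size=n_proc)    # imbalanced initial workload

print(f"before: max/mean load = {load.max() / load.mean():.2f}")

rounds = int(np.log2(n_proc))
for d in range(rounds):                              # log2(N) pairwise rounds
    partner = np.arange(n_proc) ^ (1 << d)           # hypercube pairing
    load = 0.5 * (load + load[partner])              # each pair averages its load

print(f"after {rounds} rounds: max/mean load = {load.max() / load.mean():.2f}")
```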

  1. Adventures in Parallel Processing: Entry, Descent and Landing Simulation for the Genesis and Stardust Missions

    NASA Technical Reports Server (NTRS)

    Lyons, Daniel T.; Desai, Prasun N.

    2005-01-01

    This paper will describe the Entry, Descent and Landing simulation tradeoffs and techniques that were used to provide the Monte Carlo data required to approve entry during a critical period just before entry of the Genesis Sample Return Capsule. The same techniques will be used again when Stardust returns on January 15, 2006. Only one hour was available for the simulation which propagated 2000 dispersed entry states to the ground. Creative simulation tradeoffs combined with parallel processing were needed to provide the landing footprint statistics that were an essential part of the Go/NoGo decision that authorized release of the Sample Return Capsule a few hours before entry.

  2. LSPRAY-III: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2008-01-01

    LSPRAY-III is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace application. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-III, we have advanced the state-of-the-art in spray computations in several important ways.

  3. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

    Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. Then the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to verify the results for the average bit error rate (ABER) and outage probability. The numerical results show that the multi-hop parallel network requires a smaller average transmitted optical power to achieve the same ABER and outage probability in FSO links. Furthermore, using more hops and cooperative paths improves the quality of the communication.
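
    A simplified Monte Carlo sketch of the best-path-selection idea over gamma-gamma turbulence only: one hop's normalized irradiance is modeled as the product of two unit-mean Gamma variates, an end-to-end path is limited by its weakest hop, and the receiver keeps the best of the parallel paths. Pointing errors, the BPSK BER analysis and all parameter values in the record are omitted or replaced by placeholders.

```python
import numpy as np

# Simplified outage-probability Monte Carlo for multi-hop parallel FSO relaying
# over gamma-gamma turbulence.  The gamma-gamma irradiance is sampled as the
# product of two unit-mean Gamma variates; thresholds and shape parameters are
# placeholders, and misalignment fading is not modeled.

rng = np.random.default_rng(7)

def gg_irradiance(alpha, beta, size):
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    return x * y                                   # unit-mean gamma-gamma sample

def outage_probability(n_paths, n_hops, threshold, alpha=4.0, beta=2.0,
                       trials=200_000):
    # irradiance per (trial, path, hop); a path is limited by its weakest hop,
    # and the receiver selects the best of the parallel paths
    irr = gg_irradiance(alpha, beta, (trials, n_paths, n_hops))
    path_irr = irr.min(axis=2)
    best = path_irr.max(axis=1)
    return float(np.mean(best < threshold))

for n_paths in (1, 2, 3):
    p = outage_probability(n_paths=n_paths, n_hops=2, threshold=0.1)
    print(f"{n_paths} parallel 2-hop path(s): outage probability ~ {p:.4f}")
```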

  4. LSPRAY-II: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2004-01-01

    LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace application. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.

  5. Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code

    NASA Astrophysics Data System (ADS)

    Payne, J.; McCune, D.; Prater, R.

    2010-11-01

    NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.

  6. Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.

    PubMed

    Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang

    2005-12-09

    The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.

  7. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body bases.

    PubMed

    Reboredo, Fernando A; Kim, Jeongnim

    2014-02-21

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  8. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    NASA Astrophysics Data System (ADS)

    Reboredo, Fernando A.; Kim, Jeongnim

    2014-02-01

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  9. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
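
    A toy model in the spirit of the constant-execution-time case discussed above, not the model from the record: assume every particle in the bank survives each event with a fixed probability, so the bank drains over successive event-iterations and vector operations of width W become progressively under-filled. The survival probability, bank sizes and vector widths below are illustrative assumptions.

```python
import numpy as np

# Toy estimate of vector efficiency (fraction of vector lanes doing useful
# work) as the particle bank drains.  All events are assumed to take the same
# time and each particle survives an event with fixed probability.

rng = np.random.default_rng(3)

def vector_efficiency(bank_size, vector_width, survive_p=0.98, trials=20):
    eff = []
    for _ in range(trials):
        alive = bank_size
        useful, lanes = 0, 0
        while alive > 0:
            vec_ops = -(-alive // vector_width)      # ceil division
            useful += alive                          # lanes doing real work
            lanes += vec_ops * vector_width          # lanes paid for
            alive = int(rng.binomial(alive, survive_p))  # survivors of this event
        eff.append(useful / lanes)
    return float(np.mean(eff))

for bank in (8, 64, 256, 1024, 4096):
    for width in (8, 16):
        print(f"bank={bank:5d}  width={width:2d}  "
              f"efficiency ~ {vector_efficiency(bank, width):.3f}")
```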

  10. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  11. Fish and Phytoplankton Exhibit Contrasting Temporal Species Abundance Patterns in a Dynamic North Temperate Lake

    PubMed Central

    Hansen, Gretchen J. A.; Carey, Cayelan C.

    2015-01-01

    Temporal patterns of species abundance, although less well-studied than spatial patterns, provide valuable insight to the processes governing community assembly. We compared temporal abundance distributions of two communities, phytoplankton and fish, in a north temperate lake. We used both 17 years of observed relative abundance data as well as resampled data from Monte Carlo simulations to account for the possible effects of non-detection of rare species. Similar to what has been found in other communities, phytoplankton and fish species that appeared more frequently were generally more abundant than rare species. However, neither community exhibited two distinct groups of “core” (common occurrence and high abundance) and “occasional” (rare occurrence and low abundance) species. Both observed and resampled data show that the phytoplankton community was dominated by occasional species appearing in only one year that exhibited large variation in their abundances, while the fish community was dominated by core species occurring in all 17 years at high abundances. We hypothesize that the life-history traits that enable phytoplankton to persist in highly dynamic environments may result in communities dominated by occasional species capable of reaching high abundances when conditions allow. Conversely, longer turnover times and broad environmental tolerances of fish may result in communities dominated by core species structured primarily by competitive interactions. PMID:25651399

  12. Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object

    NASA Astrophysics Data System (ADS)

    Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain

    2011-03-01

    Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of treatment planning systems in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on a VARIAN Eclipse Treatment Planning System. Large dose variations were observed, particularly in air- and bone-equivalent materials. In this current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the VARIAN Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code (version 2003). To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using the pelvic DTO and was performed for low- and high-energy photon beams (6 and 18 MV) on a VARIAN 2100CD linear accelerator with a square 10x10 cm2 field. Taking the MC data as the reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm and the dose difference to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas the TPS computes dose to water; we therefore used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show strong consistency between Eclipse and MC calculations on the beam axis.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

    Most high performance computing systems being deployed currently and envisioned for the future are based on making use of heavy parallelism across many computational nodes and many concurrent cores. These types of heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. This is made even more significant in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work will focus on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated with the continuous analytic free gas model.
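
    A generic rejection-sampling sketch in the spirit of the proposed method, assuming a made-up unnormalized outgoing-energy kernel and a uniform proposal; the record's actual treatment evaluates the full S(α,β) thermal scattering law for graphite inside OpenMC, which is far more involved.

```python
import numpy as np

# Generic rejection sampling of an outgoing energy E' from an unnormalized
# kernel k(E') on [0, E_MAX].  The kernel below is a placeholder, not the
# graphite S(alpha, beta) law of the record.

rng = np.random.default_rng(11)

E_MAX = 1.0  # eV, placeholder upper bound for the outgoing energy

def kernel(e_out):
    """Hypothetical unnormalized outgoing-energy distribution."""
    return np.sqrt(e_out) * np.exp(-e_out / 0.05)     # Maxwellian-like shape

# Bounding constant M >= max of the kernel on [0, E_MAX]; here found by a grid scan.
grid = np.linspace(1e-6, E_MAX, 10_000)
M = kernel(grid).max() * 1.05                          # small safety margin

def sample_outgoing_energy(n):
    samples = np.empty(0)
    while samples.size < n:
        e_prop = rng.random(n) * E_MAX                 # uniform proposal on [0, E_MAX]
        accept = rng.random(n) * M <= kernel(e_prop)   # accept with prob k(E')/M
        samples = np.concatenate([samples, e_prop[accept]])
    return samples[:n]

e_out = sample_outgoing_energy(100_000)
print("mean outgoing energy ~", e_out.mean(), "eV")
print("acceptance efficiency ~", kernel(grid).mean() / M)
```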

  14. High-performance parallel computing in the classroom using the public goods game as an example

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2017-07-01

    The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
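
    A compact, CPU-only Monte Carlo sketch of the spatial public goods game with the Fermi strategy-update rule described above. The lattice size, synergy factor and noise are placeholders, and the GPU acceleration that is the point of the record is omitted; the documented source code accompanying the paper is the authoritative reference.

```python
import numpy as np

# Spatial public goods game on a periodic square lattice with Fermi updating.
# Each player belongs to five overlapping groups (centred on itself and its
# four neighbours); cooperators pay a unit cost into each group's pool, the
# pool is multiplied by r and shared equally among the five members.

rng = np.random.default_rng(5)
L = 50                 # linear lattice size (placeholder)
r = 4.5                # synergy factor (placeholder)
K = 0.5                # noise of the Fermi rule (placeholder)
COST = 1.0
strategy = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector

def group_members(x, y):
    """The five sites of the group centred on (x, y)."""
    return [(x, y), ((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Total payoff of player (x, y) from the five groups it belongs to."""
    total = 0.0
    for gx, gy in group_members(x, y):
        members = group_members(gx, gy)
        n_coop = sum(strategy[m] for m in members)
        total += r * COST * n_coop / len(members) - COST * strategy[x, y]
    return total

def monte_carlo_step():
    for _ in range(L * L):                    # one MC step = L*L elementary updates
        x, y = rng.integers(0, L, size=2)
        nx, ny = group_members(x, y)[rng.integers(1, 5)]   # random neighbour
        if strategy[x, y] != strategy[nx, ny]:
            # Fermi rule: adopt the neighbour's strategy with this probability
            p = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))
            if rng.random() < p:
                strategy[x, y] = strategy[nx, ny]

for step in range(30):
    monte_carlo_step()
    if step % 10 == 9:
        print(f"MC step {step + 1:3d}: cooperator fraction = {strategy.mean():.3f}")
```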

  15. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  16. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches have focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  17. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing an appropriate coagulation rule provides a route to attain a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of the coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which executes massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against the benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.

  18. Information criteria for quantifying loss of reversibility in parallelized KMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  19. Information criteria for quantifying loss of reversibility in parallelized KMC

    NASA Astrophysics Data System (ADS)

    Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  20. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  1. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  2. Radiative transfer calculated from a Markov chain formalism

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
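
    A minimal sketch of the "transport as an absorbing Markov chain" idea, assuming a crude layered slab with made-up absorption and hopping probabilities rather than the full phase-function treatment of the record: the diffuse reflection and transmission follow from the fundamental matrix (I - Q)^-1 R of an absorbing chain, and are cross-checked against a direct Monte Carlo random walk.

```python
import numpy as np

# Crude layered-slab model: at each interaction the photon is absorbed with
# probability `absorb`, otherwise it steps to an adjacent layer with equal
# probability.  Escaping at the top counts as diffuse reflection, at the
# bottom as transmission.  All probabilities are placeholders.

n_layers = 20
absorb = 0.1
scat = 1.0 - absorb

# Transient-to-transient matrix Q and transient-to-absorbing matrix R.
# Absorbing states: 0 = reflected (top escape), 1 = transmitted (bottom escape),
# 2 = absorbed inside the slab.
Q = np.zeros((n_layers, n_layers))
R = np.zeros((n_layers, 3))
for i in range(n_layers):
    R[i, 2] = absorb
    if i > 0:
        Q[i, i - 1] = scat / 2
    else:
        R[i, 0] = scat / 2          # escapes through the top
    if i < n_layers - 1:
        Q[i, i + 1] = scat / 2
    else:
        R[i, 1] = scat / 2          # escapes through the bottom

# Absorption probabilities B = (I - Q)^-1 R; photons enter at the top layer.
B = np.linalg.solve(np.eye(n_layers) - Q, R)
print("Markov chain:  reflect %.4f  transmit %.4f  absorb %.4f" % tuple(B[0]))

# Cross-check with a direct Monte Carlo random walk.
rng = np.random.default_rng(0)
counts = np.zeros(3)
for _ in range(200_000):
    layer = 0
    while True:
        if rng.random() < absorb:
            counts[2] += 1; break
        layer += 1 if rng.random() < 0.5 else -1
        if layer < 0:
            counts[0] += 1; break
        if layer >= n_layers:
            counts[1] += 1; break
print("Monte Carlo:   reflect %.4f  transmit %.4f  absorb %.4f" % tuple(counts / counts.sum()))
```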

  3. Genetic Algorithms and Their Application to the Protein Folding Problem

    DTIC Science & Technology

    1993-12-01

    and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these...calculated energies with those obtained using the molecular simulation software package called CHARMm. 9) Test both the simple and parallel simple genetic...homology-based, and simplification techniques. 3.21 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This

  4. Applications of Massive Mathematical Computations

    DTIC Science & Technology

    1990-04-01

    particles from the first principles of QCD. This problem is under intensive numerical study using special purpose parallel supercomputers in...several places around the world. The method used here is the Monte Carlo integration for fixed 3-D plus time lattices. Reliable results are still years...mathematical and theoretical physics, but its most promising applications are in the numerical realization of QCD computations. Our programs for the solution

  5. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w K_cep values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of the air kerma.

  6. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  7. Parallel Fokker–Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-01-01

    A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.

  8. Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processor Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) become necessary for easy maintainability of applications; this is very relevant in scientific computing where code changes are very frequent, making it tedious and prone to error to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance-portability can be reached.

  9. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
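
    For readers unfamiliar with hard-particle Monte Carlo, the following serial sketch shows why a trial move reduces to an overlap test: all non-overlapping configurations carry equal Boltzmann weight, so the Metropolis acceptance is simply "accept if no overlap". The box size, disk diameter and step size are arbitrary illustrative values; this is not HPMC code.

        import numpy as np

        rng = np.random.default_rng(0)

        # Minimal serial hard-disk Monte Carlo sweep in a periodic square box.
        L, sigma, n = 10.0, 1.0, 20           # box edge, disk diameter, particles
        pos = rng.uniform(0, L, size=(n, 2))  # toy initial state (may overlap)

        def overlaps(i, trial):
            d = pos - trial
            d -= L * np.round(d / L)          # minimum-image convention
            r2 = np.einsum('ij,ij->i', d, d)
            r2[i] = np.inf                    # ignore the particle being moved
            return np.any(r2 < sigma ** 2)

        def sweep(max_step=0.2):
            accepted = 0
            for i in rng.permutation(n):
                trial = (pos[i] + rng.uniform(-max_step, max_step, 2)) % L
                if not overlaps(i, trial):    # hard-core Metropolis test
                    pos[i] = trial
                    accepted += 1
            return accepted / n

        print("acceptance fraction:", sweep())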

  10. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
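
    A minimal sketch of the "pseudo multiple replica" idea, under the assumption of a generic linear reconstruction: synthetic noise with the prescan covariance is injected many times, each replica is reconstructed, and the pixel-wise standard deviation across replicas gives the noise (and hence SNR) map. The coil covariance Psi and the reconstruction matrix E below are stand-ins, not SENSE or GRAPPA operators.

        import numpy as np

        rng = np.random.default_rng(1)

        # Pseudo-multiple-replica noise propagation for a generic *linear*
        # reconstruction: inject synthetic noise with the prescan coil covariance,
        # reconstruct each replica, and read the pixel-wise noise level off the
        # stack.  Psi and E below are stand-ins, not a SENSE/GRAPPA operator.
        n_coils, n_pix, n_rep = 4, 64, 500
        Psi = 0.1 * (np.eye(n_coils) + 0.2)             # assumed coil covariance
        Lc = np.linalg.cholesky(Psi)                    # colors white noise to Psi
        E = rng.normal(size=(n_pix, n_coils * n_pix))   # stand-in linear recon

        def reconstruct(data):
            return E @ data.reshape(-1)

        signal = rng.normal(size=(n_coils, n_pix))      # stand-in acquired data
        replicas = np.empty((n_rep, n_pix))
        for r in range(n_rep):
            noise = Lc @ rng.normal(size=(n_coils, n_pix))
            replicas[r] = reconstruct(signal + noise)

        noise_std = replicas.std(axis=0)                # spatially varying noise map
        snr = reconstruct(signal) / noise_std           # pixel-wise SNR estimate
        print("median pixel |SNR|:", round(float(np.median(np.abs(snr))), 2))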

  11. Structure of aqueous proline via parallel tempering molecular dynamics and neutron diffraction.

    PubMed

    Troitzsch, R Z; Martyna, G J; McLain, S E; Soper, A K; Crain, J

    2007-07-19

    The structure of aqueous L-proline amino acid has been the subject of much debate centering on the validity of various proposed models, differing widely in the extent to which local and long-range correlations are present. Here, aqueous proline is investigated by atomistic, replica exchange molecular dynamics simulations, and the results are compared to neutron diffraction and small angle neutron scattering (SANS) data, which have been reported recently (McLain, S.; Soper, A.; Terry, A.; Watts, A. J. Phys. Chem. B 2007, 111, 4568). Comparisons between neutron experiments and simulation are made via the static structure factor S(Q) which is measured and computed from several systems with different H/D isotopic compositions at a concentration of 1:20 molar ratio. Several different empirical water models (TIP3P, TIP4P, and SPC/E) in conjunction with the CHARMM22 force field are investigated. Agreement between experiment and simulation is reasonably good across the entire Q range although there are significant model-dependent variations in some cases. In general, agreement is improved slightly upon application of approximate quantum corrections obtained from gas-phase path integral simulations. Dimers and short oligomeric chains formed by hydrogen bonds (frequently bifurcated) coexist with apolar (hydrophobic) contacts. These emerge as the dominant local motifs in the mixture. Evidence for long-range association is more equivocal: No long-range structures form spontaneously in the MD simulations, and no obvious low-Q signature is seen in the SANS data. Moreover, associations introduced artificially to replicate a long-standing proposed mesoscale structure for proline correlations as an initial condition are annealed out by parallel tempering MD simulations. However, some small residual aggregates do remain, implying a greater degree of long-range order than is apparent in the SANS data.

  12. Relative Linkages of Stream Dissolved Oxygen with the Climatic, Hydrological, and Biogeochemical Drivers across the East Coast of U.S.A.

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, O. I.; Ahmed, S.

    2017-12-01

    Dissolved oxygen (DO) is a key indicator of stream water quality and ecosystem health. However, the temporal dynamics of stream DO is controlled by a multitude of interacting environmental drivers. The relative linkages of stream DO with the relevant environmental drivers were determined in this study across the U.S. East Coast by employing a systematic data analytics approach. The study analyzed temporal data for 51 water quality monitoring stations from USGS NWIS and EPA STORET databases. Principal component analysis and factor analysis, along with Pearson's correlation analysis, were applied to identify the interrelationships and unravel latent patterns among DO and the environmental drivers. Power-law-based partial least squares regression models with a bootstrap Monte Carlo procedure (1000 iterations) were developed to reliably estimate the environmental linkages of DO by resolving multicollinearity. Based on the similarity of dominant drivers, the streams were categorized into three distinct environmental regimes. Stream DO in the northern part of the temperate zone (e.g., northeast coast) had the strongest linkage with water temperature, suggesting an environmental regime with dominant climatic control. However, stream DO in the tropical zones (e.g., southeast Florida) was mostly driven by pH, indicating an environmental regime likely controlled by redox chemistry. Further, a transitional regime was found between the temperate and tropical zones, where stream DO was controlled by both water temperature and pH. The results suggested a strong effect of the climatic gradient (temperate to tropical) on stream DO along the East Coast. The identified environmental regimes and the regime-specific relative linkages provided new information on the dominant controls of coastal stream water quality dynamics. The findings would guide the planning and management of coastal stream water quality and ecosystem health across the U.S. East Coast and around the world.
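
    To make the regression-with-bootstrap step concrete, the sketch below fits a power-law model by ordinary least squares on log-transformed synthetic data and resamples rows 1000 times to estimate coefficient uncertainty. Ordinary least squares stands in for the partial least squares step of the study, and all variables and coefficients are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(8)

        # Power-law regression on log-transformed synthetic data with a bootstrap
        # Monte Carlo loop for coefficient uncertainty (OLS stands in for PLS).
        n = 200
        temp = rng.uniform(5, 30, n)                    # water temperature, deg C
        ph = rng.uniform(6.5, 8.5, n)
        do = 14.0 * temp ** -0.35 * ph ** 0.10 * rng.lognormal(0.0, 0.05, n)

        X = np.column_stack([np.ones(n), np.log(temp), np.log(ph)])
        y = np.log(do)

        boot = np.empty((1000, X.shape[1]))
        for b in range(1000):
            idx = rng.integers(0, n, n)                 # resample rows w/ replacement
            boot[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

        mean = boot.mean(axis=0)
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        for name, m, l, h in zip(["intercept", "log(temp)", "log(pH)"], mean, lo, hi):
            print(f"{name:>10}: {m:+.3f}  (95% CI {l:+.3f}, {h:+.3f})")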

  13. Sympatric parallel diversification of major oak clades in the Americas and the origins of Mexican species diversity.

    PubMed

    Hipp, Andrew L; Manos, Paul S; González-Rodríguez, Antonio; Hahn, Marlene; Kaproth, Matthew; McVay, John D; Avalos, Susana Valencia; Cavender-Bares, Jeannine

    2018-01-01

    Oaks (Quercus, Fagaceae) are the dominant tree genus of North America in species number and biomass, and Mexico is a global center of oak diversity. Understanding the origins of oak diversity is key to understanding biodiversity of northern temperate forests. A phylogenetic study of biogeography, niche evolution and diversification patterns in Quercus was performed using 300 samples, 146 species. Next-generation sequencing data were generated using the restriction-site associated DNA (RAD-seq) method. A time-calibrated maximum likelihood phylogeny was inferred and analyzed with bioclimatic, soils, and leaf habit data to reconstruct the biogeographic and evolutionary history of the American oaks. Our highly resolved phylogeny demonstrates sympatric parallel diversification in climatic niche, leaf habit, and diversification rates. The two major American oak clades arose in what is now the boreal zone and radiated, in parallel, from eastern North America into Mexico and Central America. Oaks adapted rapidly to niche transitions. The Mexican oaks are particularly numerous, not because Mexico is a center of origin, but because of high rates of lineage diversification associated with high rates of evolution along moisture gradients and between the evergreen and deciduous leaf habits. Sympatric parallel diversification in the oaks has shaped the diversity of North American forests. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  14. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
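
    The Gauss-Seidel kernel that this abstract uses as its GPGPU test case is, in serial form, only a few lines; the sketch below shows it for a small diagonally dominant system (the radiative transfer matrices themselves are not reproduced here).

        import numpy as np

        # Plain serial Gauss-Seidel iteration for A x = b (diagonally dominant A),
        # the sequential kernel that the GPGPU study parallelizes.
        def gauss_seidel(A, b, tol=1e-10, max_iter=10000):
            x = np.zeros_like(b, dtype=float)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.max(np.abs(x - x_old)) < tol:
                    break
            return x

        A = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(gauss_seidel(A, b), np.linalg.solve(A, b))   # should match closely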

  15. LSPRAY-V: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2015-01-01

    LSPRAY-V is a Lagrangian spray solver developed for application with unstructured grids and massively parallel computers. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray encountered over a wide range of operating conditions in modern aircraft engine development. It could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-V, we have advanced the state-of-the-art in spray computations in several important ways.

  16. Hardness of H13 Tool Steel After Non-isothermal Tempering

    NASA Astrophysics Data System (ADS)

    Nelson, E.; Kohli, A.; Poirier, D. R.

    2018-04-01

    A direct method to calculate the tempering response of a tool steel (H13) that exhibits secondary hardening is presented. Based on the traditional method of presenting tempering response in terms of isothermal tempering, we show that the tempering response for a steel undergoing a non-isothermal tempering schedule can be predicted. Experiments comprised (1) isothermal tempering, (2) non-isothermal tempering with relatively slow heating to the process temperature and (3) fast-heating cycles that are relevant to tempering by induction heating. After establishing the tempering response of the steel under simple isothermal conditions, the tempering response can be applied to non-isothermal tempering by using a numerical method to calculate the tempering parameter. Calculated results are verified by the experiments.

  17. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulations for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  18. Imaging skin pathologies with polarized light: Empirical and theoretical studies

    NASA Astrophysics Data System (ADS)

    Ramella-Roman, Jessica C.

    The use of polarized light imaging can facilitate the determination of skin cancer borders before a Mohs surgery procedure. Linearly polarized light that illuminates the skin is backscattered by superficial layers where cancer often arises and is randomized by the collagen fibers. The superficially backscattered light can be distinguished from the diffusely reflected light using a detector analyzer that is sequentially oriented parallel and perpendicular to the source polarization. A polarized image, pol = (parallel - perpendicular) / (parallel + perpendicular), is generated. This image has a higher contrast to the superficial skin layers than simple total reflectance images. Pilot clinical trials were conducted with a small hand-held device for the accumulation of a library of lesions to establish the efficacy of polarized light imaging in vivo. It was found that melanoma exhibits a high contrast to polarized light imaging as well as basal and sclerosing cell carcinoma. Mechanisms of polarized light scattering from different tissues and tissue phantoms were studied in vitro. Parameters such as depth of depolarization (DOD), retardance, and birefringence were studied in theory and experimentally. Polarized light traveling through different tissues (skin, muscle, and liver) depolarized after a few hundred microns. Highly birefringent materials such as skin (DOD = 300 μm at 696 nm) and muscle (DOD = 370 μm at 696 nm) depolarized light faster than less birefringent materials such as liver (DOD = 700 μm at 696 nm). Light depolarization can also be attributed to scattering. Three Monte Carlo programs for modeling polarized light transfer into scattering media were implemented to evaluate these mechanisms. Simulations conducted with the Monte Carlo programs showed that small diameter spheres have different mechanisms of depolarization than larger ones. The models also showed that the anisotropy parameter g strongly influences the depolarization mechanism. (Abstract shortened by UMI.)
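
    The polarized image defined in this abstract is a one-line pixel-wise operation; a minimal sketch, using synthetic frames in place of real co- and cross-polarized camera images, is given below.

        import numpy as np

        rng = np.random.default_rng(2)

        # Pixel-wise polarized image pol = (I_par - I_perp) / (I_par + I_perp),
        # computed here from synthetic co- and cross-polarized frames.
        I_par = rng.uniform(0.4, 1.0, size=(128, 128))    # analyzer parallel
        I_perp = rng.uniform(0.3, 0.9, size=(128, 128))   # analyzer perpendicular

        pol = (I_par - I_perp) / (I_par + I_perp + 1e-12) # offset avoids 0/0
        print("mean polarization contrast:", round(float(pol.mean()), 4))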

  19. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.

  20. THE IBEX RIBBON AND THE PICKUP ION RING STABILITY IN THE OUTER HELIOSHEATH. II. MONTE-CARLO AND PARTICLE-IN-CELL MODEL RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemiec, J.; Florinski, V.; Heerikhuisen, J.

    2016-08-01

    The nearly circular ribbon of energetic neutral atom (ENA) emission discovered by NASA’s Interplanetary Boundary EXplorer satellite (IBEX), is most commonly attributed to the effect of charge exchange of secondary pickup ions (PUIs) gyrating about the magnetic field in the outer heliosheath (OHS) and the interstellar space beyond. The first paper in the series (Paper I) presented a theoretical analysis of the pickup process in the OHS and hybrid-kinetic simulations, revealing that the kinetic properties of freshly injected proton rings depend sensitively on the details of their velocity distribution. It was demonstrated that only rings that are not too narrow (parallel thermal spread above a few km s-1) and not too wide (parallel temperature smaller than the core plasma temperature) could remain stable for a period of time long enough to generate ribbon ENAs. This paper investigates the role of electron dynamics and the extra spatial degree of freedom in the ring ion scattering process with the help of two-dimensional full particle-in-cell (PIC) kinetic simulations. A good agreement is observed between ring evolution under unstable conditions in hybrid and PIC models, and the dominant modes are found to propagate parallel to the magnetic field. We also present more realistic ribbon PUI distributions generated using Monte Carlo simulations of atomic hydrogen in the global heliosphere and examine the effect of both the cold ring-like and the hot “halo” PUIs produced from heliosheath ENAs on the ring stability. It is shown that the second PUI population enhances the fluctuation growth rate, leading to faster isotropization of the solar-wind-derived ring ions.

  1. The IBEX Ribbon and the Pickup Ion Ring Stability in the Outer Heliosheath II. Monte-Carlo and Particle-in-cell Model Results

    NASA Astrophysics Data System (ADS)

    Niemiec, J.; Florinski, V.; Heerikhuisen, J.; Nishikawa, K.-I.

    2016-08-01

    The nearly circular ribbon of energetic neutral atom (ENA) emission discovered by NASA’s Interplanetary Boundary EXplorer satellite (IBEX), is most commonly attributed to the effect of charge exchange of secondary pickup ions (PUIs) gyrating about the magnetic field in the outer heliosheath (OHS) and the interstellar space beyond. The first paper in the series (Paper I) presented a theoretical analysis of the pickup process in the OHS and hybrid-kinetic simulations, revealing that the kinetic properties of freshly injected proton rings depend sensitively on the details of their velocity distribution. It was demonstrated that only rings that are not too narrow (parallel thermal spread above a few km s-1) and not too wide (parallel temperature smaller than the core plasma temperature) could remain stable for a period of time long enough to generate ribbon ENAs. This paper investigates the role of electron dynamics and the extra spatial degree of freedom in the ring ion scattering process with the help of two-dimensional full particle-in-cell (PIC) kinetic simulations. A good agreement is observed between ring evolution under unstable conditions in hybrid and PIC models, and the dominant modes are found to propagate parallel to the magnetic field. We also present more realistic ribbon PUI distributions generated using Monte Carlo simulations of atomic hydrogen in the global heliosphere and examine the effect of both the cold ring-like and the hot “halo” PUIs produced from heliosheath ENAs on the ring stability. It is shown that the second PUI population enhances the fluctuation growth rate, leading to faster isotropization of the solar-wind-derived ring ions.

  2. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The used prototype is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  3. A Wigner Monte Carlo approach to density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a Lithium atom, a Boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (although no restriction is imposed on the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  4. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.; Kim, Jeongnim

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  5. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using an EGS4, an EGSnrc, and the MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in these three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distance <6 mm.

  6. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.
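
    As a cartoon of combining the Metropolis algorithm with search-space smoothing, the sketch below samples a toy one-dimensional rugged function whose rough term is damped by a smoothing factor that is gradually removed while the temperature is lowered. The smoothing transform and the schedule are invented for illustration and are not the transforms used in the paper.

        import math
        import random

        random.seed(3)

        # Cartoon of Metropolis sampling on a progressively de-smoothed landscape:
        # f(x, s) damps the rugged term by a smoothing factor s in [0, 1]
        # (s = 1: fully smoothed, s = 0: original objective).
        def f(x, s):
            return x * x + (1.0 - s) * 8.0 * math.sin(5.0 * x)

        def metropolis(x, s, temperature, n_steps=2000, step=0.5):
            for _ in range(n_steps):
                y = x + random.uniform(-step, step)
                d = (f(x, s) - f(y, s)) / temperature
                if random.random() < math.exp(min(0.0, d)):
                    x = y
            return x

        x = 4.0
        for s, T in [(1.0, 2.0), (0.5, 1.0), (0.0, 0.2)]:   # de-smooth while cooling
            x = metropolis(x, s, T)
        print("final state:", round(x, 3), "objective:", round(f(x, 0.0), 3))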

  7. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  8. Build-up and surface dose measurements on phantoms using micro-MOSFET in 6 and 10 MV x-ray beams and comparisons with Monte Carlo calculations.

    PubMed

    Xiang, Hong F; Song, Jun S; Chin, David W H; Cormack, Robert A; Tishler, Roy B; Makrigiorgos, G Mike; Court, Laurence E; Chin, Lee M

    2007-04-01

    This work is intended to investigate the application and accuracy of micro-MOSFET for superficial dose measurement under clinically used MV x-ray beams. The dose response of the micro-MOSFET in the build-up region and on the surface under MV x-ray beams was measured and compared to Monte Carlo calculations. First, percentage-depth-doses were measured with micro-MOSFET under 6 and 10 MV beams of normal incidence onto a flat solid water phantom. Micro-MOSFET data were compared with the measurements from a parallel plate ionization chamber and Monte Carlo dose calculation in the build-up region. Then, percentage-depth-doses were measured for oblique beams at 0 degrees-80 degrees onto the flat solid water phantom with micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm below the surface. Measurements were compared to Monte Carlo calculations under these settings. Finally, measurements were performed with micro-MOSFET embedded in the first 1 mm layer of bolus placed on a flat phantom and a curved phantom of semi-cylindrical shape. Results were compared to superficial dose calculated from Monte Carlo for a 2 mm thin layer that extends from the surface to a depth of 2 mm. Results were: (1) Comparison of measurements with MC calculation in the build-up region showed that micro-MOSFET has a water-equivalent thickness (WET) of 0.87 mm for 6 MV beam and 0.99 mm for 10 MV beam from the flat side, and a WET of 0.72 mm for 6 MV beam and 0.76 mm for 10 MV beam from the epoxy side. (2) For normal beam incidences, percentage depth doses agree within 3%-5% among micro-MOSFET measurements, parallel-plate ionization chamber measurements, and MC calculations. (3) For oblique incidence on the flat phantom with micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm, measurements were consistent with MC calculations within a typical uncertainty of 3%-5%. (4) For oblique incidence on the flat phantom and a curved-surface phantom, measurements with micro-MOSFET placed at 1.0 mm agree with the MC calculation within 6%, including uncertainties of micro-MOSFET measurements of 2%-3% (1 standard deviation), MOSFET angular dependence of 3.0%-3.5%, and 1%-2% systematic error due to phantom setup geometry asymmetry. Micro-MOSFET can be used for skin dose measurements in 6 and 10 MV beams with an estimated accuracy of +/- 6%.

  9. A study on the suitability of the PTW microDiamond detector for kilovoltage x-ray beam dosimetry.

    PubMed

    Damodar, Joshita; Odgers, David; Pope, Dane; Hill, Robin

    2018-05-01

    Kilovoltage x-ray beams are widely used in treating skin cancers and in biological irradiators. In this work, we have evaluated four dosimeters (ionization chambers and solid state detectors) in their suitability for relative dosimetry of kilovoltage x-ray beams in the energy range of 50-280 kVp. The solid state detectors, which have not been investigated with low energy x-rays, were the PTW 60019 microDiamond synthetic diamond detector and the PTW 60012 diode. The two ionization chambers used were the PTW Advanced Markus parallel plate chamber and the PTW PinPoint small volume chamber. For each of the dosimeters, percentage depth doses were measured in water over the full range of x-ray beams and for field sizes ranging from 2 cm diameter to 12 × 12 cm. In addition, depth doses were measured for a narrow aperture (7 mm diameter) using the PTW microDiamond detector. For comparison, the measured data were compared with Monte Carlo calculated doses using the EGSnrc Monte Carlo package. The depth dose results indicate that the Advanced Markus parallel plate and PinPoint ionization chambers were suitable for depth dose measurements in the beam quality range with an uncertainty of less than 3%, including in the regions closer to the surface of the water, as compared with Monte Carlo depth dose data for all six energy beams. The response of the PTW Diode E detector was accurate to within 4% for all field sizes in the energy range of 50-125 kVp but showed larger variations for higher energies of up to 12% with the 12 × 12 cm field size. In comparison, the microDiamond detector had good agreement over all energies for both smaller and larger field sizes, generally within 1%, as compared to the Advanced Markus chamber and Monte Carlo calculations. The only exceptions were in measuring the dose at the surface of the water phantom, where larger differences were found. For the 7 mm diameter field, the agreement between the microDiamond detector and Monte Carlo calculations was good, being better than 1% except at the surface. Based on these results, the PTW microDiamond detector has been shown to be a suitable detector for relative dosimetry of low energy x-ray beams over a wide range of x-ray beam energies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Unbiased Rare Event Sampling in Spatial Stochastic Systems Biology Models Using a Weighted Ensemble of Trajectories

    PubMed Central

    Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.

    2016-01-01

    The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
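
    The core of the weighted-ensemble strategy is the split/merge resampling step; a minimal sketch under simple assumptions (fixed bins along one progress coordinate, a fixed target walker count per bin) is shown below. Bin edges, the target count and the splitting/merging heuristics are illustrative choices, not necessarily those of the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(4)

        # One weighted-ensemble resampling step: bin walkers along a progress
        # coordinate, then split (copy, halve weight) or merge (keep one survivor
        # chosen proportionally to weight) until each occupied bin holds exactly
        # `target` walkers.  Total weight is conserved.
        def resample(coords, weights, edges, target=4):
            new_c, new_w = [], []
            for b in range(len(edges) - 1):
                idx = np.where((coords >= edges[b]) & (coords < edges[b + 1]))[0]
                if idx.size == 0:
                    continue
                c, w = coords[idx].tolist(), weights[idx].tolist()
                while len(c) < target:              # split the heaviest walker
                    j = int(np.argmax(w))
                    w[j] *= 0.5
                    c.append(c[j])
                    w.append(w[j])
                while len(c) > target:              # merge the two lightest walkers
                    i, j = (int(k) for k in np.argsort(w)[:2])
                    keep = j if rng.random() < w[j] / (w[i] + w[j]) else i
                    drop = i if keep == j else j
                    w[keep] = w[i] + w[j]
                    del c[drop], w[drop]
                new_c += c
                new_w += w
            return np.array(new_c), np.array(new_w)

        coords = rng.uniform(0, 1, 20)
        weights = np.full(20, 1.0 / 20)
        c2, w2 = resample(coords, weights, edges=np.linspace(0, 1, 6))
        print("total weight before/after:", weights.sum(), round(float(w2.sum()), 12))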

  11. Parallel Solver for Diffuse Optical Tomography on Realistic Head Models With Scattering and Clear Regions.

    PubMed

    Placati, Silvio; Guermandi, Marco; Samore, Andrea; Scarselli, Eleonora Franchi; Guerrieri, Roberto

    2016-09-01

    Diffuse optical tomography is an imaging technique, based on evaluation of how light propagates within the human head to obtain the functional information about the brain. Precision in reconstructing such an optical properties map is highly affected by the accuracy of the light propagation model implemented, which needs to take into account the presence of clear and scattering tissues. We present a numerical solver based on the radiosity-diffusion model, integrating the anatomical information provided by a structural MRI. The solver is designed to run on parallel heterogeneous platforms based on multiple GPUs and CPUs. We demonstrate how the solver provides a 7 times speed-up over an isotropic-scattered parallel Monte Carlo engine based on a radiative transport equation for a domain composed of 2 million voxels, along with a significant improvement in accuracy. The speed-up greatly increases for larger domains, allowing us to compute the light distribution of a full human head ( ≈ 3 million voxels) in 116 s for the platform used.

  12. Optical interconnection using polyimide waveguide for multichip module

    NASA Astrophysics Data System (ADS)

    Koyanagi, Mitsumasa

    1996-01-01

    We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.

  13. Optical interconnection using polyimide waveguide for multichip module

    NASA Astrophysics Data System (ADS)

    Koyanagi, Mitsumasa

    1996-01-01

    We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.

  14. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation for high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques efficiently to explore a likelihood function, posterior distribution or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  15. Synchronous parallel spatially resolved stochastic cluster dynamics

    DOE PAGES

    Dunn, Aaron; Dingreville, Rémi; Martínez, Enrique; ...

    2016-04-23

    In this work, a spatially resolved stochastic cluster dynamics (SRSCD) model for radiation damage accumulation in metals is implemented using a synchronous parallel kinetic Monte Carlo algorithm. The parallel algorithm is shown to significantly increase the size of representative volumes achievable in SRSCD simulations of radiation damage accumulation. Additionally, weak scaling performance of the method is tested in two cases: (1) an idealized case of Frenkel pair diffusion and annihilation, and (2) a characteristic example problem including defect cluster formation and growth in α-Fe. For the latter case, weak scaling is tested using both Frenkel pair and displacement cascade damage. To improve scaling of simulations with cascade damage, an explicit cascade implantation scheme is developed for cases in which fast-moving defects are created in displacement cascades. For the first time, simulation of radiation damage accumulation in nanopolycrystals can be achieved with a three dimensional rendition of the microstructure, allowing demonstration of the effect of grain size on defect accumulation in Frenkel pair-irradiated α-Fe.
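
    The serial kernel underlying any kinetic Monte Carlo model of this kind is the residence-time (Gillespie-type) step: choose the next event with probability proportional to its rate and advance time by an exponentially distributed increment. The sketch below uses placeholder rates, not the SRSCD reaction set.

        import numpy as np

        rng = np.random.default_rng(5)

        # Residence-time (Gillespie-type) kinetic Monte Carlo step: pick the next
        # event with probability proportional to its rate and advance time by an
        # exponentially distributed increment.  Rates are placeholders.
        def kmc_step(rates, t):
            total = rates.sum()
            event = rng.choice(len(rates), p=rates / total)
            dt = rng.exponential(1.0 / total)
            return event, t + dt

        rates = np.array([5.0, 1.0, 0.1])   # e.g. two defect hops and an emission
        t, counts = 0.0, np.zeros_like(rates)
        for _ in range(10000):
            event, t = kmc_step(rates, t)
            counts[event] += 1
        print("event fractions:", (counts / counts.sum()).round(3),
              "simulated time:", round(t, 2))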

  16. Tempered fractional calculus

    NASA Astrophysics Data System (ADS)

    Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua

    2015-07-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
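
    To make the tempered fractional difference concrete, the sketch below evaluates Grünwald-type weights w_j = (-1)^j C(alpha, j) exp(-j*lambda*h) and checks them against the binomial-series identity sum_j w_j = (1 - exp(-lambda*h))^alpha. Sign and normalization conventions vary between papers, so treat this as an illustration rather than the authors' exact scheme.

        import math

        # Grunwald-type weights for a tempered fractional difference:
        # w_j = (-1)^j * C(alpha, j) * exp(-j * lam * h).
        # The binomial series gives sum_j w_j = (1 - exp(-lam * h)) ** alpha,
        # which serves as a numerical sanity check.
        def tempered_weights(alpha, lam, h, n_terms):
            w, c = [], 1.0                     # c tracks (-1)^j * C(alpha, j)
            for j in range(n_terms):
                w.append(c * math.exp(-j * lam * h))
                c *= (j - alpha) / (j + 1)     # recurrence for the next coefficient
            return w

        alpha, lam, h = 0.8, 1.5, 0.01
        w = tempered_weights(alpha, lam, h, 4000)
        print(sum(w), (1.0 - math.exp(-lam * h)) ** alpha)   # should nearly agree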

  17. TEMPERED FRACTIONAL CALCULUS.

    PubMed

    Meerschaert, Mark M; Sabzikar, Farzad; Chen, Jinghua

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.

  18. TEMPERED FRACTIONAL CALCULUS

    PubMed Central

    MEERSCHAERT, MARK M.; SABZIKAR, FARZAD; CHEN, JINGHUA

    2014-01-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series. PMID:26085690

  19. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
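
    The parallel tempering move mentioned here is compact enough to sketch: independent Metropolis walkers run at different temperatures, and neighbouring replicas occasionally swap configurations with the standard exchange acceptance exp[(1/T_i - 1/T_j)(E_i - E_j)]. The double-well energy, temperature ladder and step sizes below are illustrative only.

        import math
        import random

        random.seed(6)

        # Replica exchange on a toy double-well energy: independent Metropolis
        # walkers at several temperatures, with occasional neighbour swaps
        # accepted by the Metropolis criterion, so the cold replica can tunnel
        # between wells via the hot ones.
        def energy(x):
            return (x * x - 1.0) ** 2

        temps = [0.05, 0.2, 0.8, 3.0]      # temperature ladder (illustrative)
        xs = [1.0 for _ in temps]          # one walker per temperature

        def metropolis_step(i, step=0.4):
            y = xs[i] + random.uniform(-step, step)
            d = (energy(xs[i]) - energy(y)) / temps[i]
            if random.random() < math.exp(min(0.0, d)):
                xs[i] = y

        def swap_step():
            i = random.randrange(len(temps) - 1)
            d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if random.random() < math.exp(min(0.0, d)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]

        for sweep in range(20000):
            for i in range(len(temps)):
                metropolis_step(i)
            if sweep % 10 == 0:
                swap_step()
        print("cold-replica state:", round(xs[0], 3))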

  20. Constraining mass anomalies in the interior of spherical bodies using Trans-dimensional Bayesian Hierarchical inference.

    NASA Astrophysics Data System (ADS)

    Izquierdo, K.; Lekic, V.; Montesi, L.

    2017-12-01

    Gravity inversions are especially important for planetary applications since measurements of the variations in gravitational acceleration are often the only constraint available to map out lateral density variations in the interiors of planets and other Solar system objects. Currently, global gravity data is available for the terrestrial planets and the Moon. Although several methods for inverting these data have been developed and applied, the non-uniqueness of global density models that fit the data has not yet been fully characterized. We make use of Bayesian inference and a Reversible Jump Markov Chain Monte Carlo (RJMCMC) approach to develop a Trans-dimensional Hierarchical Bayesian (THB) inversion algorithm that yields a large sample of models that fit a gravity field. From this group of models, we can determine the most likely value of parameters of a global density model and a measure of the non-uniqueness of each parameter when the number of anomalies describing the gravity field is not fixed a priori. We explore the use of a parallel tempering algorithm and fast multipole method to reduce the number of iterations and computing time needed. We applied this method to a synthetic gravity field of the Moon and a long wavelength synthetic model of density anomalies in the Earth's lower mantle. We obtained a good match between the given gravity field and the gravity field produced by the most likely model in each inversion. The number of anomalies of the models showed parsimony of the algorithm, the value of the noise variance of the input data was retrieved, and the non-uniqueness of the models was quantified. Our results show that the ability to constrain the latitude and longitude of density anomalies, which is excellent at shallow locations (<200 km), decreases with increasing depth. With higher computational resources, this THB method for gravity inversion could give new information about the overall density distribution of celestial bodies even when there is no other geophysical data available.

  1. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

    We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.
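
    The occupancy probability highlighted above is easy to estimate once an MCMC ensemble of polygons is in hand; the sketch below (illustrative only, with a hypothetical polygon ensemble standing in for MCMC output) counts, for each grid point, the fraction of sampled polygons containing it using a standard ray-casting point-in-polygon test.

```python
import numpy as np

def point_in_polygon(x, y, verts):
    """Even-odd (ray-casting) test; verts is a sequence of (x, y) vertices."""
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def occupancy_probability(polygon_samples, grid_x, grid_y):
    """Fraction of sampled polygons containing each grid point."""
    occ = np.zeros((len(grid_y), len(grid_x)))
    for verts in polygon_samples:
        for j, y in enumerate(grid_y):
            for i, x in enumerate(grid_x):
                occ[j, i] += point_in_polygon(x, y, verts)
    return occ / len(polygon_samples)

# Hypothetical ensemble: jittered copies of a triangle standing in for MCMC draws.
rng = np.random.default_rng(1)
base = np.array([[0.0, 1.0], [2.0, 1.0], [1.0, 3.0]])
samples = [base + rng.normal(scale=0.1, size=base.shape) for _ in range(200)]
occ = occupancy_probability(samples, np.linspace(-1, 3, 40), np.linspace(0, 4, 40))
print("max occupancy probability:", occ.max())
```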

  2. Frequency Analysis of Extreme Sub-Daily Precipitation under Stationary and Non-Stationary Conditions across Two Contrasting Hydroclimatic Environments

    NASA Astrophysics Data System (ADS)

    Demaria, E. M.; Goodrich, D. C.; Keefer, T.

    2017-12-01

    Observed sub-daily precipitation intensities from contrasting hydroclimatic environments in the USA are used to evaluate temporal trends and to develop Intensity-Duration-Frequency (IDF) curves under stationary and non-stationary climatic conditions. Analyses are based on observations from two United States Department of Agriculture (USDA)-Agricultural Research Service (ARS) experimental watersheds located in a semi-arid and a temperate environment. We use an Annual Maximum Series (AMS) and a Partial Duration Series (PDS) approach to identify temporal trends in maximum intensities for durations ranging from 5 to 1440 minutes. A Bayesian approach with Monte Carlo techniques is used to incorporate the effect of non-stationary climatic assumptions in the IDF curves. The results show increasing trends in observed AMS sub-daily intensities in both watersheds, whereas trends in the PDS observations are mostly positive in the semi-arid site and a mix of positive and negative in the temperate site. Stationary climate assumptions lead to much lower estimated sub-daily intensities than non-stationary assumptions, with larger absolute differences found for shorter durations and smaller return periods. The risk of failure (R) of a hydraulic structure is higher under non-stationary assumptions than under stationary ones, with absolute differences of 25% for a 100-year return period (T) and a project life (n) of 100 years. The study highlights the importance of considering non-stationarity, due to natural variability or to climate change, in storm design.
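
    To make the stationary versus non-stationary comparison concrete, the sketch below (a generic illustration, not the authors' Bayesian model) evaluates GEV return levels for annual-maximum intensities when the location parameter is allowed a linear trend in time; all parameter values are hypothetical.

```python
import numpy as np

def gev_return_level(T, mu, sigma, xi):
    """T-year return level of a GEV(mu, sigma, xi) annual-maximum distribution."""
    y = -np.log(1.0 - 1.0 / T)              # Gumbel reduced variate
    if abs(xi) < 1e-8:                      # Gumbel limit
        return mu - sigma * np.log(y)
    return mu + sigma / xi * (y**(-xi) - 1.0)

# Hypothetical fitted parameters for 60-minute annual-maximum intensity (mm/h):
mu0, mu_trend, sigma, xi = 30.0, 0.08, 8.0, 0.10   # location drifts 0.08 mm/h per year

for T in (10, 25, 50, 100):
    stationary = gev_return_level(T, mu0, sigma, xi)
    # Non-stationary case: location parameter projected 50 years ahead.
    nonstationary = gev_return_level(T, mu0 + mu_trend * 50, sigma, xi)
    print(f"T={T:4d} yr  stationary={stationary:6.1f}  non-stationary={nonstationary:6.1f} mm/h")
```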

  3. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
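
    The idea of pairing the process of interest with an auxiliary process whose answer is known, driven by the same random numbers, is essentially a control-variate construction; the sketch below illustrates it on a toy stochastic differential equation rather than a Fokker–Planck gas solver, and every name and parameter in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Main process: mildly nonlinear drift, mean at time T not known in closed form.
def drift_main(x):
    return -1.0 * (x - 1.0) - 0.2 * np.sin(x)

# Auxiliary (control) process: plain Ornstein-Uhlenbeck, mean at time T known exactly.
theta, mu = 1.0, 1.0
def drift_aux(y):
    return -theta * (y - mu)

def simulate(n_paths=20000, T=1.0, dt=1e-2, sigma=0.8, x0=3.0):
    x = np.full(n_paths, x0)
    y = np.full(n_paths, x0)
    for _ in range(int(T / dt)):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)   # shared noise increments
        x += drift_main(x) * dt + sigma * dw
        y += drift_aux(y) * dt + sigma * dw
    return x, y

x, y = simulate()
ey_exact = 3.0 * np.exp(-theta * 1.0) + mu * (1 - np.exp(-theta * 1.0))

plain = x.mean()
c = np.cov(x, y)[0, 1] / np.var(y)          # control-variate coefficient from the samples
controlled = (x - c * (y - ey_exact)).mean()

print("plain estimate     :", plain, "+/-", x.std(ddof=1) / np.sqrt(len(x)))
print("correlated estimate:", controlled, "+/-",
      (x - c * (y - ey_exact)).std(ddof=1) / np.sqrt(len(x)))
```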

  4. Solid-propellant rocket motor ballistic performance variation analyses

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1975-01-01

    Results are presented of research aimed at improving the assessment of off-nominal internal ballistic performance including tailoff and thrust imbalance of two large solid-rocket motors (SRMs) firing in parallel. Previous analyses using the Monte Carlo technique were refined to permit evaluation of the effects of radial and circumferential propellant temperature gradients. Sample evaluations of the effect of the temperature gradients are presented. A separate theoretical investigation of the effect of strain rate on the burning rate of propellant indicates that the thermoelastic coupling may cause substantial variations in burning rate during highly transient operating conditions. The Monte Carlo approach was also modified to permit the effects on performance of variation in the characteristics between lots of propellants and other materials to be evaluated. This permits the variabilities for the total SRM population to be determined. A sample case shows, however, that the effect of these between-lot variations on thrust imbalances within pairs of SRMs is minor in comparison to the effect of the within-lot variations. The revised Monte Carlo and design analysis computer programs along with instructions including format requirements for preparation of input data and illustrative examples are presented.

  5. Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudheer, C. D.; Krishnan, S.; Srinivasan, A.

    Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to a 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
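
    For readers unfamiliar with the alias method itself, the sketch below gives a generic Vose-style construction for O(1) sampling from a discrete distribution; it is a textbook version, not the load-balancing variant described in the abstract, and the example weights are purely illustrative.

```python
import numpy as np

def build_alias(probs):
    """Vose's alias method: O(n) setup, O(1) sampling from a discrete distribution."""
    n = len(probs)
    scaled = np.asarray(probs, dtype=float) * n / np.sum(probs)
    prob_table = np.zeros(n)
    alias_table = np.zeros(n, dtype=int)
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob_table[s] = scaled[s]
        alias_table[s] = l
        scaled[l] -= 1.0 - scaled[s]          # donate probability mass to the small bin
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are exactly 1 up to round-off
        prob_table[i] = 1.0
    return prob_table, alias_table

def sample_alias(prob_table, alias_table, rng, size):
    n = len(prob_table)
    cols = rng.integers(n, size=size)         # pick a column uniformly
    flip = rng.random(size) < prob_table[cols]
    return np.where(flip, cols, alias_table[cols])

rng = np.random.default_rng(3)
weights = [0.1, 0.4, 0.2, 0.3]                # e.g. relative walker populations
pt, at = build_alias(weights)
draws = sample_alias(pt, at, rng, 100000)
print(np.bincount(draws) / 100000)            # should approximate the weights
```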

  6. Forward Monte Carlo Computations of Polarized Microwave Radiation

    NASA Technical Reports Server (NTRS)

    Battaglia, A.; Kummerow, C.

    2000-01-01

    Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards the understanding of microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, consider the atmosphere as horizontally homogeneous with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently, a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.

  7. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
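
    A flavour of the differential-evolution stage can be conveyed in a few lines; the sketch below is a generic Differential Evolution Monte Carlo sampler on a hypothetical correlated Gaussian posterior, using the commonly recommended jump scale γ ≈ 2.38/√(2d). It illustrates the proposal mechanism only and is not the SAChES implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5                                          # dimension of the parameter space
cov = 0.5 * np.ones((d, d)) + 0.5 * np.eye(d)  # hypothetical correlated Gaussian target
prec = np.linalg.inv(cov)

def log_post(x):
    return -0.5 * x @ prec @ x

n_chains = 20                                  # many loosely coupled chains
chains = rng.normal(size=(n_chains, d))
logp = np.array([log_post(c) for c in chains])
gamma = 2.38 / np.sqrt(2 * d)                  # standard DE-MC jump scale

for it in range(5000):
    for i in range(n_chains):
        # Pick two distinct other chains; their difference sets the jump direction.
        a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + rng.normal(scale=1e-4, size=d)
        lp = log_post(prop)
        if np.log(rng.random()) < lp - logp[i]:
            chains[i], logp[i] = prop, lp

print("posterior mean estimate:", chains.mean(axis=0))
```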

  8. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

  9. Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters

    NASA Astrophysics Data System (ADS)

    Esler, Kenneth

    2011-03-01

    Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.

  10. PHAST: Protein-like heteropolymer analysis by statistical thermodynamics

    NASA Astrophysics Data System (ADS)

    Frigori, Rafael B.

    2017-06-01

    PHAST is a software package written in standard Fortran, with MPI and CUDA extensions, able to efficiently perform parallel multicanonical Monte Carlo simulations of single or multiple heteropolymeric chains, as coarse-grained models for proteins. The outcome data can be straightforwardly analyzed within its microcanonical Statistical Thermodynamics module, which allows for computing the entropy, caloric curve, specific heat and free energies. As a case study, we investigate the aggregation of heteropolymers bioinspired by Aβ25-33 fragments and their cross-seeding with IAPP20-29 isoforms. Excellent parallel scaling is observed, even under numerically difficult first-order-like phase transitions, which are properly described by the built-in fully reconfigurable force fields. The package is free and open source, which should motivate users to readily adapt it to specific purposes.

  11. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.
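
    The parareal correction formula itself is compact; the sketch below applies it to a scalar ODE with a cheap coarse integrator and an "expensive" fine integrator, purely to illustrate the update U_{k+1}^{n+1} = G(U_{k+1}^n) + F(U_k^n) - G(U_k^n). It is not the SOLPS/IPS implementation; there, the fine propagator is the plasma-neutral solve, run in parallel over the time slices.

```python
import numpy as np

# Toy problem: du/dt = -u + sin(t), u(0) = 1, on [0, 10] split into N time slices.
def rhs(t, u):
    return -u + np.sin(t)

def integrate(u0, t0, t1, n_steps):
    """Forward-Euler propagator; n_steps controls cost/accuracy (coarse vs fine)."""
    u, t = u0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        u = u + dt * rhs(t, u)
        t += dt
    return u

N, T = 20, 10.0
t_edges = np.linspace(0.0, T, N + 1)
coarse = lambda u0, t0, t1: integrate(u0, t0, t1, 2)     # cheap propagator G
fine   = lambda u0, t0, t1: integrate(u0, t0, t1, 200)   # expensive propagator F

# Initial guess from a serial coarse sweep.
U = np.zeros(N + 1); U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], t_edges[n], t_edges[n + 1])

for k in range(5):                                        # parareal iterations
    # The fine solves below are independent per slice -> the parallel-in-time part.
    F_old = [fine(U[n], t_edges[n], t_edges[n + 1]) for n in range(N)]
    G_old = [coarse(U[n], t_edges[n], t_edges[n + 1]) for n in range(N)]
    U_new = U.copy()
    for n in range(N):                                    # serial correction sweep
        U_new[n + 1] = coarse(U_new[n], t_edges[n], t_edges[n + 1]) + F_old[n] - G_old[n]
    U = U_new

print("parareal solution at t=T:", U[-1])
```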

  12. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE PAGES

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.; ...

    2017-07-31

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.

  13. Tempered fractional calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu; Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu; Chen, Jinghua, E-mail: cjhdzdz@163.com

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
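
    The "exponentially tempered power law jump distribution" mentioned above is easy to sample by rejection: multiplying a Pareto density by e^{-λx} amounts to accepting each Pareto draw with probability e^{-λx}. The sketch below is a generic illustration with hypothetical parameters, not code from the cited work.

```python
import numpy as np

rng = np.random.default_rng(5)

def tempered_pareto(alpha, lam, size, x_min=1.0):
    """Draws from a density proportional to x**(-alpha-1) * exp(-lam*x), x >= x_min.

    Rejection sampling: propose Pareto(alpha) jumps and accept with probability
    exp(-lam * x) <= 1, which is exactly the exponential tempering factor.
    """
    out = np.empty(0)
    while out.size < size:
        n = 2 * (size - out.size)
        x = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)   # Pareto proposals
        keep = rng.random(n) < np.exp(-lam * x)               # temper the tail
        out = np.concatenate([out, x[keep]])
    return out[:size]

# Random walk with tempered power-law jumps (random signs): semi-heavy tails.
jumps = tempered_pareto(alpha=1.5, lam=0.05, size=100000)
signs = rng.choice([-1.0, 1.0], size=jumps.size)
walk = np.cumsum(signs * jumps)
print("largest jump:", jumps.max(), "   final position:", walk[-1])
```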

  14. Reference dosimetry of proton pencil beams based on dose-area product: a proof of concept.

    PubMed

    Gomà, Carles; Safai, Sairos; Vörös, Sándor

    2017-06-21

    This paper describes a novel approach to the reference dosimetry of proton pencil beams based on dose-area product ([Formula: see text]). It depicts the calibration of a large-diameter plane-parallel ionization chamber in terms of dose-area product in a 60 Co beam, the Monte Carlo calculation of beam quality correction factors-in terms of dose-area product-in proton beams, the Monte Carlo calculation of nuclear halo correction factors, and the experimental determination of [Formula: see text] of a single proton pencil beam. This new approach to reference dosimetry proves to be feasible, as it yields [Formula: see text] values in agreement with the standard and well-established approach of determining the absorbed dose to water at the centre of a broad homogeneous field generated by the superposition of regularly-spaced proton pencil beams.

  15. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  16. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
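
    The dead-particle filtering idea generalizes beyond GPUs; the sketch below mimics it in vectorized NumPy (standing in for a warp-wide kernel) on a schematic 1D slab problem, not the code from the article: a history-style loop periodically compacts the particle arrays so that live particles stay contiguous, which is the analogue of reducing thread divergence in the GPU version.

```python
import numpy as np

rng = np.random.default_rng(6)

n_start = 100000
pos = np.zeros(n_start)                 # 1D slab positions (mean free paths)
alive = np.ones(n_start, dtype=bool)
absorbed = leaked = 0
slab_thickness, p_absorb = 5.0, 0.3

step = 0
while alive.any():
    # One "event" per live particle: fly an exponential distance, then maybe absorb.
    n = alive.sum()
    pos[alive] += rng.exponential(1.0, size=n) * rng.choice([-1.0, 1.0], size=n)
    escaped = alive & ((pos < 0.0) | (pos > slab_thickness))
    leaked += escaped.sum()
    alive &= ~escaped
    killed = alive & (rng.random(pos.size) < p_absorb)
    absorbed += killed.sum()
    alive &= ~killed
    step += 1
    # Periodic compaction: drop dead entries so later passes touch only live data.
    if step % 4 == 0:
        pos = pos[alive]
        alive = np.ones(pos.size, dtype=bool)

print("absorbed:", absorbed, " leaked:", leaked)
```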

  17. Towards the reliable calculation of residence time for off-lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Alexander, Kathleen C.; Schuh, Christopher A.

    2016-08-01

    Kinetic Monte Carlo (KMC) methods have the potential to extend the accessible timescales of off-lattice atomistic simulations beyond the limits of molecular dynamics by making use of transition state theory and parallelization. However, it is a challenge to identify a complete catalog of events accessible to an off-lattice system in order to accurately calculate the residence time for KMC. Here we describe possible approaches to some of the key steps needed to address this problem. These include methods to compare and distinguish individual kinetic events, to deterministically search an energy landscape, and to define local atomic environments. When applied to the ground-state Σ5(2 1 0) grain boundary in copper, these methods achieve a converged residence time, accounting for the full set of kinetically relevant events for this off-lattice system, with calculable uncertainty.
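
    The residence time enters the KMC loop in a simple way once the event catalog and rates are known; the sketch below shows a generic rejection-free (Gillespie/BKL-style) step with harmonic transition-state-theory rates, using hypothetical prefactors and barriers rather than the copper grain-boundary catalog discussed above.

```python
import numpy as np

rng = np.random.default_rng(7)

kB = 8.617333e-5     # Boltzmann constant, eV/K
T = 600.0            # temperature, K
nu0 = 1e13           # attempt frequency, 1/s (hypothetical)

# Hypothetical event catalog for the current configuration: energy barriers in eV.
barriers = np.array([0.45, 0.52, 0.60, 0.71, 0.71])
rates = nu0 * np.exp(-barriers / (kB * T))        # harmonic TST rates

total_rate = rates.sum()
residence_time = 1.0 / total_rate                  # mean time spent in this state

# Rejection-free KMC selection: pick an event with probability rate_i / total_rate,
# and advance the clock by an exponentially distributed waiting time.
event = rng.choice(len(rates), p=rates / total_rate)
dt = rng.exponential(residence_time)

print(f"mean residence time: {residence_time:.3e} s")
print(f"chosen event: {event}, time advance: {dt:.3e} s")
```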

  18. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improve the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for the proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies are liable to allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing new, more efficient MC moves.
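
    A stripped-down version of the idea (without the evolutionary machinery) can be conveyed by reweighting move frequencies in proportion to a measured efficiency such as acceptance-weighted displacement; the sketch below is a hypothetical illustration on a toy target, not the authors' algorithm. As the abstract emphasizes, adaptation of this kind has to be handled carefully (e.g. frozen after a tuning phase) to preserve correct sampling.

```python
import numpy as np

rng = np.random.default_rng(8)

def energy(x):
    return 0.5 * np.sum(x**2)          # toy target; stands in for a polymer model

moves = {"small": 0.1, "medium": 1.0, "large": 5.0}   # proposal step sizes
freqs = {name: 1.0 / len(moves) for name in moves}    # initial move frequencies

x = rng.normal(size=10)
beta = 1.0
for cycle in range(50):
    score = {name: 1e-12 for name in moves}           # efficiency accumulators
    for _ in range(2000):
        name = rng.choice(list(moves), p=[freqs[m] for m in moves])
        xp = x + rng.normal(scale=moves[name], size=x.size)
        if rng.random() < np.exp(-beta * (energy(xp) - energy(x))):
            # Credit the move with the squared displacement it actually achieved.
            score[name] += np.sum((xp - x)**2)
            x = xp
    # Reallocate frequencies in proportion to measured efficiency (with a floor).
    total = sum(score.values())
    freqs = {m: max(0.05, score[m] / total) for m in moves}
    norm = sum(freqs.values())
    freqs = {m: f / norm for m, f in freqs.items()}

print("adapted move frequencies:", freqs)
```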

  19. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  20. A comparative study of noisy signal evolution in 2R all-optical regenerators with normal and anomalous average dispersions using an accelerated Multicanonical Monte Carlo method.

    PubMed

    Lakoba, Taras I; Vasilyev, Michael

    2008-10-27

    In [Opt. Express 15, 10061 (2007)] we proposed a new regime of multichannel all-optical regeneration that required anomalous average dispersion. This regime is superior to the previously studied normal-dispersion regime when signal distortions are deterministic in their temporal shape. However, there was a concern that the regenerator with anomalous average dispersion may be prone to noise amplification via modulational instability. Here, we show that this, in general, is not the case. Moreover, in the range of input powers that is of interest for multichannel regeneration, the device with anomalous average dispersion may even provide less noise amplification than the one with normal dispersion. These results are obtained with an improved version of the parallelized modification of the Multicanonical Monte Carlo method proposed in [IEEE J. Sel. Topics Quantum Electron. 14, 599 (2008)].

  1. Monte Carlo simulations of particle acceleration at oblique shocks: Including cross-field diffusion

    NASA Technical Reports Server (NTRS)

    Baring, M. G.; Ellison, D. C.; Jones, F. C.

    1995-01-01

    The Monte Carlo technique of simulating diffusive particle acceleration at shocks has made spectral predictions that compare extremely well with particle distributions observed at the quasi-parallel region of the earth's bow shock. The current extension of this work to compare simulation predictions with particle spectra at oblique interplanetary shocks has required the inclusion of significant cross-field diffusion (strong scattering) in the simulation technique, since oblique shocks are intrinsically inefficient in the limit of weak scattering. In this paper, we present results from the method we have developed for the inclusion of cross-field diffusion in our simulations, namely model predictions of particle spectra downstream of oblique subluminal shocks. While the high-energy spectral index is independent of the shock obliquity and the strength of the scattering, the latter is observed to profoundly influence the efficiency of injection of cosmic rays into the acceleration process.

  2. Estimation of snow albedo reduction by light absorbing impurities using Monte Carlo radiative transfer model

    NASA Astrophysics Data System (ADS)

    Sengupta, D.; Gao, L.; Wilcox, E. M.; Beres, N. D.; Moosmüller, H.; Khlystov, A.

    2017-12-01

    Radiative forcing and climate change greatly depend on Earth's surface albedo and its temporal and spatial variation. The surface albedo varies greatly with surface characteristics, ranging from 5-10% for calm ocean waters to 80% for some snow-covered areas. Clean and fresh snow surfaces have the highest albedo and are most sensitive to contamination with light absorbing impurities that can greatly reduce surface albedo and change overall radiative forcing estimates. Accurate estimation of snow albedo as well as understanding of feedbacks on climate from changes in snow-covered areas is important for radiative forcing, snow energy balance, predicting seasonal snowmelt, and runoff rates. Such information is essential to inform timely decision making of stakeholders and policy makers. Light absorbing particles deposited onto the snow surface can greatly alter snow albedo and have been identified as a major contributor to regional climate forcing if seasonal snow cover is involved. However, uncertainty associated with quantification of albedo reduction by these light absorbing particles is high. Here, we use Mie theory (under the assumption of spherical snow grains) to reconstruct the single scattering parameters of snow (i.e., single scattering albedo ῶ and asymmetry parameter g) from observation-based size distribution information and retrieved refractive index values. The single scattering parameters of impurities are extracted with the same approach from datasets obtained during laboratory combustion of biomass samples. Instead of using plane-parallel approximation methods to account for multiple scattering, we have used the simple "Monte Carlo ray/photon tracing approach" to calculate the snow albedo. This simple approach considers multiple scattering to be the "collection" of single scattering events. Using this approach, we vary the effective snow grain size and impurity concentrations to explore the evolution of snow albedo over a wide wavelength range (300 nm - 2000 nm). Results will be compared with the SNICAR model to better understand the differences in snow albedo computation between plane-parallel methods and the statistical Monte Carlo methods.
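
    As a complement to the description above, the sketch below is a bare-bones version of such a photon-tracing calculation for a semi-infinite, plane-parallel snowpack: each photon takes exponentially distributed steps, loses weight by the single-scattering albedo at every scattering event, and is redirected by sampling the Henyey-Greenstein phase function with asymmetry parameter g. It is a generic illustration with hypothetical inputs, not the authors' model (which builds ω̃ and g from Mie theory and measured size distributions).

```python
import numpy as np

rng = np.random.default_rng(9)

def sample_hg(g):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def scatter(u, cos_t):
    """Rotate direction u by polar angle arccos(cos_t) and a uniform azimuth."""
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
    phi = 2.0 * np.pi * rng.random()
    ux, uy, uz = u
    if abs(uz) > 0.99999:
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), np.sign(uz) * cos_t])
    den = np.sqrt(1.0 - uz**2)
    return np.array([
        sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * cos_t,
        sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * cos_t,
        -sin_t * np.cos(phi) * den + uz * cos_t,
    ])

def albedo(ssa, g, n_photons=2000, w_cut=1e-3):
    """Plane albedo of a semi-infinite layer for normal incidence (hypothetical inputs)."""
    reflected = 0.0
    for _ in range(n_photons):
        z, u, w = 0.0, np.array([0.0, 0.0, 1.0]), 1.0   # z positive downward into snow
        while w > w_cut:
            z += u[2] * rng.exponential(1.0)            # path length in mean free paths
            if z < 0.0:                                  # escaped back up: reflected
                reflected += w
                break
            w *= ssa                                     # absorb (1 - ssa) of the weight
            u = scatter(u, sample_hg(g))
    return reflected / n_photons

print("albedo, weakly absorbing grains   (ssa=0.99, g=0.89):", albedo(0.99, 0.89))
print("albedo, more contaminated grains  (ssa=0.95, g=0.89):", albedo(0.95, 0.89))
```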

  3. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    PubMed Central

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capabilities of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Titt, U; Suzuki, K

    Purpose: The PTCH is preparing the ocular proton beam nozzle for clinical use. Currently, commissioning measurements are being performed using films, diodes and ionization chambers. In parallel, a Monte Carlo model of the beam line was created for integration into the automated Monte Carlo treatment plan computation system, MC². This work aims to compare Monte Carlo predictions to measured proton doses in order to validate the Monte Carlo model. Methods: A complete model of the double scattering ocular beam line has been created and is capable of simulating proton beams with a comprehensive set of beam modifying devices, including eleven different range modulator wheels. Simulations of doses in water were scored and compared with ion chamber measurements of depth doses, lateral dose profiles extracted from half beam block exposures of films, and diode measurements of lateral penumbrae at various depths. Results: All comparisons resulted in an average relative entrance dose difference of less than 3% and peak dose difference of less than 2%. All range differences were smaller than 0.2 mm. The differences in the lateral beam profiles were smaller than 0.2 mm, and the differences in the penumbrae were all smaller than 0.4%. Conclusion: All available data show excellent agreement between simulations and measurements. More measurements will have to be performed in order to completely and systematically validate the model. Besides simulating and measuring PDDs and lateral profiles of all remaining range modulator wheels, the absolute dosimetry factors in terms of number of source protons per monitor unit have to be determined.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
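
    For context, the basic weight-window game that produces these long histories is simple to state; the sketch below shows a generic splitting/rouletting routine for one particle weight against a window [w_low, w_high] (hypothetical values), with a cap on the splitting multiplicity standing in, very loosely, for the kind of limit a 'de-optimised' window effectively imposes. It is not MCNP code or the CCFE modification.

```python
import numpy as np

rng = np.random.default_rng(10)

def apply_weight_window(weight, w_low, w_high, max_split=10):
    """Return a list of post-window weights for one incoming particle.

    Above the window: split into roughly weight / w_high copies (capped here to
    avoid the runaway splitting described above). Below the window: play Russian
    roulette, surviving with probability weight / w_survive so the expected
    weight is conserved.
    """
    if weight > w_high:
        n = min(max_split, int(np.ceil(weight / w_high)))
        return [weight / n] * n
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]            # survives with boosted weight
        return []                         # killed; unbiased on average
    return [weight]                       # inside the window: leave unchanged

# A particle arriving with a weight far above the window triggers (capped) splitting.
print(apply_weight_window(250.0, w_low=0.5, w_high=2.0))
print(apply_weight_window(0.05, w_low=0.5, w_high=2.0))
```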

  6. Long-term variation of total ozone

    NASA Astrophysics Data System (ADS)

    Kane, R. P.

    1988-03-01

    The long-term variation of total ozone is studied from 1957 onwards for different latitude zones. The 3-year running averages show that, apart from a small portion showing parallelism with sunspot cycles, the trends in different latitude zones are dissimilar. In particular, where northern latitudes show a rising trend, the southern latitudes show an opposite (decreasing) trend. In the north-temperate group, Europe, North America and Asia show dissimilar trends. The longer data series (1932 onwards) for Arosa shows, besides a solar-cycle-dependent component, a steady level during 1932-1953 and a downward trend thereafter. Very localised but long-lasting circulation patterns, different in different geographical regions, are indicated.

  7. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first blockrow needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
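
    For reference, the MLEM update that the reconstructions above rely on can be written in a few lines once a system matrix is available; the sketch below uses a small dense random matrix as a stand-in for the Monte Carlo-generated generalized-natural-pixel matrix, with hypothetical sizes and data.

```python
import numpy as np

rng = np.random.default_rng(11)

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM reconstruction: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Hypothetical stand-in for a Monte Carlo-estimated system matrix and noisy data.
n_bins, n_pixels = 200, 64
A = rng.random((n_bins, n_pixels)) * (rng.random((n_bins, n_pixels)) < 0.1)
x_true = np.zeros(n_pixels); x_true[20:30] = 5.0
y = rng.poisson(A @ x_true)

x_rec = mlem(A, y)
print("reconstructed activity in the hot region:", x_rec[20:30].round(2))
```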

  8. Decision-directed detector for overlapping PCM/NRZ signals.

    NASA Technical Reports Server (NTRS)

    Wang, C. D.; Noack, T. L.

    1973-01-01

    A decision-directed (DD) technique for the detection of overlapping PCM/NRZ signals in the presence of white Gaussian noise is investigated. The performance of the DD detector is represented by probability of error Pe versus input signal-to-noise ratio (SNR). To examine how much improvement in performance can be achieved with this technique, Pe's with and without DD feedback are evaluated in parallel. Further, analytical results are compared with those found by Monte Carlo simulations. The results are in good agreement.

  9. Error analysis of Dobson spectrophotometer measurements of the total ozone content

    NASA Technical Reports Server (NTRS)

    Holland, A. C.; Thomas, R. W. L.

    1975-01-01

    A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements and the ozone amounts inferred from those measurements to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.

  10. Evolution of a minimal parallel programming model

    DOE PAGES

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-04-30

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
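
    In its simplest form, the self-scheduled model reduces to workers pulling tasks from a shared pool as they become free; the sketch below uses Python's multiprocessing pool as a stand-in to illustrate the scheduling idea only (it is not the ADLB library or its API), with a toy Monte Carlo task of deliberately uneven cost.

```python
import random
from multiprocessing import Pool

def mc_task(args):
    """One self-contained work unit: a small Monte Carlo pi estimate of random size."""
    seed, n_samples = args
    rng = random.Random(seed)
    hits = sum(rng.random()**2 + rng.random()**2 < 1.0 for _ in range(n_samples))
    return n_samples, 4.0 * hits / n_samples

if __name__ == "__main__":
    rng = random.Random(0)
    # Tasks of very different sizes: static round-robin assignment would balance badly.
    tasks = [(i, rng.randint(10_000, 1_000_000)) for i in range(200)]
    with Pool(processes=8) as pool:
        # imap_unordered hands the next task to whichever worker finishes first,
        # which is the essence of self-scheduled task parallelism.
        total_n = total = 0.0
        for n, est in pool.imap_unordered(mc_task, tasks):
            total_n += n
            total += n * est
    print("pooled pi estimate:", total / total_n)
```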

  11. Parallel ecological networks in ecosystems

    PubMed Central

    Olff, Han; Alonso, David; Berg, Matty P.; Eriksson, B. Klemens; Loreau, Michel; Piersma, Theunis; Rooney, Neil

    2009-01-01

    In ecosystems, species interact with other species directly and through abiotic factors in multiple ways, often forming complex networks of various types of ecological interaction. Out of this suite of interactions, predator–prey interactions have received most attention. The resulting food webs, however, will always operate simultaneously with networks based on other types of ecological interaction, such as through the activities of ecosystem engineers or mutualistic interactions. Little is known about how to classify, organize and quantify these other ecological networks and their mutual interplay. The aim of this paper is to provide new and testable ideas on how to understand and model ecosystems in which many different types of ecological interaction operate simultaneously. We approach this problem by first identifying six main types of interaction that operate within ecosystems, of which food web interactions are one. Then, we propose that food webs are structured along two main axes of organization: a vertical (classic) axis representing trophic position and a new horizontal ‘ecological stoichiometry’ axis representing decreasing palatability of plant parts and detritus for herbivores and detritivores and slower turnover times. The usefulness of these new ideas is then explored with three very different ecosystems as test cases: temperate intertidal mudflats; temperate short grass prairie; and tropical savannah. PMID:19451126

  12. Energy allocation and reproductive investment in a temperate protogynous hermaphrodite, the ballan wrasse Labrus bergylta

    NASA Astrophysics Data System (ADS)

    Villegas-Ríos, David; Alonso-Fernández, Alexandre; Domínguez-Petit, Rosario; Saborido-Rey, Fran

    2014-02-01

    Energy allocation is an important component of life-history variation since it determines the tradeoff between growth and reproduction. In this study we investigated the state-dependent and sex-specific energy allocation pattern and the reproductive investment of a protogynous hermaphrodite fish with parental care. Individuals of Labrus bergylta, a temperate wrasse displaying two main different colour patterns (plain and spotted), were obtained from the fish markets in NW Spain between 2009 and 2012. Total energy of the gonad, liver, mesenteric fat and muscle (obtained by calorimetric analysis) and gut weight (as a proxy of feeding intensity) were modelled in relation to the reproductive phase of the individuals. A decrease in the energy stored as mesenteric fat from prespawning to spawning paralleled the increase in the gonad total energy in the same period. The predicted reduction in stored total energy over the reproductive cycle was higher than the energy required to develop the ovaries for the full range of female sizes analysed, suggesting a capital breeding strategy. Males stored less energy over a season and invested fewer resources in gamete production than females. Reproductive investment (both fecundity and energy required to produce the gonads) was higher in plain than in spotted females, which is in agreement with the different growth patterns described for the species.

  13. Pollutant removal in a multi-stage municipal wastewater treatment system comprised of constructed wetlands and a maturation pond, in a temperate climate.

    PubMed

    Rivas, A; Barceló-Quintal, I; Moeller, G E

    2011-01-01

    A multi-stage municipal wastewater treatment system is proposed to comply with Mexican standards for discharge into receiving water bodies. The system is located in Santa Fe de la Laguna, Mexico, an area with a temperate climate. It was designed for 2,700 people equivalent (259.2 m3/d) and consists of a preliminary treatment, a septic tank as well as two modules operating in parallel, each consisting of a horizontal subsurface-flow wetland, a maturation pond and a vertical flow polishing wetland. After two years of operation, on-site research was performed. An efficient biochemical oxygen demand (BOD5) (94-98%), chemical oxygen demand (91-93%), total suspended solids (93-97%), total Kjeldahl nitrogen (56-88%) and fecal coliform (4-5 logs) removal was obtained. Significant phosphorus removal was not accomplished in this study (25-52%). Evapotranspiration was measured in different treatment units. This study demonstrates that during the dry season wastewater treatment by this multi-stage system cannot comply with the limits established by Mexican standards for receiving water bodies type 'C'. However, it has demonstrated the system's potential for less restrictive uses such as agricultural irrigation, recreation and provides the opportunity for wastewater treatment in rural areas without electric energy.

  14. Fast-cycling unit of root turnover in perennial herbaceous plants in a cold temperate ecosystem

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Luke McCormack, M.; Li, Le; Ma, Zeqing; Guo, Dali

    2016-01-01

    Roots of perennial plants have both a persistent portion and fast-cycling units represented by different levels of branching. In woody species, the distal nonwoody branch orders as a unit are born and die together relatively rapidly (within 1-2 years). However, whether the fast-cycling units also exist in perennial herbs is unknown. We monitored root demography of seven perennial herbs over two years in a cold temperate ecosystem and we classified the largest roots on the root collar or rhizome as basal roots, and associated finer laterals as secondary, tertiary and quaternary roots. Parallel to woody plants, in which distal root orders form a fast-cycling module, the basal root and its finer laterals also represent a fast-cycling module in herbaceous plants. Within this module, basal roots had a lifespan of 0.5-2 years and represented 62-87% of total root biomass, thus dominating annual root turnover (60%-81% of the total). Moreover, root traits including root length, tissue density, and biomass were useful predictors of root lifespan. We conclude that both herbaceous and woody plants have fast-cycling modular units, and future studies identifying the fast-cycling module across plant species should allow better understanding of how root construction and turnover are linked to whole-plant strategies.

  15. Identifying local-scale wilderness for on-ground conservation actions within a global biodiversity hotspot

    PubMed Central

    Lin, Shiwei; Wu, Ruidong; Hua, Chaolang; Ma, Jianzhong; Wang, Wenli; Yang, Feiling; Wang, Junjun

    2016-01-01

    Protecting wilderness areas (WAs) is a crucial proactive approach to sustain biodiversity. However, studies identifying local-scale WAs for on-ground conservation efforts are still very limited. This paper investigated the spatial patterns of wilderness in a global biodiversity hotspot – Three Parallel Rivers Region (TPRR) in southwest China. Wilderness was classified into levels 1 to 10 based on a cluster analysis of five indicators, namely human population density, naturalness, fragmentation, remoteness, and ruggedness. Only patches characterized by wilderness level 1 and ≥1.0 km2 were considered WAs. The wilderness levels in the northwest were significantly higher than those in the southeast, and clearly increased with the increase in elevation. The WAs covered approximately 25% of TPRR’s land, 89.3% of which was located in the >3,000 m elevation zones. WAs consisted of 20 vegetation types, among which temperate conifer forest, cold temperate shrub and alpine ecosystems covered 79.4% of WAs’ total area. Most WAs were still not protected yet by existing reserves. Topography and human activities are the primary influencing factors on the spatial patterns of wilderness. We suggest establishing strictly protected reserves for most large WAs, while some sustainable management approaches might be more optimal solutions for many highly fragmented small WAs. PMID:27181186

  16. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations 
* Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs.
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants

  17. Predicting Flows of Rarefied Gases

    NASA Technical Reports Server (NTRS)

    LeBeau, Gerald J.; Wilmoth, Richard G.

    2005-01-01

    DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
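    The automated domain decomposition and dynamic load balancing described above can be illustrated with a minimal, generic sketch (hypothetical names and toy data; not DAC's actual implementation): grid cells are periodically reassigned to parallel ranks in proportion to their current particle counts.

```python
# Minimal sketch of dynamic load balancing for a cell-based DSMC-style run.
# Hypothetical example, not DAC source code: cells are reassigned to ranks so
# that each rank carries roughly the same number of simulated particles.

def rebalance(particle_counts, n_ranks):
    """Greedily partition per-cell particle counts into contiguous blocks of
    cells, one block per rank, aiming at near-equal total work per rank."""
    total = sum(particle_counts)
    target = total / n_ranks
    assignment = []          # assignment[i] = rank owning cell i
    rank, load = 0, 0
    for count in particle_counts:
        # move to the next rank once this one has (roughly) its share,
        # but never run out of ranks before running out of cells
        if load >= target and rank < n_ranks - 1:
            rank += 1
            load = 0
        assignment.append(rank)
        load += count
    return assignment

if __name__ == "__main__":
    counts = [120, 5, 8, 300, 40, 42, 39, 500, 3, 60]   # particles per cell (toy data)
    owners = rebalance(counts, n_ranks=3)
    for r in range(3):
        cells = [i for i, o in enumerate(owners) if o == r]
        work = sum(counts[i] for i in cells)
        print(f"rank {r}: cells {cells}, particles {work}")
```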

  18. Current Status on the use of Parallel Computing in Turbulent Reacting Flow Computations Involving Sprays, Monte Carlo PDF and Unstructured Grids. Chapter 4

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. Gas-turbine combustor flows are often characterized by a complex interaction between various physical processes: the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, and radiative heat transfer associated with highly absorbing and radiating species, among others. The rate-controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid-phase evaporation in many practical combustion devices.
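    As a rough illustration of the scalar Monte Carlo PDF idea, notional particles carrying composition that relax toward the local mean under a mixing model, the following sketch applies the interaction-by-exchange-with-the-mean (IEM) closure to an ensemble of particles; it is a generic textbook-style example with invented parameters, not the coupled CFD/spray/PDF solver described above.

```python
import random

# Sketch of a particle (Monte Carlo) PDF method for a single scalar:
# each notional particle carries a composition value that relaxes toward
# the ensemble mean at a rate set by a turbulent mixing frequency (IEM model).
# Hypothetical parameters; not the paper's solver.

N_PART  = 2000      # notional particles
C_PHI   = 2.0       # mixing-model constant
OMEGA   = 50.0      # turbulent mixing frequency [1/s]
DT      = 1.0e-4    # time step [s]
N_STEPS = 200

# initialize a bimodal scalar field (e.g., unmixed fuel/oxidizer streams)
phi = [0.0 if random.random() < 0.5 else 1.0 for _ in range(N_PART)]

for step in range(N_STEPS):
    mean_phi = sum(phi) / N_PART
    # IEM mixing: d(phi_i)/dt = -0.5 * C_phi * omega * (phi_i - <phi>)
    decay = 0.5 * C_PHI * OMEGA * DT
    phi = [p - decay * (p - mean_phi) for p in phi]

m = sum(phi) / N_PART
var = sum((p - m) ** 2 for p in phi) / N_PART
print(f"mean = {m:.3f}, variance = {var:.4f}")  # variance decays toward zero as mixing proceeds
```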

  19. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over the single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that ten ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
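    The core of the ensemble-averaging idea, repeating short runs around the sampling time of interest so that the scatter of the instantaneous estimate shrinks roughly with the square root of the number of repeats, can be sketched generically as below; the toy "flow quantity" and all names are illustrative, not the PDSC/DREAM implementation.

```python
import random
import statistics

# Generic sketch of ensemble averaging for an unsteady particle simulation:
# the instantaneous sample of some macroscopic quantity is noisy, so the run
# is restarted and repeated M times and the samples at the time of interest
# are averaged.  Toy "simulation"; hypothetical numbers throughout.

def noisy_sample(t, n_particles=200):
    """Fake instantaneous cell average at time t built from n_particles samples."""
    true_value = 1.0 - 0.5 * (2.718 ** (-t))          # some smooth transient
    return statistics.mean(true_value + random.gauss(0.0, 0.3)
                           for _ in range(n_particles))

t_interest = 0.8
single = noisy_sample(t_interest)

M = 10                                                # ensemble of repeated runs
ensemble = statistics.mean(noisy_sample(t_interest) for _ in range(M))

print(f"single-run estimate  : {single:.4f}")
print(f"{M}-run ensemble mean: {ensemble:.4f}  (scatter reduced ~ sqrt({M})x)")
```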

  20. Parallelized Monte Carlo software to efficiently simulate the light propagation in arbitrarily shaped objects and aligned scattering media.

    PubMed

    Zoller, Christian Johannes; Hohmann, Ansgar; Foschum, Florian; Geiger, Simeon; Geiger, Martin; Ertl, Thomas Peter; Kienle, Alwin

    2018-06-01

    A GPU-based Monte Carlo software (MCtet) was developed to calculate the light propagation in arbitrarily shaped objects, like a human tooth, represented by a tetrahedral mesh. A unique feature of MCtet is a concept to realize different kinds of light sources illuminating the complex-shaped surface of an object, for which no preprocessing step is needed. With this concept, it is also possible to consider photons that leave a turbid medium and re-enter it, as can happen for a concave object. The correct implementation was shown by comparison with five other Monte Carlo software packages. A hundredfold acceleration compared with central processing unit (CPU)-based programs was found. MCtet can simulate anisotropic light propagation, e.g., by accounting for scattering at cylindrical structures. The important influence of the anisotropic light propagation, caused, e.g., by the tubules in human dentin, is shown for the transmission spectrum through a tooth. It was found that the sensitivity of the transmission spectra to a change in the oxygen saturation inside the pulp is much larger if the tubules are considered. Another "light guiding" effect, based on a combination of low scattering and a high refractive index in enamel, is described. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
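    A generic tissue-optics Monte Carlo photon step of the kind such codes build on (exponential free path, partial absorption of the photon weight, Henyey-Greenstein scattering) is sketched below. This is an illustrative simplification, not MCtet itself: the tetrahedral mesh, refractive boundaries and cylindrical scatterers are omitted, and the direction update is simplified as noted in the comments.

```python
import math
import random

# Generic photon-transport Monte Carlo kernel for a homogeneous turbid medium.
# Illustrative only; real codes track the direction rotation relative to the
# previous direction and handle boundaries, meshes and detectors.

MU_A, MU_S, G = 0.1, 10.0, 0.9          # absorption, scattering [1/mm], anisotropy
MU_T = MU_A + MU_S

def hg_cos_theta(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    u = random.random()
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

def one_photon(max_steps=10_000):
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0           # launched along +z
    w = 1.0                              # photon weight
    absorbed = 0.0
    for _ in range(max_steps):
        s = -math.log(1.0 - random.random()) / MU_T       # free path length
        x, y, z = x + s * ux, y + s * uy, z + s * uz
        absorbed += w * MU_A / MU_T                       # deposit absorbed fraction
        w *= MU_S / MU_T
        if w < 1e-4:                                      # terminate faint photons
            break
        # scatter: simplified here by drawing the new polar angle about the
        # fixed z-axis (a real code rotates about the current direction)
        ct = hg_cos_theta(G)
        st = math.sqrt(max(0.0, 1.0 - ct * ct))
        phi = 2.0 * math.pi * random.random()
        ux, uy, uz = st * math.cos(phi), st * math.sin(phi), ct
    return absorbed

print("mean absorbed weight per photon:",
      sum(one_photon() for _ in range(1000)) / 1000.0)
```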

  1. Optimization of Monte Carlo dose calculations: The interface problem

    NASA Astrophysics Data System (ADS)

    Soudentas, Edward

    1998-05-01

    High energy photon beams are widely used for radiation treatment of deep-seated tumors. The human body contains many types of interfaces between dissimilar materials that affect dose distribution in radiation therapy. Experimentally, significant radiation dose perturbations have been observed at such interfaces. The EGS4 Monte Carlo code was used to calculate dose perturbations at boundaries between dissimilar materials (such as bone/water) for 60Co and 6 MeV linear accelerator beams using a UNIX workstation. A simple test of the reliability of a random number generator was also developed. A systematic study of the adjustable parameters in EGS4 was performed in order to minimize calculational artifacts at boundaries. Calculations of dose perturbations at boundaries between different materials showed that there is a 12% increase in dose at a water/bone interface and a 44% increase in dose at a water/copper interface, with the increase mainly due to electrons produced in water and backscattered from the high atomic number material. The dependence of the dose increase on the atomic number was also investigated. The clinically important case of using two parallel opposed beams for radiation therapy, where increased doses at boundaries have been observed, was also investigated. The Monte Carlo calculations can provide accurate dosimetry data under conditions of electronic non-equilibrium at tissue interfaces.

  2. Monte Carlo Calculations of F-region Incoherent Radar Spectra at High Latitudes: the Effect of O+-O+ Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Barghouthi, I.; Barakat, A.

    We have used Monte Carlo simulations of O+ velocity distributions in the high-latitude F-region to improve the calculation of incoherent radar spectra in the auroral ionosphere. The Monte Carlo simulation includes ion-neutral O+ -- O resonant charge exchange and polarization interactions as well as O+ -- O+ Coulomb self-collisions. At a few hundred kilometers of altitude, atomic oxygen O and the atomic oxygen ion O+ dominate the composition of the auroral ionosphere and, consequently, the influence of O+ -- O+ Coulomb collisions becomes significant. In this study we consider the effect of O+ -- O+ collisions on the incoherent radar spectra in the presence of a large electric field (~100 mV m-1). As altitude increases (i.e., as the role of O+ -- O+ collisions becomes more significant), the 1-D O+ ion velocity distribution function becomes more Maxwellian, and the features of the radar spectrum corresponding to a non-Maxwellian ion velocity distribution (e.g. baby bottle and triple hump shapes) evolve toward those of a Maxwellian ion velocity distribution (single and double hump shapes). Therefore, O+ -- O+ Coulomb collisions act to isotropize the 1-D O+ velocity distribution, and modify the radar spectrum accordingly, by transferring thermal energy from the perpendicular direction to the parallel direction.

  3. MECHANICAL PROPERTIES OF TYPE 410 EXPERIMENTAL MOTOR TUBES TEMPERED AT 1150 F. Includes WAPD CTA(MEE)-510, Attachment (A): 1150 F TEMPERED TYPE 410 STAINLESS STEEL CORROSION PROGRAM. Attachment (B): 1150 F TEMPERED TYPE 410 STAINLESS STEEL METALLURGICAL EVALUATION PROGRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faduska, A.; Rau, E.; Alger, J.V.

    Data are given on the corrosion properties of type 410 stainless steel tempered at 1150°F. Control-mechanism-drive motor tubes and some outer housings are constructed of 650°F tempered type 410 stainless steel. Since the stress corrosion resistance of type 410 in the 1150°F tempered condition is superior, the utilization of the 1150°F tempered material is more desirable for this application. The properties of 410 stainless steel hardened and tempered at 1150°F are given. (W.L.H.)

  4. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check whether a neutron in the batch has been terminated or not. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation.
    OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so that only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e. determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
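    The remapping-vector idea described above, an index array re-sorted by reaction type each iteration so that threads touch contiguous data while completed histories drop out of the active set, can be sketched with NumPy as follows. The array names and toy reaction codes are illustrative assumptions, not WARP's actual data layout or kernels.

```python
import numpy as np

# Sketch of WARP-style remapping: instead of moving neutron data, keep a vector
# of indices into the (static) particle arrays, sort it by sampled reaction type
# every iteration, and drop completed histories.  Toy reaction codes, not
# WARP's encoding.

rng = np.random.default_rng(0)

N = 16
energy   = rng.uniform(1e-3, 2.0, N)        # per-neutron state (stays in place)
reaction = rng.integers(0, 4, N)            # 0=scatter, 1=capture, 2=fission, 3=leak
alive    = np.ones(N, dtype=bool)

remap = np.arange(N)                        # remapping vector of indices

for iteration in range(3):
    # sort the *indices* by reaction type so each reaction kernel sees a
    # contiguous slice of the remapping vector (keeps SIMD lanes converged)
    remap = remap[np.argsort(reaction[remap], kind="stable")]

    # "process" each reaction type on its contiguous slice of indices
    for code in range(4):
        block = remap[reaction[remap] == code]
        if code in (1, 3):                  # capture or leakage ends the history
            alive[block] = False
        else:                               # scattering/fission: resample state
            energy[block] *= rng.uniform(0.3, 1.0, block.size)
            reaction[block] = rng.integers(0, 4, block.size)

    # remove terminated histories from the transport cycle
    remap = remap[alive[remap]]
    print(f"iteration {iteration}: {remap.size} active histories")
```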

  5. Parallel Evolution of Cold Tolerance within Drosophila melanogaster

    PubMed Central

    Braun, Dylan T.; Lack, Justin B.

    2017-01-01

    Drosophila melanogaster originated in tropical Africa before expanding into strikingly different temperate climates in Eurasia and beyond. Here, we find elevated cold tolerance in three distinct geographic regions: beyond the well-studied non-African case, we show that populations from the highlands of Ethiopia and South Africa have significantly increased cold tolerance as well. We observe greater cold tolerance in outbred versus inbred flies, but only in populations with higher inversion frequencies. Each cold-adapted population shows lower inversion frequencies than a closely-related warm-adapted population, suggesting that inversion frequencies may decrease with altitude in addition to latitude. Using the FST-based “Population Branch Excess” statistic (PBE), we found only limited evidence for parallel genetic differentiation at the scale of ∼4 kb windows, specifically between Ethiopian and South African cold-adapted populations. And yet, when we looked for single nucleotide polymorphisms (SNPs) with codirectional frequency change in two or three cold-adapted populations, strong genomic enrichments were observed from all comparisons. These findings could reflect an important role for selection on standing genetic variation leading to “soft sweeps”. One SNP showed sufficient codirectional frequency change in all cold-adapted populations to achieve experiment-wide significance: an intronic variant in the synaptic gene Prosap. Another codirectional outlier SNP, at senseless-2, had a strong association with our cold trait measurements, but in the opposite direction as predicted. More generally, proteins involved in neurotransmission were enriched as potential targets of parallel adaptation. The ability to study cold tolerance evolution in a parallel framework will enhance this classic study system for climate adaptation. PMID:27777283

  6. Texture and Tempered Condition Combined Effects on Fatigue Behavior in an Al-Cu-Li Alloy

    NASA Astrophysics Data System (ADS)

    Wang, An; Liu, Zhiyi; Liu, Meng; Wu, Wenting; Bai, Song; Yang, Rongxian

    2017-05-01

    The combined effects of texture and tempered condition on fatigue behavior in an Al-Cu-Li alloy have been investigated using tensile testing, cyclic loading testing, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and texture analysis. Results showed that in the near-threshold region, T4-tempered samples possessed the lowest fatigue crack propagation (FCP) rate. In the Paris regime, the T4-tempered sample had a similar FCP rate to the T6-tempered sample. The T83-tempered sample exhibited the greatest FCP rate among the three tempered conditions. 3% pre-stretching in the T83-tempered sample resulted in a reduced intensity of Goss texture and facilitated T1 precipitation. SEM results showed that less crack deflection was observed in the T83-tempered sample, as compared to the other two tempered samples. This behavior reflects the combined effects of a lower intensity of Goss texture and of T1 precipitates retarding reversible dislocation slip in the plastic zone ahead of the crack tip.

  7. Defining the developmental parameters of temper loss in early childhood: implications for developmental psychopathology

    PubMed Central

    Wakschlag, Lauren S.; Choi, Seung W.; Carter, Alice S.; Hullsiek, Heide; Burns, James; McCarthy, Kimberly; Leibenluft, Ellen; Briggs-Gowan, Margaret J.

    2013-01-01

    Background Temper modulation problems are both a hallmark of early childhood and a common mental health concern. Thus, characterizing specific behavioral manifestations of temper loss along a dimension from normative misbehaviors to clinically significant problems is an important step toward identifying clinical thresholds. Methods Parent-reported patterns of temper loss were delineated in a diverse community sample of preschoolers (n = 1,490). A developmentally sensitive questionnaire, the Multidimensional Assessment of Preschool Disruptive Behavior (MAP-DB), was used to assess temper loss in terms of tantrum features and anger regulation. Specific aims were: (a) document the normative distribution of temper loss in preschoolers from normative misbehaviors to clinically concerning temper loss behaviors, and test for sociodemographic differences; (b) use Item Response Theory (IRT) to model a Temper Loss dimension; and (c) examine associations of temper loss and concurrent emotional and behavioral problems. Results Across sociodemographic subgroups, a unidimensional Temper Loss model fit the data well. Nearly all (83.7%) preschoolers had tantrums sometimes but only 8.6% had daily tantrums. Normative misbehaviors occurred more frequently than clinically concerning temper loss behaviors. Milder behaviors tended to reflect frustration in expectable contexts, whereas clinically concerning problem indicators were unpredictable, prolonged, and/or destructive. In multivariate models, Temper Loss was associated with emotional and behavioral problems. Conclusions Parent reports on a developmentally informed questionnaire, administered to a large and diverse sample, distinguished normative and problematic manifestations of preschool temper loss. A developmental, dimensional approach shows promise for elucidating the boundaries between normative early childhood temper loss and emergent psychopathology. PMID:22928674

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebe, M; Pacaci, P; Mabhouti, H

    Purpose: In this study, the two available calculation algorithms of the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and General Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for the 6, 9, 12, 15, 18 and 22 MeV electron beams of a Varian Trilogy machine using a parallel plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10 and 25×25 cm^2 cone sizes at dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD to dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared. The measured and calculated data were then compared to find which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than for GGPB. The TPS underestimated the out-of-field doses. The difference between measured and calculated doses increases with cone size. The largest deviation between calculated and parallel plate ionization chamber measured doses is less than 4.93% for eMC, but can reach 7.51% for GGPB. For the film measurements, the minimum gamma analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, for all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large field treatments with 6 MeV could lead to inaccuracies of clinical significance.

  9. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
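    The master/worker pattern the abstract describes, distributing independent batches of histories and aggregating the results, looks roughly like the following single-machine sketch; the cloud provisioning layer, MPI handshaking and EGS5 physics are of course not reproduced here, and a toy pi-estimation integrand stands in for particle transport.

```python
import math
import random
from multiprocessing import Pool

# Minimal master/worker sketch: split a Monte Carlo workload into independent
# batches, run them on parallel workers, and aggregate on the "master".

def run_batch(args):
    """Worker: count dart hits inside the unit quarter-circle using a private RNG."""
    seed, n_samples = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    n_workers, per_batch = 4, 250_000
    jobs = [(seed, per_batch) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = pool.map(run_batch, jobs)        # scatter batches, gather results
    total = n_workers * per_batch
    estimate = 4.0 * sum(hits) / total
    print("pi estimate:", estimate, " error:", abs(estimate - math.pi))
```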

  10. Phylogeographic differentiation versus transcriptomic adaptation to warm temperatures in Zostera marina, a globally important seagrass.

    PubMed

    Jueterbock, A; Franssen, S U; Bergmann, N; Gu, J; Coyer, J A; Reusch, T B H; Bornberg-Bauer, E; Olsen, J L

    2016-11-01

    Populations distributed across a broad thermal cline are instrumental in addressing adaptation to increasing temperatures under global warming. Using a space-for-time substitution design, we tested for parallel adaptation to warm temperatures along two independent thermal clines in Zostera marina, the most widely distributed seagrass in the temperate Northern Hemisphere. A North-South pair of populations was sampled along the European and North American coasts and exposed to a simulated heatwave in a common-garden mesocosm. Transcriptomic responses under control, heat stress and recovery were recorded in 99 RNAseq libraries with ~13 000 uniquely annotated, expressed genes. We corrected for phylogenetic differentiation among populations to discriminate neutral from adaptive differentiation. The two southern populations recovered faster from heat stress and showed parallel transcriptomic differentiation, as compared with northern populations. Among 2389 differentially expressed genes, 21 exceeded neutral expectations and were likely involved in parallel adaptation to warm temperatures. However, the strongest differentiation following phylogenetic correction was between the three Atlantic populations and the Mediterranean population with 128 of 4711 differentially expressed genes exceeding neutral expectations. Although adaptation to warm temperatures is expected to reduce sensitivity to heatwaves, the continued resistance of seagrass to further anthropogenic stresses may be impaired by heat-induced downregulation of genes related to photosynthesis, pathogen defence and stress tolerance. © 2016 John Wiley & Sons Ltd.

  11. NOTE: MCDE: a new Monte Carlo dose engine for IMRT

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; DeSmedt, B.; Coghe, M.; Paelinck, L.; Van Duyse, B.; DeGersem, W.; DeWagter, C.; DeNeve, W.; Thierens, H.

    2004-07-01

    A new accurate Monte Carlo code for IMRT dose computations, MCDE (Monte Carlo dose engine), is introduced. MCDE is based on BEAMnrc/DOSXYZnrc and consequently on the accurate EGSnrc electron transport. DOSXYZnrc is reprogrammed as a component module for BEAMnrc. In this way both codes are interconnected elegantly, while maintaining the BEAM structure, and only minimal changes to BEAMnrc.mortran are necessary. The treatment head of the Elekta SLiplus linear accelerator is modelled in detail. CT grids consisting of up to 200 slices of 512 × 512 voxels can be introduced and up to 100 beams can be handled simultaneously. The beams and CT data are imported from the treatment planning system GRATIS via a DICOM interface. To enable the handling of up to 50 × 10^6 voxels the system was programmed in Fortran95 to allow dynamic memory management. All region-dependent arrays (dose, statistics, transport arrays) were redefined. A scoring grid was introduced and superimposed on the geometry grid, to be able to limit the number of scoring voxels. The whole system uses approximately 200 MB of RAM and runs on a PC cluster consisting of 38 1.0 GHz processors. A set of in-house scripts handles the parallelization and the centralization of the Monte Carlo calculations on a server. As an illustration of MCDE, a clinical example is discussed and compared with collapsed cone convolution calculations. At present, the system is still rather slow and is intended to be a tool for reliable verification of IMRT treatment planning in the presence of tissue inhomogeneities such as air cavities.

  12. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body basis expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeongnim; Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite-temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of ground or excited states with quantum Monte Carlo.

  13. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. At energies below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode results closely resembled those of the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.

  14. Quantum Monte Carlo Endstation for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the duration of the grant, and it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion and reptation methods) and large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion and electronic gaps as well as densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications were focused on challenging research problems in several fields of materials science such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.

  15. STOCHASTIC INTEGRATION FOR TEMPERED FRACTIONAL BROWNIAN MOTION.

    PubMed

    Meerschaert, Mark M; Sabzikar, Farzad

    2014-07-01

    Tempered fractional Brownian motion is obtained when the power law kernel in the moving average representation of a fractional Brownian motion is multiplied by an exponential tempering factor. This paper develops the theory of stochastic integrals for tempered fractional Brownian motion. Along the way, we develop some basic results on tempered fractional calculus.
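    For reference, one commonly quoted form of the tempered moving-average representation described above is given below; the notation follows the tempered fractional calculus literature and is stated here from memory, so the paper should be consulted for the authors' exact conventions.

```latex
% Tempered fractional Brownian motion as an exponentially tempered
% moving average of white noise B(dx); here (x)_+ = \max(x,0) and 0^0 := 0.
B_{H,\lambda}(t) \;=\; \int_{-\infty}^{+\infty}
  \Big[\, e^{-\lambda (t-x)_{+}}\,(t-x)_{+}^{\,H-\frac{1}{2}}
        \;-\; e^{-\lambda (-x)_{+}}\,(-x)_{+}^{\,H-\frac{1}{2}} \Big]\, B(dx),
\qquad \lambda > 0,\; H > 0 .
```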

  16. Anomalous structural transition of confined hard squares.

    PubMed

    Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo

    2016-11-01

    Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares, which are confined between two parallel walls. We find two competing phases: one is a fluid where the squares have two sides parallel to the walls, while the second is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward a phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first-order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as these confined hard squares.
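    Replica exchange (parallel tempering) of the kind used in this study follows a standard pattern: several replicas run Metropolis sampling at different temperatures and periodically attempt to swap configurations, accepting a swap with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The generic sketch below uses a one-dimensional double-well toy potential, not the confined hard-square system itself; all parameters are illustrative.

```python
import math
import random

# Generic replica-exchange (parallel tempering) Monte Carlo on a 1-D
# double-well potential, illustrating the swap acceptance rule only.

def energy(x):
    return (x * x - 1.0) ** 2            # minima at x = +/-1, barrier at x = 0

BETAS = [0.5, 1.0, 2.0, 4.0, 8.0]        # inverse temperatures, hot -> cold
xs = [random.uniform(-2.0, 2.0) for _ in BETAS]

def metropolis_sweep(i, n_moves=20, step=0.5):
    """Local Metropolis moves for replica i at inverse temperature BETAS[i]."""
    for _ in range(n_moves):
        x_new = xs[i] + random.uniform(-step, step)
        dE = energy(x_new) - energy(xs[i])
        if dE <= 0.0 or random.random() < math.exp(-BETAS[i] * dE):
            xs[i] = x_new

for sweep in range(2000):
    for i in range(len(BETAS)):
        metropolis_sweep(i)
    # attempt configuration swaps between neighbouring temperatures
    for i in range(len(BETAS) - 1):
        arg = (BETAS[i] - BETAS[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if arg >= 0.0 or random.random() < math.exp(arg):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

print("final coldest-replica position:", xs[-1], "energy:", energy(xs[-1]))
```

    Without the swap moves, the coldest replica can stay stuck in one well for very long runs; the exchanges let configurations decorrelate at high temperature and diffuse back down, which is exactly the barrier-crossing benefit parallel tempering is used for.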

  17. Free-air ionization chamber, FAC-IR-300, designed for medium energy X-ray dosimetry

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-01-01

    The primary standard for X-ray photons is based on the parallel-plate free-air ionization chamber (FAC). The Atomic Energy Organization of Iran (AEOI) has therefore undertaken the design and construction of the free-air ionization chamber FAC-IR-300 for low- and medium-energy X-ray dosimetry. The main aim of the present work is to specify and design the FAC-IR-300 ionization chamber. The FAC-IR-300 dosimeter is composed of two parallel plates, a high-voltage (HV) plate and a collector plate, along with a guard electrode that surrounds the collector plate. The guard plate and the collector are separated by an air gap. To obtain a uniform electric field distribution, a group of guard strips is used around the ionization chamber. The characterization involves determining the exact dimensions of the ionization chamber using Monte Carlo simulation and introducing correction factors.

  18. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms are described by a finite-rate chemistry model. The correlated Monte Carlo method developed earlier is employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

  19. Dependence of the prompt fission γ-ray spectrum on the entrance channel of compound nucleus: Spontaneous vs. neutron-induced fission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chyzh, A.; Jaffke, P.; Wu, C. Y.

    Prompt γ-ray spectra were measured for the spontaneous fission of 240,242Pu and the neutron-induced fission of 239,241Pu with incident neutron energies ranging from thermal to about 100 keV. Measurements were made using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments using a parallel-plate avalanche counter. The unfolded prompt fission γ-ray energy spectra can be reproduced reasonably well by a Monte Carlo Hauser–Feshbach statistical model for the neutron-induced fission channel but not for the spontaneous fission channel. However, this entrance-channel dependence of the prompt fission γ-ray emission can be described qualitatively by the model due to the very different fission-fragment mass distributions and a lower average fragment spin for spontaneous fission. The description of measurements and the discussion of results under the framework of a Monte Carlo Hauser–Feshbach statistical approach are presented.

  20. A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo

    DOE PAGES

    Zhao, Luning; Neuscamman, Eric

    2017-05-17

    We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground-state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small-molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.

  1. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsoulakis, Markos

    2014-08-09

    Our two key accomplishments in the first three years were toward the development of (1) a mathematically rigorous and at the same time computationally flexible framework for the parallelization of kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods that are both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.

  2. Monte Carlo shock-like solutions to the Boltzmann equation with collective scattering

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Eichler, D.

    1984-01-01

    The results of Monte Carlo simulations of steady state shocks generated by a collision operator that isotropizes the particles by means of elastic scattering in some locally defined frame of reference are presented. The simulations include both the back reaction of accelerated particles on the inflowing plasma and the free escape of high-energy particles from finite shocks. Energetic particles are found to be naturally extracted out of the background plasma by the shock process with an efficiency in good quantitative agreement with an earlier analytic approximation (Eichler, 1983 and 1984) and observations (Gosling et al., 1981) of the entire particle spectrum at a quasi-parallel interplanetary shock. The analytic approximation, which allows a self-consistent determination of the effective adiabatic index of the shocked gas, is used to calculate the overall acceleration efficiency and particle spectrum for cases where ultrarelativistic energies are obtained. It is found that shocks of the strength necessary to produce galactic cosmic rays put approximately 15 percent of the shock energy into relativistic particles.

  3. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is the comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  4. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers a significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
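    The Widom-insertion step mentioned above (random test insertions restricted to the accessible region, averaging the Boltzmann factor of the insertion energy) is sketched below on a precomputed energy grid. The random grid values, the blocking threshold and the units are illustrative assumptions, not the published GPU workflow; the Henry coefficient is proportional to the averaged Boltzmann factor under the usual rigid-framework assumptions.

```python
import numpy as np

# Sketch of Widom test-particle insertion on a precomputed guest-framework
# energy grid.  Blocked (inaccessible) grid points are excluded from the
# average, mirroring the flood-fill step described above.  Toy random grid
# and hypothetical units.

rng = np.random.default_rng(1)
KB_T = 2.494                                   # roughly k_B*T in kJ/mol at ~300 K

grid = rng.normal(loc=5.0, scale=10.0, size=(32, 32, 32))   # U(r) in kJ/mol (toy)
accessible = grid < 40.0                       # crude "blocking" of high-energy sites

n_insert = 200_000
idx = rng.integers(0, 32, size=(n_insert, 3))  # random insertion grid points
u = grid[idx[:, 0], idx[:, 1], idx[:, 2]]
ok = accessible[idx[:, 0], idx[:, 1], idx[:, 2]]

boltz = np.exp(-u / KB_T)
print("<exp(-beta U)> over accessible insertions:", boltz[ok].mean())
print("fraction of insertions that were accessible:", ok.mean())
```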

  5. James Webb Space Telescope Initial Mid-Course Correction Monte Carlo Implementation using Task Parallelism

    NASA Technical Reports Server (NTRS)

    Petersen, Jeremy; Tichy, Jason; Wawrzyniak, Geoffrey; Richon, Karen

    2014-01-01

    The James Webb Space Telescope will be launched into a highly elliptical orbit that does not possess sufficient energy to achieve a proper Sun-Earth L2 libration point orbit. Three mid-course correction (MCC) maneuvers are planned to rectify the energy deficit: MCC-1a, MCC-1b, and MCC-2. To validate the propellant budget and trajectory design methods, a set of Monte Carlo analyses that incorporate MCC maneuver modeling and execution are employed. The first analysis focuses on the effects of launch vehicle injection errors on the magnitude of MCC-1a. The second examines the spread of potential delta-V based on the performance of the propulsion system as applied to all three MCC maneuvers. The final analysis highlights the slight, but notable, contribution of the attitude thrusters during each MCC maneuver. Given the possible variations in these three scenarios, the trajectory design methods are determined to be robust to errors in the modeling of the flight system.

  6. Dynamical traps in Wang-Landau sampling of continuous systems: Mechanism and solution

    NASA Astrophysics Data System (ADS)

    Koh, Yang Wei; Sim, Adelene Y. L.; Lee, Hwee Kuan

    2015-08-01

    We study the mechanism behind dynamical trappings experienced during Wang-Landau sampling of continuous systems reported by several authors. Trapping is caused by the random walker coming close to a local energy extremum, although the mechanism is different from that of the critical slowing-down encountered in conventional molecular dynamics or Monte Carlo simulations. When trapped, the random walker misses the entire or even several stages of Wang-Landau modification factor reduction, leading to inadequate sampling of the configuration space and a rough density of states, even though the modification factor has been reduced to very small values. Trapping is dependent on specific systems, the choice of energy bins, and the Monte Carlo step size, making it highly unpredictable. A general, simple, and effective solution is proposed where the configurations of multiple parallel Wang-Landau trajectories are interswapped to prevent trapping. We also explain why swapping frees the random walker from such traps. The efficacy of the proposed algorithm is demonstrated.
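    The Wang-Landau procedure being discussed is, in outline, the flat-histogram update loop sketched below. The toy model (independent spins, for which the exact density of states is binomial) and the flatness/reduction parameters are illustrative, not the authors' continuous-system code; their remedy for trapping is to run several such walkers in parallel and periodically swap their configurations, as noted in the comments.

```python
import math
import random

# Wang-Landau sampling of the density of states g(E) for N independent spins,
# with E defined as the number of "up" spins (exact answer: g(E) = C(N, E)).
# Toy discrete example showing the update / flatness-check / f-reduction cycle.

N = 20
spins = [random.choice([0, 1]) for _ in range(N)]
E = sum(spins)

ln_g = [0.0] * (N + 1)                 # running estimate of ln g(E)
hist = [0] * (N + 1)
ln_f = 1.0                             # modification factor, ln f

while ln_f > 1e-5:
    for _ in range(20_000):
        i = random.randrange(N)
        E_new = E + (1 - 2 * spins[i])            # flip spin i: E changes by +/-1
        if random.random() < math.exp(ln_g[E] - ln_g[E_new]):
            spins[i] ^= 1
            E = E_new
        ln_g[E] += ln_f                           # update ln g and the histogram
        hist[E] += 1
    if min(hist) > 0.8 * (sum(hist) / len(hist)): # crude flatness criterion
        hist = [0] * (N + 1)
        ln_f *= 0.5                               # reduce the modification factor
        # In a parallel-walker scheme (the paper's remedy for trapping), this is
        # where configurations would be swapped between independent walkers.

# compare with the exact ln C(N, E), fixing the arbitrary additive constant
shift = ln_g[0]
for E in (0, 5, 10):
    exact = math.lgamma(N + 1) - math.lgamma(E + 1) - math.lgamma(N - E + 1)
    print(f"E={E:2d}  WL={ln_g[E] - shift:7.3f}  exact={exact:7.3f}")
```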

  7. CTRANS: A Monte Carlo program for radiative transfer in plane parallel atmospheres with imbedded finite clouds: Development, testing and user's guide

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The CTRANS program, which was designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and of variations in its response over the field of view can be directly simulated.

  8. Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.

    PubMed

    Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V

    2014-06-30

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and materials science. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, colonies of bacterial cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and have therefore recently gained increasing interest. The present article features an extension of the popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to studying self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it to studying an exemplary system. Copyright © 2014 Wiley Periodicals, Inc.
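
    For readers unfamiliar with the method, a minimal rejection-free (Gillespie-type) kinetic Monte Carlo step is sketched below. The event rates and hop moves are invented placeholders and the sketch is not the MBN EXPLORER implementation; it only shows the event-selection and time-advance logic that any KMC module must contain.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(positions, rates, moves, t):
    """One rejection-free kinetic Monte Carlo step.

    positions : (N, 3) particle coordinates, modified in place by the chosen event
    rates     : (M,) rates of the M possible events (assumed precomputed)
    moves     : list of M callables, each applying one event to `positions`
    t         : current time; the updated time is returned
    """
    total = rates.sum()
    # Choose an event with probability proportional to its rate.
    k = np.searchsorted(np.cumsum(rates), rng.random() * total)
    moves[k](positions)
    # Advance time by an exponentially distributed waiting time with mean 1/total.
    return t + rng.exponential(1.0 / total)

def make_hop(i, axis, d=0.1):
    """Build a simple diffusive hop event for particle i along one axis."""
    def hop(p):
        p[i, axis] += d
    return hop

# Toy usage: three hop events for a 2-particle system.
pos = np.zeros((2, 3))
hops = [make_hop(0, 0), make_hop(1, 1), make_hop(0, 2)]
t = 0.0
for _ in range(5):
    t = kmc_step(pos, np.array([1.0, 0.5, 0.25]), hops, t)
print(t, pos)
```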

  9. Dependence of the prompt fission γ-ray spectrum on the entrance channel of compound nucleus: Spontaneous vs. neutron-induced fission

    DOE PAGES

    Chyzh, A.; Jaffke, P.; Wu, C. Y.; ...

    2018-06-07

    Prompt γ-ray spectra were measured for the spontaneous fission of 240,242Pu and the neutron-induced fission of 239,241Pu with incident neutron energies ranging from thermal to about 100 keV. Measurements were made using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments using a parallel-plate avalanche counter. The unfolded prompt fission γ-ray energy spectra can be reproduced reasonably well by a Monte Carlo Hauser–Feshbach statistical model for the neutron-induced fission channel but not for the spontaneous fission channel. However, this entrance-channel dependence of the prompt fission γ-ray emission can be described qualitatively by the model, owing to the very different fission-fragment mass distributions and the lower average fragment spin for spontaneous fission. The description of the measurements and the discussion of the results within the framework of a Monte Carlo Hauser–Feshbach statistical approach are presented.

  10. James Webb Space Telescope Initial Mid-Course Correction Monte Carlo Implementation using Task Parallelism

    NASA Technical Reports Server (NTRS)

    Petersen, Jeremy; Tichy, Jason; Wawrzyniak, Geoffrey; Richon, Karen

    2014-01-01

    The James Webb Space Telescope will be launched into a highly elliptical orbit that does not possess sufficient energy to achieve a proper Sun-Earth/Moon L2 libration point orbit. Three mid-course correction (MCC) maneuvers are planned to rectify the energy deficit: MCC-1a, MCC-1b, and MCC-2. To validate the propellant budget and trajectory design methods, a set of Monte Carlo analyses that incorporate MCC maneuver modeling and execution is employed. The first analysis focuses on the effects of launch vehicle injection errors on the magnitude of MCC-1a. The second focuses on the spread of potential delta-V based on the performance of the propulsion system as applied to all three MCC maneuvers. The final analysis highlights the slight, but notable, contribution of the attitude thrusters during each MCC maneuver. Given the possible variations in these three scenarios, the trajectory design methods are determined to be robust to errors in the modeling of the flight system.

  11. Electron heating in quasi-perpendicular shocks - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Veltri, Pierluigi; Mangeney, Andre; Scudder, Jack D.

    1990-01-01

    To study the problem of electron heating in quasi-perpendicular shocks under the combined effects of 'reversible' motion in the shock electric potential and magnetic field and of wave-particle interactions, a diffusion equation was derived in the drift (adiabatic) approximation and solved using a Monte Carlo method. The results show that most of the observations can be explained within this framework. The simulation has also definitively shown that the electron parallel temperature is determined by the dc electromagnetic field and not by any wave-particle-induced heating. Wave-particle interactions are effective in smoothing out the large gradients in phase space produced by the 'reversible' motion of the electrons, thus producing a 'cooling' of the electrons. Some constraints on the wave-particle interaction process may be obtained from a detailed comparison between the simulation and observations. In particular, it appears that the adiabatic approximation must be violated in order to explain the observed evolution of the perpendicular temperature.

  12. The Feasibility of Studying 44Ti(α, p)47V Reaction at Astrophysical Energies

    NASA Astrophysics Data System (ADS)

    Al-Abdullah, Tariq; Bemmerer, D.; Elekes, Z.; Schumann, D.

    2018-01-01

    The gamma-ray lines from the decay of 44Ti have been observed by space-based gamma-ray telescopes from two supernova remnants. It is believed that the 44Ti(α, p)47V reaction dominates the destruction of 44Ti. This work presents a possible technique to determine its reaction rate in forward kinematics at astrophysically relevant energies. Several online and offline measurements in parallel with Monte Carlo simulations were performed to illustrate the feasibility of performing this reaction. The results will be discussed.

  13. A Bayesian nonparametric approach to dynamical noise reduction

    NASA Astrophysics Data System (ADS)

    Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2018-06-01

    We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov chain Monte Carlo methods. The underlying unknown noise process (possibly) exhibits heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamic equations and, in parallel, replicate the dynamics under dynamical perturbations with a reduced noise level. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.

  14. Cyclotron line resonant transfer through neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Wang, John C. L.; Wasserman, Ira M.; Salpeter, Edwin E.

    1988-01-01

    Monte Carlo methods are used to study in detail the resonant radiative transfer of cyclotron line photons with recoil through a purely scattering neutron star atmosphere for both the polarized and unpolarized cases. For each case, the number of scatters, the path length traveled, the escape frequency shift, the escape direction cosine, the emergent frequency spectra, and the angular distribution of escaping photons are investigated. In the polarized case, transfer is calculated using both the cold plasma e- and o-modes and the magnetic vacuum perpendicular and parallel modes.

  15. Fruiting and flushing phenology in Asian tropical and temperate forests: implications for primate ecology.

    PubMed

    Hanya, Goro; Tsuji, Yamato; Grueter, Cyril C

    2013-04-01

    In order to understand the ecological adaptations of primates to survive in temperate forests, we need to know the general patterns of plant phenology in temperate and tropical forests. Comparative analyses have been employed to investigate general trends in the seasonality and abundance of fruit and young leaves in tropical and temperate forests. Previous studies have shown that (1) fruit fall biomass in temperate forest is lower than in tropical forest, (2) non-fleshy species, in particular acorns, comprise the majority of the fruit biomass in temperate forest, (3) the duration of the fruiting season is shorter in temperate forest, and (4) the fruiting peak occurs in autumn in most temperate forests. Through our comparative analyses of the fruiting and flushing phenology between Asian temperate and tropical forests, we revealed that (1) fruiting is more annually periodic (the pattern in one year is similar to that seen in the next year) in temperate forest in terms of the number of fruiting species or trees, (2) there is no consistent difference in interannual variations in fruiting between temperate and tropical forests, although some oak-dominated temperate forests exhibit extremely large interannual variations in fruiting, (3) the timing of the flushing peak is predictable (in spring and early summer), and (4) the duration of the flushing season is shorter. The flushing season in temperate forests (17-28 % of that in tropical forests) was quite limited, even compared to the fruiting season (68 %). These results imply that temperate primates need to survive a long period of scarcity of young leaves and fruits, but the timing is predictable. Therefore, a dependence on low-quality foods, such as mature leaves, buds, bark, and lichens, would be indispensable for temperate primates. Due to the high predictability of the timing of fruiting and flushing in temperate forests, fat accumulation during the fruit-abundant period and fat metabolization during the subsequent fruit-scarce period can be an effective strategy to survive the lean period (winter).

  16. μ-tempered metadynamics: Artifact independent convergence times for wide hills

    NASA Astrophysics Data System (ADS)

    Dickson, Bradley M.

    2015-12-01

    Recent analysis of well-tempered metadynamics (WTmetaD) showed that it converges without mollification artifacts in the bias potential. Here, we explore how metadynamics heals mollification artifacts, how healing impacts convergence time, and whether alternative temperings may be used to improve efficiency. We introduce "μ-tempered" metadynamics as a simple tempering scheme, inspired by a related mollified adaptive biasing potential, that results in artifact independent convergence of the free energy estimate. We use a toy model to examine the role of artifacts in WTmetaD and solvated alanine dipeptide to compare the well-tempered and μ-tempered frameworks, demonstrating fast convergence for hill widths as large as 60° for μTmetaD.
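
    To make the tempering idea concrete, the sketch below shows the hill-deposition loop that such schemes modify, using the standard well-tempered height scaling on a 1D double well. It does not reproduce the μ-tempered rule itself, and all parameters (hill width, bias factor, dynamics) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def f_pot(x):
    return -4.0 * x * (x * x - 1.0)          # force from U(x) = (x^2 - 1)^2

class Metadynamics:
    def __init__(self, w0=0.1, sigma=0.2, dT=2.0, well_tempered=True):
        self.centers, self.heights = [], []
        self.w0, self.sigma, self.dT = w0, sigma, dT
        self.well_tempered = well_tempered

    def bias(self, x):
        """Return the bias potential V(x) and bias force -dV/dx from all hills."""
        if not self.centers:
            return 0.0, 0.0
        c, h = np.array(self.centers), np.array(self.heights)
        g = h * np.exp(-0.5 * ((x - c) / self.sigma) ** 2)
        return g.sum(), (g * (x - c) / self.sigma ** 2).sum()

    def add_hill(self, x):
        V, _ = self.bias(x)
        # Well-tempered scaling: hill height shrinks where bias has accumulated.
        w = self.w0 * np.exp(-V / self.dT) if self.well_tempered else self.w0
        self.centers.append(x)
        self.heights.append(w)

# Overdamped Langevin dynamics with a hill deposited every `stride` steps.
x, dt, kT, gamma, stride = -1.0, 1e-3, 1.0, 1.0, 200
meta = Metadynamics()
for step in range(20_000):
    _, Fb = meta.bias(x)
    x += dt / gamma * (f_pot(x) + Fb) + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    if step % stride == 0:
        meta.add_hill(x)
# In the well-tempered limit, F(s) is estimated by -(kT + dT)/dT * V_bias(s).
```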

  17. μ-tempered metadynamics: Artifact independent convergence times for wide hills.

    PubMed

    Dickson, Bradley M

    2015-12-21

    Recent analysis of well-tempered metadynamics (WTmetaD) showed that it converges without mollification artifacts in the bias potential. Here, we explore how metadynamics heals mollification artifacts, how healing impacts convergence time, and whether alternative temperings may be used to improve efficiency. We introduce "μ-tempered" metadynamics as a simple tempering scheme, inspired by a related mollified adaptive biasing potential, that results in artifact independent convergence of the free energy estimate. We use a toy model to examine the role of artifacts in WTmetaD and solvated alanine dipeptide to compare the well-tempered and μ-tempered frameworks, demonstrating fast convergence for hill widths as large as 60° for μTmetaD.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Y.; Collaborative Innovation Center for Advanced Ship and Deep-Sea Exploration, Shanghai 200240; Li, W., E-mail: weilee@sjtu.edu.cn

    Low-temperature tempering is important in improving the mechanical properties of steels. In this study, the thermoelectric power method was employed to investigate carbon segregation during low-temperature tempering, ranging from 110 °C to 170 °C, of a medium-carbon alloyed steel, combined with micro-hardness, transmission electron microscopy and atom probe tomography. The evolution of carbon dissolution from martensite and segregation to grain boundaries/interfaces and dislocations was investigated for different tempering conditions. The carbon concentration variation was quantified from 0.33 wt.% in the as-quenched sample to 0.15 wt.% after long-time tempering. The kinetics of carbon diffusion during the tempering process was discussed through the Johnson–Mehl–Avrami equation. - Highlights: • The thermoelectric power (TEP) method was employed to investigate the low-temperature tempering of a medium-carbon alloyed steel. • The evolution of carbon dissolution was investigated for different tempering conditions. • The carbon concentration variation was quantified from 0.33 wt.% in the as-quenched sample to 0.15 wt.% after long-time tempering.
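
    For reference, the Johnson–Mehl–Avrami (JMA) form used in such kinetics analyses expresses the transformed fraction as X(t) = 1 - exp(-(kt)^n). The snippet below simply evaluates this expression with invented rate constants and exponent; it is not the paper's fit.

```python
import numpy as np

def jma_fraction(t, k, n):
    """Johnson-Mehl-Avrami transformed fraction X(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Illustrative only: a faster and a slower tempering treatment, with assumed
# rate constants k (1/h) and Avrami exponent n.
t = np.linspace(0.0, 10.0, 6)   # hours
for label, k in [("higher temperature (assumed k=0.8)", 0.8),
                 ("lower temperature (assumed k=0.2)", 0.2)]:
    print(label, np.round(jma_fraction(t, k, n=1.0), 3))
```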

  19. SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jinfeng; Cao, Ruifen; Dai, Yumei

    Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, extending the ability of DPM to handle arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of irregular, inhomogeneous irradiation fields. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was decided by the grid intensity, and its direction by the combination of the virtual source and the emitter's position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminating electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data, and the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of an irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were all within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the China Academy of Science (XDA03040000); National Natural Science Foundation of China (81101132)
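
    The gamma analysis quoted in the results (95.1% passing) can be illustrated with a simplified 1D version of the gamma index, shown below. The dose profiles, criteria, and grid are assumptions chosen for demonstration; clinical gamma evaluation is performed in 2D/3D with proper interpolation.

```python
import numpy as np

def gamma_passing_rate(x, dose_ref, dose_eval, dta=3.0, dd=0.03):
    """1D global gamma analysis (e.g. 3%/3 mm) between two dose profiles.

    x         : positions in mm (same grid for both profiles, for simplicity)
    dose_ref  : reference dose (e.g. measurement or FSPB)
    dose_eval : evaluated dose (e.g. the Monte Carlo calculation)
    """
    norm = dd * dose_ref.max()                # global dose-difference criterion
    gammas = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xr) / dta) ** 2
        diff2 = ((dose_eval - dr) / norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + diff2))
    return np.mean(gammas <= 1.0)             # fraction of points with gamma <= 1

# Toy profiles: a slightly shifted and rescaled Gaussian "field".
x = np.linspace(-50, 50, 201)
ref = np.exp(-(x / 20.0) ** 2)
ev = 1.01 * np.exp(-((x - 0.5) / 20.0) ** 2)
print(f"gamma passing rate: {100 * gamma_passing_rate(x, ref, ev):.1f}%")
```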

  20. Exhaustively sampling peptide adsorption with metadynamics.

    PubMed

    Deighan, Michael; Pfaendtner, Jim

    2013-06-25

    Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption including: binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.

  1. Prion protein β2–α2 loop conformational landscape

    PubMed Central

    Caldarulo, Enrico; Wüthrich, Kurt; Parrinello, Michele

    2017-01-01

    In transmissible spongiform encephalopathies (TSEs), which are lethal neurodegenerative diseases that affect humans and a wide range of other mammalian species, the normal “cellular” prion protein (PrPC) is transformed into amyloid aggregates representing the “scrapie form” of the protein (PrPSc). Continued research on this system is of keen interest, since new information on the physiological function of PrPC in healthy organisms is emerging, as well as new data on the mechanism of the transformation of PrPC to PrPSc. In this paper we used two different approaches: a combination of the well-tempered ensemble (WTE) and parallel tempering (PT) schemes and metadynamics (MetaD) to characterize the conformational free-energy surface of PrPC. The focus of the data analysis was on an 11-residue polypeptide segment in mouse PrPC(121–231) that includes the β2–α2 loop of residues 167–170, for which a correlation between structure and susceptibility to prion disease has previously been described. This study includes wild-type mouse PrPC and a variant with the single-residue replacement Y169A. The resulting detailed conformational landscapes complement in an integrative manner the available experimental data on PrPC, providing quantitative insights into the nature of the structural transition-related function of the β2–α2 loop. PMID:28827331

  2. Prion protein β2-α2 loop conformational landscape.

    PubMed

    Caldarulo, Enrico; Barducci, Alessandro; Wüthrich, Kurt; Parrinello, Michele

    2017-09-05

    In transmissible spongiform encephalopathies (TSEs), which are lethal neurodegenerative diseases that affect humans and a wide range of other mammalian species, the normal "cellular" prion protein (PrPC) is transformed into amyloid aggregates representing the "scrapie form" of the protein (PrPSc). Continued research on this system is of keen interest, since new information on the physiological function of PrPC in healthy organisms is emerging, as well as new data on the mechanism of the transformation of PrPC to PrPSc. In this paper we used two different approaches: a combination of the well-tempered ensemble (WTE) and parallel tempering (PT) schemes and metadynamics (MetaD) to characterize the conformational free-energy surface of PrPC. The focus of the data analysis was on an 11-residue polypeptide segment in mouse PrPC(121-231) that includes the β2-α2 loop of residues 167-170, for which a correlation between structure and susceptibility to prion disease has previously been described. This study includes wild-type mouse PrPC and a variant with the single-residue replacement Y169A. The resulting detailed conformational landscapes complement in an integrative manner the available experimental data on PrPC, providing quantitative insights into the nature of the structural transition-related function of the β2-α2 loop.

  3. Olfactory foraging in temperate waters: sensitivity to dimethylsulphide of shearwaters in the Atlantic Ocean and Mediterranean Sea.

    PubMed

    Dell'Ariccia, Gaia; Célérier, Aurélie; Gabirot, Marianne; Palmas, Pauline; Massa, Bruno; Bonadonna, Francesco

    2014-05-15

    Many procellariiforms use olfactory cues to locate food patches over the seemingly featureless ocean surface. In particular, some of them are able to detect and are attracted by dimethylsulphide (DMS), a volatile compound naturally occurring over worldwide oceans in correspondence with productive feeding areas. However, current knowledge is restricted to sub-Antarctic species and to only one study realized under natural conditions at sea. Here, for the first time, we investigated the response to DMS in parallel in two different environments in temperate waters, the Atlantic Ocean and the Mediterranean Sea, employing Cory's (Calonectris borealis) and Scopoli's (Calonectris diomedea) shearwaters as models. To test whether these birds can detect and respond to DMS, we presented them with this substance in a Y-maze. Then, to determine whether they use this molecule in natural conditions, we tested the response to DMS at sea. The number of birds that chose DMS in the Y-maze and that were recruited at DMS-scented slicks at sea suggests that these shearwaters are attracted to DMS in both non-foraging and natural contexts. Our findings show that the use of DMS as a foraging cue may be a strategy adopted by procellariiforms across oceans but that regional differences may exist, giving a worldwide perspective to previous hypotheses concerning the use of DMS as a chemical cue. © 2014. Published by The Company of Biologists Ltd.

  4. Effects of Polymer Conjugation on Hybridization Thermodynamics of Oligonucleic Acids.

    PubMed

    Ghobadi, Ahmadreza F; Jayaraman, Arthi

    2016-09-15

    In this work, we perform coarse-grained (CG) and atomistic simulations to study the effects of polymer conjugation on the hybridization/melting thermodynamics of oligonucleic acids (ONAs). We present coarse-grained Langevin molecular dynamics simulations (CG-NVT) to assess the effects of polymer flexibility, length, and architecture on the hybridization/melting of ONAs with different ONA duplex sequences, backbone chemistries, and duplex concentrations. In these CG-NVT simulations, we use our recently developed CG model of ONAs in implicit solvent and treat the conjugated polymer as a CG chain with purely repulsive Weeks-Chandler-Andersen interactions with all other species in the system. We find that 8-100-mer linear polymer conjugation destabilizes 8-mer ONA duplexes with weaker Watson-Crick hydrogen bonding (WC H-bonding) interactions at low duplex concentrations, while the same polymer conjugation has an insignificant impact on 8-mer ONA duplexes with stronger WC H-bonding. To ensure the configurational space is sampled properly in the CG-NVT simulations, we also perform CG well-tempered metadynamics simulations (CG-NVT-MetaD) and analyze the free energy landscape of ONA hybridization for a select few systems. We demonstrate that the CG-NVT-MetaD simulation results are consistent with the CG-NVT simulations for the studied systems. To examine the limitations of coarse-graining in capturing ONA-polymer interactions, we perform atomistic parallel tempering metadynamics simulations in the well-tempered ensemble (AA-MetaD) for a 4-mer DNA in explicit water with and without conjugation to 8-mer poly(ethylene glycol) (PEG). The AA-MetaD simulations also show that, for a short DNA duplex at T = 300 K, a condition where the DNA duplex is unstable, conjugation with PEG further destabilizes the DNA duplex. We conclude with a comparison of results from these three different types of simulations and discuss their limitations and strengths.

  5. Comparative analysis of the cold acclimation and freezing tolerance capacities of seven diploid Brachypodium distachyon accessions

    PubMed Central

    Colton-Gagnon, Katia; Ali-Benali, Mohamed Ali; Mayer, Boris F.; Dionne, Rachel; Bertrand, Annick; Do Carmo, Sonia; Charron, Jean-Benoit

    2014-01-01

    Background and Aims Cold is a major constraint for cereal cultivation under temperate climates. Winter-hardy plants interpret seasonal changes and can acquire the ability to resist sub-zero temperatures. This cold acclimation process is associated with physiological, biochemical and molecular alterations in cereals. Brachypodium distachyon is considered a powerful model system to study the response of temperate cereals to adverse environmental conditions. To date, little is known about the cold acclimation and freezing tolerance capacities of Brachypodium. The main objective of this study was to evaluate the cold hardiness of seven diploid Brachypodium accessions. Methods An integrated approach, involving monitoring of phenological indicators along with expression profiling of the major vernalization regulator VRN1 orthologue, was followed. In parallel, soluble sugars and proline contents were determined along with expression profiles of two COR genes in plants exposed to low temperatures. Finally, whole-plant freezing tests were performed to evaluate the freezing tolerance capacity of Brachypodium. Key Results Cold treatment accelerated the transition from the vegetative to the reproductive phase in all diploid Brachypodium accessions tested. In addition, low temperature exposure triggered the gradual accumulation of BradiVRN1 transcripts in all accessions tested. These accessions exhibited a clear cold acclimation response by progressively accumulating proline, sugars and COR gene transcripts. However, whole-plant freezing tests revealed that these seven diploid accessions only have a limited capacity to develop freezing tolerance when compared with winter varieties of temperate cereals such as wheat and barley. Furthermore, little difference in terms of survival was observed among the accessions tested despite their previous classification as either spring or winter genotypes. Conclusions This study is the first to characterize the freezing tolerance capacities of B. distachyon and provides strong evidence that some diploid accessions such as Bd21 have a facultative growth habit. PMID:24323247

  6. Evolution Is an Experiment: Assessing Parallelism in Crop Domestication and Experimental Evolution: (Nei Lecture, SMBE 2014, Puerto Rico).

    PubMed

    Gaut, Brandon S

    2015-07-01

    In this commentary, I make inferences about the level of repeatability and constraint in the evolutionary process, based on two sets of replicated experiments. The first experiment is crop domestication, which has been replicated across many different species. I focus on results of whole-genome scans for genes selected during domestication and ask whether genes are, in fact, selected in parallel across different domestication events. If genes are selected in parallel, it implies that the number of genetic solutions to the challenge of domestication is constrained. However, I find no evidence for parallel selection events either between species (maize vs. rice) or within species (two domestication events within beans). These results suggest that there are few constraints on genetic adaptation, but conclusions must be tempered by several complicating factors, particularly the lack of explicit design standards for selection screens. The second experiment involves the evolution of Escherichia coli to thermal stress. Unlike domestication, this highly replicated experiment detected a limited set of genes that appear prone to modification during adaptation to thermal stress. However, the number of potentially beneficial mutations within these genes is large, such that adaptation is constrained at the genic level but much less so at the nucleotide level. Based on these two experiments, I make the general conclusion that evolution is remarkably flexible, despite the presence of epistatic interactions that constrain evolutionary trajectories. I also posit that evolution is so rapid that we should establish a Speciation Prize, to be awarded to the first researcher who demonstrates speciation with a sexual organism in the laboratory. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. A parallel decision tree-based method for user authentication based on keystroke patterns.

    PubMed

    Sheng, Yong; Phoha, Vir V; Rovnyak, Steven M

    2005-08-01

    We propose a Monte Carlo approach to attain sufficient training data, a splitting method to improve effectiveness, and a system composed of parallel decision trees (DTs) to authenticate users based on keystroke patterns. For each user, approximately 19 times as much simulated data was generated to complement the 387 vectors of raw data. The training set, including raw and simulated data, is split into four subsets. For each subset, wavelet transforms are performed to obtain a total of eight training subsets for each user. Eight DTs are thus trained using the eight subsets. A parallel DT is constructed for each user, which contains all eight DTs with a criterion for its output that it authenticates the user if at least three DTs do so; otherwise it rejects the user. Training and testing data were collected from 43 users who typed the exact same string of length 37 nine consecutive times to provide data for training purposes. The users typed the same string at various times over a period from November through December 2002 to provide test data. The average false reject rate was 9.62% and the average false accept rate was 0.88%.
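
    The ensemble decision logic described above (accept if at least three of the eight trees agree) is easy to sketch. The snippet below assumes scikit-learn is available and uses random synthetic 37-feature vectors in place of real keystroke timings; the wavelet preprocessing and Monte Carlo data augmentation steps are omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

def train_parallel_dt(subsets, labels):
    """Train one decision tree per training subset (eight subsets in the paper)."""
    return [DecisionTreeClassifier(random_state=0).fit(X, y)
            for X, y in zip(subsets, labels)]

def authenticate(trees, keystroke_vector, threshold=3):
    """Accept the user if at least `threshold` trees vote 'genuine' (label 1)."""
    votes = sum(int(t.predict(keystroke_vector.reshape(1, -1))[0]) for t in trees)
    return votes >= threshold

# Toy data: 8 subsets of 100 synthetic 37-feature timing vectors each,
# labelled 1 (genuine user) or 0 (impostor). Real features would be keystroke
# latencies/durations, optionally wavelet-transformed as in the paper.
subsets = [rng.normal(size=(100, 37)) for _ in range(8)]
labels = [rng.integers(0, 2, 100) for _ in range(8)]
trees = train_parallel_dt(subsets, labels)
print(authenticate(trees, rng.normal(size=37)))
```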

  8. Design and dosimetry of a few leaf electron collimator for energy modulated electron therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Yahya, Khalid; Verhaegen, Frank; Seuntjens, Jan

    2007-12-15

    Despite the capability of energy modulated electron therapy (EMET) to achieve highly conformal dose distributions in superficial targets, it has not been widely implemented due to problems inherent in electron beam radiotherapy, such as planning, dosimetry accuracy, and verification, as well as a lack of systems for automated delivery. In previous work we proposed a novel technique to deliver EMET using an automated 'few leaf electron collimator' (FLEC) that consists of four motor-driven leaves fitted in a standard clinical electron beam applicator. Integrated with a Monte Carlo based optimization algorithm that utilizes patient-specific dose kernels, treatment delivery was incorporated within the linear accelerator operation. The FLEC was envisioned to work as an accessory tool added to the clinical accelerator. In this article the design and construction of the FLEC prototype that matches our compact design goals are presented. It is controlled using an in-house developed EMET controller. The structure of the software and the hardware characteristics of the EMET controller are demonstrated. Using a parallel-plate ionization chamber, output measurements were obtained to validate the Monte Carlo calculations for a range of fields with different energies and sizes. Further verifications were also performed by comparing 1-D and 2-D dose distributions using energy-independent radiochromic films. Comparisons between Monte Carlo calculations and measurements of complex intensity-map deliveries show an overall agreement to within ±3%. This work confirms our design objectives for the FLEC, which allow for automated delivery of EMET. Furthermore, the Monte Carlo dose calculation engine required for EMET planning was validated. The result supports the potential of the prototype FLEC for the planning and delivery of EMET.

  9. Joint inversion of 3-PG using eddy-covariance and inventory plot measurements in temperate-maritime conifer forests: Uncertainty in transient carbon-balance responses to climate change

    NASA Astrophysics Data System (ADS)

    Hember, R. A.; Kurz, W. A.; Coops, N. C.; Black, T. A.

    2010-12-01

    Temperate-maritime forests of coastal British Columbia store large amounts of carbon (C) in soil, detritus, and trees. To better understand the sensitivity of these C stocks to climate variability, simulations were conducted using a hybrid version of the model, Physiological Principles Predicting Growth (3-PG), combined with algorithms from the Carbon Budget Model of the Canadian Forest Sector - version 3 (CBM-CFS3) to account for full ecosystem C dynamics. The model was optimized based on a combination of monthly CO2 and H2O flux measurements derived from three eddy-covariance systems and multi-annual stemwood growth (Gsw) and mortality (Msw) derived from 1300 permanent sample plots by means of Markov chain Monte Carlo sampling. The calibrated model serves as an unbiased estimator of stemwood C with enhanced precision over that of strictly-empirical models, minimized reliance on local prescriptions, and the flexibility to study impacts of environmental change on regional C stocks. We report the contribution of each dataset in identifying key physiological parameters and the posterior uncertainty in predictions of net ecosystem production (NEP). The calibrated model was used to spin up pre-industrial C pools and estimate the sensitivity of regional net carbon balance to a gradient of temperature changes, λ=ΔC/ΔT, during three 62-year harvest rotations, spanning 1949-2135. Simulations suggest that regional net primary production, tree mortality, and heterotrophic respiration all began increasing, while NEP began decreasing in response to warming following the 1976 shift in northeast-Pacific climate. We quantified the uncertainty of λ and how it was mediated by initial dead C, tree mortality, precipitation change, and the time horizon in which it was calculated.

  10. 40 CFR 426.60 - Applicability; description of the automotive glass tempering subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... automotive glass tempering subcategory. 426.60 Section 426.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Automotive Glass Tempering Subcategory § 426.60 Applicability; description of the automotive glass tempering...

  11. 40 CFR 426.60 - Applicability; description of the automotive glass tempering subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... automotive glass tempering subcategory. 426.60 Section 426.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Automotive Glass Tempering Subcategory § 426.60 Applicability; description of the automotive glass tempering...

  12. Evaporative water loss, relative water economy and evaporative partitioning of a heterothermic marsupial, the monito del monte (Dromiciops gliroides).

    PubMed

    Withers, Philip C; Cooper, Christine E; Nespolo, Roberto F

    2012-08-15

    We examine here evaporative water loss, economy and partitioning at ambient temperatures from 14 to 33°C for the monito del monte (Dromiciops gliroides), a microbiotheriid marsupial found only in temperate rainforests of Chile. The monito's standard evaporative water loss (2.58 mg g(-1) h(-1) at 30°C) was typical for a marsupial of its body mass and phylogenetic position. Evaporative water loss was independent of air temperature below thermoneutrality, but enhanced evaporative water loss and hyperthermia were the primary thermal responses above the thermoneutral zone. Non-invasive partitioning of total evaporative water loss indicated that respiratory loss accounted for 59-77% of the total, with no change in respiratory loss with ambient temperature, but a small change in cutaneous loss below thermoneutrality and an increase in cutaneous loss in and above thermoneutrality. Relative water economy (metabolic water production/evaporative water loss) increased at low ambient temperatures, with a point of relative water economy of 15.4°C. Thermolability had little effect on relative water economy, but conferred substantial energy savings at low ambient temperatures. Torpor reduced total evaporative water loss to as little as 21% of normothermic values, but relative water economy during torpor was poor even at low ambient temperatures because of the relatively greater reduction in metabolic water production than in evaporative water loss. The poor water economy of the monito during torpor suggests that negative water balance may explain why hibernators periodically arouse to normothermia, to obtain water by drinking or via an improved water economy.

  13. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
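
    Since parallel tempering is the ingredient this abstract shares with much of the rest of this collection, a minimal tempered Metropolis sampler is sketched below. The log-posterior is a stand-in (a real TEM inversion would evaluate the 1D forward response through the Voronoi model here), and the reversible-jump, transdimensional moves of RJ-MCMC are not included; only the temperature ladder and swap acceptance are shown.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(m):
    """Stand-in log-posterior for a model vector m (placeholder, not a TEM forward model)."""
    return -0.5 * np.sum((m - 1.0) ** 2)

def pt_mcmc(n_chains=4, n_steps=5000, step=0.5, ndim=3):
    """Parallel-tempered Metropolis sampler with temperatures T_k = 2**k."""
    temps = 2.0 ** np.arange(n_chains)
    models = rng.normal(size=(n_chains, ndim))
    logp = np.array([log_post(m) for m in models])
    cold_samples = []
    for _ in range(n_steps):
        for k in range(n_chains):             # within-chain Metropolis update
            prop = models[k] + step * np.sqrt(temps[k]) * rng.normal(size=ndim)
            lp = log_post(prop)
            if np.log(rng.random()) < (lp - logp[k]) / temps[k]:
                models[k], logp[k] = prop, lp
        k = rng.integers(0, n_chains - 1)     # attempt a swap between neighbours
        dlog = (logp[k] - logp[k + 1]) * (1.0 / temps[k + 1] - 1.0 / temps[k])
        if np.log(rng.random()) < dlog:
            models[[k, k + 1]] = models[[k + 1, k]]
            logp[[k, k + 1]] = logp[[k + 1, k]]
        cold_samples.append(models[0].copy())
    return np.array(cold_samples)

samples = pt_mcmc()
print(samples.mean(axis=0), samples.std(axis=0))  # posterior summary from the cold chain
```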

  14. ORAC: a molecular dynamics simulation program to explore free energy surfaces in biomolecular systems at the atomistic level.

    PubMed

    Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero

    2010-04-15

    We present the new release of the ORAC engine (Procacci et al., Comput Chem 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time-step integration, the smooth particle mesh Ewald method, and constant-pressure and constant-temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems, including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU general public license (GPL) at http://www.chim.unifi.it/orac. 2009 Wiley Periodicals, Inc.

  15. Global warming, elevational range shifts, and lowland biotic attrition in the wet tropics.

    PubMed

    Colwell, Robert K; Brehm, Gunnar; Cardelús, Catherine L; Gilman, Alex C; Longino, John T

    2008-10-10

    Many studies suggest that global warming is driving species ranges poleward and toward higher elevations at temperate latitudes, but evidence for range shifts is scarce for the tropics, where the shallow latitudinal temperature gradient makes upslope shifts more likely than poleward shifts. Based on new data for plants and insects on an elevational transect in Costa Rica, we assess the potential for lowland biotic attrition, range-shift gaps, and mountaintop extinctions under projected warming. We conclude that tropical lowland biotas may face a level of net lowland biotic attrition without parallel at higher latitudes (where range shifts may be compensated for by species from lower latitudes) and that a high proportion of tropical species soon faces gaps between current and projected elevational ranges.

  16. A Hardware-Accelerated Quantum Monte Carlo framework (HAQMC) for N-body systems

    NASA Astrophysics Data System (ADS)

    Gothandaraman, Akila; Peterson, Gregory D.; Warren, G. Lee; Hinde, Robert J.; Harrison, Robert J.

    2009-12-01

    Interest in the study of the structural and energetic properties of highly quantum clusters, such as inert gas clusters, has motivated the development of a hardware-accelerated framework for Quantum Monte Carlo simulations. In the Quantum Monte Carlo method, the properties of a system of atoms, such as the ground-state energies, are averaged over a number of iterations. Our framework is aimed at accelerating the computations in each iteration of the QMC application by offloading the calculation of properties, namely the energy and trial wave function, onto reconfigurable hardware. This gives a user the capability to run simulations for a large number of iterations, thereby reducing the statistical uncertainty in the properties, and for larger clusters. This framework is designed to run on the Cray XD1 high-performance reconfigurable computing platform, which exploits the coarse-grained parallelism of the processor along with the fine-grained parallelism of the reconfigurable computing devices available in the form of field-programmable gate arrays. In this paper, we illustrate the functioning of the framework, which can be used to calculate the energies for a model cluster of helium atoms. In addition, we present the capabilities of the framework that allow the user to vary the chemical identities of the simulated atoms.
    Program summary
    Program title: Hardware Accelerated Quantum Monte Carlo (HAQMC)
    Catalogue identifier: AEEP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 691 537
    No. of bytes in distributed program, including test data, etc.: 5 031 226
    Distribution format: tar.gz
    Programming language: C/C++ for the QMC application, VHDL and Xilinx 8.1 ISE/EDK tools for FPGA design and development
    Computer: Cray XD1 consisting of a dual-core, dual-processor AMD Opteron 2.2 GHz with a Xilinx Virtex-4 (V4LX160) or Xilinx Virtex-II Pro (XC2VP50) FPGA per node. We use the compute node with the Xilinx Virtex-4 FPGA.
    Operating system: Red Hat Enterprise Linux OS
    Has the code been vectorised or parallelized?: Yes
    Classification: 6.1
    Nature of problem: Quantum Monte Carlo is a practical method to solve the Schrödinger equation for large many-body systems and obtain the ground-state properties of such systems. This method involves the sampling of a number of configurations of atoms and averaging the properties of the configurations over a number of iterations. We are interested in applying the QMC method to obtain the energy and other properties of highly quantum clusters, such as inert gas clusters.
    Solution method: The proposed framework provides a combined hardware-software approach, in which the QMC simulation is performed on the host processor, with the computationally intensive functions such as the energy and trial wave function computations mapped onto the field-programmable gate array (FPGA) logic device attached as a co-processor to the host processor. We perform the QMC simulation for a number of iterations, as in the case of our original software QMC approach, to reduce the statistical uncertainty of the results. However, our proposed HAQMC framework accelerates each iteration of the simulation by significantly reducing the time taken to calculate the ground-state properties of the configurations of atoms, thereby accelerating the overall QMC simulation. We provide a generic interpolation framework that can be extended to study a variety of pure and doped atomic clusters, irrespective of the chemical identities of the atoms. For the FPGA implementation of the properties, we use a two-region approach for accurately computing the properties over the entire domain, and employ deep pipelines and fixed-point arithmetic for all our calculations, guaranteeing the accuracy required for our simulation.
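
    A software-only sketch of the quantity HAQMC offloads to hardware is given below: Metropolis sampling of a trial wave function and averaging of the local energy, shown for a 1D harmonic oscillator rather than helium clusters. The trial function, step size, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_energy(x, alpha):
    """Local energy for a 1D harmonic oscillator with trial psi = exp(-alpha x^2)."""
    return alpha + x * x * (0.5 - 2.0 * alpha ** 2)

def vmc_energy(alpha=0.45, n_steps=50_000, step=1.0):
    """Variational Monte Carlo: Metropolis-sample |psi|^2 and average the local energy."""
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        xp = x + step * (rng.random() - 0.5)
        # Acceptance ratio |psi(xp)|^2 / |psi(x)|^2 = exp(-2 alpha (xp^2 - x^2))
        if rng.random() < np.exp(-2.0 * alpha * (xp * xp - x * x)):
            x, n_acc = xp, n_acc + 1
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps, n_acc / n_steps

energy, acc = vmc_energy()
print(f"<E> = {energy:.4f} (exact ground state 0.5), acceptance = {acc:.2f}")
```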

  17. Engine Response to Distorted Inflow Conditions: Conference Proceedings of the Propulsion and Energetics Specialists’ Meeting (68th) Held in Munich, Germany on 8-9 September 1986.

    DTIC Science & Technology

    1987-03-01

    Recoverable contents from the garbled scan include "Unsteady Inlet Distortion Characteristics with the B-1B" by C. J. MacMiller and W. R. Haagenson, and a French contribution on the experimental determination of compressor transfer laws ("Determination experimentale des lois de transfert de ... compresseurs"). The French text notes that in this domain the engine manufacturer has three principal objectives, the first being to demonstrate inlet/engine compatibility before flight tests, and discusses the influence of distortion on aircraft engine design; a subsequent section covers calculation methods based on parallel compressor sectors ("Methodes des secteurs de compresseur en parallele").

  18. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abarbanel, Henry; Gill, Philip

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  19. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high-performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of the computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.

  20. The effect of tempering treatment on mechanical properties and microstructure for armored lateritic steel

    NASA Astrophysics Data System (ADS)

    Herbirowo, Satrio; Adjiantoro, Bintang; Romijarso, Toni Bambang; Pramono, Andika Widya

    2018-05-01

    High demand for armor materials has driven the use of lateritic steel as an alternative armor material; therefore, an increase in its mechanical properties is necessary. Quenching and tempering can be used to increase the mechanical properties of the lateritic steel. The variables used in this research are the quenching medium (water, oil, and air) and the tempering temperature (0, 100, and 200 °C). The results show that the specimen quenched in water and tempered at 100 °C has the highest average hardness (59.1 HRC) and tensile strength. The specimen quenched in oil and tempered at 100 °C has the highest impact toughness (52 J). Secondary hardening and tempered martensite embrittlement phenomena are found in some specimens, where hardness increased and impact toughness decreased after the tempering process. The microstructures formed in this process are martensite and retained austenite, with brittle fracture types.

  1. An investigation on high temperature fatigue properties of tempered nuclear-grade deposited weld metals

    NASA Astrophysics Data System (ADS)

    Cao, X. Y.; Zhu, P.; Yong, Q.; Liu, T. G.; Lu, Y. H.; Zhao, J. C.; Jiang, Y.; Shoji, T.

    2018-02-01

    The effect of tempering on the low cycle fatigue (LCF) behavior of a nuclear-grade deposited weld metal was investigated. The LCF tests were performed at 350 °C with strain amplitudes ranging from 0.2% to 0.6%. The results showed that at a low strain amplitude, the deposited weld metal tempered for 1 h had a high fatigue resistance due to its high yield strength, while at a high strain amplitude, the one tempered for 24 h had a superior fatigue resistance due to its high ductility. The deposited weld metal tempered for 1 h exhibited cyclic hardening at the tested strain amplitudes. The deposited weld metal tempered for 24 h exhibited cyclic hardening at a low strain amplitude but cyclic softening at a high strain amplitude. The existence and decomposition of martensite-austenite (M-A) islands, as well as dislocation activities, contributed to the fatigue property discrepancy between the two tempered deposited weld metals.

  2. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced by about a factor of ten on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes at the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), are used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
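    The sketch below illustrates the parallelization pattern described above for the Levenberg-Marquardt step, namely distributing one finite-difference Jacobian column per worker. It is our own Python analogue using a process pool rather than MPI, and the forward model and all names are placeholders, not HGC5 code.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def forward_model(params):
    # Placeholder for an expensive simulation run (e.g. a flow/transport model).
    x = np.linspace(0.0, 1.0, 50)
    return params[0] * np.exp(-params[1] * x) + params[2] * x

def _fd_column(args):
    # One worker perturbs one adjustable parameter (forward difference).
    params, j, h = args
    p = np.array(params, dtype=float)
    p[j] += h
    return (forward_model(p) - forward_model(params)) / h

def jacobian_parallel(params, h=1e-6, workers=4):
    """Distribute Jacobian columns across processes, mirroring the
    'one compute node per adjustable parameter' layout in the abstract."""
    tasks = [(params, j, h) for j in range(len(params))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        cols = list(pool.map(_fd_column, tasks))
    return np.column_stack(cols)

if __name__ == "__main__":
    J = jacobian_parallel(np.array([1.0, 3.0, 0.5]))
    print(J.shape)   # (n_observations, n_parameters)
```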

  3. Tempering characteristics of a vanadium containing dual phase steel

    NASA Astrophysics Data System (ADS)

    Rashid, M. S.; Rao, B. V. N.

    1982-10-01

    Dual phase steels are characterized by a microstructure consisting of ferrite, martensite, retained austenite, and/or lower bainite. This microstructure can be altered by tempering with accompanying changes in mechanical properties. This paper examines such changes produced in a vanadium bearing dual phase steel upon tempering below 500 °C. The steel mechanical properties were minimally affected on tempering below 200 °C; however, a simultaneous reduction in uniform elongation and tensile strength occurred upon tempering above 400 °C. The large amount of retained austenite (≅10 vol pct) observed in the as-received steel was found to be essentially stable to tempering below 300 °C. On tempering above 400 °C, most of the retained austenite decomposed to either upper bainite (at 400 °C) or a mixture of upper bainite and ferrite-carbide aggregate formed by an interphase precipitation mechanism (at 500 °C). In addition, tempering at 400 °C led to fine precipitation in the retained ferrite. The observed mechanical properties were correlated with these microstructural changes. It was concluded that the observed decrease in uniform elongation upon tempering above 400 °C is primarily the consequence of the decomposition of retained austenite and the resulting loss of transformation induced plasticity (TRIP) as a contributing mechanism to the strain hardening of the steel.

  4. Temperate macroalgae impacts tropical fish recruitment at forefronts of range expansion

    NASA Astrophysics Data System (ADS)

    Beck, H. J.; Feary, D. A.; Nakamura, Y.; Booth, D. J.

    2017-06-01

    Warming waters and changing ocean currents are increasing the supply of tropical fish larvae to temperate regions where they are exposed to novel habitats, namely temperate macroalgae and barren reefs. Here, we use underwater surveys on the temperate reefs of south-eastern (SE) Australia and western Japan (~33.5°N and S, respectively) to investigate how temperate macroalgal and non-macroalgal habitats influence recruitment success of a range of tropical fishes. We show that temperate macroalgae strongly affected recruitment of many tropical fish species in both regions and across three recruitment seasons in SE Australia. Densities and richness of recruiting tropical fishes, primarily planktivores and herbivores, were over seven times greater in non-macroalgal than macroalgal reef habitat. Species and trophic diversity (K-dominance) were also greater in non-macroalgal habitat. Temperate macroalgal cover was a stronger predictor of tropical fish assemblages than temperate fish assemblages, reef rugosities or wave exposure. Tropical fish richness, diversity and density were greater on barren reef than on reef dominated by turfing algae. One common species, the neon damselfish (Pomacentrus coelestis), chose non-macroalgal habitat over temperate macroalgae for settlement in an aquarium experiment. This study highlights that temperate macroalgae may partly account for spatial variation in recruitment success of many tropical fishes into higher latitudes. Hence, habitat composition of temperate reefs may need to be considered to accurately predict the geographic responses of many tropical fishes to climate change.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Gopa, E-mail: gopa_mjs@igcar.gov.in; Das, C.R.; Albert, S.K.

    Martensitic stainless steels find extensive applications due to their optimum combination of strength, hardness and wear resistance in the tempered condition. However, this class of steels is susceptible to embrittlement during tempering if it is carried out in a specific temperature range, resulting in a significant reduction in toughness. Embrittlement of as-normalised AISI 410 martensitic stainless steel, subjected to tempering treatment in the temperature range of 673–923 K, was studied using Charpy impact tests followed by metallurgical investigations using field emission scanning electron and transmission electron microscopes. Carbides precipitated during tempering were extracted by electrochemical dissolution of the matrix and identified by X-ray diffraction. Studies indicated that temper embrittlement is highest when the steel is tempered at 823 K. Mostly iron-rich carbides are present in the steel subjected to tempering at low temperatures of around 723 K, whereas chromium-rich carbides (M₂₃C₆) dominate precipitation at high-temperature tempering. The range 773–823 K is the transition temperature range for the precipitates, with both Fe₂C and M₂₃C₆ types of carbides coexisting in the material. The nucleation of Fe₂C within the martensite lath during low-temperature tempering has a definite role in the embrittlement of this steel. Embrittlement is not observed at high-temperature tempering because of precipitation of M₂₃C₆ carbides, instead of Fe₂C, preferentially along the lath and prior austenite boundaries. Segregation of S and P, which is widely reported as one of the causes of temper embrittlement, could not be detected in the material even through Auger electron spectroscopy studies. - Highlights: • Tempering behaviour of AISI 410 steel is studied within the 673–923 K temperature range. • The temperature regime of maximum embrittlement is identified as 773–848 K. • Results show that the type of carbide precipitation varies with tempering temperature. • Mostly iron-rich Fe₂C carbides are present in the embrittlement temperature range. • With the precipitation of M₂₃C₆ carbides, recovery from the embrittlement begins.

  6. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
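    As a minimal illustration of one common formulation of the adjoint Neumann-Ulam idea analyzed above (ours, not the authors' code), the sketch below estimates the solution of x = Hx + b by scoring random walks whose transition probabilities are built from |H|; the Neumann series, and hence the walks, converge only when the spectral radius of H is below one.

```python
import numpy as np

def adjoint_neumann_ulam(H, b, n_walks=20_000, w_cut=1e-6, seed=0):
    """Estimate x solving x = H x + b with adjoint random walks.
    Walks start from source entries of b and deposit weight into the estimate."""
    rng = np.random.default_rng(seed)
    n = len(b)
    absH = np.abs(H)
    col_sums = absH.sum(axis=0)          # adjoint walks move column j -> row i
    x = np.zeros(n)
    p_start = np.abs(b) / np.abs(b).sum()            # start states ~ |b|
    for _ in range(n_walks):
        j = rng.choice(n, p=p_start)
        w = b[j] / p_start[j]                        # unbiased starting weight
        x[j] += w
        while col_sums[j] > 0:
            p = absH[:, j] / col_sums[j]
            i = rng.choice(n, p=p)
            w *= H[i, j] / p[i]                      # importance-weighted step
            x[i] += w
            j = i
            if abs(w) < w_cut:                       # weight cutoff ends the walk
                break
    return x / n_walks

# Small test problem: A x = rhs rewritten as x = H x + b with H = I - A.
A = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0]])
rhs = np.array([1.0, 2.0, 3.0])
H, b = np.eye(3) - A, rhs
print(adjoint_neumann_ulam(H, b))        # Monte Carlo estimate
print(np.linalg.solve(A, rhs))           # deterministic reference
```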

  7. Event Generators for Simulating Heavy Ion Interactions of Interest in Evaluating Risks in Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Wilson, Thomas L.; Pinsky, Lawrence; Andersen, Victor; Empl, Anton; Lee, Kerry; Smirmov, Georgi; Zapp, Neal; Ferrari, Alfredo; Tsoulou, Katerina; Roesler, Stefan

    2005-01-01

    Simulating the Space Radiation environment with Monte Carlo codes, such as FLUKA, requires the ability to model the interactions of heavy ions as they penetrate spacecraft and crew members' bodies. Monte-Carlo-type transport codes use total interaction cross sections to determine probabilistically when a particular type of interaction has occurred. Then, at that point, a distinct event generator is employed to determine separately the results of that interaction. The space radiation environment contains a full spectrum of radiation types, including relativistic nuclei, which are the most important component for the evaluation of crew doses. Interactions between incident protons and target nuclei in the spacecraft materials and crew members' bodies are well understood. However, the situation is substantially less comfortable for incident heavier nuclei (heavy ions). We have been engaged in developing several related heavy ion interaction models based on a Quantum Molecular Dynamics-type approach for energies up through about 5 GeV per nucleon (GeV/A) as part of a NASA Consortium that includes a parallel program of cross section measurements to guide and verify this code development.

  8. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    PubMed

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
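    The short sketch below (ours, not from the paper) shows how the one-parameter Henyey-Greenstein phase function mentioned above is typically sampled in a Monte Carlo photon-transport code; for g approaching zero the sampling reduces to isotropic scattering.

```python
import random

def sample_henyey_greenstein(g, rng=random):
    """Draw a scattering-angle cosine from the Henyey-Greenstein phase function
    with asymmetry parameter g (g = 0 gives isotropic scattering)."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# Quick check: the sample mean of cos(theta) should approach g.
g = 0.9   # strongly forward-peaked, typical for large scatterers in tissue
samples = [sample_henyey_greenstein(g) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~0.9
```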

  9. Monte Carlo studies of thermalization of electron-hole pairs in spin-polarized degenerate electron gas in monolayer graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2018-02-01

    The Monte Carlo method is applied to the study of relaxation of excited electron-hole (e-h) pairs in graphene. The presence of a background of spin-polarized electrons, with a high density imposing degeneracy conditions, is assumed. Into such a system, a number of e-h pairs with spin polarization parallel or antiparallel to the background is injected. Two stages of relaxation, thermalization and cooling, are clearly distinguished when the average particle energy ⟨E⟩ and its standard deviation σ_E are examined. At the very beginning of the thermalization phase, holes lose energy to electrons, and after this process is substantially completed, the particle distributions reorganize to take a Fermi-Dirac shape. To describe the evolution of ⟨E⟩ and σ_E during thermalization, we define characteristic times τ_th and values at the end of thermalization E_th and σ_th. The dependence of these parameters on various conditions, such as temperature and background density, is presented. It is shown that among the considered parameters, only the standard deviation of the electron energy allows one to distinguish between different cases of relative spin polarization of the background and excited electrons.

  10. Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1976-01-01

    The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
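    The toy sketch below (ours) illustrates the Monte Carlo procedure described above: sample manufacturing and propellant variables for each motor of a pair from assumed tolerances, evaluate a thrust model, and accumulate statistics of the imbalance. The surrogate thrust model and the tolerance values are placeholders, not the paper's internal-ballistics model.

```python
import numpy as np

rng = np.random.default_rng(42)

def thrust(burn_rate, throat_area, prop_mass, t):
    """Very crude surrogate: constant thrust proportional to burn rate and
    throat area until the propellant is consumed. Stands in for a real
    internal-ballistics model."""
    burn_time = prop_mass / (100.0 * burn_rate)
    return np.where(t < burn_time, 2.0e6 * burn_rate * throat_area, 0.0)

def sample_motor():
    # Nominal values with illustrative manufacturing tolerances (1-sigma).
    return dict(burn_rate=rng.normal(1.00, 0.01),
                throat_area=rng.normal(1.00, 0.005),
                prop_mass=rng.normal(500.0, 2.0))

t = np.linspace(0.0, 6.0, 601)
max_imbalance = []
for _ in range(130):                       # 130 randomly selected motor pairs
    a, b = sample_motor(), sample_motor()
    imbalance = thrust(t=t, **a) - thrust(t=t, **b)
    max_imbalance.append(np.max(np.abs(imbalance)))

print("std of maximum thrust imbalance:", np.std(max_imbalance))
```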

  11. Bond Order Correlations in the 2D Hubbard Model

    NASA Astrophysics Data System (ADS)

    Moore, Conrad; Abu Asal, Sameer; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    We use the dynamical cluster approximation to study the bond correlations in the Hubbard model with next nearest neighbor (nnn) hopping to explore the region of the phase diagram where the Fermi liquid phase is separated from the pseudogap phase by the Lifshitz line at zero temperature. We implement the Hirsch-Fye cluster solver that has the advantage of providing direct access to the computation of the bond operators via the decoupling field. In the pseudogap phase, the parallel bond order susceptibility is shown to persist at zero temperature while it vanishes for the Fermi liquid phase which allows the shape of the Lifshitz line to be mapped as a function of filling and nnn hopping. Our cluster solver implements NVIDIA's CUDA language to accelerate the linear algebra of the Quantum Monte Carlo to help alleviate the sign problem by allowing for more Monte Carlo updates to be performed in a reasonable amount of computation time. Work supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  12. Development of accelerated Raman and fluorescent Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dumont, Alexander P.; Patil, Chetan

    2018-02-01

    Monte Carlo (MC) modeling of photon propagation in turbid media is an essential tool for understanding optical interactions between light and tissue. Insight gathered from the outputs of MC models assists in mapping between detected optical signals and bulk tissue optical properties, and as such has proven useful for inverse calculations of tissue composition and optimization of the design of optical probes. MC models of Raman scattering have previously been implemented without consideration of background autofluorescence, despite its presence in raw measurements. Modeling both Raman and fluorescence profiles at high spectral resolution requires a significant increase in computation, but is more appropriate for investigating issues such as detection limits. We present a new Raman-fluorescence MC model developed atop an existing GPU-parallelized MC framework that can run more than 300 times faster than CPU methods. The robust acceleration allows for the efficient production of both Raman and fluorescence outputs from the MC model. In addition, this model can handle arbitrary sample morphologies as well as excitation and collection geometries to more appropriately mimic experimental settings. We will present the model framework and initial results.

  13. An update on the analysis of the Princeton 19Ne beta asymmetry measurement

    NASA Astrophysics Data System (ADS)

    Combs, Dustin; Calaprice, Frank; Jones, Gordon; Pattie, Robert; Young, Albert

    2013-10-01

    We report on the progress of a new analysis of the 1994 19Ne beta asymmetry measurement conducted at Princeton University. In this experiment, a beam of 19Ne atoms was polarized with a Stern-Gerlach magnet and then entered a thin-walled mylar cell through a slit fabricated from a piece of microchannel plate. A pair of Si(Li) detectors at either end of the apparatus was aligned with the direction of spin polarization (one parallel and one anti-parallel to the spin of the 19Ne) and detected positrons from the decays. The difference in the rate in the two detectors was used to calculate the asymmetry. A new analysis procedure has been undertaken using the Monte Carlo package PENELOPE with the goal of determining the systematic uncertainty due to positrons scattering from the face of the detectors, which causes incorrect reconstruction of the initial direction of the positron momentum. This was a leading source of systematic uncertainty in the 1994 experiment.

  14. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  15. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    NASA Astrophysics Data System (ADS)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summary: Title of program: SMMP. Catalogue identifier: ADOJ_v3_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. Programming language used: FORTRAN, Python. No. of lines in distributed program, including test data, etc.: 52 105. No. of bytes in distributed program, including test data, etc.: 599 150. Distribution format: tar.gz. Computer: Platform independent. Operating system: OS independent. RAM: 2 Mbytes. Classification: 3. Does the new version supersede the previous version?: Yes. Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical as well as generalized ensembles. Reasons for new version: API changes and increased functionality. Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions: The consumed CPU time increases with the size of the protein molecule. Running time: Depends on the size of the simulated molecule.

  16. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease in computing time. With a single multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
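    The minimal sketch below (ours, not SyncroSim) shows why Monte Carlo replicates of a state-and-transition simulation are embarrassingly parallel: each replicate is an independent Markov-chain realization, so replicates can simply be farmed out to separate processes. The states and transition probabilities are invented for illustration.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Illustrative states and annual transition probabilities (rows sum to 1):
# 0 = sagebrush steppe, 1 = juniper-encroached, 2 = burned/early seral.
P = np.array([[0.93, 0.05, 0.02],
              [0.01, 0.96, 0.03],
              [0.60, 0.05, 0.35]])

def run_replicate(seed, n_cells=10_000, n_years=50):
    """One Monte Carlo replicate: every cell follows the transition matrix
    independently; returns the final fraction of juniper-encroached cells."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n_cells, dtype=int)
    for _ in range(n_years):
        u = rng.random(n_cells)
        cum = P[state].cumsum(axis=1)
        state = (u[:, None] > cum).sum(axis=1)
    return (state == 1).mean()

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:        # replicates run independently
        fractions = list(pool.map(run_replicate, range(20)))
    print(f"juniper fraction after 50 yr: {np.mean(fractions):.3f} "
          f"± {np.std(fractions):.3f}")
```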

  17. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, the higher SE is achieved by perturbing the conventional MD simulations with a MC structure-acceptance judgment, which is based on the coincidence degree of small angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded, on average, 0.83 Å and 1.73 Å in RMSD closer to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values are also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on a real protein system; the results showed that the SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
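    The schematic sketch below (ours) mimics the acceptance step described above: after a short MD segment, the candidate structure is accepted or rejected according to how much closer its computed SAXS profile is to the target profile. The MD propagator, SAXS calculator and all parameters are simplified placeholders, not the authors' implementation.

```python
import math, random
import numpy as np

def run_md_segment(structure):
    """Placeholder: advance the structure by a short MD segment."""
    return structure + np.random.normal(scale=0.05, size=structure.shape)

def saxs_profile(structure, q):
    """Placeholder Debye-formula-style SAXS intensity calculation."""
    d = np.linalg.norm(structure[:, None, :] - structure[None, :, :], axis=-1)
    qd = q[:, None, None] * d[None, :, :] + 1e-12
    return (np.sin(qd) / qd).sum(axis=(1, 2))

def discrepancy(profile, target):
    return float(np.mean((profile - target) ** 2))

def hybrid_md_mc(start, target_profile, q, n_cycles=200, beta=1.0, seed=0):
    random.seed(seed); np.random.seed(seed)
    current, score = start, discrepancy(saxs_profile(start, q), target_profile)
    for _ in range(n_cycles):
        trial = run_md_segment(current)
        trial_score = discrepancy(saxs_profile(trial, q), target_profile)
        # Metropolis-like judgment on SAXS coincidence, not on energy:
        if trial_score < score or random.random() < math.exp(-beta * (trial_score - score)):
            current, score = trial, trial_score
    return current, score

q = np.linspace(0.02, 0.5, 40)
target = np.random.rand(30, 3) * 20.0              # stand-in "target structure"
start = target + np.random.normal(scale=3.0, size=target.shape)
final, final_score = hybrid_md_mc(start, saxs_profile(target, q), q)
print("final SAXS discrepancy:", final_score)
```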

  18. Martian plate tectonics

    NASA Astrophysics Data System (ADS)

    Sleep, N. H.

    1994-03-01

    The northern lowlands of Mars have been produced by plate tectonics. Preexisting old thick highland crust was subducted, while seafloor spreading produced thin lowland crust during late Noachian and Early Hesperian time. In the preferred reconstruction, a breakup margin extended north of Cimmeria Terra between Daedalia Planum and Isidis Planitia where the highland-lowland transition is relatively simple. South dipping subduction occurred beneath Arabia Terra and east dipping subduction beneath Tharsis Montes and Tempe Terra. Lineations associated with Gordii Dorsum are attributed to ridge-parallel structures, while Phelegra Montes and Scandia Colles are interpreted as transfer-parallel structures or ridge-fault-fault triple junction tracks. Other than for these few features, there is little topographic roughness in the lowlands. Seafloor spreading, if it occurred, must have been relatively rapid. Quantitative estimates of spreading rate are obtained by considering the physics of seafloor spreading in the lower (approx. 0.4 g) gravity of Mars, the absence of vertical scarps from age differences across fracture zones, and the smooth axial topography. Crustal thickness at a given potential temperature in the mantle source region scales inversely with gravity. Thus, the velocity of the rough-smooth transition for axial topography also scales inversely with gravity. Plate reorganizations where young crust becomes difficult to subduct are another constraint on spreading age. Plate tectonics, if it occurred, dominated the thermal and stress history of the planet. A geochemical implication is that the lower gravity of Mars allows deeper hydrothermal circulation through cracks and hence more hydration of oceanic crust, so that more water is easily subducted than on the Earth. Age and structural relationships from photogeology as well as median wavelength gravity anomalies across the now dead breakup and subduction margins are the data most likely to test and modify hypotheses about Mars plate tectonics.

  19. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
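    The sketch below is not Zacros, but a naive rejection-free KMC loop on a square lattice showing where adsorbate lateral interactions enter: each elementary event's rate is rescaled by the local interaction energy of the current adlayer, here using only the pairwise-additive first-nearest-neighbor model that the abstract cautions against. Rate constants and interaction energies are illustrative.

```python
import math
import numpy as np

L, KB_T = 20, 0.05                 # lattice size, thermal energy (eV)
K_ADS, K_DES = 1.0, 0.5            # bare adsorption/desorption rate constants
EPS_NN = 0.10                      # repulsive 1st-nearest-neighbor interaction (eV)

rng = np.random.default_rng(0)
occ = np.zeros((L, L), dtype=int)  # 0 = empty site, 1 = adsorbate

def n_neighbors(i, j):
    return sum(occ[(i + di) % L, (j + dj) % L]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def site_rates(i, j):
    """Event rates at one site; desorption is accelerated by the lateral
    repulsion felt in the current adlayer configuration."""
    if occ[i, j] == 0:
        return {"ads": K_ADS}
    e_lat = EPS_NN * n_neighbors(i, j)
    return {"des": K_DES * math.exp(e_lat / KB_T)}

t = 0.0
for _ in range(5_000):             # standard rejection-free KMC loop (rebuilt each step)
    events = [(i, j, name, rate)
              for i in range(L) for j in range(L)
              for name, rate in site_rates(i, j).items()]
    rates = np.array([e[3] for e in events])
    total = rates.sum()
    i, j, name, _ = events[rng.choice(len(events), p=rates / total)]
    occ[i, j] = 1 if name == "ads" else 0
    t += -math.log(rng.random()) / total       # advance the KMC clock

print(f"coverage after t = {t:.3f}: {occ.mean():.3f}")
```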

  20. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations.

    PubMed

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, the higher SE is achieved by perturbing the conventional MD simulations with a MC structure-acceptance judgment, which is based on the coincidence degree of small angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded, on average, 0.83 Å and 1.73 Å in RMSD closer to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values are also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on a real protein system; the results showed that the SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.

  1. The tropicalization of temperate marine ecosystems: climate-mediated changes in herbivory and community phase shifts

    PubMed Central

    Vergés, Adriana; Steinberg, Peter D.; Hay, Mark E.; Poore, Alistair G. B.; Campbell, Alexandra H.; Ballesteros, Enric; Heck, Kenneth L.; Booth, David J.; Coleman, Melinda A.; Feary, David A.; Figueira, Will; Langlois, Tim; Marzinelli, Ezequiel M.; Mizerek, Toni; Mumby, Peter J.; Nakamura, Yohei; Roughan, Moninya; van Sebille, Erik; Gupta, Alex Sen; Smale, Dan A.; Tomas, Fiona; Wernberg, Thomas; Wilson, Shaun K.

    2014-01-01

    Climate-driven changes in biotic interactions can profoundly alter ecological communities, particularly when they impact foundation species. In marine systems, changes in herbivory and the consequent loss of dominant habitat forming species can result in dramatic community phase shifts, such as from coral to macroalgal dominance when tropical fish herbivory decreases, and from algal forests to ‘barrens’ when temperate urchin grazing increases. Here, we propose a novel phase-shift away from macroalgal dominance caused by tropical herbivores extending their range into temperate regions. We argue that this phase shift is facilitated by poleward-flowing boundary currents that are creating ocean warming hotspots around the globe, enabling the range expansion of tropical species and increasing their grazing rates in temperate areas. Overgrazing of temperate macroalgae by tropical herbivorous fishes has already occurred in Japan and the Mediterranean. Emerging evidence suggests similar phenomena are occurring in other temperate regions, with increasing occurrence of tropical fishes on temperate reefs. PMID:25009065

  2. Field characterization of elastic properties across a fault zone reactivated by fluid injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny

    In this paper, we studied the elastic properties of a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). Four controlled water injection experiments were performed in borehole straddle intervals set at successive locations across the fault zone. A three-component displacement sensor, which allowed capturing the borehole wall movements during injection, was used to estimate the elastic properties of representative locations across the fault zone, from the host rock to the damage zone to the fault core. Young's moduli were estimated by both an analytical approach and numerical finite difference modeling. Results show a decrease in Young's modulus from the host rock to the damage zone by a factor of 5 and from the damage zone to the fault core by a factor of 2. In the host rock, our results are in reasonable agreement with laboratory data showing a strong elastic anisotropy characterized by the direction of the plane of isotropy parallel to the laminar structure of the shale formation. In the fault zone, strong rotations of the direction of anisotropy can be observed. Finally, the plane of isotropy can be oriented parallel to bedding (when few discontinuities are present), parallel to the direction of the main fracture family intersecting the zone, or parallel or perpendicular to the fractures critically oriented for shear reactivation (where repeated past rupture along this plane has created a zone).

  3. Field characterization of elastic properties across a fault zone reactivated by fluid injection

    DOE PAGES

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny; ...

    2017-08-12

    In this paper, we studied the elastic properties of a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). Four controlled water injection experiments were performed in borehole straddle intervals set at successive locations across the fault zone. A three-component displacement sensor, which allowed capturing the borehole wall movements during injection, was used to estimate the elastic properties of representative locations across the fault zone, from the host rock to the damage zone to the fault core. Young's moduli were estimated by both an analytical approach and numerical finite difference modeling. Results show a decrease in Young's modulus from the host rock to the damage zone by a factor of 5 and from the damage zone to the fault core by a factor of 2. In the host rock, our results are in reasonable agreement with laboratory data showing a strong elastic anisotropy characterized by the direction of the plane of isotropy parallel to the laminar structure of the shale formation. In the fault zone, strong rotations of the direction of anisotropy can be observed. Finally, the plane of isotropy can be oriented parallel to bedding (when few discontinuities are present), parallel to the direction of the main fracture family intersecting the zone, or parallel or perpendicular to the fractures critically oriented for shear reactivation (where repeated past rupture along this plane has created a zone).

  4. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to show a new scintigraphic device able to change automatically the length of its collimator in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the need for the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool in searching for new technological solutions for such an innovative collimation structure. They also provide a valid analysis of the response of gamma camera performance, as well as of the advantages and limits of this new solution. Specifically, Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and the faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution value during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility of clearly distinguishing them.

  5. Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter

    NASA Technical Reports Server (NTRS)

    Dyudine, U. A.; Ingersoll, Andrew P.

    2002-01-01

    We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and are able to resolve the shape of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our 3-D Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a 3-D opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns are produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40-km horizontal flashes which are seen on Earth.

  6. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams.

    PubMed

    Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo

    2016-07-01

    The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the fluka code [A. Ferrari et al., "fluka: A multi-particle transport code," in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., "The fluka Code: Developments and challenges for high energy and medical applications," Nucl. Data Sheets 120, 211-214 (2014)], to partial fluence corrections measured experimentally. A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary particle fluence. A correction factor, F(d), has been established to relate fluence corrections defined theoretically to partial fluence corrections derived experimentally. The findings presented here are also relevant to water and tissue-equivalent-plastic materials given their carbon content.

  7. Properties of 5052 Aluminum For Use as Honeycomb Core in Manned Spaceflight

    NASA Technical Reports Server (NTRS)

    Lerch, Bradley A.

    2018-01-01

    This work explains that the properties of the Al 5052 material commonly used for honeycomb cores in sandwich panels are highly dependent on the tempering condition. It has not been common to specify the temper when ordering honeycomb core material, nor is it common for the supplier to state what the temper is. For aerospace uses, a temper of H38 or H39 is probably recommended. This temper should be stated in the bill of material and should be verified upon receipt of the core. To this end, some properties provided herein can serve as benchmark values.

  8. Free-energy analyses of a proton transfer reaction by simulated-tempering umbrella sampling and first-principles molecular dynamics simulations.

    PubMed

    Mori, Yoshiharu; Okamoto, Yuko

    2013-02-01

    A simulated tempering method for calculating the free energy of chemical reactions, referred to as simulated-tempering umbrella sampling, is proposed. First-principles molecular dynamics simulations with this simulated tempering were performed to study the intramolecular proton transfer reaction of malonaldehyde in aqueous solution. Conformational sampling in reaction-coordinate space can be easily enhanced with this method, and the free energy along a reaction coordinate can be calculated accurately. Moreover, simulated-tempering umbrella sampling provides trajectory data more efficiently than the conventional umbrella sampling method.
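    The toy sketch below (ours, on a 1-D model potential rather than first-principles MD) combines the two ingredients named above: Metropolis moves along a reaction coordinate under a harmonic umbrella bias, interleaved with simulated-tempering moves that let the inverse temperature itself jump along a ladder. The weights g would normally be adapted so all temperatures are visited; here they are left at zero as placeholders.

```python
import math, random

random.seed(1)

def potential(x):                      # model double-well "reaction" profile
    return (x * x - 1.0) ** 2

def umbrella(x, center, k=10.0):       # harmonic restraint on the reaction coordinate
    return 0.5 * k * (x - center) ** 2

betas = [1.0, 0.7, 0.5, 0.3]           # inverse-temperature ladder for simulated tempering
g = [0.0, 0.0, 0.0, 0.0]               # free-energy weights (unadapted placeholders)

def st_umbrella_sampling(center, n_steps=50_000, dx=0.2):
    x, m = center, 0                   # start at the umbrella center, coldest temperature
    samples = []
    for step in range(n_steps):
        # 1) Metropolis move in x under potential + umbrella bias.
        x_new = x + random.uniform(-dx, dx)
        dE = (potential(x_new) + umbrella(x_new, center)) - (potential(x) + umbrella(x, center))
        if dE <= 0 or random.random() < math.exp(-betas[m] * dE):
            x = x_new
        # 2) Simulated-tempering move: attempt a hop to a neighboring temperature.
        if step % 10 == 0:
            m_new = max(0, min(len(betas) - 1, m + random.choice((-1, 1))))
            E = potential(x) + umbrella(x, center)
            dln = -(betas[m_new] - betas[m]) * E + (g[m_new] - g[m])
            if dln >= 0 or random.random() < math.exp(dln):
                m = m_new
        if m == 0:                     # only coldest-temperature samples enter the histogram
            samples.append(x)
    return samples

# One umbrella window per reaction-coordinate value; WHAM/MBAR would combine them.
for c in (-1.0, -0.5, 0.0, 0.5, 1.0):
    s = st_umbrella_sampling(c)
    print(f"window {c:+.1f}: <x> = {sum(s)/len(s):+.3f} ({len(s)} cold samples)")
```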

  9. Net degradation of methyl mercury in alder swamps.

    PubMed

    Kronberg, Rose-Marie; Tjerngren, Ida; Drott, Andreas; Björn, Erik; Skyllberg, Ulf

    2012-12-18

    Wetlands are generally considered to be sources of methyl mercury (MeHg) in northern temperate landscapes. However, a recent input-output mass balance study during 2007-2010 revealed a black alder (Alnus glutinosa) swamp in southern Sweden to be a consistent and significant MeHg sink, with a 30-60% loss of MeHg. The soil pool of MeHg varied substantially between years, but it always decreased with distance from the stream inlet to the swamp. The soil MeHg pool was significantly lower in the downstream as compared to the upstream half of the swamp (0.66 and 1.34 ng MeHg g⁻¹ SOC⁻¹ annual average⁻¹, respectively, one-way ANOVA, p = 0.0006). In 2008 a significant decrease of %MeHg in soil was paralleled by a significant increase in potential demethylation rate constant (k(d), p < 0.02 and p < 0.004, respectively). In contrast, the potential methylation rate constant (k(m)) was unrelated to distance (p = 0.3). Our results suggest that MeHg was net degraded in the Alnus swamp, and that it had a rapid and dynamic internal turnover of MeHg. Snapshot stream input-output measurements at eight additional Alnus glutinosa swamps in southern Sweden indicate that Alnus swamps in general are sinks for MeHg. Our findings have implications for forestry practices and landscape planning, and suggest that restored or preserved Alnus swamps may be used to mitigate MeHg produced in northern temperate landscapes.

  10. Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs

    PubMed Central

    Hernández, Moisés; Guerrero, Ginés D.; Cecilia, José M.; García, José M.; Inuggi, Alberto; Jbabdi, Saad; Behrens, Timothy E. J.; Sotiropoulos, Stamatios N.

    2013-01-01

    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute simultaneously thousands of light-weight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity, non-invasively and in-vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude, when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation. PMID:23658616
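    The sketch below is not the FSL/CUDA implementation, but a small illustration of the parallelization pattern it exploits: every voxel runs its own independent Metropolis chain, so thousands of chains can be advanced in lockstep. The per-voxel posterior here is a stand-in Gaussian rather than the ball & stick model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-voxel log-posterior: independent Gaussians with voxel-specific
# means (in the real application this would be the ball & stick likelihood).
n_voxels = 100_000
mu = rng.normal(size=n_voxels)

def log_post(theta):
    return -0.5 * (theta - mu) ** 2

def metropolis_all_voxels(n_iter=2_000, step=0.8):
    """Advance one Metropolis chain per voxel, all voxels updated in lockstep.
    On a GPU each voxel maps to a thread; here NumPy vectorization plays that role."""
    theta = np.zeros(n_voxels)
    lp = log_post(theta)
    accepts = 0.0
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(n_voxels)
        lp_prop = log_post(prop)
        accept = np.log(rng.random(n_voxels)) < (lp_prop - lp)
        theta = np.where(accept, prop, theta)
        lp = np.where(accept, lp_prop, lp)
        accepts += accept.mean()
    print(f"mean acceptance rate: {accepts / n_iter:.2f}")
    return theta

samples = metropolis_all_voxels()
print("mean |draw - posterior mean|:", np.abs(samples - mu).mean())  # ~0.8 for unit-variance posteriors
```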

  11. The difference between temperate and tropical saltwater species' acute sensitivity to chemicals is relatively small.

    PubMed

    Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y

    2014-06-01

    Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection to tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study aims to compare temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10 percentile hazardous concentration (HC10) is derived from each SSD, and then a temperate-to-tropic HC10 ratio is computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivity towards all test chemicals except cadmium, although such differences are small with the HC10 ratios ranging from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol) only. Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe that there is a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
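    The small sketch below (ours) shows the kind of calculation behind the HC10 values compared above: fit a log-normal species sensitivity distribution to acute toxicity values for each region and take its 10th percentile. The toxicity numbers are made up for illustration only.

```python
import numpy as np
from scipy import stats

def hc10(toxicity_values):
    """10th-percentile hazardous concentration from a log-normal SSD fit."""
    log_lc50 = np.log10(toxicity_values)
    mu, sigma = log_lc50.mean(), log_lc50.std(ddof=1)
    return 10 ** stats.norm.ppf(0.10, loc=mu, scale=sigma)

# Hypothetical acute LC50/EC50 values (µg/L), one entry per species.
temperate = np.array([12.0, 35.0, 8.5, 60.0, 22.0, 95.0, 15.0, 40.0])
tropical  = np.array([18.0, 25.0, 11.0, 70.0, 30.0, 55.0, 9.0, 48.0])

ratio = hc10(temperate) / hc10(tropical)
print(f"HC10 temperate = {hc10(temperate):.2f}, tropical = {hc10(tropical):.2f}, "
      f"temperate-to-tropic ratio = {ratio:.2f}")
```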

  12. Assessing Earthquake-Induced Tree Mortality in Temperate Forest Ecosystems: A Case Study from Wenchuan, China

    DOE PAGES

    Zeng, Hongcheng; Lu, Tao; Jenkins, Hillary; ...

    2016-03-17

    Earthquakes can produce significant tree mortality, and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed in this study by a synthetic approach that integrated field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and by using a detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg·C was lost in the Wenchuan earthquake, which offset 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter and declined rapidly with distance away from the fault zone. It is suggested that earthquakes represent a significant driver of forest carbon dynamics, and that earthquake-induced biomass carbon loss should be included in estimating forest carbon budgets.

  13. Assessing Earthquake-Induced Tree Mortality in Temperate Forest Ecosystems: A Case Study from Wenchuan, China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Hongcheng; Lu, Tao; Jenkins, Hillary

    Earthquakes can produce significant tree mortality, and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed in this study by a synthetic approach that integrated field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and by using a detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg·C was lost in the Wenchuan earthquake, which offset 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter and declined rapidly with distance away from the fault zone. It is suggested that earthquakes represent a significant driver of forest carbon dynamics, and that earthquake-induced biomass carbon loss should be included in estimating forest carbon budgets.

  14. Forest ecosystems of temperate climatic regions: from ancient use to climate change.

    PubMed

    Gilliam, Frank S

    2016-12-01

    Humans have long utilized resources from all forest biomes, but the most indelible anthropogenic signature has been the expanse of human populations in temperate forests. The purpose of this review is to bring into focus the diverse forests of the temperate region of the biosphere, including those of hardwood, conifer and mixed dominance, with a particular emphasis on crucial challenges for the future of these forested areas. Implicit in the term 'temperate' is that the predominant climate of these forest regions has distinct cyclic, seasonal changes involving periods of growth and dormancy. The specific temporal patterns of seasonal change, however, display an impressive variability among temperate forest regions. In addition to the more apparent current anthropogenic disturbances of temperate forests, such as forest management and conversion to agriculture, human alteration of temperate forests is actually an ancient phenomenon, going as far back as 7000 yr before present (bp). As deep-seated as these past legacies are for temperate forests, all current and future perturbations, including timber harvesting, excess nitrogen deposition, altered species' phenologies, and increasing frequency of drought and fire, must be viewed through the lens of climate change. © 2016 The Author. New Phytologist © 2016 New Phytologist Trust.

  15. Benthic Crustacea from tropical and temperate reef locations: differences in assemblages and their relationship with habitat structure

    NASA Astrophysics Data System (ADS)

    Kramer, Michael J.; Bellwood, David R.; Taylor, Richard B.; Bellwood, Orpha

    2017-09-01

    Tropical and temperate marine habitats have long been recognised as fundamentally different systems, yet comparative studies are rare, particularly for small organisms such as Crustacea. This study investigates the ecological attributes (abundance, biomass and estimated productivity) of benthic Crustacea in selected microhabitats from a tropical and a temperate location, revealing marked differences in the crustacean assemblages. In general, microhabitats from the tropical location (dead coral, the epilithic algal matrix [algal turfs] and sand) supported high abundances of small individuals (mean length = 0.53 mm vs. 0.96 mm in temperate microhabitats), while temperate microhabitats (the brown seaweed Carpophyllum sp., coralline turf and sand) had substantially greater biomasses of crustaceans and higher estimated productivity rates. In both locations, the most important microhabitats for crustaceans (per unit area) were complex structures: tropical dead coral and temperate Carpophyllum sp. It appears that the differences between microhabitats are largely driven by the size and relative abundance of key crustacean groups. Temperate microhabitats have a higher proportion of relatively large Peracarida (Amphipoda and Isopoda), whereas tropical microhabitats are dominated by small detrital- and microalgal-feeding crustaceans (harpacticoid copepods and ostracods). These differences highlight the vulnerability of tropical and temperate systems to the loss of complex benthic structures and their associated crustacean assemblages.

  16. Dynamic free energy surfaces for sodium diffusion in type II silicon clathrates.

    PubMed

    Slingsby, J G; Rorrer, N A; Krishna, L; Toberer, E S; Koh, C A; Maupin, C M

    2016-02-21

    Earth abundant semiconducting type II Si clathrates have attracted attention as photovoltaic materials due to their wide band gaps. To realize the semiconducting properties of these materials, guest species that arise during the synthesis process must be completely evacuated from the host cage structure post synthesis. A common guest species utilized in the synthesis of Si clathrates is Na (metal), which templates the clathrate cage formation. Previous experimental investigations have identified that it is possible to evacuate Na from type II clathrates to an occupancy of less than 1 Na per unit cell. This work investigates the energetics, kinetics, and resulting mechanism of Na diffusion through type II Si clathrates by means of biased molecular dynamics and kinetic Monte Carlo simulations. Well-tempered metadynamics has been used to determine the potential of mean force for Na moving between clathrate cages, from which the thermodynamic preferences and transition barrier heights have been obtained. Kinetic Monte Carlo simulations based on the metadynamics results have identified the mechanism of Na diffusion in type II Si clathrates. The overall mechanism consists of a coupled diffusive process linked via electrostatic guest-guest interactions. The large occupied hexakaidecahedral cages initially empty their Na guests to adjacent empty large cages, thereby changing the local electrostatic environment around the occupied small pentagonal dodecahedral cages and increasing the probability of Na guests to leave the small cages. This coupled process continues through the cross-over point that is identified as the point where large and small cages are equally occupied by Na guests. Further Na removal results in the majority of guests residing in the large cages as opposed to the small cages, in agreement with experiments, and ultimately a Na-free structure.
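
    As a rough illustration of how metadynamics barriers can feed a kinetic Monte Carlo model of cage-to-cage hopping, the following rejection-free KMC step uses Arrhenius rates; the temperature, attempt frequency and barrier values are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal rejection-free kinetic Monte Carlo step (BKL/Gillespie style),
# illustrating how barrier heights could feed a KMC model of cage-to-cage
# hopping. All numerical values are hypothetical placeholders.
kB = 8.617333e-5                           # Boltzmann constant in eV/K
T = 700.0                                  # temperature in K (illustrative)
nu0 = 1.0e13                               # attempt frequency in 1/s (illustrative)
barriers_eV = np.array([0.8, 1.1, 0.95])   # barriers of the hops available from a site

def kmc_step(barriers_eV, rng):
    rates = nu0 * np.exp(-barriers_eV / (kB * T))
    total = rates.sum()
    hop = rng.choice(len(rates), p=rates / total)  # pick a hop proportionally to its rate
    dt = rng.exponential(1.0 / total)              # exponentially distributed residence time
    return hop, dt

rng = np.random.default_rng(1)
print(kmc_step(barriers_eV, rng))
```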

  17. Beyond habitat structure: Landscape heterogeneity explains the monito del monte (Dromiciops gliroides) occurrence and behavior at habitats dominated by exotic trees.

    PubMed

    Salazar, Daniela A; Fontúrbel, Francisco E

    2016-09-01

    Habitat structure determines species occurrence and behavior. However, human activities are altering natural habitat structure, potentially hampering native species due to the loss of nesting cavities, shelter or movement pathways. The South American temperate rainforest is experiencing an accelerated loss and degradation, compromising the persistence of many native species, and particularly of the monito del monte (Dromiciops gliroides Thomas, 1894), an arboreal marsupial that plays a key role as seed disperser. Aiming to compare 2 contrasting habitats (a native forest and a transformed habitat composed of abandoned Eucalyptus plantations and native understory vegetation), we assessed D. gliroides' occurrence using camera traps and measured several structural features (e.g. shrub and bamboo cover, deadwood presence, moss abundance) at 100 camera locations. Complementarily, we used radio telemetry to assess its spatial ecology, aiming to depict a more complete scenario. Moss abundance was the only significant variable explaining D. gliroides occurrence between habitats, and no structural variable explained its occurrence at the transformed habitat. There were no differences in home range, core area or inter-individual overlapping. In the transformed habitats, tracked individuals used native and Eucalyptus-associated vegetation types according to their abundance. Diurnal locations (and, hence, nesting sites) were located exclusively in native vegetation. The landscape heterogeneity resulting from the vicinity of native and Eucalyptus-associated vegetation likely explains D. gliroides occurrence better than the habitat structure itself, as it may use Eucalyptus-associated vegetation for feeding purposes but depend on native vegetation for nesting. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  18. Diffraction Theory and Almost Periodic Distributions

    NASA Astrophysics Data System (ADS)

    Strungaru, Nicolae; Terauds, Venta

    2016-09-01

    We introduce and study the notions of translation bounded tempered distributions, and autocorrelation for a tempered distribution. We further introduce the spaces of weakly, strongly and null weakly almost periodic tempered distributions and show that for weakly almost periodic tempered distributions the Eberlein decomposition holds. For translation bounded measures all these notions coincide with the classical ones. We show that tempered distributions with measure Fourier transform are weakly almost periodic and that for this class, the Eberlein decomposition is exactly the Fourier dual of the Lebesgue decomposition, with the Fourier-Bohr coefficients specifying the pure point part of the Fourier transform. We complete the project by looking at a few interesting examples.
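
    For orientation, the Eberlein decomposition invoked here has the following standard form in mathematical diffraction theory; the notation below is generic and illustrative rather than quoted from the paper.

```latex
% Standard form of the Eberlein decomposition as used in mathematical
% diffraction theory (generic notation, not verbatim from the paper):
% a weakly almost periodic autocorrelation \gamma splits uniquely as
\gamma \;=\; \gamma_{\mathrm{s}} \;+\; \gamma_{0},
\qquad
\widehat{\gamma_{\mathrm{s}}} \;=\; \bigl(\widehat{\gamma}\bigr)_{\mathrm{pp}},
\qquad
\widehat{\gamma_{0}} \;=\; \bigl(\widehat{\gamma}\bigr)_{\mathrm{c}},
% where \gamma_{\mathrm{s}} is the strongly almost periodic part (pure point,
% i.e. Bragg, diffraction) and \gamma_{0} is the null weakly almost periodic
% part (continuous diffraction).
```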

  19. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators (such as RANLUX, RANECU or the Mersenne Twister) can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, makes it possible to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code. Program summary 1 Title of program:clonEasy Catalogue identifier:ADYD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1) Compilers:GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux) Programming language used:Linux shell (bash) script, FORTRAN 77 No. of bits in a word:32 No. of lines in distributed program, including test data, etc.:1916 No. of bytes in distributed program, including test data, etc.:18 202 Distribution format:tar.gz Nature of the physical problem:There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution:The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds.
Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program:clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time:The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program:Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2 Title of program:seedsMLCG Catalogue identifier:ADYE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0 Program obtainable from:CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable:Any computer with a FORTRAN compiler Operating systems under which the program has been tested:Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP) Compilers:GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows) Programming language used:FORTRAN 77 No. of bits in a word:32 Memory required to execute with typical data:500 kilobytes No. of lines in distributed program, including test data, etc.:492 No. of bytes in distributed program, including test data, etc.:5582 Distribution format:tar.gz Nature of the physical problem:Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution:For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S(i+J) = (a^J S(i)) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program:The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31, when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time:The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
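
    The seed-splitting idea behind seedsMLCG reduces to evaluating the jump-ahead relation with modular exponentiation. The sketch below is a minimal illustration, not the seedsMLCG program itself; the multiplier and modulus are L'Ecuyer-style constants commonly quoted for one of the RANECU MLCGs and should be checked against the generator actually in use.

```python
# Minimal sketch of the seed-splitting idea (not the seedsMLCG program itself):
# disjoint starting seeds for a multiplicative linear congruential generator
# (MLCG) obtained from the jump-ahead relation S(i+J) = (a^J * S(i)) mod m.

def disjoint_seeds(a, m, s0, jump, n_streams):
    """Return n_streams seeds of the MLCG separated by `jump` draws."""
    a_jump = pow(a, jump, m)        # a^J mod m via fast modular exponentiation
    seeds, s = [], s0
    for _ in range(n_streams):
        seeds.append(s)
        s = (a_jump * s) % m        # advance the state by `jump` positions
    return seeds

# Illustrative constants in the style of L'Ecuyer's combined generator; check
# them against the MLCG actually used before relying on the output.
print(disjoint_seeds(a=40014, m=2147483563, s0=12345, jump=10**12, n_streams=4))
```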

  20. Tempered glass and thermal shock of ceramic materials

    NASA Technical Reports Server (NTRS)

    Bunnell, L. Roy

    1992-01-01

    A laboratory experiment is described that shows students the differences in strength and fracture toughness between tempered and untempered glass. This paper also describes how glass is tempered and the materials science aspects of the process.

  1. The tropicalization of temperate marine ecosystems: climate-mediated changes in herbivory and community phase shifts.

    PubMed

    Vergés, Adriana; Steinberg, Peter D; Hay, Mark E; Poore, Alistair G B; Campbell, Alexandra H; Ballesteros, Enric; Heck, Kenneth L; Booth, David J; Coleman, Melinda A; Feary, David A; Figueira, Will; Langlois, Tim; Marzinelli, Ezequiel M; Mizerek, Toni; Mumby, Peter J; Nakamura, Yohei; Roughan, Moninya; van Sebille, Erik; Gupta, Alex Sen; Smale, Dan A; Tomas, Fiona; Wernberg, Thomas; Wilson, Shaun K

    2014-08-22

    Climate-driven changes in biotic interactions can profoundly alter ecological communities, particularly when they impact foundation species. In marine systems, changes in herbivory and the consequent loss of dominant habitat forming species can result in dramatic community phase shifts, such as from coral to macroalgal dominance when tropical fish herbivory decreases, and from algal forests to 'barrens' when temperate urchin grazing increases. Here, we propose a novel phase-shift away from macroalgal dominance caused by tropical herbivores extending their range into temperate regions. We argue that this phase shift is facilitated by poleward-flowing boundary currents that are creating ocean warming hotspots around the globe, enabling the range expansion of tropical species and increasing their grazing rates in temperate areas. Overgrazing of temperate macroalgae by tropical herbivorous fishes has already occurred in Japan and the Mediterranean. Emerging evidence suggests similar phenomena are occurring in other temperate regions, with increasing occurrence of tropical fishes on temperate reefs. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Wear Characteristics and Mechanisms of H13 Steel with Various Tempered Structures

    NASA Astrophysics Data System (ADS)

    Cui, X. H.; Wang, S. Q.; Wei, M. X.; Yang, Z. R.

    2011-08-01

    Wear tests of H13 steel with various tempering microstructures were performed under atmospheric conditions at room temperature (RT), 200 °C, and 400 °C. The wear characteristics and wear mechanisms of the various tempered microstructures of the steel were examined by investigating the structure, morphology, and composition of the worn surfaces. Under atmospheric conditions at RT, 200 °C, and 400 °C, adhesive wear, mild oxidation wear, and oxidation wear prevailed, respectively. The wear rate at 200 °C was substantially lower than those at RT and 400 °C due to the protection of tribo-oxides. In mild oxidation wear, the tempered microstructures of the steel presented almost no obvious influence on the wear resistance. However, in adhesive wear and oxidation wear, the wear resistance strongly depended on the tempered microstructures of the steel. The steel tempered at 600-650 °C presented markedly lower wear rates than the one tempered at 200-550 or 700 °C. It is suggested that the wear resistance of the steel was closely related to its fracture resistance.

  3. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work. Program summaryProgram title:ms2 Catalogue identifier: AEJF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes. Message Passing Interface (MPI) protocol Scalability. Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM:ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided. 
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
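
    One of the techniques ms2 implements, Widom's test molecule method, amounts to averaging Boltzmann factors over random trial insertions. The following single-site Lennard-Jones sketch (reduced units, no cutoff or tail corrections) is an illustration of the method only, not ms2 code.

```python
import numpy as np

def widom_mu_excess(positions, box, beta, n_insert=1000, eps=1.0, sigma=1.0, rng=None):
    """Excess chemical potential of a Lennard-Jones fluid by Widom insertion:
    mu_ex = -kT * ln < exp(-beta * dU) >, averaged over random test insertions.
    Illustrative sketch: single-site molecules, cubic box, no cutoff."""
    rng = np.random.default_rng() if rng is None else rng
    boltz = []
    for _ in range(n_insert):
        trial = rng.uniform(0.0, box, size=3)
        d = positions - trial
        d -= box * np.round(d / box)             # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        sr6 = (sigma**2 / r2) ** 3
        du = np.sum(4.0 * eps * (sr6**2 - sr6))  # LJ energy of the test particle
        boltz.append(np.exp(-beta * du))
    return -np.log(np.mean(boltz)) / beta

rng = np.random.default_rng(5)
box = 8.0
pos = rng.uniform(0.0, box, size=(200, 3))       # toy configuration, not an equilibrated fluid
print(widom_mu_excess(pos, box, beta=1.0 / 1.5, rng=rng))
```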

  4. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  5. MOST: A Powerful Tool to Reveal the True Nature of the Mysterious Dust-Forming Wolf-Rayet Binary CV Ser

    NASA Astrophysics Data System (ADS)

    David-Uraz, A.; Moffat, A. F. J.; Chené, A.-N.; MOST Collaboration

    2012-12-01

    The WR + O binary CV Ser has been a source of mystery since it was shown that its atmospheric eclipses change with time over decades, in addition to its sporadic dust production. However, the first high-precision time-dependent photometric observations obtained with the MOST space telescope in 2009 show two consecutive eclipses over the 29 day orbit, with varying depths. A subsequent MOST run in 2010 showed a somewhat asymmetric eclipse profile. Parallel optical spectroscopy was obtained from the Observatoire du Mont-Mégantic (2009 and 2010) and from the Dominion Astrophysical Observatory (2009).

  6. Using the orbiting companion to trace WR wind structures in the 29d WC8d + O8-9IV binary CV Ser

    NASA Astrophysics Data System (ADS)

    David-Uraz, Alexandre; Moffat, Anthony F. J.

    2011-07-01

    We have used continuous, high-precision, broadband visible photometry from the MOST satellite to trace wind structures in the WR component of CV Ser over more than a full orbit. Most of the small-scale light-curve variations are likely due to extinction by clumps along the line of sight to the O companion as it orbits and shines through varying columns of the WR wind. Parallel optical spectroscopy from the Mont Megantic Observatory is used to refine the orbital and wind-collision parameters, as well as to reveal line emission from clumps.

  7. Tracing WR wind structures by using the orbiting companion in the 29d WC8d + O8-9IV binary CV Ser

    NASA Astrophysics Data System (ADS)

    David-Uraz, Alexandre; Moffat, Anthony F. J.; Chené, André Nicolas; Lange, Nicholas

    2011-01-01

    We have obtained continuous, high-precision, broadband visible photometry from the MOST satellite of CV Ser over more than a full orbit in order to link the small-scale light-curve variations to extinction due to wind structures in the WR component, thus permitting us to trace these structures. The light-curve presented unexpected characteristics, in particular eclipses with a varying depth. Parallel optical spectroscopy from the Mont Megantic Observatory and Dominion Astrophysical Observatory was obtained to refine the orbital and wind-collision parameters, as well as to reveal line emission from clumps.

  8. ORBS: A reduction software for SITELLE and SpiOMM data

    NASA Astrophysics Data System (ADS)

    Martin, Thomas

    2014-09-01

    ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is fully automatic data reduction software for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube which samples a 12 arc-minute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed to be used in a suite of software packages for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).

  9. Fast Photon Monte Carlo for Water Cherenkov Detectors

    NASA Astrophysics Data System (ADS)

    Latorre, Anthony; Seibert, Stanley

    2012-03-01

    We present Chroma, a high performance optical photon simulation for large particle physics detectors, such as the water Cherenkov far detector option for LBNE. This software takes advantage of the CUDA parallel computing platform to propagate photons using modern graphics processing units. In a computer model of a 200 kiloton water Cherenkov detector with 29,000 photomultiplier tubes, Chroma can propagate 2.5 million photons per second, around 200 times faster than the same simulation with Geant4. Chroma uses a surface-based approach to modeling geometry, which offers many benefits over the solid-based modeling approach used in other simulations such as Geant4.

  10. Charge order-superfluidity transition in a two-dimensional system of hard-core bosons and emerging domain structures

    NASA Astrophysics Data System (ADS)

    Moskvin, A. S.; Panov, Yu. D.; Rybakov, F. N.; Borisov, A. B.

    2017-11-01

    We have used high-performance parallel computations on NVIDIA graphics cards, applying the method of nonlinear conjugate gradients and the Monte Carlo method, to observe directly the developing ground state configuration of a two-dimensional hard-core boson system with decreasing temperature, and its evolution with deviation from half-filling. This has allowed us to explore unconventional features of a charge order-superfluidity phase transition, specifically, the formation of an irregular domain structure, the emergence of a filamentary superfluid structure that condenses within the antiphase domain boundaries of the charge-ordered phase, and the formation and evolution of various topological structures.

  11. Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation

    PubMed Central

    Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.

    2012-01-01

    Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. PMID:22385859

  12. Exploring the energy landscapes of protein folding simulations with Bayesian computation.

    PubMed

    Burkoff, Nikolas S; Várnai, Csilla; Wells, Stephen A; Wild, David L

    2012-02-22

    Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
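
    A toy, serial version of the nested sampling loop described here can be written in a few lines; the sketch below uses a uniform prior, a Gaussian likelihood and naive rejection sampling to replace live points, and is only an illustration of the algorithm, not the authors' parallel implementation.

```python
import numpy as np

# Toy nested-sampling sketch: uniform prior on [-5, 5]^2, standard Gaussian
# likelihood, new live points drawn by simple rejection above the threshold.
rng = np.random.default_rng(2)
ndim, nlive, niter = 2, 100, 600

def log_like(x):
    return -0.5 * np.sum(x**2) - ndim * 0.5 * np.log(2 * np.pi)

live = rng.uniform(-5, 5, size=(nlive, ndim))
live_logl = np.array([log_like(x) for x in live])

log_z = -np.inf            # running log-evidence
log_x_prev = 0.0           # log prior volume remaining, starts at log(1)
for i in range(1, niter + 1):
    worst = int(np.argmin(live_logl))
    log_x = -i / nlive                                   # expected log shrinkage
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))   # prior-volume slab width
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    log_x_prev = log_x
    threshold = live_logl[worst]
    # replace the worst live point by a prior draw with higher likelihood
    while True:
        cand = rng.uniform(-5, 5, size=ndim)
        if log_like(cand) > threshold:
            live[worst], live_logl[worst] = cand, log_like(cand)
            break

# add the contribution of the remaining live points
log_z = np.logaddexp(log_z, log_x_prev + np.log(np.mean(np.exp(live_logl))))
print(f"log-evidence estimate: {log_z:.2f} (exact for this toy problem: {np.log(1.0/100.0):.2f})")
```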

  13. CUDA programs for the GPU computing of the Swendsen-Wang multi-cluster spin flip algorithm: 2D and 3D Ising, Potts, and XY models

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2014-03-01

    We present sample CUDA programs for the GPU computing of the Swendsen-Wang multi-cluster spin flip algorithm. We deal with the classical spin models; the Ising model, the q-state Potts model, and the classical XY model. As for the lattice, both the 2D (square) lattice and the 3D (simple cubic) lattice are treated. We already reported the idea of the GPU implementation for 2D models (Komura and Okabe, 2012). We here explain the details of sample programs, and discuss the performance of the present GPU implementation for the 3D Ising and XY models. We also show the calculated results of the moment ratio for these models, and discuss phase transitions. Catalogue identifier: AERM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5632 No. of bytes in distributed program, including test data, etc.: 14688 Distribution format: tar.gz Programming language: C, CUDA. Computer: System with an NVIDIA CUDA enabled GPU. Operating system: System with an NVIDIA CUDA enabled GPU. Classification: 23. External routines: NVIDIA CUDA Toolkit 3.0 or newer Nature of problem: Monte Carlo simulation of classical spin systems. Ising, q-state Potts model, and the classical XY model are treated for both two-dimensional and three-dimensional lattices. Solution method: GPU-based Swendsen-Wang multi-cluster spin flip Monte Carlo method. The CUDA implementation for the cluster-labeling is based on the work by Hawick et al. [1] and that by Kalentev et al. [2]. Restrictions: The system size is limited depending on the memory of a GPU. Running time: For the parameters used in the sample programs, it takes about a minute for each program. Of course, it depends on the system size, the number of Monte Carlo steps, etc. References: [1] K.A. Hawick, A. Leist, and D. P. Playne, Parallel Computing 36 (2010) 655-678 [2] O. Kalentev, A. Rai, S. Kemnitzb, and R. Schneider, J. Parallel Distrib. Comput. 71 (2011) 615-620
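
    For readers unfamiliar with the underlying update, here is a serial, pure-Python analogue of one Swendsen-Wang multi-cluster spin flip for the 2D Ising model; it illustrates only the bond-activation and cluster-flip logic and does not reproduce the CUDA cluster-labeling scheme of the distributed programs.

```python
import numpy as np

def swendsen_wang_step(spins, beta, J=1.0, rng=None):
    """One Swendsen-Wang multi-cluster update of a 2D Ising lattice with
    periodic boundaries (serial illustration, not the CUDA implementation)."""
    rng = np.random.default_rng() if rng is None else rng
    L = spins.shape[0]
    parent = np.arange(L * L)                 # union-find forest over sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    p_bond = 1.0 - np.exp(-2.0 * beta * J)    # bond-activation probability
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in ((1, 0), (0, 1)):   # right and down neighbours
                xn, yn = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[xn, yn] and rng.random() < p_bond:
                    ri, rj = find(i), find(xn * L + yn)
                    if ri != rj:
                        parent[rj] = ri       # join the two clusters

    flip = {}                                 # each cluster flips with prob 1/2
    for x in range(L):
        for y in range(L):
            root = find(x * L + y)
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                spins[x, y] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(100):
    swendsen_wang_step(spins, beta=0.4407, rng=rng)   # near the 2D critical point
print("magnetization per spin:", spins.mean())
```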

  14. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photo absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom (177)Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of (177)Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
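
    The OSEM update at the core of such reconstructions has a compact multiplicative form. The sketch below uses a dense, generic system matrix purely for illustration; in the Monte Carlo-based reconstruction described above, the forward and back projections would instead be produced by the GPU photon-transport simulation.

```python
import numpy as np

def osem(A, y, n_iter=5, n_subsets=4, eps=1e-12):
    """Minimal ordered-subset expectation maximization with a dense, generic
    system matrix A (projections = A @ image). In an MC-based reconstruction,
    A @ x and A.T @ ratio would be replaced by simulated forward and back
    projections."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                    # uniform initial image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            ratio = y[idx] / (As @ x + eps)               # measured / estimated
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # multiplicative update
    return x

rng = np.random.default_rng(3)
A = rng.random((64, 16))                                  # toy system matrix
truth = rng.random(16)
y = rng.poisson(A @ truth * 50) / 50.0                    # noisy toy projections
print(np.round(osem(A, y), 2))
```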

  15. Rapid Thermal Processing to Enhance Steel Toughness.

    PubMed

    Judge, V K; Speer, J G; Clarke, K D; Findley, K O; Clarke, A J

    2018-01-11

    Quenching and Tempering (Q&T) has been utilized for decades to alter steel mechanical properties, particularly strength and toughness. While tempering typically increases toughness, a well-established phenomenon called tempered martensite embrittlement (TME) is known to occur during conventional Q&T. Here we show that short-time, rapid tempering can overcome TME to produce unprecedented property combinations that cannot be attained by conventional Q&T. Toughness is enhanced by over 43% at a strength level of 1.7 GPa and strength is improved by over 0.5 GPa at an impact toughness of 30 J. We also show that hardness and the tempering parameter (TP), developed by Hollomon and Jaffe in 1945 and ubiquitous within the field, are insufficient for characterizing measured strengths, toughnesses, and microstructural conditions after rapid processing. Rapid tempering by energy-saving manufacturing processes like induction heating creates the opportunity for new Q&T steels for energy, defense, and transportation applications.
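
    For reference, the Hollomon-Jaffe tempering parameter mentioned here is commonly evaluated as TP = T(C + log10 t), with T in kelvin and t in hours; conventions differ in scaling and in the constant C. The snippet below is a generic sketch with an illustrative C = 20, not a value taken from this paper.

```python
import math

def hollomon_jaffe(T_celsius, t_hours, C=20.0):
    """Hollomon-Jaffe tempering parameter TP = T * (C + log10(t)), T in kelvin,
    t in hours. C = 20 is a commonly used, material-dependent constant and is
    illustrative here, not a value from the paper."""
    return (T_celsius + 273.15) * (C + math.log10(t_hours))

# conventional one-hour temper vs. a short, rapid 10-second temper
print(round(hollomon_jaffe(500.0, 1.0)))
print(round(hollomon_jaffe(600.0, 10.0 / 3600.0)))
```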

  16. Northern Hemisphere Biome-and Process-Specific Changes in Forest Area and Gross Merchantable Volume: 1890-1990 (DB1017)

    DOE Data Explorer

    Auclair, A.N.D. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Bedford, J.A. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Revenga, C. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Brenkert, A.L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    1997-01-01

    This database lists annual changes in areal extent (Ha) and gross merchantable wood volume (m3) produced by depletion and accrual processes in boreal and temperate forests in Alaska, Canada, Europe, Former Soviet Union, Non-Soviet temperate Asia, and the contiguous United States for the years 1890 through 1990. Forest depletions (source terms for atmospheric CO2) are identified as forest pests, forest dieback, forest fires, forest harvest, and land-use changes (predominantly the conversion of forest, temperate woodland, and shrubland to cropland). Forest accruals (sink terms for atmospheric CO2) are identified as fire exclusion, fire suppression, and afforestation or crop abandonment. The changes in areal extent and gross merchantable wood volume are calculated separately for each of the following biomes: forest tundra, boreal softwoods, mixed hardwoods, temperate softwoods, temperate hardwoods, and temperate wood- and shrublands.

  17. Organic and inorganic nitrogen uptake by 21 dominant tree species in temperate and tropical forests.

    PubMed

    Liu, Min; Li, Changcheng; Xu, Xingliang; Wanek, Wolfgang; Jiang, Ning; Wang, Huimin; Yang, Xiaodong

    2017-11-01

    Evidence shows that many tree species can take up organic nitrogen (N) in the form of free amino acids from soils, but few studies have been conducted to compare organic and inorganic N uptake patterns in temperate and tropical tree species in relation to mycorrhizal status and successional state. We labeled intact tree roots by brief 15N exposures using field hydroponic experiments in a temperate forest and a tropical forest in China. A total of 21 dominant tree species were investigated, 8 in the temperate forest and 13 in the tropical forest. All investigated tree species showed highest uptake rates for NH4+ (ammonium), followed by glycine and NO3- (nitrate). Uptake of NH4+ by temperate trees averaged 12.8 μg N g-1 dry weight (d.w.) root h-1, while those by tropical trees averaged 6.8 μg N g-1 d.w. root h-1. Glycine uptake rates averaged 3.1 μg N g-1 d.w. root h-1 for temperate trees and 2.4 μg N g-1 d.w. root h-1 for tropical trees. NO3- uptake was the lowest (averaging 0.8 μg N g-1 d.w. root h-1 for temperate trees and 1.2 μg N g-1 d.w. root h-1 for tropical trees). Uptake of NH4+ accounted for 76% of the total uptake of all three N forms in the temperate forest and 64% in the tropical forest. Temperate tree species had similar glycine uptake rates as tropical trees, with the contribution being slightly lower (20% in the temperate forest and 23% in the tropical forest). All tree species investigated in the temperate forest were ectomycorrhizal and all species but one in the tropical forest were arbuscular mycorrhizal (AM). Ectomycorrhizal trees showed significantly higher NH4+ and lower NO3- uptake rates than AM trees. Mycorrhizal colonization rates significantly affected uptake rates and contributions of NO3- or NH4+, but depended on forest types. We conclude that tree species in both temperate and tropical forests preferred to take up NH4+, with organic N as the second most important N source. These findings suggest that temperate and tropical forests demonstrate similar N uptake patterns although they differ in physiology of trees and soil biogeochemical processes. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. The Effect of Rare-Earth Metals on Cast Steels

    DTIC Science & Technology

    1954-04-01

    Figure 23: microstructure of the Mn-Cr-Mo base control steels, etched with picral; the 1-inch and 4-inch sections consist of tempered bainite and tempered martensite. Figure 25: microstructures of the Mn-Cr-Mo + rare earths cast steels, etched with picral, 500X.

  19. Method to Predict Tempering of Steels Under Non-isothermal Conditions

    NASA Astrophysics Data System (ADS)

    Poirier, D. R.; Kohli, A.

    2017-05-01

    A common way of representing the tempering responses of steels is with a "tempering parameter" that includes the effect of temperature and time on hardness after hardening. Such functions, usually in graphical form, are available for many steels and have been applied for isothermal tempering. In this article, we demonstrate that the method can be extended to non-isothermal conditions. Controlled heating experiments were done on three grades in order to verify the method.
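
    A common generic way to extend such a tempering parameter to non-isothermal cycles, not necessarily the authors' exact formulation, is to march through the temperature history and convert the tempering already accumulated into an equivalent time at each new temperature before adding the next increment, as sketched below with an illustrative Hollomon-Jaffe form and constant.

```python
import math

def equivalent_tp(time_s, temp_c, C=20.0):
    """Approximate tempering parameter for a non-isothermal cycle by incremental
    equivalent-time summation (generic approach, not necessarily the paper's
    formulation). time_s, temp_c: sampled heating history; C is illustrative."""
    t_eq_h = 0.0                  # equivalent time at the current temperature
    tp = None
    for i in range(1, len(time_s)):
        T = temp_c[i] + 273.15
        dt_h = (time_s[i] - time_s[i - 1]) / 3600.0
        if tp is not None:
            # convert the tempering already done into time at the new temperature
            t_eq_h = 10.0 ** (tp / T - C)
        t_eq_h += dt_h
        tp = T * (C + math.log10(t_eq_h))
    return tp

# example: linear ramp from 300 to 600 C over 60 s (hypothetical numbers)
times = [float(i) for i in range(61)]
temps = [300.0 + 5.0 * i for i in range(61)]
print(round(equivalent_tp(times, temps)))
```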

  20. Effect of twice quenching and tempering on the mechanical properties and microstructures of SCRAM steel for fusion application

    NASA Astrophysics Data System (ADS)

    Xiong, Xuesong; Yang, Feng; Zou, Xingrong; Suo, Jinping

    2012-11-01

    The effect of twice quenching and tempering on the mechanical properties and microstructures of SCRAM steel was investigated. The results from tensile tests showed that whether the twice quenching and tempering processes (1253 K/0.5 h/W.C (water cool) + 1033 K/2 h/A.C (air cool) + 1233 K/0.5 h/W.C + 1033 K/2 h/A.C, denoted 2Q&2TI, and 1253 K/0.5 h/W.C + 1033 K/2 h/A.C + 1233 K/0.5 h/W.C + 1013 K/2 h/A.C, denoted 2Q&2TII) increased the strength of the steel or not depended largely on the second tempering temperature, compared to the quenching and tempering process (1253 K/0.5 h/W.C + 1033 K/2 h/A.C, denoted 1Q&1T). Charpy V-notch impact tests indicated that the twice quenching and tempering processes reduced the ductile-brittle transition temperature (DBTT). Microstructure inspection revealed that the prior austenitic grain size and martensite lath width were refined after the twice quenching and tempering treatments. Precipitate growth was inhibited by a slight decrease of the second tempering temperature from 1033 to 1013 K. The finer average size of precipitates is considered to be the most likely reason for the higher strength and lower DBTT of 2Q&2TII compared with 2Q&2TI.

  1. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which considerably reduces sampling time. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so that an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
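
    The basic ABC idea that the sampler builds on can be shown with a plain rejection sampler; the code below is a generic illustration with a toy Gaussian forward model and is not the astroABC interface, which additionally provides the SMC machinery, MPI parallelism and adaptive tolerances described above.

```python
import numpy as np

# Generic ABC rejection sampler (illustration of the idea only; astroABC's
# actual SMC sampler, MPI parallelism and adaptive tolerances are not shown).
rng = np.random.default_rng(4)

observed = rng.normal(3.0, 1.0, size=200)        # pretend "data"
obs_summary = observed.mean()

def simulate(mu, size=200):
    """Forward model simulation: the likelihood itself is never evaluated."""
    return rng.normal(mu, 1.0, size=size)

def abc_rejection(n_draws=20000, tol=0.05):
    prior = rng.uniform(-10, 10, size=n_draws)   # flat prior on the mean
    kept = [mu for mu in prior
            if abs(simulate(mu).mean() - obs_summary) < tol]
    return np.array(kept)

posterior = abc_rejection()
print(f"{posterior.size} accepted, posterior mean = {posterior.mean():.2f}")
```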

  2. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    NASA Astrophysics Data System (ADS)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by favoring processor quantity over quality, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card in computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards, represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.

  3. A quantitative three-dimensional dose attenuation analysis around Fletcher-Suit-Delclos due to stainless steel tube for high-dose-rate brachytherapy by Monte Carlo calculations.

    PubMed

    Parsai, E Ishmael; Zhang, Zhengdong; Feldmeier, John J

    2009-01-01

    The commercially available brachytherapy treatment-planning systems today usually neglect the attenuation effect of the stainless steel (SS) tube when a Fletcher-Suit-Delclos (FSD) applicator is used in the treatment of cervical and endometrial cancers. This could lead to potential inaccuracies in computing dwell times and dose distributions. A more accurate analysis quantifying the level of attenuation for the high-dose-rate (HDR) iridium-192 radionuclide ((192)Ir) source is presented through Monte Carlo simulation verified by measurement. In this investigation a general Monte Carlo N-Particle (MCNP) transport code was used to construct a typical FSD geometry through simulation and to compare the doses delivered to point A in the Manchester System with and without the SS tubing. A quantitative assessment of inaccuracies in the delivered dose vs. the computed dose is presented. In addition, this investigation was expanded to examine the attenuation-corrected radial and anisotropy dose functions in a form parallel to the updated AAPM Task Group No. 43 Report (AAPM TG-43) formalism. This delineates quantitatively the inaccuracies in dose distributions in three-dimensional space. The changes in dose deposition and distribution caused by the increased attenuation resulting from the presence of SS are quantified using MCNP Monte Carlo simulations in coupled photon/electron transport. The source geometry was that of the VariSource wire model VS2000. The FSD was that of the Varian medical system. In this model, the bending angles of the tandem and colpostats are 15 degrees and 120 degrees, respectively. We assigned 10 dwell positions to the tandem and 4 dwell positions to the right and left colpostats (ovoids) to represent a typical treatment case. The typical dose delivered to point A was determined according to the Manchester dosimetry system. Based on our computations, the reduction of the dose to point A was shown to be at least 3%, so the effect of SS FSD tubing on patient dose is of concern.

  4. The Phylogeny and Biogeographic History of Ashes (Fraxinus, Oleaceae) Highlight the Roles of Migration and Vicariance in the Diversification of Temperate Trees

    PubMed Central

    Hinsinger, Damien Daniel; Basak, Jolly; Gaudeul, Myriam; Cruaud, Corinne; Bertolino, Paola; Frascaria-Lacoste, Nathalie; Bousquet, Jean

    2013-01-01

    The cosmopolitan genus Fraxinus, which comprises about 40 species of temperate trees and shrubs occupying various habitats in the Northern Hemisphere, represents a useful model to study speciation in long-lived angiosperms. We used nuclear external transcribed spacers (nETS), phantastica gene sequences, and two chloroplast loci (trnH-psbA and rpl32-trnL) in combination with previously published and newly obtained nITS sequences to produce a time-calibrated multi-locus phylogeny of the genus. We then inferred the biogeographic history and evolution of floral morphology. An early dispersal event could be inferred from North America to Asia during the Oligocene, leading to the diversification of the section Melioides sensus lato. Another intercontinental dispersal originating from the Eurasian section of Fraxinus could be dated from the Miocene and resulted in the speciation of F. nigra in North America. In addition, vicariance was inferred to account for the distribution of the other Old World species (sections Sciadanthus, Fraxinus and Ornus). Geographic speciation likely involving dispersal and vicariance could also be inferred from the phylogenetic grouping of geographically close taxa. Molecular dating suggested that the initial divergence of the taxonomical sections occurred during the middle and late Eocene and Oligocene periods, whereas diversification within sections occurred mostly during the late Oligocene and Miocene, which is consistent with the climate warming and accompanying large distributional changes observed during these periods. These various results underline the importance of dispersal and vicariance in promoting geographic speciation and diversification in Fraxinus. Similarities in life history, reproductive and demographic attributes as well as geographical distribution patterns suggest that many other temperate trees should exhibit similar speciation patterns. On the other hand, the observed parallel evolution and reversions in floral morphology would imply a major influence of environmental pressure. The phylogeny obtained and its biogeographical implications should facilitate future studies on the evolution of complex adaptive characters, such as habitat preference, and their possible roles in promoting divergent evolution in trees. PMID:24278282

  5. The phylogeny and biogeographic history of ashes (fraxinus, oleaceae) highlight the roles of migration and vicariance in the diversification of temperate trees.

    PubMed

    Hinsinger, Damien Daniel; Basak, Jolly; Gaudeul, Myriam; Cruaud, Corinne; Bertolino, Paola; Frascaria-Lacoste, Nathalie; Bousquet, Jean

    2013-01-01

    The cosmopolitan genus Fraxinus, which comprises about 40 species of temperate trees and shrubs occupying various habitats in the Northern Hemisphere, represents a useful model to study speciation in long-lived angiosperms. We used nuclear external transcribed spacers (nETS), phantastica gene sequences, and two chloroplast loci (trnH-psbA and rpl32-trnL) in combination with previously published and newly obtained nITS sequences to produce a time-calibrated multi-locus phylogeny of the genus. We then inferred the biogeographic history and evolution of floral morphology. An early dispersal event could be inferred from North America to Asia during the Oligocene, leading to the diversification of the section Melioides sensus lato. Another intercontinental dispersal originating from the Eurasian section of Fraxinus could be dated from the Miocene and resulted in the speciation of F. nigra in North America. In addition, vicariance was inferred to account for the distribution of the other Old World species (sections Sciadanthus, Fraxinus and Ornus). Geographic speciation likely involving dispersal and vicariance could also be inferred from the phylogenetic grouping of geographically close taxa. Molecular dating suggested that the initial divergence of the taxonomical sections occurred during the middle and late Eocene and Oligocene periods, whereas diversification within sections occurred mostly during the late Oligocene and Miocene, which is consistent with the climate warming and accompanying large distributional changes observed during these periods. These various results underline the importance of dispersal and vicariance in promoting geographic speciation and diversification in Fraxinus. Similarities in life history, reproductive and demographic attributes as well as geographical distribution patterns suggest that many other temperate trees should exhibit similar speciation patterns. On the other hand, the observed parallel evolution and reversions in floral morphology would imply a major influence of environmental pressure. The phylogeny obtained and its biogeographical implications should facilitate future studies on the evolution of complex adaptive characters, such as habitat preference, and their possible roles in promoting divergent evolution in trees.

  6. Status of marine biodiversity of the China seas.

    PubMed

    Liu, J Y

    2013-01-01

    China's seas cover nearly 5 million square kilometers extending from the tropical to the temperate climate zones and bordering on 32,000 km of coastline, including islands. Comprehensive systematic study of the marine biodiversity within this region began in the early 1950s with the establishment of the Qingdao Marine Biological Laboratory of the Chinese Academy of Sciences. Since that time scientists have carried out intensive multidisciplinary research on marine life in the China seas and have recorded 22,629 species belonging to 46 phyla. The marine flora and fauna of the China seas are characterized by high biodiversity, including tropical and subtropical elements of the Indo-West Pacific warm-water fauna in the South and East China seas, and temperate elements of North Pacific temperate fauna mainly in the Yellow Sea. The southern South China Sea fauna is characterized by typical tropical elements paralleled with the Philippine-New Guinea-Indonesia Coral triangle typical tropical faunal center. This paper summarizes advances in studies of marine biodiversity in China's seas and discusses current research mainly on characteristics and changes in marine biodiversity, including the monitoring, assessment, and conservation of endangered species and particularly the strengthening of effective management. Studies of (1) a tidal flat in a semi-enclosed embayment, (2) the impact of global climate change on a cold-water ecosystem, (3) coral reefs of Hainan Island and Xisha-Nansha atolls, (4) mangrove forests of the South China Sea, (5) a threatened seagrass field, and (6) an example of stock enhancement practices of the Chinese shrimp fishery are briefly introduced. Besides the overexploitation of living resources (more than 12.4 million tons yielded in 2007), the major threat to the biodiversity of the China seas is environmental deterioration (pollution, coastal construction), particularly in the brackish waters of estuarine environments, which are characterized by high productivity and represent spawning and nursery areas for several economically important species. In the long term, climate change is also a major threat. Finally, challenges in marine biodiversity studies are briefly discussed along with suggestions to strengthen the field. Since 2004, China has participated in the Census of Marine Life, through which advances in the study of zooplankton and zoobenthos biodiversity were finally summarized.

  7. Acoustic emission-microstructural relationships in ferritic steels. Part 2: The effect of tempering

    NASA Astrophysics Data System (ADS)

    Scruby, C. B.; Wadley, H. N. G.

    1985-07-01

    Tempering of Fe-3.25 wt%Ni alloys with carbon contents between 0.057 and 0.49 wt% leads to pronounced acoustic emission activity during ambient temperature tensile testing. The maximum emission occurs from samples tempered at approximately 250 °C and appears only weakly influenced by carbon content. Mechanical property determinations link the maximum to a precipitation hardening effect. A model involving the cooperative motion of dislocations over distances corresponding to the lath-packet dimension is proposed. The mechanism responsible for cooperative motion is believed to be a precipitate shearing process; this is the first time such a process has been proposed for quenched and tempered ferritic steels. A second, much weaker source of emission has been identified in material subjected to prolonged tempering at 625 °C. The mechanism responsible for this emission is believed to be the sudden multiplication and propagation of dislocations during microyield events. No evidence has been found to support the view that carbide fracture in quenched and tempered steels is a direct source of acoustic emission. The microstructural states in which most quenched and tempered steels are used in practice generate very little detectable acoustic emission either during deformation or fracture, irrespective of carbon content.

  8. Springback Foam

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A decade ago, NASA's Ames Research Center developed a new foam material for protective padding of airplane seats. Now known as Temper Foam, the material has become one of the most widely used spinoffs. The latest application is a line of Temper Foam cushioning produced by Edmont-Wilson, Coshocton, Ohio, for office and medical furniture. The example pictured is the Classic Dental Stool, manufactured by Dentsply International, Inc., York, Pennsylvania, one of four models which use Edmont-Wilson Temper Foam. Temper Foam is an open-cell, flame-resistant foam with unique qualities.

  9. Effect of Tempering and Baking on the Charpy Impact Energy of Hydrogen-Charged 4340 Steel

    NASA Astrophysics Data System (ADS)

    Mori, K.; Lee, E. W.; Frazier, W. E.; Niji, K.; Battel, G.; Tran, A.; Iriarte, E.; Perez, O.; Ruiz, H.; Choi, T.; Stoyanov, P.; Ogren, J.; Alrashaid, J.; Es-Said, O. S.

    2015-01-01

    Tempered AISI 4340 steel was hydrogen charged and tested for impact energy. It was found that samples tempered above 468 °C (875 °F) and subjected to hydrogen charging exhibited lower impact energy values when compared to uncharged samples. No significant difference between charged and uncharged samples tempered below 468 °C (875 °F) was observed. Neither exposure nor bake time had any significant effect on impact energy within the tested ranges.

  10. Investigation of the Microstructural Changes and Hardness Variations of Sub-Zero Treated Cr-V Ledeburitic Tool Steel Due to the Tempering Treatment

    NASA Astrophysics Data System (ADS)

    Jurči, Peter; Dománková, Mária; Ptačinová, Jana; Pašák, Matej; Kusý, Martin; Priknerová, Petra

    2018-03-01

    The microstructure and tempering response of the Cr-V ledeburitic steel Vanadis 6 subjected to sub-zero treatment at -196 °C for 4 h have been examined with reference to the same steel after conventional heat treatment. The experimental results indicate that sub-zero treatment significantly reduces the amount of retained austenite, produces an overall refinement of the microstructure, and induces a significant increase in the number and population density of small globular carbides with a size of 100-500 nm. At low tempering temperatures, transient M3C carbides precipitated, and their number was enhanced by sub-zero treatment. The presence of chromium-based M7C3 precipitates was evidenced after tempering at the temperature of normal secondary hardening; this phase was detected along with the M3C. Tempering above 470 °C converts almost all the retained austenite in conventionally quenched specimens, while the transformation of retained austenite is accelerated in sub-zero treated material. As a result of tempering, a decrease in the population density of small globular carbides was recorded; however, the number of these particles remained much higher in sub-zero treated steel. The elevated hardness of sub-zero treated steel can be attributed to a more complete martensitic transformation and an increased number of small globular carbides; this state is retained, to a certain extent, up to a tempering temperature of around 500 °C. Correspondingly, the lower as-tempered hardness of sub-zero treated steel tempered above 500 °C is attributed to the much smaller contribution of the transformation of retained austenite and to an expectedly lower amount of precipitated alloy carbides.

  11. Variations in the microstructure and properties of Mn-Ti multiple-phase steel with high strength under different tempering temperatures

    NASA Astrophysics Data System (ADS)

    Li, Dazhao; Li, Xiaonan; Cui, Tianxie; Li, Jianmin; Wang, Yutian; Fu, Peimao

    2015-03-01

    There has been little research on the tempering of steel coils, and the variations in microstructure and properties of coils during the tempering process remain unclear. Using thermo-mechanical control process (TMCP) technology, Mn-Ti typical HSLA steel coils with a yield strength of 920 MPa are produced on the 2250 hot rolling production line. Samples are then taken from the coils and tempered at 220 °C, 350 °C, and 620 °C, respectively. After tempering, the strength, ductility, and toughness of the samples are tested, and the microstructures are investigated. Precipitates initially emerge inside the ferrite laths and the dislocation density drops. Then, the lath-shaped ferrites begin to gather, and the retained austenite films start to decompose. Finally, the retained austenite films are completely decomposed into coarse, short rod-shaped precipitates composed of C and Ti compounds. The yield strength increases with increasing tempering temperature owing to the pinning effect of the precipitates, while the dislocation density decreases. The yield strength is highest when the steel is tempered at 220 °C because of the pinning of dislocations by the precipitates. The total elongation increases in all samples because of the development of ferrites during tempering. The tensile strength and impact absorbed energy decline because the effect of impeding crack propagation weakens as the retained austenite films completely decompose and the precipitates coarsen. This paper clarifies the influence of different tempering temperatures on the phase transformation characteristics and processes of typical Mn-Ti multiphase steels, as well as the resulting variations in their performance.

  12. Beam quality corrections for parallel-plate ion chambers in electron reference dosimetry

    NASA Astrophysics Data System (ADS)

    Zink, K.; Wulff, J.

    2012-04-01

    Current dosimetry protocols (AAPM, IAEA, IPEM, DIN) recommend parallel-plate ionization chambers for dose measurements in clinical electron beams. This study presents detailed Monte Carlo simulations of beam quality correction factors for four different types of parallel-plate chambers: NACP-02, Markus, Advanced Markus and Roos. These chambers differ in constructive details which should have a notable impact on the resulting perturbation corrections, and hence on the beam quality corrections. The results reveal deviations from the recommended beam quality corrections given in the IAEA TRS-398 protocol in the range of 0%-2%, depending on energy and chamber type. For well-guarded chambers, these deviations could be traced back to a non-unity and energy-dependent wall perturbation correction. In the case of the guardless Markus chamber, a nearly energy-independent beam quality correction results, as the effects of wall and cavity perturbation compensate each other. For this chamber, the deviations from the recommended values are the largest and may exceed 2%. From calculations of type-B uncertainties, including effects due to uncertainties of the underlying cross-section data as well as uncertainties due to the chamber material composition and chamber geometry, the overall uncertainty of the calculated beam quality correction factors was estimated to be <0.7%. Due to different chamber positioning recommendations given in the national and international dosimetry protocols, an additional uncertainty in the range of 0.2%-0.6% is present. According to the IAEA TRS-398 protocol, the uncertainty in clinical electron dosimetry using parallel-plate ion chambers is 1.7%. This study may help to reduce this uncertainty significantly.
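
    For orientation, beam quality corrections in such Monte Carlo studies are commonly obtained from ratios of the dose scored in water and in the chamber cavity at the beam quality of interest and at the reference quality. The sketch below illustrates that ratio calculation with placeholder numbers; it is not the authors' code, and the dose values and uncertainty components are assumptions.

```python
# Hedged sketch (not the authors' code): the beam quality correction factor is
# often obtained from Monte Carlo dose ratios as
#   k_Q ~ [D_water / D_chamber]_Q / [D_water / D_chamber]_Q0 ,
# where Q0 is the reference quality (commonly 60Co). All numbers below are
# placeholders, not results from this study.
import math

def k_q(d_w_q, d_ch_q, d_w_q0, d_ch_q0):
    """Beam quality correction from doses scored at quality Q and reference Q0."""
    f_q = d_w_q / d_ch_q      # dose-to-water over dose-in-cavity at quality Q
    f_q0 = d_w_q0 / d_ch_q0   # same ratio at the reference quality Q0
    return f_q / f_q0

def combined_rel_uncertainty(*rel_sigmas):
    """Quadrature combination of independent relative uncertainty components."""
    return math.sqrt(sum(s * s for s in rel_sigmas))

if __name__ == "__main__":
    kq = k_q(d_w_q=1.020, d_ch_q=0.985, d_w_q0=1.000, d_ch_q0=0.990)
    sigma = combined_rel_uncertainty(0.002, 0.002, 0.002, 0.002)
    print(f"k_Q = {kq:.4f} +/- {100 * sigma:.2f}% (illustrative)")
```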

  13. Microstructural, mechanical and tribological investigation of 30CrMnSiNi2A ultra-high strength steel under various tempering temperatures

    NASA Astrophysics Data System (ADS)

    Arslan Hafeez, Muhammad; Farooq, Ameeq

    2018-01-01

    The aim of this research was to investigate the variation in the microstructural, mechanical, and tribological characteristics of 30CrMnSiNi2A ultra-high strength steel as a function of tempering temperature. The steel was quenched at 880 °C and tempered at five different temperatures ranging from 250 °C to 650 °C. Optical microscopy and a pin-on-disc tribometer were used to evaluate the microstructural and wear properties. The results show that the characteristics of 30CrMnSiNi2A are highly sensitive to the tempering temperature. The lath- and plate-shaped martensite obtained by quenching transforms first into ε-carbide, then into cementite, then into coarsened and spheroidized cementite, and finally into recovered ferrite and austenite. Hardness, tensile strength, and yield strength decreased while elongation increased with tempering temperature. On the other hand, the wear rate first decreased markedly and then increased. The optimum combination of characteristics was achieved at 350 °C.

  14. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    PubMed

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
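
    As a rough illustration of the macro-splitting idea, the sketch below shows what a Hadoop Streaming mapper for this kind of workflow could look like. The one-macro-per-input-line convention, the `Gate` executable name, and the output-file naming are assumptions for illustration, not details taken from the paper.

```python
#!/usr/bin/env python3
# Hedged sketch of a Hadoop Streaming mapper in the spirit of the approach
# described above. Assumptions (not from the paper): each input line names one
# self-contained GATE sub-macro available on the worker node, the GATE binary
# is called 'Gate', and each sub-macro writes its dose output next to itself.
import subprocess
import sys

def run_sub_macro(macro_path: str) -> str:
    """Run GATE on a single sub-macro and return the path of its output file."""
    subprocess.run(["Gate", macro_path], check=True)
    return macro_path.replace(".mac", "-dose.txt")   # assumed naming convention

def main() -> None:
    for line in sys.stdin:
        macro = line.strip()
        if not macro:
            continue
        output_path = run_sub_macro(macro)
        # Emit a tab-separated key/value pair; the Reduce step aggregates the
        # partial results (e.g. sums per-voxel doses) into the final output.
        print(f"dose\t{output_path}")

if __name__ == "__main__":
    main()
```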

  15. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector through a computational procedure that can be reproduced at a standard research laboratory. The method consists of determining the detector's geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters that has been successfully validated for different source-detector geometries, as well as for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
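
    To make the optimization loop concrete, the following sketch couples a toy (μ+λ) evolution strategy to a placeholder FEPE "simulation". In the real workflow each fitness evaluation would launch a Monte Carlo run for the candidate geometry; the parameter set, bounds, and reference efficiencies below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the optimization idea: evolve candidate detector geometry
# parameters so that simulated full energy peak efficiencies (FEPEs) match a
# set of reference FEPEs. In the real workflow each fitness evaluation would
# run a Monte Carlo photon-transport simulation; simulate_fepe() below is a
# placeholder, and the parameters, bounds and reference values are assumptions,
# as is the simple (mu+lambda) evolution strategy.
import random

REFERENCE_FEPE = [0.052, 0.031, 0.018]               # placeholder reference efficiencies
BOUNDS = [(25.0, 40.0), (40.0, 80.0), (0.1, 2.0)]    # crystal radius, length, dead layer (mm)

def simulate_fepe(params):
    """Placeholder for a Monte Carlo FEPE calculation for a candidate geometry."""
    radius, length, dead_layer = params
    return [0.05 * radius / 30.0 - 0.001 * dead_layer,
            0.03 * length / 60.0,
            0.02 - 0.002 * dead_layer]

def fitness(params):
    """Sum of squared deviations from the reference efficiencies (lower is better)."""
    return sum((s - r) ** 2 for s, r in zip(simulate_fepe(params), REFERENCE_FEPE))

def mutate(params, scale=0.05):
    """Gaussian mutation, clipped to the parameter bounds."""
    child = []
    for value, (lo, hi) in zip(params, BOUNDS):
        value += random.gauss(0.0, scale * (hi - lo))
        child.append(min(hi, max(lo, value)))
    return child

def optimize(mu=5, lam=20, generations=200):
    """(mu+lambda) evolution strategy over the geometry parameters."""
    population = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(lam)]
        population = sorted(population + offspring, key=fitness)[:mu]
    return population[0]

if __name__ == "__main__":
    best = optimize()
    print("best parameters:", [round(p, 2) for p in best], "fitness:", f"{fitness(best):.2e}")
```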

  16. Angular dependence of the nanoDot OSL dosimeter.

    PubMed

    Kerns, James R; Kry, Stephen F; Sahoo, Narayan; Followill, David S; Ibbott, Geoffrey S

    2011-07-01

    Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions.

  17. Angular dependence of the nanoDot OSL dosimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, James R.; Kry, Stephen F.; Sahoo, Narayan

    Purpose: Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Methods: Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. Results: When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. Conclusions: A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions.

  18. Angular dependence of the nanoDot OSL dosimeter

    PubMed Central

    Kerns, James R.; Kry, Stephen F.; Sahoo, Narayan; Followill, David S.; Ibbott, Geoffrey S.

    2011-01-01

    Purpose: Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Methods: Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. Results: When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. Conclusions: A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions. PMID:21858992

  19. NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems

    NASA Astrophysics Data System (ADS)

    Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek

    2015-03-01

    The most accurate technique to model the X- and gamma radiation path through a numerically defined object is Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches lend themselves to massively parallel implementation, e.g. on Graphics Processing Units (GPUs), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh-, or NURBS-based objects, and the recording of any required quantities, such as path integrals, interaction sites, and deposited energies. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.
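
    The ray-tracing (as opposed to photon-following) approach mentioned above amounts to integrating the attenuation coefficient along each ray. The CPU sketch below illustrates that path integral on a voxel grid with plain NumPy; it is only a conceptual stand-in for the GPU/OptiX projector described in the paper, and the volume and step size are arbitrary.

```python
# Hedged sketch of the ray-tracing principle described above: integrate the
# attenuation coefficient along a ray through a voxel volume and convert the
# line integral into a transmission factor. Plain NumPy on the CPU, purely for
# illustration; the paper implements this kind of projector on the GPU with
# the NVIDIA OptiX engine.
import numpy as np

def path_integral(mu, origin, direction, step=0.5, n_steps=400):
    """Integrate mu (1/mm) along a ray through a voxel grid with 1 mm spacing."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    total = 0.0
    for i in range(n_steps):
        point = origin + (i + 0.5) * step * direction
        idx = np.floor(point).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(mu.shape)):
            break                        # the ray has left the volume
        total += mu[tuple(idx)] * step   # mu times the path length of this sample
    return total

if __name__ == "__main__":
    volume = np.full((64, 64, 64), 0.02)       # uniform attenuation, 1/mm
    volume[24:40, 24:40, 24:40] = 0.15         # denser cubic insert
    line_integral = path_integral(volume, origin=(0.0, 32.0, 32.0), direction=(1.0, 0.0, 0.0))
    print(f"line integral = {line_integral:.3f}, transmission = {np.exp(-line_integral):.4f}")
```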

  20. The grasshopper problem

    NASA Astrophysics Data System (ADS)

    Goulko, Olga; Kent, Adrian

    2017-11-01

    We introduce and physically motivate the following problem in geometric combinatorics, originally inspired by analysing Bell inequalities. A grasshopper lands at a random point on a planar lawn of area 1. It then jumps once, a fixed distance d, in a random direction. What shape should the lawn be to maximize the chance that the grasshopper remains on the lawn after jumping? We show that, perhaps surprisingly, a disc-shaped lawn is not optimal for any d > 0. We investigate further by introducing a spin model whose ground state corresponds to the solution of a discrete version of the grasshopper problem. Simulated annealing and parallel tempering searches are consistent with the hypothesis that, for d < π^{-1/2}, the optimal lawn resembles a cogwheel with n cogs, where the integer n is close to π(arcsin(√π d/2))^{-1}. We find transitions to other shapes for d ≳ π^{-1/2}.
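
    For context, the quantity being optimized over lawn shapes is the probability that the grasshopper's landing point is still on the lawn. A minimal Monte Carlo estimate of this retention probability for the (non-optimal) unit-area disc is sketched below; it only illustrates the objective, not the spin-model or parallel tempering search used in the paper.

```python
# Hedged sketch: Monte Carlo estimate of the grasshopper retention probability
# for the simplest candidate lawn, a disc of area 1 (radius 1/sqrt(pi)). The
# paper shows the disc is never optimal; this baseline only illustrates the
# quantity that the simulated annealing / parallel tempering searches maximize
# over lawn shapes.
import math
import random

def disc_retention_probability(d, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    radius = 1.0 / math.sqrt(math.pi)      # disc of unit area
    stays = 0
    for _ in range(n_samples):
        # Uniform starting point inside the disc (rejection sampling).
        while True:
            x = rng.uniform(-radius, radius)
            y = rng.uniform(-radius, radius)
            if x * x + y * y <= radius * radius:
                break
        theta = rng.uniform(0.0, 2.0 * math.pi)   # random jump direction
        nx, ny = x + d * math.cos(theta), y + d * math.sin(theta)
        if nx * nx + ny * ny <= radius * radius:
            stays += 1
    return stays / n_samples

if __name__ == "__main__":
    for d in (0.1, 0.3, 0.5):
        print(f"d = {d}: P(stay on disc) ~ {disc_retention_probability(d):.3f}")
```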
