Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
An update on the BQCD Hybrid Monte Carlo program
NASA Astrophysics Data System (ADS)
Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk
2018-03-01
We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements (like Tr(D⁻ⁿ) for κ, c_SW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance-critical parts employing SIMD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. Jr.
1977-12-01
Four H₂O-moderated, slightly enriched uranium critical experiments were analyzed by Monte Carlo methods with ENDF/B-IV data. These were simple metal-rod lattices comprising Cross Section Evaluation Working Group thermal reactor benchmarks TRX-1 through TRX-4. Generally good agreement with experiment was obtained for calculated integral parameters: the epithermal/thermal ratio of ²³⁸U capture (ρ²⁸) and of ²³⁵U fission (δ²⁵), the ratio of ²³⁸U capture to ²³⁵U fission (CR*), and the ratio of ²³⁸U fission to ²³⁵U fission (δ²⁸). Full-core Monte Carlo calculations for two lattices showed good agreement with cell Monte Carlo plus multigroup Pₗ leakage corrections. Newly measured parameters for the low-energy resonances of ²³⁸U significantly improved ρ²⁸. In comparison with other CSEWG analyses, the strong correlation between Keff and ρ²⁸ suggests that ²³⁸U resonance capture is the major problem encountered in analyzing these lattices.
Proceedings of the Nuclear Criticality Technology Safety Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rene G. Sanchez
1998-04-01
This document contains summaries of most of the papers presented at the 1995 Nuclear Criticality Technology Safety Project (NCTSP) meeting, which was held May 16 and 17 in San Diego, CA. The meeting was broken up into seven sessions, which covered the following topics: (1) Criticality Safety of Project Sapphire; (2) Relevant Experiments for Criticality Safety; (3) Interactions with the Former Soviet Union; (4) Misapplications and Limitations of Monte Carlo Methods Directed Toward Criticality Safety Analyses; (5) Monte Carlo Vulnerabilities of Execution and Interpretation; (6) Monte Carlo Vulnerabilities of Representation; and (7) Benchmark Comparisons.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
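The failure-probability estimate described above amounts to comparing sampled impact loads against sampled component strengths. A minimal Python sketch of that kind of Monte Carlo calculation, with purely illustrative distributions standing in for the study's water-impact statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo trials

# Illustrative (hypothetical) distributions for the critical water-impact
# parameters: vertical impact velocity and component strength capability,
# the latter expressed as the impact velocity the component can survive.
impact_velocity = rng.normal(loc=25.0, scale=3.0, size=N)      # m/s
strength_capability = rng.normal(loc=30.0, scale=2.5, size=N)  # m/s

# A component fails when the sampled impact velocity exceeds the sampled
# strength; the failure probability is the fraction of failing trials.
failures = impact_velocity > strength_capability
print(f"estimated failure probability: {failures.mean():.4f}")
```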
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that exceeds all previous Monte Carlo estimates.
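A minimal sketch of the Wolff cluster-flipping update named above, written for the two-dimensional Ising model to keep it short (the paper treats the simple cubic case); the lattice size, coupling, and sweep count are illustrative choices:

```python
import numpy as np

def wolff_update(spins, K, rng):
    """One Wolff cluster flip on a periodic 2D Ising lattice, K = J/kT."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * K)   # bond-activation probability
    i, j = rng.integers(L, size=2)   # random seed site
    seed = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
            # Try to add each aligned neighbor not already in the cluster.
            if (nx, ny) not in cluster and spins[nx, ny] == seed and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:             # flip the whole cluster at once
        spins[x, y] = -seed
    return len(cluster)

rng = np.random.default_rng(1)
L, K = 32, 0.4406868                 # 2D critical coupling, for illustration
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    wolff_update(spins, K, rng)
print("magnetization per spin:", spins.mean())
```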
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
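The fission matrix approach reduces the eigenvalue problem to linear algebra: once Monte Carlo tallies provide F[i, j], the expected number of next-generation fission neutrons born in region i per fission neutron born in region j, the criticality is the dominant eigenvalue of F. A sketch with a synthetic matrix standing in for tallied values:

```python
import numpy as np

# Hypothetical 8-region fission matrix for a pool with 8 assemblies.  In
# practice each column is tallied by Monte Carlo transport; here a synthetic,
# weakly coupled matrix mimics assemblies separated by strong absorbers.
n = 8
F = 0.95 * np.eye(n) + 0.02 * (np.eye(n, k=1) + np.eye(n, k=-1))

# Power iteration for the dominant eigenpair: k_eff and the fission source.
s = np.ones(n) / n
for _ in range(1000):
    Fs = F @ s
    k_eff = Fs.sum() / s.sum()      # eigenvalue estimate
    s = Fs / np.linalg.norm(Fs)     # normalized source iterate

print("k_eff from power iteration:", k_eff)
print("check against direct eigensolve:", np.linalg.eigvalsh(F).max())
print("critical fission source shape:", np.round(s / s.sum(), 4))
```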
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ the Wolff cluster flipping algorithm and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature Kc = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that improves upon previous Monte Carlo estimates.
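Histogram reweighting, used in both Ising studies above, lets one run at a single coupling K0 and infer observables at nearby couplings from the same data pool. A minimal single-histogram sketch, with synthetic samples standing in for a Wolff-run time series:

```python
import numpy as np

def reweight(energies, observable, K0, K):
    """Single-histogram reweighting of samples drawn at coupling K0 to a
    nearby coupling K; energies carry Boltzmann weight exp(-K*E)."""
    dE = energies - energies.mean()      # center for numerical stability
    w = np.exp(-(K - K0) * dE)
    return np.sum(w * observable) / np.sum(w)

# Illustrative use with synthetic data standing in for simulation output:
rng = np.random.default_rng(2)
E = rng.normal(-1.4e3, 40.0, size=50_000)         # hypothetical energies at K0
M = np.abs(rng.normal(0.5, 0.05, size=50_000))    # hypothetical |magnetization|
K0 = 0.2216
for K in (0.2214, 0.2216, 0.2218):
    print(K, reweight(E, M, K0, K))
```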
Criticality Calculations with MCNP6 - Practical Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
Thomas B. Lynch; Jeffrey H. Gove
2014-01-01
The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...
Variational Approach to Monte Carlo Renormalization Group
NASA Astrophysics Data System (ADS)
Wu, Yantao; Car, Roberto
2017-12-01
We present a Monte Carlo method for computing the renormalized coupling constants and the critical exponents within renormalization theory. The scheme, which derives from a variational principle, overcomes critical slowing down by means of a bias potential that renders the coarse-grained variables uncorrelated. The two-dimensional Ising model is used to illustrate the method.
Vega roll and attitude control system algorithms trade-off study
NASA Astrophysics Data System (ADS)
Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.
2013-12-01
This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases, the Long Coasting Phase (LCP) and the Payload Release (PLR) phase, both algorithms were assessed with Monte Carlo batch simulations for each phase. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.
Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.
Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji
2015-12-01
We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.
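The dynamical critical exponent z is typically extracted from how the integrated autocorrelation time τ of an observable grows with lattice size, τ ∝ L^z, at criticality. A sketch of the estimator and the final fit; the τ values at the end are hypothetical numbers used only to illustrate the log-log regression:

```python
import numpy as np

def integrated_autocorr_time(x, c=6.0):
    """Integrated autocorrelation time with a standard self-consistent window."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]                      # normalized autocorrelation function
    tau = 1.0
    for W in range(1, x.size):
        tau = 1.0 + 2.0 * np.sum(acf[1:W + 1])
        if W >= c * tau:               # window long enough relative to tau
            break
    return tau

# With tau measured at several sizes L, z follows from a log-log fit:
L = np.array([8, 16, 32, 64])
tau = np.array([3.1, 6.0, 12.5, 24.8])   # hypothetical values, tau ~ L^z
z, _ = np.polyfit(np.log(L), np.log(tau), 1)
print("dynamic exponent z ≈", z)
```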
Lecture Notes on Criticality Safety Validation Using MCNP & Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – Ck's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usage are discussed.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
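A minimal sketch of the rescaling idea in its simplest "white Monte Carlo" form (an assumption here, not necessarily the authors' exact scheme): simulate once with no absorption, store each detected photon's total path length, then obtain diffuse reflectance at any absorption coefficient by Beer-Lambert reweighting of the stored paths:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stored output of one baseline run: total path lengths (cm)
# of the photons that exited the tissue surface, out of n_launched photons.
path_lengths = rng.exponential(scale=1.5, size=200_000)
n_launched = 500_000

def diffuse_reflectance(mu_a):
    """Reflectance at absorption mu_a (1/cm) from the single stored run."""
    # Each stored photon is attenuated by exp(-mu_a * L) along its path.
    return np.sum(np.exp(-mu_a * path_lengths)) / n_launched

for mu_a in (0.01, 0.1, 1.0):
    print(mu_a, diffuse_reflectance(mu_a))
```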
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2016-04-01
We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to the total system size N = 20633239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use the generalized method of single-GPU-based computation for the Swendsen-Wang multi-cluster algorithm of Monte Carlo simulation. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitely confirm the duality relation between the critical temperatures on the dual pair of quasilattices with a high degree of accuracy, sinh(2J/Tc) sinh(2J/Tc*) = 1.00000 ± 0.00004.
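The quoted duality check is easy to reproduce from the two critical temperatures alone (with J = 1):

```python
import math

# Critical temperatures quoted above, in units with J = 1.
Tc, Tc_dual = 2.39781, 2.14987
print(math.sinh(2.0 / Tc) * math.sinh(2.0 / Tc_dual))  # ≈ 1.0000
```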
Finite-size scaling study of the two-dimensional Blume-Capel model
NASA Astrophysics Data System (ADS)
Beale, Paul D.
1986-02-01
The phase diagram of the two-dimensional Blume-Capel model is investigated by using the technique of phenomenological finite-size scaling. The location of the tricritical point and the values of the critical and tricritical exponents are determined. The location of the tricritical point (Tt = 0.610 ± 0.005, Dt = 1.9655 ± 0.0010) is well outside the error bars for the value quoted in previous Monte Carlo simulations but in excellent agreement with more recent Monte Carlo renormalization-group results. The values of the critical and tricritical exponents, with the exception of the leading thermal tricritical exponent, are in excellent agreement with previous calculations, conjectured values, and Monte Carlo renormalization-group studies.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na⁺, Cl⁻, and Ar on a personal computer to show that it is easily feasible to…
SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Alía, Rubén García; Besana, Maria Ilaria; Brugger, Markus; Cerutti, Francesco
2017-09-01
As part of its post-LHC high energy physics program, CERN is conducting a study for a new proton-proton collider, called the Future Circular Collider (FCC-hh), running at center-of-mass energies of up to 100 TeV in a new 100 km tunnel. The study includes a 90-350 GeV lepton collider (FCC-ee) as well as a lepton-hadron option (FCC-he). In this work, FLUKA Monte Carlo simulation was extensively used to perform a first evaluation of the radiation environment in critical areas for electronics in the FCC-hh tunnel. The model of the tunnel was created based on the original civil engineering studies already performed and further integrated in the existing FLUKA models of the beam line. The radiation levels in critical areas, such as racks for electronics and cables, power converters, service areas, and local tunnel extensions, were evaluated.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and spot-checking Monte Carlo results.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Critical and compensation phenomena in a mixed-spin ternary alloy: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Žukovič, M.; Bobák, A.
2010-10-01
By means of standard and histogram Monte Carlo simulations, we investigate the critical and compensation behaviour of a ternary mixed-spin alloy of the type ABₚC₁₋ₚ on a cubic lattice. We focus on the case with the parameters corresponding to the Prussian blue analog (NiᴵᴵₚMnᴵᴵ₁₋ₚ)1.5[Crᴵᴵᴵ(CN)₆]·nH₂O and confront our findings with those obtained by some approximative approaches and the experiments.
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.
Program summary
Program title: MonteCarloMaplet
Catalogue identifier: ADZU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3251
No. of bytes in distributed program, including test data, etc.: 296 465
Distribution format: tar.gz
Programming language: Maple 10
Computer: Acer Aspire 5610 (any running Maple 10)
Operating system: Windows XP professional (any running Maple 10)
Classification: 3.1, 5
Nature of problem: Simulate the transport of radiation in biological tissues.
Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, Maplesoft, a division of Waterloo Maple Inc., 2005].
Restrictions: Running time increases rapidly with the number of photons used in the simulation.
Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, after which the worksheet runs without problem; however, some of the windows of the maplet may still appear distorted.
Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. As well, Kiwi provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B Division.
Benchmarking study of the MCNP code against cold critical experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, S.
1991-01-01
The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
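A minimal sketch of this kind of statistical system-performance analysis, with a hypothetical pointing-error budget standing in for the unit's component disturbances and misalignments:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000

# Illustrative sketch: each component contributes a disturbance or
# misalignment drawn from its own statistics; the system metric is some
# function of all of them (a simple additive pointing-error model here,
# chosen only for illustration).
gyro_bias = rng.normal(0.0, 0.05, N)          # deg
mount_misalign = rng.uniform(-0.1, 0.1, N)    # deg
sensor_noise = rng.normal(0.0, 0.02, N)       # deg

pointing_error = np.abs(gyro_bias + mount_misalign + sensor_noise)

# Unbiased performance statistics via simulated random sampling:
print("mean error:", pointing_error.mean())
print("99th percentile:", np.percentile(pointing_error, 99))
print("P(error > 0.2 deg):", (pointing_error > 0.2).mean())
```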
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for the critical exponents β, …
Vapor-liquid equilibrium and critical asymmetry of square well and short square well chain fluids.
Li, Liyan; Sun, Fangfang; Chen, Zhitong; Wang, Long; Cai, Jun
2014-08-07
The critical behavior of square well fluids with variable interaction ranges and of short square well chain fluids has been investigated by grand canonical ensemble Monte Carlo simulations. The critical temperatures and densities were estimated by a finite-size scaling analysis with the help of the histogram reweighting technique. The vapor-liquid coexistence curve in the near-critical region was determined using hyper-parallel tempering Monte Carlo simulations. The simulation results for coexistence diameters show that the contribution of |t|^(1-α) to the coexistence diameter dominates the singular behavior in all systems investigated. The contribution of |t|^(2β) to the coexistence diameter is larger for systems with a smaller interaction range λ, while for short square well chain fluids, the longer the chain length, the larger the contribution of |t|^(2β). The molecular configuration greatly influences the critical asymmetry: a short soft chain fluid shows weaker critical asymmetry than a stiff chain fluid of the same chain length.
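The coexistence-diameter analysis described above amounts to fitting (ρ_l + ρ_v)/2 to competing singular terms. A sketch using scipy's curve_fit, with 3D Ising exponents held fixed and synthetic data standing in for simulation output:

```python
import numpy as np
from scipy.optimize import curve_fit

alpha, beta = 0.110, 0.326   # 3D Ising values, assumed fixed in the fit

def diameter(t, rho_c, a1, a2b, a1a):
    """Coexistence diameter (rho_l + rho_v)/2 near Tc, with t = (T - Tc)/Tc < 0."""
    at = np.abs(t)
    return rho_c + a1 * at + a2b * at**(2 * beta) + a1a * at**(1 - alpha)

# Synthetic "measured" diameters standing in for simulation results:
t = -np.linspace(0.005, 0.12, 30)
rng = np.random.default_rng(5)
data = diameter(t, 0.31, 0.4, 0.9, -0.5) + rng.normal(0, 1e-4, t.size)

popt, pcov = curve_fit(diameter, t, data, p0=(0.3, 0.1, 0.1, 0.1))
print("rho_c and amplitudes:", popt)
# The relative sizes of the |t|^(2*beta) and |t|^(1-alpha) amplitudes are
# what such a study uses to judge which singular term dominates the diameter.
```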
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are Tc* = 0.053, ρc* = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.
Physics of reactor safety. Quarterly report, January--March 1977. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1977-06-01
This report summarizes work done on reactor safety, Monte Carlo analysis of safety-related critical assembly experiments, and planning of DEMI safety-related critical experiments. Work on reactor core thermal-hydraulics is also included.
Dynamical critical exponent of the Jaynes-Cummings-Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohenadler, M.; Aichhorn, M.; Schmidt, S.
2011-10-15
An array of high-Q electromagnetic resonators coupled to qubits gives rise to the Jaynes-Cummings-Hubbard model describing a superfluid to Mott-insulator transition of lattice polaritons. From mean-field and strong-coupling expansions, the critical properties of the model are expected to be identical to the scalar Bose-Hubbard model. A recent Monte Carlo study of the superfluid density on the square lattice suggested that this does not hold for the fixed-density transition through the Mott lobe tip. Instead, mean-field behavior with a dynamical critical exponent z=2 was found. We perform large-scale quantum Monte Carlo simulations to investigate the critical behavior of the superfluid density and the compressibility. We find z=1 at the tip of the insulating lobe. Hence the transition falls in the three-dimensional XY universality class, analogous to the Bose-Hubbard model.
Nonequilibrium critical dynamics of the two-dimensional Ashkin-Teller model at the Baxter line
NASA Astrophysics Data System (ADS)
Fernandes, H. A.; da Silva, R.; Caparica, A. A.; de Felício, J. R. Drugowich
2017-04-01
We investigate the short-time universal behavior of the two-dimensional Ashkin-Teller model at the Baxter line by performing time-dependent Monte Carlo simulations. First, as preparatory results, we obtain the critical parameters by searching for the optimal power-law decay of the magnetization. Thus, the dynamic critical exponents θm and θp, related to the magnetic and electric order parameters, as well as the persistence exponent θg, are estimated using heat-bath Monte Carlo simulations. In addition, we estimate the dynamic exponent z and the static critical exponents β and ν for both order parameters. We propose a refined method to estimate the static exponents that considers two different averages: one that combines an internal average using several seeds with another, which is taken over temporal variations in the power laws. Moreover, we also performed the bootstrapping method for a complementary analysis. Our results show that the ratio β/ν exhibits universal behavior along the critical line, corroborating the conjecture for both magnetization and polarization.
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Um, Jaegon; Yi, Su Do; Kim, Beom Jun
2011-11-01
In a number of classical statistical-physical models, there exists a characteristic dimensionality called the upper critical dimension above which one observes the mean-field critical behavior. Instead of constructing high-dimensional lattices, however, one can also consider infinite-dimensional structures, and the question is whether this mean-field character extends to quantum-mechanical cases as well. We therefore investigate the transverse-field quantum Ising model on the globally coupled network and on the Watts-Strogatz small-world network by means of quantum Monte Carlo simulations and the finite-size scaling analysis. We confirm that both of the structures exhibit critical behavior consistent with the mean-field description. In particular, we show that the existing cumulant method has difficulty in estimating the correct dynamic critical exponent and suggest that an order parameter based on the quantum-mechanical expectation value can be a practically useful numerical observable to determine critical behavior when there is no well-defined dimensionality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, G.; Burgio, N.; Carta, M.
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French agency CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Virial coefficients and demixing in the Asakura-Oosawa model.
López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz
2015-01-07
The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.
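Truncating the virial series at B3 already illustrates how critical constants follow from the coefficients: imposing ∂P/∂ρ = ∂²P/∂ρ² = 0 on βP = ρ + B2ρ² + B3ρ³ gives ρc = −B2/(3B3) and the closure condition B2(Tc)² = 3B3(Tc). A sketch below uses the exact reduced square-well B2 and a placeholder B3 (a hypothetical stand-in; in practice higher coefficients come from analytic results or Monte Carlo integration as in the paper above):

```python
import numpy as np
from scipy.optimize import brentq

def B2(T):
    # Exact reduced second virial coefficient of a square-well fluid of
    # range lam, in units of b0 = 2*pi*sigma^3/3.
    lam = 1.5
    return 1.0 - (lam**3 - 1.0) * (np.exp(1.0 / T) - 1.0)

def B3(T):
    # Placeholder third virial coefficient (illustrative assumption only).
    return 0.6 + 0.9 / T

# Closure condition for the critical point of the B3-truncated series:
f = lambda T: B2(T)**2 - 3.0 * B3(T)
Tc = brentq(f, 0.5, 5.0)          # bracket chosen so f changes sign
rho_c = -B2(Tc) / (3.0 * B3(Tc))
print(f"Tc* ≈ {Tc:.3f}, rho_c* ≈ {rho_c:.3f}")
```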
ERIC Educational Resources Information Center
Kalkanis, G.; Sarris, M. M.
1999-01-01
Describes an educational software program for the study of and detection methods for the cosmic ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
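A minimal Python sketch of the Monte Carlo resampling approach for the Wilcoxon-Mann-Whitney case (the published programs are FORTRAN; this stand-in uses midranks, which compensate for tied values as the abstract describes):

```python
import numpy as np
from scipy.stats import rankdata

def wmw_mc_pvalue(x, y, n_resamples=100_000, seed=0):
    """Two-sided Monte Carlo resampling p-value for the rank-sum test."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled)                 # average (mid) ranks for ties
    n = len(x)
    observed = ranks[:n].sum()               # rank sum of the first sample
    mu = ranks.sum() * n / len(pooled)       # null mean of the rank sum
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(ranks)                   # random relabeling of the pool
        if abs(ranks[:n].sum() - mu) >= abs(observed - mu):
            count += 1
    return count / n_resamples

x = np.array([1.2, 2.3, 2.3, 3.1, 4.0])
y = np.array([2.8, 3.4, 3.4, 4.5, 5.1, 5.1])
print(wmw_mc_pvalue(x, y))
```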
Rai, Neeraj; Maginn, Edward J
2012-01-01
Atomistic Monte Carlo simulations are used to compute vapour-liquid coexistence properties of a homologous series of [C(n)mim][NTf2] ionic liquids, with n = 1, 2, 4, 6. Estimates of the critical temperatures range from 1190 K to 1257 K, with longer cation alkyl chains serving to lower the critical temperature. Other quantities such as the critical density, critical pressure, normal boiling point, and acentric factor are determined from the simulations. Vapour pressure curves and the temperature dependence of the enthalpy of vapourisation are computed and found to have a weak dependence on the length of the cation alkyl chain. The ions in the vapour phase are predominantly in single ion pairs, although a significant number of ions are found in neutral clusters of larger sizes as temperature is increased. It is found that previous estimates of the critical point obtained from extrapolating experimental surface tension data agree reasonably well with the predictions obtained here, but group contribution methods and primitive models of ionic liquids do not capture many of the trends observed in the present study.
Geometrically Constructed Markov Chain Monte Carlo Study of Quantum Spin-phonon Complex Systems
NASA Astrophysics Data System (ADS)
Suwa, Hidemaro
2013-03-01
We have developed novel Monte Carlo methods for precisely calculating quantum spin-boson models and investigated the critical phenomena of spin-Peierls systems. Three significant methods are presented. The first is a new optimization algorithm for the Markov chain transition kernel based on geometric weight allocation. This algorithm, for the first time, satisfies the total balance generally without imposing the detailed balance and always minimizes the average rejection rate, performing better than the Metropolis algorithm. The second is the extension of the worm (directed-loop) algorithm to non-conserved particles, which cannot be treated efficiently by conventional methods. The third is the combination with level spectroscopy. Proposing a new gap estimator, we succeed in eliminating the systematic error of the conventional moment method. We have thus elucidated the phase diagram and the universality class of the one-dimensional XXZ spin-Peierls system. The criticality is totally consistent with the J1-J2 model, an effective model in the antiadiabatic limit. Through this research, we have succeeded in investigating the critical phenomena of an effectively frustrated quantum spin system by the quantum Monte Carlo method without the negative sign problem. JSPS Postdoctoral Fellow for Research Abroad
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark, and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology, which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data-mining problems. Generalized hidden Markov models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean-field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up the atmospheric profile, making Mie tables and an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point spread function. Currently the procedure passes monochromatic tests and the results are reasonable.
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Reentrant behavior in the nearest-neighbor Ising antiferromagnet in a magnetic field
NASA Astrophysics Data System (ADS)
Neto, Minos A.; de Sousa, J. Ricardo
2004-12-01
Motivated by the H-T phase diagram of the bcc Ising antiferromagnet with nearest-neighbor interactions obtained by Monte Carlo simulation [Landau, Phys. Rev. B 16, 4164 (1977)], which shows a reentrant behavior at low temperature, with two critical temperatures at magnetic fields about 2% greater than the critical value Hc = 8J, we apply the effective field renormalization group (EFRG) approach to this model on three-dimensional lattices (simple cubic, sc, and body-centered cubic, bcc). We find that the critical curve TN(H) exhibits a maximum point around H ≃ Hc only in the bcc lattice case. We also discuss the critical behavior by the effective field theory in clusters with one (EFT-1) and two (EFT-2) spins, and a reentrant behavior is observed for the sc and bcc lattices. We have compared our EFRG results for the bcc lattice with Monte Carlo and series expansion, and we observe good agreement among the methods.
Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...
2014-11-01
This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
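A minimal sketch of the kind of probability-vs-value analysis described, with entirely hypothetical distributions for reserves, price, operating cost, and capital exposure:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Illustrative (hypothetical) input distributions:
reserves = rng.lognormal(mean=np.log(2.0e6), sigma=0.35, size=N)  # bbl
price = rng.normal(60.0, 8.0, size=N)                             # $/bbl
opex_frac = rng.uniform(0.30, 0.45, size=N)     # operating cost, fraction of revenue
capex = rng.triangular(40e6, 55e6, 80e6, size=N)                  # $

# Simple (undiscounted) NPV model, for illustration only:
revenue = reserves * price
npv = revenue * (1.0 - opex_frac) - capex

# Probability-vs-value relationships for the key yardstick:
for p in (10, 50, 90):
    print(f"P{p} NPV: ${np.percentile(npv, p)/1e6:,.1f}M")
print("P(NPV < 0):", (npv < 0).mean())
```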
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbic, Tomaz, E-mail: tomaz.urbic@fkkt.uni-lj.si; Dias, Cristiano L.
The thermodynamic and structural properties of the planar soft-site dumbbell fluid are examined by Monte Carlo simulations and integral equation theory. The dimers are built of two Lennard-Jones segments. Site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions for a range of elongations and densities, and the results are compared with Monte Carlo simulations. The critical parameters for selected types of dimers were also estimated. We analyze the influence of the bond length on the critical point and test the correctness of site-site integral equation theory with different closures. The integral equations can be used to predict the phase diagram of dimers whose molecular parameters are known.
Summing Feynman graphs by Monte Carlo: Planar ϕ³-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of (n - 1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γstr (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically. After August 3, 1988 the address will be: Cybernetics Council, Academy of Science, ul. Vavilova 40, 117333 Moscow, USSR.
Liu, Y; Zheng, Y
2012-06-01
Accurate determination of the proton dosimetric effect of tissue heterogeneity is critical in proton therapy. Proton beams have finite range, and consequently tissue heterogeneity plays a more critical role in proton therapy. The purpose of this study is to investigate the tissue heterogeneity effect in proton dosimetry based on anatomical Monte Carlo simulation using animal tissues. Animal tissues including a pig head and beef bulk were used in this study. Both the pig head and beef were scanned using a GE CT scanner with 1.25 mm slice thickness. A treatment plan was created using the CMS XiO treatment planning system (TPS) with a single proton spread-out Bragg peak (SOBP) beam. Radiochromic films were placed at the distal falloff region. Image guidance was used to align the phantom before proton beams were delivered according to the treatment plan. The same two CT sets were converted into Monte Carlo simulation models. The Monte Carlo dose calculations with and without tissue composition were compared to TPS calculations and measurements. Based on the preliminary comparison, at the center of the SOBP plane, the Monte Carlo dose without tissue composition agreed generally well with the TPS calculation. In the distal falloff region, the dose difference was large, and about a 2 mm isodose line shift was observed with the consideration of tissue composition. The detailed comparison of dose distributions between Monte Carlo simulation, TPS calculations, and measurements is underway. Accurate proton dose calculations are challenging in proton treatment planning for heterogeneous tissues. Tissue heterogeneity and tissue composition may lead to isodose line shifts of up to a few millimeters in the distal falloff region. By simulating detailed particle transport and energy deposition, Monte Carlo simulations provide a verification method for proton dose calculation where inhomogeneous tissues are present. © 2012 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
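The slow-but-general Monte Carlo route that the analytical approach is contrasted with can be sketched in a few lines: sample equal disks of radius r in a unit square, connect overlapping pairs with union-find, and measure the probability of a left-to-right spanning cluster as a function of the reduced density eta = n*pi*r^2 (the literature threshold for equal disks is eta_c ≈ 1.128). All numerical settings below are illustrative.

import numpy as np

def find(p, i):
    while p[i] != i:
        p[i] = p[p[i]]                 # path halving
        i = p[i]
    return i

def union(p, i, j):
    ri, rj = find(p, i), find(p, j)
    if ri != rj:
        p[ri] = rj

def spans(centers, r):
    """True if a cluster of overlapping disks connects x=0 to x=1."""
    n = len(centers)
    p = list(range(n + 2))             # two virtual nodes for the edges
    left, right = n, n + 1
    for i in range(n):
        if centers[i, 0] < r:
            union(p, i, left)
        if centers[i, 0] > 1 - r:
            union(p, i, right)
        for j in range(i + 1, n):
            if np.sum((centers[i] - centers[j]) ** 2) < (2 * r) ** 2:
                union(p, i, j)
    return find(p, left) == find(p, right)

rng = np.random.default_rng(0)
r = 0.05
for eta in (0.9, 1.128, 1.4):          # reduced density eta = n * pi * r^2
    n = int(eta / (np.pi * r ** 2))
    hits = sum(spans(rng.random((n, 2)), r) for _ in range(20))
    print(f"eta = {eta:5.3f}: spanning probability ~ {hits / 20:.2f}")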
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high-density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper addresses this complexity, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to accelerate the convergence of simulations, and periodic load balancing in its parallel version, to make maximal use of the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
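The core of a grand canonical Monte Carlo move of the kind BIOMCSIM emphasizes can be illustrated compactly. The sketch below is not BIOMCSIM's code (which is ANSI C); it shows the standard insertion/deletion acceptance rules for a Lennard-Jones fluid, with the thermal de Broglie wavelength set to 1 and no biasing or Ewald summation.

import numpy as np

def lj_energy_one(pos, all_pos, L, eps=1.0, sig=1.0):
    """Interaction energy of one particle with all others (periodic cube, side L)."""
    if len(all_pos) == 0:
        return 0.0
    d = all_pos - pos
    d -= L * np.round(d / L)                   # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    sr6 = (sig ** 2 / r2) ** 3
    return np.sum(4.0 * eps * (sr6 ** 2 - sr6))

def gcmc_step(coords, L, T, mu, rng):
    """One insertion/deletion attempt of grand canonical Metropolis MC."""
    V, N = L ** 3, len(coords)
    beta = 1.0 / T
    if rng.random() < 0.5:                     # attempt insertion
        new = rng.random(3) * L
        dU = lj_energy_one(new, np.array(coords), L)
        acc = V / (N + 1) * np.exp(beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            coords.append(new)
    elif N > 0:                                # attempt deletion
        i = int(rng.integers(N))
        dU = lj_energy_one(coords[i], np.array(coords[:i] + coords[i + 1:]), L)
        acc = N / V * np.exp(-beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            coords.pop(i)
    return coords

rng = np.random.default_rng(0)
coords = []
for _ in range(20_000):
    coords = gcmc_step(coords, L=8.0, T=2.0, mu=-3.0, rng=rng)
print("average particle number at this (mu, T):", len(coords))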
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Can we approach the gas-liquid critical point using slab simulations of two coexisting phases?
Goujon, Florent; Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J
2016-09-28
In this paper, we demonstrate that it is possible to approach the gas-liquid critical point of the Lennard-Jones fluid by performing simulations in a slab geometry using a cut-off potential. In the slab simulation geometry, it is essential to apply an accurate tail correction to the potential energy during the course of the simulation in order to study the properties of states close to the critical point. Using the Janeček slab-based method developed for two-phase Monte Carlo simulations [J. Janeček, J. Chem. Phys. 131, 6264 (2006)], the coexisting densities and surface tension in the critical region are reported as a function of the cutoff distance in the intermolecular potential. The results obtained using slab simulations are compared with those obtained using grand canonical Monte Carlo simulations of isotropic systems and finite-size scaling techniques. There is good agreement between these two approaches. The two-phase simulations can be used to approach the critical point for temperatures up to 0.97 Tc* (T* = 1.26). The critical-point exponents describing the dependence of the density, surface tension, and interfacial thickness on the temperature are calculated near the critical point.
NASA Astrophysics Data System (ADS)
Golonka, P.; Pierzchała, T.; Waş, Z.
2004-02-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as exclusive source of information on the generated events. Program summaryTitle of the program:MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used:C++, FORTRAN77: gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. of bytes in distributed program, including test data, etc.: 2 024 425 Distribution format: tar gzip file Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-pages booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations or for discussion of uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in the event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can be later compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and parameter quantifying shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle. 
On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing of the booklet takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
NASA Astrophysics Data System (ADS)
Cochran, Thomas
2007-04-01
In 2002 and again in 2003, an investigative journalist unit at ABC News transported a 6.8 kilogram metallic slug of depleted uranium (DU) via shipping container from Istanbul, Turkey to Brooklyn, NY and from Jakarta, Indonesia to Long Beach, CA. Targeted inspection of these shipping containers by Department of Homeland Security (DHS) personnel, which included the use of gamma-ray imaging, portal monitors, and hand-held radiation detectors, did not uncover the hidden DU. Monte Carlo analysis of the gamma-ray intensity and spectrum of a DU slug and one consisting of highly-enriched uranium (HEU) showed that DU was a proper surrogate for testing the ability of DHS to detect the illicit transport of HEU. Our analysis using MCNP-5 illustrated the ease of fully shielding an HEU sample to avoid detection. The assembly of an Improvised Nuclear Device (IND) -- a crude atomic bomb -- from sub-critical pieces of HEU metal was then examined via Monte Carlo criticality calculations. Nuclear explosive yields of such an IND as a function of the speed of assembly of the sub-critical HEU components were derived. A comparison was made between the more rapid assembly of sub-critical pieces of HEU in the ``Little Boy'' (Hiroshima) weapon's gun barrel and gravity assembly (i.e., dropping one sub-critical piece of HEU on another from a specified height). Based on the difficulty of detecting HEU and the straightforward construction of an IND utilizing HEU, current U.S. government policy must be modified to more urgently prioritize the elimination and securing of global HEU inventories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: Monte Carlo simulation's is mainly computational time; first-order analysis's are mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen, using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
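The comparison drawn here can be reproduced in miniature. The sketch below implements the ordinary (mean-value) first-order method against Monte Carlo simulation for the critical dissolved-oxygen deficit of the Streeter-Phelps model; the advanced method's moving linearization point is not implemented, and all input distributions are hypothetical.

import numpy as np

def critical_deficit(kd, ka, L0, D0=0.0):
    """Maximum DO deficit of the Streeter-Phelps model, found numerically:
    D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)."""
    t = np.linspace(0.0, 20.0, 801)                     # days
    D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
    return D.max()

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical lognormal inputs (means roughly: kd = 0.3/d, ka = 0.7/d, L0 = 15 mg/L)
kd = rng.lognormal(np.log(0.3), 0.2, n)
ka = rng.lognormal(np.log(0.7), 0.2, n)
ka = np.maximum(ka, kd * 1.05)     # guard against the ka -> kd singularity
L0 = rng.lognormal(np.log(15.0), 0.1, n)

dc_mc = np.array([critical_deficit(a, b, c) for a, b, c in zip(kd, ka, L0)])

# First-order method: linearize D_c about the mean inputs (finite differences),
# then propagate variances assuming independent inputs.
x0 = np.array([0.3, 0.7, 15.0])
f0 = critical_deficit(*x0)
grad = np.empty(3)
for i in range(3):
    dx = np.zeros(3); dx[i] = 1e-4 * x0[i]
    grad[i] = (critical_deficit(*(x0 + dx)) - f0) / dx[i]
var = (grad ** 2 * np.array([kd.var(), ka.var(), L0.var()])).sum()

print(f"Monte Carlo:  mean {dc_mc.mean():.2f}, std {dc_mc.std():.2f} mg/L")
print(f"First-order:  mean {f0:.2f}, std {np.sqrt(var):.2f} mg/L")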
Hiratsuka, Tatsumasa; Tanaka, Hideki; Miyahara, Minoru T
2017-01-24
We find the rule of capillary condensation from the metastable state in nanoscale pores based on the transition state theory. The conventional thermodynamic theories cannot achieve it because the metastable capillary condensation inherently includes an activated process. We thus compute argon adsorption isotherms on cylindrical pore models and atomistic silica pore models mimicking the MCM-41 materials by the grand canonical Monte Carlo and the gauge cell Monte Carlo methods and evaluate the rate constant for the capillary condensation by the transition state theory. The results reveal that the rate drastically increases with a small increase in the chemical potential of the system, and the metastable capillary condensation occurs for any mesopores when the rate constant reaches a universal critical value. Furthermore, a careful comparison between experimental adsorption isotherms and the simulated ones on the atomistic silica pore models reveals that the rate constant of the real system also has a universal value. With this finding, we can successfully estimate the experimental capillary condensation pressure over a wide range of temperatures and pore sizes by simply applying the critical rate constant.
Gutzwiller Monte Carlo approach for a critical dissipative spin model
NASA Astrophysics Data System (ADS)
Casteels, Wim; Wilson, Ryan M.; Wouters, Michiel
2018-06-01
We use the Gutzwiller Monte Carlo approach to simulate the dissipative XYZ model in the vicinity of a dissipative phase transition. This approach captures classical spatial correlations together with the full on-site quantum behavior while neglecting nonlocal quantum effects. By considering finite two-dimensional lattices of various sizes, we identify a ferromagnetic and two paramagnetic phases, in agreement with earlier studies. The greatly reduced numerical complexity of the Gutzwiller Monte Carlo approach facilitates efficient simulation of relatively large lattice sizes. The inclusion of the spatial correlations makes it possible to capture parts of the phase diagram that are completely missed by the widely applied Gutzwiller decoupling of the density matrix.
Monte-Carlo simulations of the clean and disordered contact process in three space dimensions
NASA Astrophysics Data System (ADS)
Vojta, Thomas
2013-03-01
The absorbing-state transition in the three-dimensional contact process with and without quenched randomness is investigated by means of Monte-Carlo simulations. In the clean case, a reweighting technique is combined with a careful extrapolation of the data to infinite time to determine with high accuracy the critical behavior in the three-dimensional directed percolation universality class. In the presence of quenched spatial disorder, our data demonstrate that the absorbing-state transition is governed by an unconventional infinite-randomness critical point featuring activated dynamical scaling. The critical behavior of this transition does not depend on the disorder strength, i.e., it is universal. Close to the disordered critical point, the dynamics is characterized by the nonuniversal power laws typical of a Griffiths phase. We compare our findings to the results of other numerical methods, and we relate them to a general classification of phase transitions in disordered systems based on the rare region dimensionality. This work has been supported in part by the NSF under grants no. DMR-0906566 and DMR-1205803.
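A minimal version of such a simulation, written here for the one-dimensional clean contact process rather than the three-dimensional systems of the abstract (the update rules are identical; only the neighbor choice changes), using the known 1D critical rate λc ≈ 3.29785. At criticality the active-site density decays as a power law, ρ(t) ~ t^(-δ) with δ ≈ 0.16 in 1D.

import numpy as np

def contact_process_1d(L=2000, lam=3.29785, t_max=50.0, seed=0):
    """Clean 1D contact process: active sites deactivate with rate 1 and
    infect a random neighbor with rate lam. Returns (time, density) samples."""
    rng = np.random.default_rng(seed)
    active = list(range(L))                    # start fully active
    is_active = np.ones(L, dtype=bool)
    t, out, next_sample = 0.0, [], 1.0
    while active and t < t_max:
        n = len(active)
        t += 1.0 / ((1.0 + lam) * n)           # Gillespie-style time increment
        k = rng.integers(n)
        site = active[k]
        if rng.random() < 1.0 / (1.0 + lam):   # spontaneous deactivation
            is_active[site] = False
            active[k] = active[-1]; active.pop()
        else:                                  # offspring attempt
            nb = (site + rng.choice((-1, 1))) % L
            if not is_active[nb]:
                is_active[nb] = True
                active.append(nb)
        if t >= next_sample:
            out.append((t, len(active) / L))
            next_sample *= 1.3
    return out

for t, rho in contact_process_1d():
    print(f"t = {t:7.2f}   rho = {rho:.4f}")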
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
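The mixed-language pattern described here can be sketched with Python's ctypes; the shared-library name and function signature below are hypothetical placeholders, not Mercury's or PyMercury's actual API.

import ctypes
import numpy as np

def load_kernel(path="./libtransport.so"):
    """Load a (hypothetical) compiled C transport kernel and declare its signature."""
    lib = ctypes.CDLL(path)
    lib.track_histories.restype = ctypes.c_double
    lib.track_histories.argtypes = [ctypes.c_int,
                                    ctypes.POINTER(ctypes.c_double),
                                    ctypes.c_int]
    return lib

def leakage_fraction(lib, n_histories, sigmas):
    """Run the compiled Monte Carlo kernel; return the estimated leakage fraction."""
    xs = np.ascontiguousarray(sigmas, dtype=np.float64)
    ptr = xs.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    return lib.track_histories(n_histories, ptr, len(xs))

# Interactive session: tweak inputs and re-run without recompiling the C code.
# lib = load_kernel()
# print(leakage_fraction(lib, 100_000, [0.5, 1.2, 0.8]))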
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance-reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
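A toy version of Monte Carlo system-reliability estimation with Weibull (non-constant) failure rates, here for a hypothetical 2-out-of-3 system; HARP's behavioral decomposition, fault-handling models, and variance-reduction techniques are beyond this sketch.

import numpy as np

rng = np.random.default_rng(7)
n = 200_000                                   # Monte Carlo system histories
t_mission = 1000.0                            # mission time, hours

# Hypothetical Weibull lifetimes (shape k > 1: increasing failure rate)
k, lam = 1.5, 5000.0
lifetimes = lam * rng.weibull(k, size=(n, 3))

failures = (lifetimes < t_mission).sum(axis=1)
system_fail = failures >= 2                   # system fails when 2 of 3 components fail
p_fail = system_fail.mean()
se = np.sqrt(p_fail * (1 - p_fail) / n)       # binomial standard error
print(f"P(system failure by {t_mission:.0f} h) = {p_fail:.5f} +/- {se:.5f}")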
The Use of Monte Carlo Techniques to Teach Probability.
ERIC Educational Resources Information Center
Newell, G. J.; MacFarlane, J. D.
1985-01-01
Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
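In the same spirit, a toy cricket example (all parameters invented for illustration): estimate the probability that a batter reaches a century before getting out, under a crude per-ball model driven by the random number generator.

import random

def century_probability(trials=100_000, p_out=0.03):
    """Toy model: each ball the batter is out with probability p_out, otherwise
    scores runs from a simple distribution; estimate P(score >= 100)."""
    centuries = 0
    for _ in range(trials):
        score = 0
        while score < 100:
            if random.random() < p_out:
                break                          # batter is out
            score += random.choices([0, 1, 2, 4, 6],
                                    weights=[35, 40, 10, 10, 5])[0]
        else:
            centuries += 1                     # reached 100 without getting out
    return centuries / trials

print(f"Estimated P(century) ~ {century_probability():.3f}")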
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure used to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.
2010-02-01
We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods. New version program summary: Program title: EvolFMC v.2 Catalogue identifier: AEFN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code) No. of bytes in distributed program, including test data, etc.: 412 752 Distribution format: tar.gz Programming language: C++ Computer: PC, Mac Operating system: Linux, Mac OS X RAM: Less than 256 MB Classification: 11.5 External routines: ROOT (http://root.cern.ch/drupal/) Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations. Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons. Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only. Unusual features: Modified-DGLAP evolutions included up to the NLO level. Additional comments: Technical precision established at 5×10. Running time: For 10^6 events at 100 GeV: DGLAP NLO: 27 s; C-type modified DGLAP NLO: 150 s (MacBook Pro with Mac OS X v.10.5.5, 2.4 GHz Intel Core 2 Duo, gcc 4.2.4, single thread).
Monte Carlo-based searching as a tool to study carbohydrate structure
USDA-ARS?s Scientific Manuscript database
A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
Measurement of the main and critical parameters for optimal laser treatment of heart disease
NASA Astrophysics Data System (ADS)
Kabeya, FB; Abrahamse, H.; Karsten, AE
2017-10-01
Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and present serious side effects. The true reason for laser treatment failure, or the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (here called heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, and the absorption and scattering coefficients of the target treatment tissue. This paper therefore proposes different, accurate methods for the measurement of these critical parameters to determine the optimal laser treatment of heart disease with a minimal risk of side effects. The results from the measurement of absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computational technique is a program based on random numbers (the Monte Carlo process) and probability statistics to track the propagation of photons through biological tissue.
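The kind of random-number photon-tracking computation referred to above can be sketched as a generic random-walk estimator of the absorbed-energy depth profile (a fluence-rate proxy) with isotropic scattering and implicit capture; the coefficient values below are illustrative stand-ins, not measured tissue properties.

import numpy as np

def photon_depth_dose(n_photons=10_000, mu_a=0.5, mu_s=10.0, depth=2.0,
                      nbins=40, seed=3):
    """Random-walk Monte Carlo in a semi-infinite tissue slab. mu_a and mu_s
    are the absorption and scattering coefficients (1/cm); returns absorbed
    photon weight per depth bin."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    tally = np.zeros(nbins)
    for _ in range(n_photons):
        z, w, uz = 0.0, 1.0, 1.0               # depth, weight, direction cosine
        while True:
            z += uz * (-np.log(rng.random()) / mu_t)   # sample a free flight
            if z < 0.0:
                break                          # escaped back through the surface
            absorbed = w * mu_a / mu_t         # implicit capture
            b = int(z / depth * nbins)
            if b < nbins:
                tally[b] += absorbed
            w -= absorbed
            if w < 1e-3:                       # Russian roulette termination
                if rng.random() < 0.1:
                    w *= 10.0
                else:
                    break
            uz = 2.0 * rng.random() - 1.0      # isotropic rescattering
    return tally / n_photons

profile = photon_depth_dose()
print("absorbed fraction in the first 0.25 cm:", profile[:5].sum().round(4))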
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in αS. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Stephen M
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of using SCALE/KENO-VI for criticality analyses; the SCALE/KENO-VI manual provides information on the use of SCALE/KENO-VI and all its modules. The primer also contains an appendix with sample input files.
Hybrid Defect Phase Transition: Renormalization Group and Monte Carlo Analysis
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Diep, H. T.
2010-03-01
For the q-state Potts model with 2 < q ≤ 4 on the square lattice with a defect line, the order parameter on the defect line jumps discontinuously from zero to a nonzero value while the defect energy varies continuously with the temperature at the critical temperature. Monte Carlo simulations (H. T. Diep, M. Kaufman, Phys. Rev. E 2009) of the q-state Potts model on a square lattice with a line of defects verify the renormalization group prediction (M. Kaufman, R. B. Griffiths, Phys. Rev. B 1982) of the occurrence of the hybrid transition on the defect line. This is interesting since for those q values the bulk transition is continuous. This hybrid (continuous-discontinuous) defect transition is induced by the infinite-range correlations at the bulk critical point.
NASA Astrophysics Data System (ADS)
Cheon, M.; Chang, I.
1999-04-01
The scaling behavior for a binary fragmentation of critical percolation clusters is investigated by a large-cell Monte Carlo real-space renormalization group method in two and three dimensions. We obtain accurate values of the critical exponents λ and φ describing the scaling of the fragmentation rate and the distribution of fragments' masses produced by a binary fragmentation. Our results for λ and φ show that the fragmentation rate is proportional to the size of the mother cluster, and the scaling relation σ = 1 + λ − φ, conjectured by Edwards et al. to be valid for all dimensions, is satisfied in two and three dimensions, where σ is the crossover exponent of the average cluster number in percolation theory, which excludes the other scaling relations.
Monte Carlo study of the honeycomb structure of anthraquinone molecules on Cu(111)
NASA Astrophysics Data System (ADS)
Kim, Kwangmoo; Einstein, T. L.
2011-06-01
Using Monte Carlo calculations of the two-dimensional (2D) triangular lattice gas model, we demonstrate a mechanism for the spontaneous formation of the honeycomb structure of anthraquinone (AQ) molecules on a Cu(111) plane. In our model long-range attractions play an important role, in addition to the long-range repulsions and short-range attractions proposed by Pawin, Wong, Kwon, and Bartels [Science 313, 961 (2006)]. We provide a global account of the possible combinations of long-range attractive coupling constants which lead to a honeycomb superstructure. We also provide the critical temperature of disruption of the honeycomb structure and compare the critical local coverage of AQs at which the honeycomb structure starts to form with the experimental observations.
Vojta, Thomas; Igo, John; Hoyos, José A
2014-07-01
We investigate the nonequilibrium phase transition of the disordered contact process in five space dimensions by means of optimal fluctuation theory and Monte Carlo simulations. We find that the critical behavior is of mean-field type, i.e., identical to that of the clean five-dimensional contact process. It is accompanied by off-critical power-law Griffiths singularities whose dynamical exponent z' saturates at a finite value as the transition is approached. These findings resolve the apparent contradiction between the Harris criterion, which implies that weak disorder is renormalization-group irrelevant, and the rare-region classification, which predicts unconventional behavior. We confirm and illustrate our theory by large-scale Monte Carlo simulations of systems with up to 70^5 sites. We also relate our results to a recently established general relation between the Harris criterion and Griffiths singularities [Phys. Rev. Lett. 112, 075702 (2014)], and we discuss implications for other phase transitions.
You Don't Have to Be: Feminist Literary Criticism in the High School.
ERIC Educational Resources Information Center
Willinsky, John
Feminist literary criticism seems to have the potential to bring new life to old standards taught in the high school English class even if the students are not themselves feminists. A feminist approach to literature instruction was first attempted using "This Is Just to Say" and "The Young Housewife" by William Carlos Williams…
Hemingway; A Collection of Critical Essays. Twentieth Century Views Series.
ERIC Educational Resources Information Center
Weeks, Robert P., Ed.
One of a series of works aimed at presenting contemporary critical opinion on major authors, this collection includes essays by Lillian Ross, Malcolm Crowley, E.M. Halliday, Harry Levin, Leslie Fiedler, D.H. Lawrence, Philip Young, Sean O'Faolain, Cleanth Brooks and Robert Penn Warren, Carlos Baker, Mark Spilka, Ray B. West, Jr., Nemi D'Agostino,…
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.
2011-01-01
Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients. PMID:21361208
Planning for connections in the long-term in Patagonia
Amy T. Austin
2009-01-01
Establishing a long-term ecological research program and research collaborations in northwestern Patagonia. A workshop in San Carlos de Bariloche, Argentina, January 2009. The relict flora of Gondwanda, the mystic nature of the windswept Patagonian steppe, the Andes mountains and the southern beech forests, all combined, made San Carlos de Bariloche the perfect setting...
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
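The fundamental idea the article refers to fits in a few lines: a random-walk Metropolis sampler needs only an unnormalized target density and an accept/reject rule. A minimal generic sketch (Python rather than any particular classroom package):

import numpy as np

def metropolis(target_logpdf, x0=0.0, steps=50_000, prop_sd=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, prop_sd^2) and accept with
    probability min(1, pi(x')/pi(x)). Returns the sampled chain."""
    rng = np.random.default_rng(seed)
    chain = np.empty(steps)
    x, logp = x0, target_logpdf(x0)
    accepted = 0
    for i in range(steps):
        x_new = x + prop_sd * rng.normal()
        logp_new = target_logpdf(x_new)
        if np.log(rng.random()) < logp_new - logp:   # Metropolis acceptance
            x, logp = x_new, logp_new
            accepted += 1
        chain[i] = x
    print(f"acceptance rate: {accepted / steps:.2f}")
    return chain

# Example target: an unnormalized two-component normal mixture
def log_target(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

samples = metropolis(log_target, prop_sd=2.5)[5_000:]   # drop burn-in
print("sample mean ~", samples.mean().round(2), "(symmetric target: ~0)")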
Recent advances and future prospects for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
Ezra Pound: A Collection of Critical Essays. Twentieth Century Views Series.
ERIC Educational Resources Information Center
Sutton, Walter, Ed.
One of a series of works aimed at presenting contemporary critical opinion on major authors, this collection includes essays by Walter Sutton, William Butler Yeats, William Carlos Williams, T. S. Eliot, F. R. Leavis, Hugh Kenner, M. L. Rosenthal, Forrest Read, David W. Evans, W. M. Frohock, Harold H. Watts, Earl Miner, Murray Schafer, J. P.…
San Carlos Apache Tribe - Energy Organizational Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
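The sampling-without-replacement discovery model described above is easy to sketch. The version below uses a lognormal parent population as a stand-in for the paper's three-parameter log gamma, and a single "discoverability" exponent beta as a stand-in for its two-parameter discovery process model; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(11)

# Synthetic parent population of field sizes
n_fields = 300
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=n_fields)

def simulate_discovery(sizes, beta=1.0, rng=rng):
    """Sample fields without replacement with probability proportional to
    size**beta, so larger fields tend to be discovered earlier."""
    remaining = list(range(len(sizes)))
    order = []
    while remaining:
        w = sizes[remaining] ** beta
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return sizes[order]

seq = simulate_discovery(sizes, beta=1.0)
print("mean size, first 30 discoveries :", seq[:30].mean().round(1))
print("mean size, last 30 discoveries  :", seq[-30:].mean().round(1))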
APS undulator and wiggler sources: Monte-Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte Carlo simulation are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning
2018-05-16
QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently accounts for the largest share of the computing resources used by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
Analysis of benchmark critical experiments with ENDF/B-VI data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. Jr.; Kahler, A.C.
1991-12-31
Several clean critical experiments were analyzed with ENDF/B-VI data to assess the adequacy of the data for U{sup 235}, U{sup 238} and oxygen. These experiments were (1) a set of homogeneous U{sup 235}-H{sub 2}O assemblies spanning a wide range of hydrogen/uranium ratio, and (2) TRX-1, a simple, H{sub 2}O-moderated Bettis lattice of slightly-enriched uranium metal rods. The analyses used the Monte Carlo program RCP01, with explicit three-dimensional geometry and detailed representation of cross sections. For the homogeneous criticals, calculated k{sub crit} values for large, thermal assemblies show good agreement with experiment. This supports the evaluated thermal criticality parameters for U{sup 235}. However, for assemblies with smaller H/U ratios, k{sub crit} values increase significantly with increasing leakage and flux-spectrum hardness. These trends suggest that leakage is underpredicted and that the resonance eta of the ENDF/B-VI U{sup 235} is too large. For TRX-1, reasonably good agreement is found with measured lattice parameters (reaction-rate ratios). Of primary interest is rho28, the ratio of above-thermal to thermal U{sup 238} capture. Calculated rho28 is 2.3 ({+-} 1.7) % above measurement, suggesting that U{sup 238} resonance capture remains slightly overpredicted with ENDF/B-VI. However, agreement is better than observed with earlier versions of ENDF/B.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte-Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte-Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10⁻⁴.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary. Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher) MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0) IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers Compaq Tru64 UNIX with Compaq C++ Compiler (cxx) SGI IRIX with MIPSpro C++ Compiler (CC) HP-UX with HP C++ Compiler (aCC) Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Constraints on Biogenic Emplacement of Crystalline Calcium Carbonate and Dolomite
NASA Astrophysics Data System (ADS)
Colas, B.; Clark, S. M.; Jacob, D. E.
2015-12-01
Amorphous calcium carbonate (ACC) is a biogenic precursor of the calcium carbonates forming the shells and skeletons of marine organisms, which are key components of the whole marine environment. Understanding carbonate formation is an essential prerequisite to quantifying the effect climate change and pollution have on marine populations. Water is a critical component of the structure of ACC and the key component controlling the stability of the amorphous state. Addition of small amounts of magnesium (1-5% of the calcium content) is known to promote the stability of ACC, presumably through stabilization of the hydrogen bonding network. Understanding the hydrogen bonding network in ACC is therefore fundamental to understanding its stability. Our approach is to use Monte-Carlo simulations constrained by X-ray and neutron scattering data to determine hydrogen bonding networks in ACC as a function of magnesium doping. We have successfully developed a synthesis protocol to make ACC, collected X-ray data suitable for determining Ca, Mg and O correlations, and collected neutron data, which give information on hydrogen/deuterium positions (the interaction of X-rays with hydrogen is too weak to constrain hydrogen atom positions with X-rays alone). The X-ray and neutron data are used to constrain reverse Monte-Carlo modelling of the ACC structure using the Empirical Potential Structure Refinement program, in order to yield a complete structural model of ACC including water molecule positions. We will present details of our sample synthesis and characterization methods, X-ray and neutron scattering data, and reverse Monte-Carlo simulation results, together with a discussion of the role of hydrogen bonding in ACC stability.
NASA Astrophysics Data System (ADS)
Wang, Xiaoyu; Schattner, Yoni; Berg, Erez; Fernandes, Rafael
The maximum transition temperature Tc observed in the phase diagrams of several unconventional superconductors takes place in the vicinity of a putative antiferromagnetic quantum critical point. This observation motivated the theoretical proposal that superconductivity in these systems may be driven by quantum critical fluctuations, which in turn can also promote non-Fermi liquid behavior. In this talk, we present a combined analytical and sign-problem-free Quantum Monte Carlo investigation of the spin-fermion model - a widely studied low-energy model for the interplay between superconductivity and magnetic fluctuations. By engineering a series of band dispersions that interpolate between near-nested and open Fermi surfaces, and by also varying the strength of the spin-fermion interaction, we find that the hot spots of the Fermi surface provide the dominant contribution to the pairing instability in this model. We show that the analytical expressions for Tc and for the pairing susceptibility, obtained within a large-N Eliashberg approximation to the spin-fermion model, agree well with the Quantum Monte Carlo data, even in the regime of interactions comparable to the electronic bandwidth. DE-SC0012336.
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries has been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
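For readers unfamiliar with the sampling step, the following sketch shows plain Latin hypercube sampling of independent, normally distributed nuisance factors. It is a generic illustration under stated assumptions, not the ANSWERS implementation; in particular, it omits the covariance (correlated-sampling) structure that the actual libraries encode.

    import numpy as np
    from scipy.stats import norm

    def latin_hypercube(n_samples, n_dims, rng=np.random.default_rng(0)):
        # One sample per equal-probability stratum in every dimension:
        # each column is a random permutation of the strata, jittered
        # uniformly within each stratum.
        strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
        return (strata + rng.random((n_samples, n_dims))) / n_samples

    # Map the uniform(0,1) design through inverse CDFs, e.g. to get
    # 300 sampled libraries of 5 hypothetical cross-section scale factors
    # with 2% relative (uncorrelated) uncertainty:
    factors = norm.ppf(latin_hypercube(300, 5), loc=1.0, scale=0.02)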
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
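The physics the entry refers to fits in a few lines. As a rough modern equivalent of such a BASIC program (a sketch, with illustrative refractive indices for water), one can locate the rainbow angle as the minimum of the total deviation of a ray undergoing one internal reflection:

    import numpy as np

    def deviation(i, n):
        # Total deviation for entry refraction, one internal reflection,
        # and exit refraction (primary bow); angles in radians.
        r = np.arcsin(np.sin(i) / n)      # Snell's law at the drop surface
        return np.pi + 2 * i - 4 * r

    i = np.radians(np.linspace(0.1, 89.9, 20000))
    for name, n in [("red", 1.331), ("violet", 1.343)]:
        d_min = deviation(i, n).min()
        print(name, "bow at ~%.1f deg" % (180.0 - np.degrees(d_min)))

The two indices yield minima near 42 and 41 degrees, reproducing both the rainbow angle and the dispersion-driven color ordering the article discusses.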
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
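A minimal version of the obstructed-walk simulation the review describes might look like the following sketch (all parameter values are illustrative): point obstacles block a fraction of lattice sites, blocked moves are rejected, and the mean-square displacement (MSD) is accumulated over walkers.

    import numpy as np

    rng = np.random.default_rng(0)
    L, c, steps, walkers = 256, 0.3, 1000, 200   # lattice, obstacle fraction
    obstacle = rng.random((L, L)) < c            # immobile point obstacles
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    msd = np.zeros(steps)
    for _ in range(walkers):
        while True:                              # start on an unblocked site
            pos = rng.integers(0, L, 2)
            if not obstacle[tuple(pos)]:
                break
        disp = np.zeros(2, dtype=int)            # unwrapped displacement
        for t in range(steps):
            step = moves[rng.integers(4)]
            trial = (pos + step) % L             # periodic boundaries
            if not obstacle[tuple(trial)]:       # blocked moves are rejected
                pos, disp = trial, disp + step
            msd[t] += (disp ** 2).sum()
    msd /= walkers   # sublinear growth in t signals anomalous diffusion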
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
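The engine underneath such a study is ordinary random-walk Metropolis sampling. A self-contained sketch (the target density and step size are placeholders, not the authors' model) is:

    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis(log_post, theta0, step, n_iter):
        # Random-walk Metropolis: Gaussian proposals, accepted with
        # probability min(1, posterior ratio).
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_iter, theta.size))
        for k in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[k] = theta
        return chain

    # Toy target: a Gaussian pseudo-posterior for one luminosity-law exponent.
    chain = metropolis(lambda t: -0.5 * (((t - 1.2) / 0.3) ** 2).sum(),
                       [0.0], 0.2, 20000)
    print(chain[5000:].mean(), chain[5000:].std())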
jTracker and Monte Carlo Comparison
NASA Astrophysics Data System (ADS)
Selensky, Lauren; SeaQuest/E906 Collaboration
2015-10-01
SeaQuest is designed to observe the characteristics and behavior of 'sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and a Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and the Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by the Monte Carlo. In this presentation, the results of this investigation and their potential effects on the software will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. 
The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
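To see why variance reduction pays off so dramatically, compare an analog photon history with one using implicit capture (survival biasing), one of the simplest such techniques. This is a generic 1D illustration, not XRMC's actual machinery:

    import numpy as np

    rng = np.random.default_rng(0)
    mu_t, albedo, slab = 1.0, 0.05, 5.0   # attenuation (1/cm), scatter fraction, thickness (cm)

    def transmitted_fraction(n_photons, implicit_capture):
        total = 0.0
        for _ in range(n_photons):
            x, mu, w = 0.0, 1.0, 1.0      # position, direction cosine, weight
            while True:
                x += mu * -np.log(rng.random()) / mu_t   # free flight
                if x >= slab:
                    total += w            # photon (or its weight) escapes
                    break
                if x < 0:
                    break                 # leaves through the front face
                if implicit_capture:
                    w *= albedo           # absorb a weight fraction, photon survives
                    if w < 1e-6:
                        break             # crude cutoff (production codes use roulette)
                elif rng.random() > albedo:
                    break                 # analog absorption kills the photon
                mu = 2.0 * rng.random() - 1.0   # isotropic scattering
        return total / n_photons

    print(transmitted_fraction(20000, False), transmitted_fraction(20000, True))

Both estimators target the same expectation, but the weighted version scores far more nonzero contributions; this is the effect the authors exploit, with far more sophisticated schemes, to gain orders of magnitude in efficiency.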
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for “Weaving All the Random Particles,” is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed source mode, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Surface critical behavior of thin Ising films at the ‘special point’
NASA Astrophysics Data System (ADS)
Moussa, Najem; Bekhechi, Smaine
2003-03-01
The critical surface phenomena of a magnetic thin Ising film are studied using a numerical Monte-Carlo method based on the Wolff cluster algorithm. As the surface coupling js = Js/J is varied, the phase diagram exhibits a special surface coupling jsp at which all films have a unique critical temperature Tc for arbitrary thickness n. In spite of this, the critical exponent of the surface magnetization at the special point is found to increase with n. Moreover, non-universal features as well as a dimensionality crossover from two- to three-dimensional behavior are found at this point.
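The Wolff algorithm the authors rely on flips an entire correlated cluster per move, which defeats critical slowing down near Tc. A minimal single-layer (2D) sketch, with J = kB = 1 and illustrative parameters, is:

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)
    L, T = 32, 2.269                      # lattice size, ~2D critical temperature
    spins = rng.choice([-1, 1], size=(L, L))
    p_add = 1.0 - np.exp(-2.0 / T)        # Wolff bond-activation probability

    def wolff_step(s):
        seed = (int(rng.integers(L)), int(rng.integers(L)))
        s0 = s[seed]
        cluster, queue = {seed}, deque([seed])
        while queue:
            i, j = queue.popleft()
            for n in (((i + 1) % L, j), ((i - 1) % L, j),
                      (i, (j + 1) % L), (i, (j - 1) % L)):
                if n not in cluster and s[n] == s0 and rng.random() < p_add:
                    cluster.add(n)
                    queue.append(n)
        for site in cluster:              # flip the whole cluster at once
            s[site] = -s0

    for _ in range(2000):
        wolff_step(spins)
    print("magnetization per spin:", abs(spins.sum()) / L**2)

A film version adds a finite stack of such layers, with the modified coupling js applied on the top and bottom planes.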
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonathan A. Webb; Indrajit Charit
2011-08-01
The critical mass and dimensions of simple geometries containing highly enriched uranium dioxide (UO2) and uranium mononitride (UN) encapsulated in tungsten-rhenium alloys are determined using MCNP5 criticality calculations. Spheres as well as cylinders with length to radius ratios of 1.82 are computationally built to consist of 60 vol.% fuel and 40 vol.% metal matrix. Within the geometries the uranium is enriched to 93 wt.% uranium-235 and the rhenium content within the metal alloy was modeled over a range of 0 to 30 at.%. The spheres containing UO2 were determined to have a critical radius of 18.29 cm to 19.11 cm and a critical mass ranging from 366 kg to 424 kg. The cylinders containing UO2 were found to have a critical radius ranging from 17.07 cm to 17.844 cm with a corresponding critical mass of 406 kg to 471 kg. Spheres engrained with UN were determined to have a critical radius ranging from 14.82 cm to 15.19 cm and a critical mass between 222 kg and 242 kg. Cylinders engrained with UN were determined to have a critical radius ranging from 13.811 cm to 14.155 cm with a corresponding critical mass of 245 kg to 267 kg. The critical geometries were also computationally submerged in a neutronically infinite medium of fresh water to determine the effects of rhenium addition on criticality accidents due to water submersion. The Monte Carlo analysis demonstrated that rhenium addition of up to 30 at.% can reduce the excess reactivity due to water submersion by up to $5.07 for UO2 fueled cylinders, $3.87 for UO2 fueled spheres and approximately $3.00 for UN fueled spheres and cylinders.
Upgrade of Irradiation Test Capability of the Experimental Fast Reactor Joyo
NASA Astrophysics Data System (ADS)
Sekine, Takashi; Aoyama, Takafumi; Suzuki, Soju; Yamashita, Yoshioki
2003-06-01
The JOYO MK-II core was operated from 1983 to 2000 as a fast neutron irradiation bed. In order to meet various requirements of irradiation tests for the development of FBRs, the JOYO upgrading project, named the MK-III program, was initiated. The irradiation capability of the MK-III core will be about four times larger than that of the MK-II core. Advanced irradiation test subassemblies, such as a capsule-type subassembly and an on-line instrumentation rig, are planned. As an innovative reactor safety system, an irradiation test of the Self-Actuated Shutdown System (SASS) will be conducted. In order to improve the accuracy of neutron fluence, the core management code system was upgraded, and a Monte Carlo code and Helium Accumulation Fluence Monitors (HAFM) were applied. The MK-III core is planned to achieve initial criticality in July 2003.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary. Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has a LGPL ( http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
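The inner loop such photon-transport codes parallelize is small: an exponential free-flight step and a draw from the Henyey-Greenstein phase function. A sketch of those two kernels (standard formulas, not the Phoogle source) is:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_hg(g):
        # Cosine of the scattering angle from the Henyey-Greenstein
        # phase function; g is the anisotropy factor (mean cosine).
        if g == 0.0:
            return 2.0 * rng.random() - 1.0
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - s * s) / (2.0 * g)

    mu_s, mu_a, g = 10.0, 0.1, 0.9        # scattering, absorption (1/mm), anisotropy
    mu_t = mu_s + mu_a
    steps = -np.log(rng.random(100000)) / mu_t          # free-flight lengths
    cosines = np.array([sample_hg(g) for _ in range(100000)])
    print(steps.mean(), cosines.mean())   # ~1/mu_t and ~g, as expected

Because each photon history is independent, mapping one history (or one batch) to each GPU thread is what yields the reported 70-fold speedup.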
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign-problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
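For context (a standard identity, not a result of the paper): the brute-force alternative to thimble methods is to reweight by the phase, sampling with the real part of the action,

    \langle \mathcal{O} \rangle
      = \frac{\langle \mathcal{O}\, e^{-i \operatorname{Im} S} \rangle_{\operatorname{Re} S}}
             {\langle e^{-i \operatorname{Im} S} \rangle_{\operatorname{Re} S}},
    \qquad
    \langle \cdot \rangle_{\operatorname{Re} S}
      \equiv \frac{\int \mathcal{D}\phi\, (\cdot)\, e^{-\operatorname{Re} S(\phi)}}
                  {\int \mathcal{D}\phi\, e^{-\operatorname{Re} S(\phi)}} .

The denominator (the average phase) typically decays exponentially with volume and inverse temperature; deforming the integration manifold so that Im S is nearly stationary keeps it of order one, which is the motivation for the interpolated manifolds described above.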
DOE Office of Scientific and Technical Information (OSTI.GOV)
EMAM, M; Eldib, A; Lin, M
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and shown to be accurate compared to the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program save time and give better control of the planning process.
McStas 1.1: a tool for building neutron Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.
2000-03-01
McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The GIBS software program is a Grand Canonical Monte Carlo (GCMC) simulation program (written in C++) that can be used (1) to compute the excess chemical potential of ions and the mean activity coefficients of salts in homogeneous electrolyte solutions and (2) to compute the distribution of ions around fixed macromolecules such as nucleic acids and proteins. The solvent can be represented as neutral hard spheres or as a dielectric continuum. The ions are represented as charged hard spheres that can interact via Coulomb, hard-sphere, or Lennard-Jones potentials. In addition to hard-sphere repulsions, the ions can also be made to interact with the solvent hard spheres via short-ranged attractive square-well potentials.
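For orientation, the standard textbook GCMC acceptance rules (in the Frenkel-Smit form; GIBS's precise expressions may differ in detail) for inserting or deleting one particle at chemical potential mu are

    P_{\text{insert}} = \min\!\left[1,\;
        \frac{V}{\Lambda^{3}(N+1)}\; e^{\beta(\mu - \Delta U)}\right],
    \qquad
    P_{\text{delete}} = \min\!\left[1,\;
        \frac{\Lambda^{3} N}{V}\; e^{-\beta(\mu + \Delta U)}\right],

where N is the current particle count, V the volume, Λ the thermal de Broglie wavelength, β = 1/kBT, and ΔU the potential-energy change of the trial move.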
Capabilities overview of the MORET 5 Monte Carlo code
NASA Astrophysics Data System (ADS)
Cochet, B.; Jinaphanh, A.; Heulers, L.; Jacquet, O.
2014-06-01
The MORET code is a simulation tool that solves the transport equation for neutrons using the Monte Carlo method. It allows users to model complex three-dimensional geometrical configurations, describe the materials, and define their own tallies in order to analyse the results. The MORET code was initially designed to perform calculations for criticality safety assessments. New features have been introduced in the MORET 5 code to expand its use to reactor applications. This paper presents an overview of the MORET 5 code capabilities, going through the description of materials, the geometry modelling, the transport simulation and the definition of the outputs.
An information-carrying and knowledge-producing molecular machine. A Monte-Carlo simulation.
Kuhn, Christoph
2012-02-01
The concept called Knowledge is a measure of the quality of genetically transferred information. Its usefulness is demonstrated quantitatively in a Monte-Carlo simulation of critical steps in an origin-of-life model. The model describes the origin of a bio-like genetic apparatus through a long sequence of physical-chemical steps: it starts with the presence of a self-replicating oligomer and a specifically structured environment in time and space that allow for the formation of aggregates such as assembler-hairpins devices and, at a later stage, an assembler-hairpins-enzyme device, a first translation machine.
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of the quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimizations were developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.
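Stripped of the GPU machinery, the variational Monte Carlo kernel being parallelized is a Metropolis walk on |psi|^2. A one-particle toy version (a 1D harmonic oscillator with trial wavefunction exp(-alpha x^2); illustrative only, not the helium code) is:

    import numpy as np

    rng = np.random.default_rng(0)

    def vmc_energy(alpha, n_steps=200000, delta=1.0):
        # Metropolis sampling of |psi|^2 with psi = exp(-alpha x^2);
        # hbar = m = omega = 1, exact ground state at alpha = 0.5.
        x, e_sum = 0.0, 0.0
        for _ in range(n_steps):
            x_new = x + delta * (rng.random() - 0.5)
            if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
                x = x_new
            # local energy E_L = alpha + x^2 (1/2 - 2 alpha^2)
            e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
        return e_sum / n_steps

    for a in (0.3, 0.5, 0.7):
        print(a, round(vmc_energy(a), 4))   # minimum (0.5) at alpha = 0.5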
Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School
NASA Astrophysics Data System (ADS)
Lafage, Vincent
2017-11-01
Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX) and manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo program illustrates the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation is addressed. We also give a short report on the one-week school dedicated to these recent approaches, held at École Polytechnique in May 2016.
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 108 radiative events for itest:=1 takes up to 52 seconds on Pentium(R) Dual-Core 2.00 GHz processor.
NASA Astrophysics Data System (ADS)
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used to examine particular areas of the body and further the understanding of light propagation through tissue. This work seeks to improve upon established simulation methods through more accurate representations of the simulated tissues in the wrist as well as of the characteristics of the light source. The Monte Carlo simulation program was developed in Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file and read into the program. The light source was modified to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that these more accurate models generate results that more closely match those seen in vivo and can be used to better guide the design of optical wrist-worn measurement devices.
A Comparison of Electrolytic Capacitors and Supercapacitors for Piezo-Based Energy Harvesting
2013-07-01
Ervin, Matthew H.; Pereira, Carlos M.; John R…
Sensors and Electronic Devices Directorate, ARL
NASA Astrophysics Data System (ADS)
Bouachraoui, Rachid; El Hachimi, Abdel Ghafour; Ziat, Younes; Bahmad, Lahoucine; Tahiri, Najim
2018-06-01
The electronic and magnetic properties of hexagonal iron(II) sulfide (hexagonal FeS) have been investigated by combining density functional theory (DFT) and Monte Carlo simulations (MCS). This compound consists of a magnetic hexagonal lattice occupied by Fe2+ ions with spin S = 2. Using an ab initio method, we calculated the exchange coupling JFe-Fe between two magnetic Fe atoms in different directions. Phase transitions, magnetic stability and magnetizations have also been investigated in the framework of Monte Carlo simulations. Within this method, a second phase transition is observed at the Néel temperature TN = 450 K, a finding in good agreement with data reported in the literature. Varying the applied parameters shows how they affect the critical temperature of this system. Moreover, we studied the density of states and found that hexagonal FeS is a promising material for spintronic applications.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The comparison shows that there is no agreement among Monte Carlo results obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.
Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Gavilán Arriazu, E. M.; Pinto, O. A.
2018-03-01
We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species able to block a percentage of the adsorption sites, generating an electrode with variable site connectivity. A second, electroactive species adsorbs at the surface vacancies and can interact repulsively with itself. In particular, we are interested in the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the Grand Canonical Ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and the configurational entropy per site at several interaction energies and coverage degrees of the non-electroactive species.
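A bare-bones version of such a simulation is short. The sketch below (illustrative parameters, not the authors' code) blocks a random fraction of sites irreversibly and then runs grand-canonical Metropolis dynamics for a repulsive lattice gas on the remaining sites:

    import numpy as np

    rng = np.random.default_rng(0)
    L, theta_b, w, beta, mu = 64, 0.2, 1.0, 2.0, 2.0   # lattice, blocked fraction,
                                                       # NN repulsion, 1/kT, chem. potential
    blocked = rng.random((L, L)) < theta_b             # non-electroactive species
    occ = np.zeros((L, L), dtype=int)                  # electroactive adsorbate

    def neighbors(i, j):
        return ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)

    def gcmc_sweep():
        for _ in range(L * L):
            i, j = rng.integers(0, L, 2)
            if blocked[i, j]:
                continue                               # blocked sites never adsorb
            nn = sum(occ[n] for n in neighbors(i, j))
            d = w * nn - mu                            # grand-potential change of insertion
            dE = d if occ[i, j] == 0 else -d           # deletion is the reverse move
            if rng.random() < np.exp(-beta * dE):
                occ[i, j] ^= 1

    for _ in range(500):
        gcmc_sweep()
    print("coverage of free sites:", occ.sum() / (L * L - blocked.sum()))

Sweeping mu plays the role of the electrode potential here; on the unblocked lattice the c(2 × 2) ordering shows up as a plateau in the mean coverage near one half.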
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped-element models do a poor job of predicting THz frequency performance. This paper describes a large-signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle-field Monte Carlo simulation, with ohmic and Schottky barrier boundary conditions included, that has been combined with a fixed-point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.
Decrepitation and crack healing of fluid inclusions in San Carlos olivine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wanamaker, B.J.; Wong, Tengfong; Evans, B.
1990-09-10
Fluid inclusions break, or decrepitate, when the fluid pressure exceeds the least principal lithostatic stress by a critical amount. After decrepitation, excess fluid pressure is relaxed, resulting in crack arrest; subsequently, crack healing may occur. The authors developed a linear elastic fracture mechanics model to analyze new data on decrepitation and crack arrest in San Carlos olivine, compared the model with previous fluid inclusion investigations, and used it to interpret some natural decrepitation microstructures. The common experimental observation that smaller inclusions may sustain higher internal fluid pressures without decrepitating may be rationalized by assuming that flaws associated with the inclusion scale with the inclusion size. According to the model, the length of the crack formed by decrepitation depends on the lithostatic pressure at the initiation of cracking, the initial sizes of the flaw and the inclusion, and the critical stress intensity factor. Further experiments show that microcracks in San Carlos olivine heal within several days at 1,280 to 1,400{degree}C; healing rates depend on the crack geometry, temperature, and chemistry of the buffering gas. The regression distance of the crack tip during healing can be related to time through a power law with exponent n = 0.6. Chemical changes which become apparent after extremely long heat-treatments significantly affect the healing rates. Many of the inclusions in the San Carlos xenoliths stretched, decrepitated, and finally healed during uplift. The crack arrest model indicates that completely healed cracks had an initial fluid pressure of the order of 1 GPa. Using the crack arrest model and the healing kinetics, they estimate the ascent rate of these xenoliths to be between 0.001 and 0.1 m/s.
LCG MCDB—a knowledgebase of Monte-Carlo simulated events
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.
2008-02-01
In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary. Program title: LCG Monte-Carlo Data Base Catalogue identifier: ADZX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 30 129 No. of bytes in distributed program, including test data, etc.: 216 943 Distribution format: tar.gz Programming language: Perl Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb Operating system: Scientific Linux CERN 3/4 RAM: 1 073 741 824 bytes (1 Gb) Classification: 9 External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional) Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with comprehensive documentation, this can speed up cross checks of the samples themselves and of the physical models applied. Some event samples require a lot of computing resources to prepare, so central storage of the samples prevents the waste of researcher time and computing resources that would otherwise be spent preparing the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server ( http://mcdb.cern.ch). All event samples are kept on tapes at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is adapted to solve the problems described in the article, and there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system.
Different large-capacity external storage systems can be used to keep the files. The web content management system provides all of the necessary interfaces for file authors, end users, and administrators. Running time: Real-time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.
Trade Space Analysis: Rotational Analyst Research Project
2015-09-01
…response surface method (RSM) / response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively…
PCDAQ, A Windows Based DAQ System
NASA Astrophysics Data System (ADS)
Hogan, Gary
1998-10-01
PCDAQ is a Windows NT based general DAQ/Analysis/Monte Carlo shell developed as part of the Proton Radiography project at LANL (Los Alamos National Laboratory). It has been adopted by experiments outside of the Proton Radiography project at Brookhaven National Laboratory (BNL) and at LANL. The program provides DAQ, Monte Carlo, and replay (disk file input) modes. Data can be read from hardware (CAMAC) or other programs (ActiveX servers). Future versions will read VME. User supplied data analysis routines can be written in Fortran, C++, or Visual Basic. Histogramming, testing, and plotting packages are provided. Histogram data can be exported to spreadsheets or analyzed in user supplied programs. Plots can be copied and pasted as bitmap objects into other Windows programs or printed. A text database keyed by the run number is provided. Extensive software control flags are provided so that the user can control the flow of data through the program. Control flags can be set either in script command files or interactively. The program can be remotely controlled and data accessed over the Internet through its ActiveX DCOM interface.
NASA Technical Reports Server (NTRS)
1982-01-01
A FORTRAN-coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage are described. A Monte Carlo approach is used, made more efficient by using closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.
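The structure of such a routine is simple to sketch. The following toy version (dispersion magnitudes and the closed-form impulse relation are invented for illustration, not taken from the report) perturbs a few inputs, applies a closed-form fuel estimate per trial, and histograms the result:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10000   # Monte Carlo trials

    # Hypothetical 1-sigma dispersions on the disturbance inputs:
    misalign = rng.normal(0.0, 0.25, n)     # thrust misalignment, deg
    unbalance = rng.normal(0.0, 0.02, n)    # static unbalance offset, m
    isp = rng.normal(230.0, 5.0, n)         # specific impulse, s

    # Closed-form stand-in: required impulse grows with disturbance size.
    impulse = 5000.0 * (1.0 + 0.8 * np.abs(misalign) + 12.0 * np.abs(unbalance))
    fuel = impulse / (9.81 * isp)           # kg, from I = m * g0 * Isp

    hist, edges = np.histogram(fuel, bins=30)
    print("mean %.2f kg, 99th percentile %.2f kg"
          % (fuel.mean(), np.percentile(fuel, 99)))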
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
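The validation idea lends itself to a compact sketch: place an elliptical hot spot with random center and orientation relative to a square sampling grid and count how often at least one grid node lands inside it. The geometry and parameters below are illustrative, not the ELIPGRID-PC test cases.

```python
import numpy as np

def detection_probability(a, b, spacing, n_trials=20_000, seed=1):
    """Monte Carlo estimate of the chance that a square sampling grid hits an
    elliptical hot spot with semi-axes a, b (same length units as spacing)."""
    rng = np.random.default_rng(seed)
    # Grid nodes near the origin; the hot-spot center is uniform in one cell,
    # so a small patch of nodes suffices when a and b are modest.
    k = int(np.ceil(max(a, b) / spacing)) + 1
    gx, gy = np.meshgrid(np.arange(-k, k + 1) * spacing,
                         np.arange(-k, k + 1) * spacing)
    hits = 0
    for _ in range(n_trials):
        cx, cy = rng.uniform(0, spacing, 2)   # center within one grid cell
        theta = rng.uniform(0, np.pi)         # random orientation
        dx, dy = gx - cx, gy - cy
        u = dx * np.cos(theta) + dy * np.sin(theta)
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        if np.any((u / a) ** 2 + (v / b) ** 2 <= 1.0):
            hits += 1
    return hits / n_trials

# Thin ellipse inside a square grid -- the hard case noted in the abstract.
print(detection_probability(a=1.0, b=0.1, spacing=1.5))
```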
NASA Technical Reports Server (NTRS)
Hazelrigg, G. A., Jr.
1976-01-01
A variety of economic and programmatic issues are discussed concerning the development and deployment of a fleet of space-based solar power satellites (SSPS). The costs, uncertainties, and risks associated with the current photovoltaic SSPS configuration, along with issues affecting the development of an economically viable SSPS program, are analyzed. The desirability of a low earth orbit (LEO) demonstration satellite and a geosynchronous (GEO) pilot satellite is examined and critical technology areas are identified. In addition, a preliminary examination of utility interface issues is reported. The main focus of the effort reported is the development of SSPS unit production and operation and maintenance cost models suitable for incorporation into a risk assessment (Monte Carlo) model (RAM). It is shown that the key technology area deals with the productivity of man in space, not, as might be expected, with some hardware component technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamb, J; Lee, C; Tee, S
2014-06-15
Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, -0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication probabilities as well as in clinical situations where target volumes are directly inferior to mobile critical structures.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
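A sketch in the spirit of the program outlined above, assuming a simple 1D walk: drift plus unit steps, with a boundary that either reflects or absorbs the tracer. All parameters are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_walk(n_steps=1000, drift=0.1, boundary=20.0, mode="reflect"):
    """1D random walk with drift; the boundary either reflects or absorbs."""
    x = 0.0
    for _ in range(n_steps):
        x += drift + rng.choice((-1.0, 1.0))
        if x >= boundary:
            if mode == "absorb":
                return x, True        # tracer captured at the wall
            x = 2 * boundary - x      # elastic reflection
    return x, False

walks = [random_walk(mode="reflect")[0] for _ in range(2000)]
print(f"mean position {np.mean(walks):.2f}, spread {np.std(walks):.2f}")
```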
Space-based solar power conversion and delivery systems study. Volume 5: Economic analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Space-based solar power conversion and delivery systems are studied along with a variety of economic and programmatic issues relevant to their development and deployment. The costs, uncertainties, and risks associated with the current photovoltaic Satellite Solar Power System (SSPS) configuration, and issues affecting the development of an economically viable SSPS development program, are addressed. In particular, the desirability of low earth orbit (LEO) and geosynchronous (GEO) test satellites is examined and critical technology areas are identified. The development of SSPS unit production (nth item) and operation and maintenance cost models suitable for incorporation into a risk assessment (Monte Carlo) model (RAM) is reported. The RAM was then used to evaluate the expected costs and cost-risk associated with the current SSPS configuration. By examining differential costs and cost-risk as a function of postulated technology developments, the critical technologies, that is, those which drive costs and/or cost-risk, are identified. It is shown that the key technology area deals with productivity in space, that is, the ability to fabricate and assemble large structures in space, not, as might be expected, with some hardware component technology.
NASA Astrophysics Data System (ADS)
Kolesik, Miroslav; Suzuki, Masuo
1995-02-01
The antiferromagnetic three-state Potts model on the simple-cubic lattice is studied using the coherent-anomaly method (CAM). The CAM analysis provides the estimates for the critical exponents which indicate the XY universality class, namely α = -0.011, β = 0.351, γ = 1.309 and δ = 4.73. This observation corroborates the results of the recent Monte Carlo simulations, and disagrees with the proposal of a new universality class.
Testing algorithms for critical slowing down
NASA Astrophysics Data System (ADS)
Cossu, Guido; Boyle, Peter; Christ, Norman; Jung, Chulwoo; Jüttner, Andreas; Sanfilippo, Francesco
2018-03-01
We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space per trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors* is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numerical reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rikvold, Per Arne; Brown, Gregory; Miyashita, Seiji
Phase diagrams and hysteresis loops were obtained by Monte Carlo simulations and a mean-field method for a simplified model of a spin-crossover material with a two-step transition between the high-spin and low-spin states. This model is a mapping onto a square-lattice S = 1/2 Ising model with antiferromagnetic nearest-neighbor and ferromagnetic Husimi-Temperley (equivalent-neighbor) long-range interactions. Phase diagrams obtained by the two methods for weak and strong long-range interactions are found to be similar. However, for intermediate-strength long-range interactions, the Monte Carlo simulations show that tricritical points decompose into pairs of critical end points and mean-field critical points surrounded by horn-shaped regions of metastability. Hysteresis loops along paths traversing the horn regions are strongly reminiscent of thermal two-step transition loops with hysteresis, recently observed experimentally in several spin-crossover materials. As a result, we believe analogous phenomena should be observable in experiments and simulations for many systems that exhibit competition between local antiferromagnetic-like interactions and long-range ferromagnetic-like interactions caused by elastic distortions.
Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity
NASA Astrophysics Data System (ADS)
Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.
2018-07-01
The paper presents an analysis of the experimental parameters involved in the application of the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the consequently obtainable uncertainty. Glycerol was selected as the testing liquid, and its thermal diffusivity was evaluated over the temperature range from −20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum detection approach to processing thermal response data gives the results closest to the reference data, inasmuch as nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as revealed by the Monte Carlo simulations; through its correction, this systematic error can be reduced to a negligible value, about 0.8%.
Properties of the two-dimensional heterogeneous Lennard-Jones dimers: An integral equation study
Urbic, Tomaz
2016-01-01
Structural and thermodynamic properties of a planar heterogeneous soft dumbbell fluid are examined using Monte Carlo simulations and integral equation theory. Lennard-Jones particles of different sizes are the building blocks of the dimers. The site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions and the thermodynamic properties. Obtained results are compared to Monte Carlo simulation data. The critical parameters for selected types of dimers were also estimated and the influence of the Lennard-Jones parameters was studied. We have also tested the correctness of the site-site integral equation theory using different closures. PMID:27875894
Granato, Enzo
2008-07-11
Phase coherence and vortex order in a Josephson-junction array at irrational frustration are studied by extensive Monte Carlo simulations using the parallel-tempering method. A scaling analysis of the correlation length of phase variables in the fully equilibrated system shows that the critical temperature vanishes with a power-law divergent correlation length and critical exponent ν_ph, in agreement with recent results from resistivity scaling analysis. A similar scaling analysis for vortex variables reveals a different critical exponent ν_v, suggesting that there are two distinct correlation lengths associated with a decoupled zero-temperature phase transition.
Equilibrium and nonequilibrium models on Solomon networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2016-05-01
We investigate the critical properties of equilibrium and nonequilibrium systems on Solomon networks. The equilibrium and nonequilibrium systems studied here are the Ising and majority-vote models, respectively. These systems are simulated by applying the Monte Carlo method. We calculate the critical points, as well as the critical exponent ratios γ/ν, β/ν, and 1/ν. We find that both systems have identical exponents on Solomon networks and belong to a different universality class than the regular two-dimensional ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up-down symmetry on regular lattices.
Critical temperature of the Ising ferromagnet on the fcc, hcp, and dhcp lattices
NASA Astrophysics Data System (ADS)
Yu, Unjong
2015-02-01
By an extensive Monte Carlo calculation together with finite-size scaling and the multiple histogram method, the critical coupling constants (Kc = J/kB Tc) of the Ising ferromagnet on the fcc, hcp, and double hcp (dhcp) lattices were obtained with unprecedented precision: Kc(fcc) = 0.1020707(2), Kc(hcp) = 0.1020702(1), and Kc(dhcp) = 0.1020706(2). The critical temperature Tc of the hcp lattice is found to be higher than those of the fcc and dhcp lattices. The dhcp lattice seems to have a higher Tc than the fcc lattice, but the difference is within error bars.
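For orientation, a bare-bones Metropolis kernel for the Ising ferromagnet is sketched below on a simple-cubic lattice (the study itself treats the fcc, hcp, and dhcp lattices and extracts Kc via finite-size scaling and multiple-histogram analysis, which this sketch omits). The coupling used is the well-known literature estimate of Kc for the simple-cubic lattice.

```python
import numpy as np

rng = np.random.default_rng(3)
L, K = 8, 0.2216544          # simple-cubic lattice; K = J/(kB T) near its Kc
spins = rng.choice((-1, 1), size=(L, L, L))

def neighbor_sum(s, i, j, k):
    """Sum of the six nearest neighbors with periodic boundaries."""
    return (s[(i+1) % L, j, k] + s[(i-1) % L, j, k] +
            s[i, (j+1) % L, k] + s[i, (j-1) % L, k] +
            s[i, j, (k+1) % L] + s[i, j, (k-1) % L])

for sweep in range(200):
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, 3)
        dE = 2 * spins[i, j, k] * neighbor_sum(spins, i, j, k)  # in units of J
        if dE <= 0 or rng.random() < np.exp(-K * dE):           # Metropolis rule
            spins[i, j, k] *= -1

print("magnetization per spin:", spins.mean())
```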
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirlepesov, F.; Shin, J.; Moskvin, V. P.
Purpose: Dose-weighted linear energy transfer (LETd) analysis of critical structures may be useful in understanding the side effects of proton therapy. The objective is to analyze the differences between LETd and dose distributions in brain tumor patients receiving double-scattering proton therapy, to quantify LETd variation in critical organs, and to identify beam arrangements contributing to high LETd in critical organs. Methods: Monte Carlo simulations of 9 pediatric brain tumor patients were performed. The treatment plans were reconstructed with the TOPAS Monte Carlo code to calculate LETd and dose. The beam data were reconstructed proximal to the aperture of the double-scattering nozzle. The dose and LETd to the target and critical organs, including brain stem, optic chiasm, lens, optic nerve, pituitary gland, and hypothalamus, were computed for each beam. Results: Greater variability in LETd compared to dose was observed in the brainstem for patients with a variety of tumor types, including 5 patients with tumors located in the posterior fossa. Approximately 20%-44% of the brainstem volume received LETd of 5 keV/µm or greater from beams within gantry angles 180°±30° for 5 patients treated with a 3-beam arrangement. Critical organs received higher LETd when located in the vicinity of the beam distal edge. Conclusion: This study presents a novel strategy for evaluating the impact of proton treatment on critical organs. While the dose to critical organs is confined below the required limits, the LETd may vary significantly. Critical organs in the vicinity of the beam distal edge receive higher LETd, depending on beam arrangement; e.g., in posterior fossa tumor treatment, the brainstem receives higher LETd from posterior-anterior beams. This study shows the importance of LETd analysis of the radiation impact on critical organs in proton therapy and may be used to explain clinical imaging observations after therapy.
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near second-order transitions and to metastability near first-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
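A toy version of the random walk in energy space (the Wang-Landau scheme of Ref. [2]) can be written compactly for a 1D Ising ring; the flatness test and refinement schedule below are simplified stand-ins for the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 16                                   # 1D Ising ring: a toy system
spins = rng.choice((-1, 1), L)
E = -int(np.sum(spins * np.roll(spins, 1)))
levels = set(range(-L, L + 1, 4))        # all reachable energies of the ring
log_g, hist = {}, {}                     # ln(density of states), visit counts
ln_f = 1.0                               # modification factor, halved when flat

while ln_f > 1e-4:
    for _ in range(10_000):
        i = rng.integers(L)
        dE = int(2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L]))
        # Accept with min(1, g(E)/g(E_new)) -> drives a flat energy histogram
        if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E + dE, 0.0):
            spins[i] *= -1
            E += dE
        log_g[E] = log_g.get(E, 0.0) + ln_f
        hist[E] = hist.get(E, 0) + 1
    counts = list(hist.values())
    if set(hist) == levels and min(counts) > 0.8 * np.mean(counts):
        ln_f /= 2.0                      # refine the estimate; reset histogram
        hist = {}

g0 = min(log_g.values())
print({e: round(v - g0, 2) for e, v in sorted(log_g.items())})
```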
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging, given the complex geometry and the strong neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
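A small self-contained sketch of the idea, assuming toy data: a leave-one-out k-nearest-neighbor classifier combined with greedy sequential forward selection to rank input parameters by predictive power. It uses plain NumPy rather than the kernel-density machinery of the actual tool, and all names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy stand-in for Monte Carlo dispersion data: 500 runs, 6 input parameters,
# binary outcome (1 = failure). Only parameters 0 and 3 actually matter here.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

def knn_accuracy(Xs, y, k=5):
    """Leave-one-out accuracy of a k-nearest-neighbor classifier."""
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        votes = y[np.argsort(d)[:k]]
        correct += (votes.sum() * 2 > k) == y[i]
    return correct / len(y)

# Greedy (sequential) forward selection of the most predictive parameters.
selected, remaining = [], list(range(X.shape[1]))
for _ in range(2):
    scores = {f: knn_accuracy(X[:, selected + [f]], y) for f in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print(f"selected parameter {best}, LOO accuracy {scores[best]:.3f}")
```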
Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W
1986-12-01
A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)
NASA Technical Reports Server (NTRS)
Warman, E. A.; Lindsey, B. A.
1972-01-01
The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional, as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analysis are briefly described.
NASA Astrophysics Data System (ADS)
Garcia-Adeva, Angel J.; Huber, David L.
2001-07-01
In this work we generalize and subsequently apply the effective-field renormalization-group (EFRG) technique to the problem of ferro- and antiferromagnetically coupled Ising spins with local anisotropy axes in geometrically frustrated geometries (kagomé and pyrochlore lattices). In this framework, we calculate the various ground states of these systems and the corresponding critical points. Excellent agreement is found with exact and Monte Carlo results. The effects of frustration are discussed. As pointed out by other authors, it turns out that the spin-ice model can be exactly mapped to the standard Ising model, but with effective interactions of the opposite sign to those in the original Hamiltonian. Therefore, the ferromagnetic spin ice is frustrated and does not order. Antiferromagnetic spin ice (in both two and three dimensions) is found to undergo a transition to a long-range-ordered state. The thermal and magnetic critical exponents for this transition are calculated. It is found that the thermal exponent is that of the Ising universality class, whereas the magnetic critical exponent is different, as expected from the fact that the Zeeman term has a different symmetry in these systems. In addition, the recently introduced generalized constant coupling method is also applied to the calculation of the critical points and ground-state configurations. Again, a very good agreement is found with exact, Monte Carlo, and renormalization-group calculations for the critical points. Incidentally, we show that the generalized constant coupling approach can be regarded as the lowest-order limit of the EFRG technique, in which correlations outside a frustrated unit are neglected, and scaling is substituted by strict equality of the thermodynamic quantities.
NASA Astrophysics Data System (ADS)
An, Taeyang; Cha, Min-Chul
2013-03-01
We study the superfluid-insulator quantum phase transition in a disordered two-dimensional quantum rotor model with random on-site interactions in the presence of particle-hole symmetry. Via worm-algorithm Monte Carlo calculations of the superfluid density and compressibility, we find the dynamical critical exponent z ≈ 1.13(2) and the correlation length critical exponent 1/ν ≈ 1.1(1). These exponents suggest that the insulating phase is an incompressible Mott glass rather than a Bose glass.
DOT National Transportation Integrated Search
2001-02-01
A new version of the CRCP computer program, CRCP-9, has been developed in this study. The numerical model of the CRC pavements was developed using finite element theories, the crack spacing prediction model was developed using the Monte Carlo method,...
A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests
ERIC Educational Resources Information Center
Petersen, Calvin R.; Thain, John W.
1976-01-01
Relative to test and criterion parameters and cutting scores, the correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally expensive, so improving computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
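A minimal example of the paper's premise, under simple assumptions: repeat a Monte Carlo estimator whose exact answer is known (here the area of a quarter circle), then apply a two-sided z-test; a small p-value would flag a numerical problem or programming bug.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(2024)

def mc_quarter_circle(n):
    """Monte Carlo estimate of pi/4 = P(x^2 + y^2 < 1) for uniform x, y."""
    x, y = rng.random(n), rng.random(n)
    return np.mean(x * x + y * y < 1.0)

# Repeat the simulation; under the null hypothesis ("the code is correct"),
# the estimates scatter around pi/4 with the binomial standard error.
n, reps = 10_000, 50
estimates = np.array([mc_quarter_circle(n) for _ in range(reps)])
se = sqrt((pi / 4) * (1 - pi / 4) / n) / sqrt(reps)
z = (estimates.mean() - pi / 4) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p
print(f"z = {z:.2f}, p = {p_value:.3f}")               # small p flags a bug
```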
Calculation of self-shielding factor for neutron activation experiments using GEANT4 and MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero–Barrientos, Jaime, E-mail: jaromero@ing.uchile.cl; Universidad de Chile, DFI, Facultad de Ciencias Físicas Y Matemáticas, Avenida Blanco Encalada 2008, Santiago; Molina, F.
2016-07-07
The neutron self-shielding factor G as a function of the neutron energy was obtained for 14 pure metallic samples in 1000 isolethargic energy bins from 1×10⁻⁵ eV to 2×10⁷ eV using Monte Carlo simulations in GEANT4 and MCNP6. The comparison of these two Monte Carlo codes shows small differences in the final self-shielding factor, mostly due to the different cross section databases that each program uses.
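For reference, isolethargic binning just means bins of equal width in ln E, i.e., log-spaced edges. A sketch of constructing the 1000 bins quoted above:

```python
import numpy as np

# 1000 isolethargic bins: equal widths in ln(E), i.e., log-spaced edges.
E_min, E_max, n_bins = 1e-5, 2e7, 1000
edges = np.logspace(np.log10(E_min), np.log10(E_max), n_bins + 1)  # eV

# Every bin spans the same lethargy interval du = ln(E_max/E_min)/n_bins.
du = np.log(E_max / E_min) / n_bins
assert np.allclose(np.diff(np.log(edges)), du)
print(f"lethargy width per bin: {du:.5f}")
```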
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
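Method 1, the classical Monte Carlo p-value, is easy to sketch under generic assumptions; the +1 in numerator and denominator keeps the test exactly valid. The statistic and data below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(8)

def mc_p_value(t_obs, simulate_null, n_sim=9999):
    """Exact Monte Carlo p-value: rank of the observed statistic among
    statistics simulated under the null (the +1 keeps it strictly valid)."""
    t_null = np.array([simulate_null() for _ in range(n_sim)])
    return (1 + np.sum(t_null >= t_obs)) / (n_sim + 1)

# Toy example: test statistic = |sample mean| for n = 30 standard normals.
n = 30
sample = rng.normal(0.5, 1.0, n)                 # data drawn off-center
t_obs = abs(sample.mean())
p = mc_p_value(t_obs, lambda: abs(rng.normal(0, 1, n).mean()))
print(f"Monte Carlo p-value: {p:.4f}")
```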
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
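A toy transition-matrix Monte Carlo sketch: instead of tracking visits, accumulate the Metropolis acceptance probabilities of attempted moves, then rebuild the stationary distribution from the transition matrix via detailed balance. A Poisson target stands in for the density distribution of the real application, and it assumes every level is visited at least once.

```python
import numpy as np
from math import factorial, exp

rng = np.random.default_rng(13)
lam, n_max = 4.0, 12
C = np.zeros((n_max + 1, n_max + 1))   # collection matrix of attempt statistics

def acc(n, m):
    """Metropolis acceptance probability for n -> m under a Poisson(lam) target."""
    if m < 0 or m > n_max:
        return 0.0
    return min(1.0, lam / (n + 1) if m == n + 1 else n / lam)

n = 0
for _ in range(300_000):
    m = n + rng.choice((-1, 1))
    a = acc(n, m)
    if 0 <= m <= n_max:
        C[n, m] += a                   # record attempted transitions...
    C[n, n] += 1.0 - a                 # ...whether or not the move is accepted
    if rng.random() < a:
        n = m

# Reconstruct the stationary distribution from the transition matrix alone.
T = C / C.sum(axis=1, keepdims=True)
ln_p = np.zeros(n_max + 1)
for k in range(n_max):
    ln_p[k + 1] = ln_p[k] + np.log(T[k, k + 1]) - np.log(T[k + 1, k])
p = np.exp(ln_p - ln_p.max())
p /= p.sum()
exact = np.array([exp(-lam) * lam**k / factorial(k) for k in range(n_max + 1)])
print(np.abs(p - exact).max())         # should be small
```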
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods, such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, owing to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of photons incident upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively; the Monte Carlo method is a statistical technique by which photon trajectories through the simulation region can be accurately tracked.
Dendrimer-magnetic nanostructure: a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.
2017-11-01
In this paper, the magnetic properties of the ternary mixed-spin (σ, S, q) Ising model on a dendrimer nanostructure are studied using Monte Carlo simulations. The ground-state phase diagrams of the dendrimer nanostructure with ternary mixed spins σ = 1/2, S = 1, and q = 3/2 are found. The variation of the total and partial thermal magnetizations with the different exchange interactions, external magnetic fields, and crystal fields has also been studied. The reduced critical temperatures have been deduced. The magnetic hysteresis cycles are discussed, and the corresponding magnetic coercive field values are deduced. Multiple hysteresis cycles are found. The dendrimer nanostructure has several applications in medicine.
Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins
NASA Astrophysics Data System (ADS)
Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.
2013-02-01
We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.
NASA Astrophysics Data System (ADS)
Kimura, Kenji; Higuchi, Saburo
2017-11-01
We introduce a novel random walk model that emerges in the event-chain Monte Carlo (ECMC) of spin systems. In the ECMC, the lifting variable specifying the spin to be updated changes its value to one of its interacting neighbor spins. This movement can be regarded as a random walk in a random environment with a feedback. We investigate this random walk numerically in the case of the classical XY model in 1, 2, and 3 dimensions to find that it is superdiffusive near the critical point of the underlying spin system. It is suggested that the performance improvement of the ECMC is related to this anomalous behavior.
Xu, Dong; Zhang, Yang
2012-07-01
Ab initio protein folding is one of the major unsolved problems in computational biology, owing to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1-20 residues, and multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is depicted and tested on the structure modeling of 145 nonhomologous proteins. Although no global templates are used and all fragments from experimental structures with a template modeling score >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in one-third of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction experiment, the QUARK server outperformed the second and third best servers by 18 and 47%, based on the cumulative Z-score of global distance test-total scores in the FM category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress toward the solution of the most important problem in the field. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Superconductivity and non-Fermi liquid behavior near a nematic quantum critical point.
Lederer, Samuel; Schattner, Yoni; Berg, Erez; Kivelson, Steven A
2017-05-09
Using determinantal quantum Monte Carlo, we compute the properties of a lattice model with spin [Formula: see text] itinerant electrons tuned through a quantum phase transition to an Ising nematic phase. The nematic fluctuations induce superconductivity with a broad dome in the superconducting [Formula: see text] enclosing the nematic quantum critical point. For temperatures above [Formula: see text], we see strikingly non-Fermi liquid behavior, including a "nodal-antinodal dichotomy" reminiscent of that seen in several transition metal oxides. In addition, the critical fluctuations have a strong effect on the low-frequency optical conductivity, resulting in behavior consistent with "bad metal" phenomenology.
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and criticality safety analyses were carried out for the interim storage of general material testing reactor (MTR) research reactor fuel and the relevant transportation cask. Three major tasks were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria, meeting the specified standards. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.
2005-03-01
The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, Normos Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), dose at the isocentre (Diso), mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences on D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.
Basurto-Dávila, Ricardo; Meltzer, Martin I; Mills, Dora A; Beeler Asay, Garrett R; Cho, Bo-Hyun; Graitcer, Samuel B; Dube, Nancy L; Thompson, Mark G; Patel, Suchita A; Peasah, Samuel K; Ferdinands, Jill M; Gargiullo, Paul; Messonnier, Mark; Shay, David K
2017-12-01
To estimate the societal economic and health impacts of Maine's school-based influenza vaccination (SIV) program during the 2009 A(H1N1) influenza pandemic. Primary and secondary data covering the 2008-09 and 2009-10 influenza seasons. We estimated weekly monovalent influenza vaccine uptake in Maine and 15 other states, using difference-in-difference-in-differences analysis to assess the program's impact on immunization among six age groups. We also developed a health and economic Markov microsimulation model and conducted Monte Carlo sensitivity analysis. We used national survey data to estimate the impact of the SIV program on vaccine coverage. We used primary data and published studies to develop the microsimulation model. The program was associated with higher immunization among children and lower immunization among adults aged 18-49 years and 65 and older. The program prevented 4,600 influenza infections and generated $4.9 million in net economic benefits. Cost savings from lower adult vaccination accounted for 54 percent of the economic gain. Economic benefits were positive in 98 percent of Monte Carlo simulations. SIV may be a cost-beneficial approach to increase immunization during pandemics, but programs should be designed to prevent lower immunization among nontargeted groups. © Health Research and Educational Trust.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
NASA Astrophysics Data System (ADS)
Serrano, Agostinho; Santos, Flávia M. T.; Greca, Ileana M.
2004-09-01
It is shown how basic aspects of ionic solvation structure, a fundamental topic for understanding different concepts and levels of representation of chemical structure and transformation, can be taught with the help of a Monte Carlo simulation package for molecular liquids. By performing a pair distribution function analysis of the solvation of Na⁺, Cl⁻, and Ar in water, it is shown that it is feasible to explain the differences in solvation for these differently charged solutes. Visual representations of the solvated ions can also be employed to help the teaching activity. This may serve as an introduction to the study of solvation structure in undergraduate chemistry courses. The advantages of using tested, up-to-date scientific simulation programs as the fundamental bricks in the construction of virtual laboratories are also discussed.
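A sketch of the pair distribution function analysis mentioned above, assuming a periodic cubic box and NumPy only; for a uniform random "gas" the result should fluctuate around 1. Names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(21)

def pair_distribution(pos, box, n_bins=50, r_max=None):
    """Radial distribution function g(r) for particles in a periodic cubic box."""
    n = len(pos)
    r_max = r_max or box / 2
    # Minimum-image pair distances
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))
    rho = n / box**3
    shell = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell * n / 2          # expected pair counts for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

# Uniform random "gas": g(r) should hover around 1.
positions = rng.uniform(0, 10.0, (300, 3))
r, g = pair_distribution(positions, box=10.0)
print(np.round(g[:10], 2))
```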
MCMC multilocus lod scores: application of a new approach.
George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A
2005-01-01
On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
Monte-Carlo simulation of soil carbon measurements by inelastic neutron scattering
USDA-ARS?s Scientific Manuscript database
Measuring soil carbon is critical for assessing the potential impact of different land management practices on carbon sequestration. The inelastic neutron scattering (INS) of fast neutrons (with energy around 14 MeV) on carbon-12 nuclei produces gamma rays with energy of 4.43 MeV; this gamma flux ca...
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2(110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that even small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during the steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
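A heavily simplified kinetic Monte Carlo sketch of this type of surface chemistry, on a 1D ring with made-up rates (real first-principles kMC takes rates from DFT barriers on the 2D RuO2(110) lattice). The rejection-free event selection and Gillespie time increment are the generic kMC ingredients.

```python
import numpy as np

rng = np.random.default_rng(17)

# Toy 1D lattice kMC for CO-oxidation-like A + B surface chemistry.
L = 100
site = np.zeros(L, dtype=int)              # 0 empty, 1 CO, 2 O
k_ads_co, k_ads_o, k_react = 1.0, 1.0, 5.0 # illustrative rates

t, n_co2 = 0.0, 0
for _ in range(5_000):
    events = []                            # (rate, kind, site index)
    for i in range(L):
        if site[i] == 0:
            events.append((k_ads_co, "co", i))
            events.append((k_ads_o, "o", i))
        if {site[i], site[(i + 1) % L]} == {1, 2}:
            events.append((k_react, "rx", i))
    if not events:                         # fully poisoned surface: absorbing state
        break
    rates = np.array([e[0] for e in events])
    t += -np.log(rng.random()) / rates.sum()            # Gillespie waiting time
    _, kind, i = events[rng.choice(len(events), p=rates / rates.sum())]
    if kind == "co":
        site[i] = 1
    elif kind == "o":
        site[i] = 2
    else:                                  # CO + O -> CO2; both sites vacated
        site[i] = site[(i + 1) % L] = 0
        n_co2 += 1

print(f"turnover frequency ~ {n_co2 / (t * L):.3f} CO2 per site per time unit")
```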
NASA Astrophysics Data System (ADS)
Rikvold, Per Arne; Brown, Gregory; Miyashita, Seiji; Omand, Conor; Nishino, Masamichi
2016-02-01
Phase diagrams and hysteresis loops were obtained by Monte Carlo simulations and a mean-field method for a simplified model of a spin-crossover material with a two-step transition between the high-spin and low-spin states. This model is a mapping onto a square-lattice S =1 /2 Ising model with antiferromagnetic nearest-neighbor and ferromagnetic Husimi-Temperley (equivalent-neighbor) long-range interactions. Phase diagrams obtained by the two methods for weak and strong long-range interactions are found to be similar. However, for intermediate-strength long-range interactions, the Monte Carlo simulations show that tricritical points decompose into pairs of critical end points and mean-field critical points surrounded by horn-shaped regions of metastability. Hysteresis loops along paths traversing the horn regions are strongly reminiscent of thermal two-step transition loops with hysteresis, recently observed experimentally in several spin-crossover materials. We believe analogous phenomena should be observable in experiments and simulations for many systems that exhibit competition between local antiferromagnetic-like interactions and long-range ferromagnetic-like interactions caused by elastic distortions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiratsuka, Tatsumasa; Tanaka, Hideki, E-mail: tanaka@cheme.kyoto-u.ac.jp; Miyahara, Minoru T., E-mail: miyahara@cheme.kyoto-u.ac.jp
Capillary condensation in the regime of developing hysteresis occurs at a vapor pressure, P_cond, that is less than that of the vapor-like spinodal. This is because the energy barrier for the vapor-liquid transition from a metastable state at P_cond becomes equal to the energy fluctuation of the system; however, a detailed mechanism of the spontaneous transition has not been acquired even through extensive experimental and simulation studies. We therefore construct accurate atomistic silica mesopore models for MCM-41 and perform molecular simulations (gauge cell Monte Carlo and grand canonical Monte Carlo) for argon adsorption on the models at subcritical temperatures. A careful comparison between the simulation and experiment reveals that the energy barrier for the capillary condensation has a critical dimensionless value, W_c* = 0.175, which corresponds to the thermal fluctuation of the system and depends neither on the mesopore size nor on the temperature. We show that the critical energy barrier W_c* controls the capillary condensation pressure P_cond and also determines a boundary between the reversible condensation/evaporation regime and the developing hysteresis regime.
Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub
2016-12-02
We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.
Thermochemical Ablation Analysis of the Orion Heatshield
NASA Technical Reports Server (NTRS)
Sixel, William
2015-01-01
The Orion Multi-Purpose Crew Vehicle will one day carry astronauts to the Moon and beyond, and Orion's heatshield is a critical component in ensuring their safe return to Earth. The Orion heatshield is the structural component responsible for absorbing the intense heating environment caused by re-entry to Earth's atmosphere. The heatshield is primarily composed of Avcoat, an ablative material that is consumed during the re-entry process. Ablation is primarily characterized by two processes: pyrolysis and recession. The decomposition of in-depth virgin material is known as pyrolysis. Recession occurs when the exposed surface of the heatshield reacts with the surrounding flow. The Orion heatshield design was changed from an individually filled Avcoat honeycomb to a molded block Avcoat design. The molded block Avcoat heatshield relies on an adhesive bond to keep it attached to the capsule. In some locations on the heatshield, the integrity of the adhesive bond cannot be verified. For these locations, a mechanical retention device was proposed. Avcoat ablation was modelled in CHAR and the in-depth virgin material temperatures were used in a Thermal Desktop model of the mechanical retention device. The retention device was analyzed and shown to cause a large increase in the maximum bondline temperature. In order to study the impact of individual ablation modelling parameters on the heatshield sizing process, a Monte Carlo simulation of the sizing process was proposed. The simulation will give the sensitivity of the ablation model to each of its input parameters. As part of the Monte Carlo simulation, statistical uncertainties on material properties were required for Avcoat. Several properties were difficult to acquire uncertainties for: the pyrolysis gas enthalpy, the non-dimensional mass loss rate (B'c), and the Arrhenius equation parameters. Variability in the elemental composition of Avcoat was used as the basis for determining the statistical uncertainty in pyrolysis gas enthalpy and B'c. A MATLAB program was developed to allow for faster, more accurate and automated computation of Arrhenius reaction parameters. These parameters are required for a material model to be used in the CHAR ablation analysis program. This MATLAB program, along with thermogravimetric analysis (TGA) data, was used to generate uncertainties on the Arrhenius parameters for Avcoat. In addition, the TGA fitting program was developed to provide Arrhenius parameters for the ablation model of the gap filler material, RTV silicone.
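The Arrhenius-fitting step described above can be sketched generically: for a single first-order decomposition, the rate law linearizes so that ln[rate/(1-α)] plotted against 1/T has slope -E/R and intercept ln A. The snippet below is a hedged stand-in for the MATLAB tool mentioned in the abstract (whose actual fitting scheme is not given in this summary); the synthetic TGA curve and the single first-order reaction model are assumptions:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(T, alpha, heating_rate):
    """Estimate (A, E) from TGA data: T in K, extent alpha in (0,1),
    heating_rate = dT/dt in K/s, assuming rate = A*exp(-E/(R T))*(1-alpha)."""
    dalpha_dT = np.gradient(alpha, T)
    rate = heating_rate * dalpha_dT                  # dalpha/dt
    mask = (alpha > 0.05) & (alpha < 0.95) & (rate > 0)
    y = np.log(rate[mask] / (1.0 - alpha[mask]))     # = ln A - E/(R T)
    x = 1.0 / T[mask]
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), -slope * R             # A [1/s], E [J/mol]

# synthetic TGA check with known parameters and a 10 K/min heating rate
A_true, E_true, beta = 1e8, 120e3, 10.0 / 60.0
T = np.linspace(500, 900, 400)
k = A_true * np.exp(-E_true / (R * T))
alpha = 1.0 - np.exp(-np.cumsum(k / beta * np.gradient(T)))  # integrate rate law
print(fit_arrhenius(T, alpha, beta))                 # ~ (1e8, 120000)
```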
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
2012-01-01
Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363
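The "multiple chains" strategy described in both versions of this abstract parallelizes trivially: each worker runs an independent Metropolis sampler from its own seed and the draws are pooled after burn-in. A minimal sketch, with a standard-normal toy posterior standing in for a genomic-selection model:

```python
import math
import random
from multiprocessing import Pool

# Multiple-chain parallel MCMC sketch: independent Metropolis chains on a
# toy target (standard normal, an assumption for brevity), pooled afterwards.
def run_chain(args):
    seed, n_samples = args
    rng = random.Random(seed)
    log_post = lambda x: -0.5 * x * x       # log density up to a constant
    x, samples = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, 1.0)      # random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return samples

if __name__ == "__main__":
    with Pool(4) as pool:                   # one process per chain
        chains = pool.map(run_chain, [(seed, 10000) for seed in range(4)])
    pooled = [x for chain in chains for x in chain[1000:]]   # drop burn-in
    print(sum(pooled) / len(pooled))        # posterior mean, ~0
```

Within-chain parallelization, the other approach the authors describe, instead splits the per-iteration likelihood evaluation across workers and is more intrusive to implement.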
Bahreyni Toossi, M T; Moradi, H; Zare, H
2008-01-01
In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 reported spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM78 reported spectra and the simulated spectra generated in this study.
NASA Astrophysics Data System (ADS)
Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian
2018-02-01
First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric Σ5(210) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.
Potts Model in One-Dimension on Directed Small-World Networks
NASA Astrophysics Data System (ADS)
Aquino, Édio O.; Lima, F. W. S.; Araújo, Ascânio D.; Costa Filho, Raimundo N.
2018-06-01
The critical properties of the Potts model with q = 3 and 8 states in one dimension on directed small-world networks are investigated. This disordered system is simulated by updating it with the Monte Carlo heat bath algorithm. The Potts model on these directed small-world networks in fact presents a second-order phase transition with a new set of critical exponents for q = 3 at a rewiring probability p = 0.1. For q = 8 the system exhibits only a first-order phase transition, independent of p.
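The heat bath update used in this study draws a new state for each site directly from the local Boltzmann distribution instead of accepting or rejecting a proposed flip. A minimal sketch for a q-state Potts model follows; the plain 1D ring neighbor list is an assumption for brevity, whereas the study's directed small-world rewiring would simply change the entries of `neighbors`:

```python
import math
import random

# Heat-bath sweep for a q-state Potts model with H = -J sum_<ij> delta(s_i, s_j),
# J = 1: each site is redrawn from its local Boltzmann distribution.
def heat_bath_sweep(spins, neighbors, q, T, rng):
    for i in range(len(spins)):
        # weight of state s is exp(n_aligned(s)/T)
        weights = [math.exp(sum(1.0 for j in neighbors[i] if spins[j] == s) / T)
                   for s in range(q)]
        pick = rng.random() * sum(weights)
        for s, w in enumerate(weights):
            if pick < w:
                spins[i] = s
                break
            pick -= w
        else:
            spins[i] = q - 1                 # float-edge fallback

N, q, T = 200, 3, 0.5                        # illustrative parameters
rng = random.Random(0)
spins = [rng.randrange(q) for _ in range(N)]
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]   # 1D ring
for _ in range(200):
    heat_bath_sweep(spins, neighbors, q, T, rng)
print("largest state fraction:", max(spins.count(s) for s in range(q)) / N)
```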
A journey from nuclear criticality methods to high energy density radflow experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbatsch, Todd James
Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform at Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, but they sure are fun.
NASA Astrophysics Data System (ADS)
Davidson, N.; Golonka, P.; Przedziński, T.; Wąs, Z.
2011-03-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002 new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking the interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file to be used in the analysis step are explained.
Program summary
Program title: MC-TESTER, version 1.23 and version 1.24.4
Catalog identifier: ADSM_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250 548
No. of bytes in distributed program, including test data, etc.: 4 290 610
Distribution format: tar.gz
Programming language: C++, FORTRAN77
Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran
Computer: Tested on various platforms
Operating system: Tested on Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2, etc.
Classification: 11.9
External routines: HepMC (https://savannah.cern.ch/projects/hepmc/), PYTHIA8 (http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX (http://www.latex-project.org/)
Catalog identifier of previous version: ADSM_v1_0
Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39
Does the new version supersede the previous version?: Yes
Nature of problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, in order to check correctness of the installations or for discussion of uncertainties.
Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared.
A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table.
Reasons for new version: An interface for the HepMC event record is introduced. A set-up for benchmarking the interfaces, such as τ-lepton production and decay, including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm. As a consequence, a new version of the code was introduced.
Restrictions: Only the first 200 decay channels found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms are not created for that channel.
Additional comments: New features: HepMC interface, use of lists in definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in representation of program output, installation configuration scripts, merging multiple output files from separate generations.
Running time: Varies substantially with the analyzed decay particle, but in general the speed estimate for the old version remains valid. On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. Generation step runs may be executed simultaneously on multiprocessor machines.
NASA Technical Reports Server (NTRS)
1976-01-01
The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.
Monte Carlo technique for very large Ising models
NASA Astrophysics Data System (ADS)
Kalle, C.; Winkelmann, V.
1982-08-01
Rebbi's multispin coding technique is improved and applied to the kinetic Ising model of size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, with M(t = 0) = 1 initially.
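The multispin coding trick referenced above packs one spin per bit of a machine word so that bitwise operations process dozens of sites at once; for an Ising chain, a single XOR with the shifted word marks every antiparallel bond, and a population count converts that into the bond energy. A small illustrative sketch follows (the paper's 3D CDC Cyber 76 code is far more elaborate and also performs the flip dynamics on whole words):

```python
# Multispin-coding sketch: spins packed one per bit, so one XOR with the
# shifted word flags every antiparallel nearest-neighbor pair at once.
def pack(spins):                        # spins: list of +1/-1
    word = 0
    for k, s in enumerate(spins):
        if s == 1:
            word |= 1 << k
    return word

def bond_energy_1d(word, n):
    # bit k of `anti` is 1 exactly when spins k and k+1 are antiparallel
    anti = (word ^ (word >> 1)) & ((1 << (n - 1)) - 1)
    n_anti = bin(anti).count("1")       # population count
    n_par = (n - 1) - n_anti
    return -(n_par - n_anti)            # E = -J * sum_i s_i s_{i+1}, J = 1

spins = [1, 1, -1, 1, -1, -1, 1, 1]
print(bond_energy_1d(pack(spins), len(spins)))  # -> 1 (4 anti, 3 parallel bonds)
```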
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits) or 101.5 MFLOPS.
2010-01-01
respectively. Conformations for all three systems were generated by exhaustive Monte Carlo searching. Relative conformational energies were calculated at the...routines of the Maestro (v. 6.5)/Macromodel-Batchmin (8.6) suite of programs. The number of Monte Carlo steps for the searches was 500 000. Energy ...set using the B3LYP hybrid density functional. Single-point energies at the MP2/aug-cc-pVDZ and MP2/aug-cc-pVTZ levels of theory were obtained
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; Šaroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer, I.L.L., were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. Comparisons of the flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is a very close agreement between simulation and experiment for the energy spread of the incident beam.
The QUELCE Method: Using Change Drivers to Estimate Program Costs
2016-08-01
QUELCE computes a distribution of program costs based on Monte Carlo analysis of program cost drivers—assessed via analyses of dependency structure...possible scenarios. These include a dependency structure matrix to understand the interaction of change drivers for a specific project a...performed by the SEI or by company analysts. From the workshop results, analysts create a dependency structure matrix (DSM) of the change drivers
Advances in Monte-Carlo code TRIPOLI-4®'s treatment of the electromagnetic cascade
NASA Astrophysics Data System (ADS)
Mancusi, Davide; Bonin, Alice; Hugot, François-Xavier; Malouch, Fadhel
2018-01-01
TRIPOLI-4® is a Monte-Carlo particle-transport code developed at CEA-Saclay (France) that is employed in the domains of nuclear-reactor physics, criticality-safety, shielding/radiation protection and nuclear instrumentation. The goal of this paper is to report on current developments, validation and verification made in TRIPOLI-4 in the electron/positron/photon sector. The new capabilities and improvements concern refinements to the electron transport algorithm, the introduction of a charge-deposition score, the new thick-target bremsstrahlung option, the upgrade of the bremsstrahlung model and the improvement of electron angular straggling at low energy. The importance of each of the developments above is illustrated by comparisons with calculations performed with other codes and with experimental data.
Morales, Miguel A; Pierleoni, Carlo; Schwegler, Eric; Ceperley, D M
2010-07-20
Using quantum simulation techniques based on either density functional theory or quantum Monte Carlo, we find clear evidence of a first-order transition in liquid hydrogen, between a low conductivity molecular state and a high conductivity atomic state. Using the temperature dependence of the discontinuity in the electronic conductivity, we estimate the critical point of the transition at temperatures near 2,000 K and pressures near 120 GPa. Furthermore, we have determined the melting curve of molecular hydrogen up to pressures of 200 GPa, finding a reentrant melting line. The melting line crosses the metallization line at 700 K and 220 GPa using density functional energetics and at 550 K and 290 GPa using quantum Monte Carlo energetics.
Morse Monte Carlo Radiation Transport Code System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Unreliable numbers: error and harm induced by bad design can be reduced by better design
Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul
2015-01-01
Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
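The paper's core idea, estimating how badly undetected keying slips distort entered numbers, can be reproduced in a few lines: perturb a target keystroke string with a random slip, parse it as the interface would, and tally how often the result is out by an order of magnitude. The slip model and target value below are assumptions for illustration, not the error model of the study:

```python
import random

# Monte Carlo estimate of how often a single random keying slip leaves a
# syntactically valid number that is off by 10x or more (naive free-text entry).
KEYS = "0123456789."

def corrupt(s, rng):
    i = rng.randrange(len(s))
    op = rng.choice(("sub", "drop", "dup"))     # substitute, drop, or double a key
    if op == "sub":
        return s[:i] + rng.choice(KEYS) + s[i + 1:]
    if op == "drop" and len(s) > 1:
        return s[:i] + s[i + 1:]
    return s[:i] + s[i] + s[i:]

def out_by_10x(target="12.5", trials=100000, seed=0):
    rng = random.Random(seed)
    intended, bad = float(target), 0
    for _ in range(trials):
        try:
            v = float(corrupt(target, rng))
        except ValueError:
            continue                   # a syntax error is at least detectable
        if v <= 0 or not (0.1 < v / intended < 10.0):
            bad += 1
    return bad / trials

print(out_by_10x())
```

Comparing such tallies across entry designs (e.g., rejecting malformed strings versus silently accepting them) is the kind of design comparison the authors advocate.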
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges to an estimate of an analog of the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
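For readers unfamiliar with Wang-Landau sampling, the classical flat-histogram version (which the quantum algorithm above generalizes to stochastic series expansion) iteratively builds the density of states g(E): moves are accepted with probability min(1, g(E)/g(E')), ln g is inflated at every visited energy, and the modification factor is halved whenever the energy histogram is roughly flat. A small 2D Ising sketch with illustrative parameters, not the authors' quantum implementation:

```python
import math
import random

# Classical Wang-Landau flat-histogram sketch on a tiny 2D Ising model.
def wang_landau(L=4, ln_f_final=1e-3, flatness=0.8, seed=0):
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    E = -2 * L * L                      # energy of the all-up configuration
    log_g, hist, ln_f = {}, {}, 1.0
    while ln_f > ln_f_final:
        for _ in range(1000 * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            s = spins[i][j]
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            E_new = E + 2 * s * nn
            # accept with prob min(1, g(E)/g(E_new))
            if rng.random() < math.exp(min(0.0, log_g.get(E, 0.0)
                                           - log_g.get(E_new, 0.0))):
                spins[i][j] = -s
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            hist, ln_f = {}, ln_f / 2.0  # histogram flat enough: refine ln f
    return log_g

dos = wang_landau()
print(sorted(dos.items())[:3])          # relative ln g(E) at the lowest energies
```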
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2014-03-01
We present sample CUDA programs for the GPU computing of the Swendsen-Wang multi-cluster spin flip algorithm. We deal with the classical spin models: the Ising model, the q-state Potts model, and the classical XY model. As for the lattice, both the 2D (square) lattice and the 3D (simple cubic) lattice are treated. We already reported the idea of the GPU implementation for 2D models (Komura and Okabe, 2012). We here explain the details of the sample programs, and discuss the performance of the present GPU implementation for the 3D Ising and XY models. We also show the calculated results of the moment ratio for these models, and discuss phase transitions.
Catalogue identifier: AERM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5632
No. of bytes in distributed program, including test data, etc.: 14688
Distribution format: tar.gz
Programming language: C, CUDA.
Computer: System with an NVIDIA CUDA-enabled GPU.
Operating system: Any system with an NVIDIA CUDA-enabled GPU.
Classification: 23.
External routines: NVIDIA CUDA Toolkit 3.0 or newer
Nature of problem: Monte Carlo simulation of classical spin systems. The Ising model, the q-state Potts model, and the classical XY model are treated for both two-dimensional and three-dimensional lattices.
Solution method: GPU-based Swendsen-Wang multi-cluster spin flip Monte Carlo method. The CUDA implementation for the cluster labeling is based on the work by Hawick et al. [1] and that by Kalentev et al. [2].
Restrictions: The system size is limited depending on the memory of the GPU.
Running time: For the parameters used in the sample programs, it takes about a minute for each program. Of course, it depends on the system size, the number of Monte Carlo steps, etc.
References:
[1] K.A. Hawick, A. Leist, and D.P. Playne, Parallel Computing 36 (2010) 655-678.
[2] O. Kalentev, A. Rai, S. Kemnitz, and R. Schneider, J. Parallel Distrib. Comput. 71 (2011) 615-620.
A DMFT+CTQMC Investigation of Strange Metallicity in Local Quantum Critical Scenario
NASA Astrophysics Data System (ADS)
Acharya, Swagata; Laad, M. S.; Taraphder, A.
2016-10-01
“Strange” metallicity is now a pseudonym for a novel metallic state exhibiting anomalous infra-red (branch-cut) continuum features in one- and two-particle responses. Here, we employ dynamical mean-field theory (DMFT) using a low-temperature continuous-time quantum Monte Carlo (CTQMC) solver for an extended periodic Anderson model (EPAM) to investigate unusual magnetic fluctuations in the strange metal. We show how extinction of Landau quasiparticles in the orbital selective Mott phase (OSMP) leads to (i) qualitative explication of strange transport features and (ii) anomalous quantum critical magnetic fluctuations due to critical liquid-like features in dynamical spin fluctuations, in excellent accord with data in some f-electron systems.
Equilibrium and nonequilibrium models on solomon networks with two square lattices
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
We investigate the critical properties of equilibrium and nonequilibrium two-dimensional (2D) systems on Solomon networks with both nearest and random neighbors. The equilibrium and nonequilibrium 2D systems studied here by Monte Carlo simulations are the Ising and majority-vote 2D models, respectively. We calculate the critical points as well as the critical exponent ratios γ/ν, β/ν, and 1/ν. We find numerically that both systems present the same exponents on Solomon networks (2D) and belong to a different universality class than the regular 2D ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up and down symmetry on regular lattices.
Determining the solar-flare photospheric scale height from SMM gamma-ray measurements
NASA Technical Reports Server (NTRS)
Lingenfelter, Richard E.
1991-01-01
A connected series of Monte Carlo programs was developed to make systematic calculations of the energy, temporal and angular dependences of the gamma-ray line and neutron emission resulting from such accelerated ion interactions. Comparing the results of these calculations with the Solar Maximum Mission/Gamma Ray Spectrometer (SMM/GRS) measurements of gamma-ray line and neutron fluxes, the total number and energy spectrum of the flare-accelerated ions trapped on magnetic loops at the Sun were determined and the angular distribution, pitch angle scattering, and mirroring of the ions on loop fields were constrained. Comparing the calculations with measurements of the time dependence of the neutron capture line emission, a determination of the He-3/H ratio in the photosphere was also made. The diagnostic capabilities of the SMM/GRS measurements were extended by developing a new technique to directly determine the effective photospheric scale height in solar flares from the neutron capture gamma-ray line measurements, and critically test current atmospheric models in the flare region.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
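The Map/Reduce split the authors describe is easy to mimic locally: map tasks simulate independent batches of photon histories and emit partial tallies, and a reduce step merges them. The sketch below uses a 1D absorbing/scattering slab and `multiprocessing` in place of Hadoop; the slab model and coefficients are assumptions, not the MC321 physics:

```python
import math
import random
from multiprocessing import Pool

MU_A, MU_S, THICKNESS = 0.1, 1.0, 10.0    # absorption/scattering [1/cm], slab [cm]

def map_photons(args):
    """Map task: simulate a batch of photon histories, tally absorption depth."""
    seed, n = args
    rng = random.Random(seed)
    tally = [0] * 10                      # histogram over 10 depth bins
    for _ in range(n):
        z, direction = 0.0, 1.0
        while True:
            step = -math.log(rng.random()) / (MU_A + MU_S)  # free path
            z += direction * step
            if not 0.0 <= z < THICKNESS:
                break                     # photon escaped the slab
            if rng.random() < MU_A / (MU_A + MU_S):
                tally[int(z / THICKNESS * 10)] += 1          # absorbed here
                break
            direction = rng.choice((-1.0, 1.0))              # isotropic in 1D

    return tally

def reduce_tallies(tallies):
    """Reduce step: merge per-task histograms."""
    return [sum(col) for col in zip(*tallies)]

if __name__ == "__main__":
    with Pool(4) as pool:
        parts = pool.map(map_photons, [(seed, 25000) for seed in range(4)])
    print(reduce_tallies(parts))
```

Because photon histories are independent, the simulation is embarrassingly parallel, which is why the authors observe near-linear scaling with node count and resilience to node failure (a lost map task is simply re-run).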
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: Typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, as used in GLoBES [1,2].
Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
References:
[1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
[2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
[3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
NASA Astrophysics Data System (ADS)
Santos-Filho, J. B.; Plascak, J. A.
2017-09-01
The XY vectorial generalization of the Blume-Emery-Griffiths (XY-VBEG) model, which is suitable to be applied to the study of 3He-4He mixtures, is treated on a thin-film structure and its thermodynamic properties are analyzed as a function of the film thickness. We employ extensive and up-to-date Monte Carlo simulations consisting of hybrid algorithms combining lattice-gas moves, Metropolis, Wolff, and super-relaxation procedures to overcome the critical slowing down and correlations among different spin configurations of the system. We also make use of single-histogram techniques to get the behavior of the thermodynamic quantities close to the corresponding transition temperatures. Thin films of the XY-VBEG model present a quite rich phase diagram with Berezinskii-Kosterlitz-Thouless (BKT) transitions, BKT endpoints, and isolated critical points. As one varies the impurity concentrations along the layers, and in the limit of infinite film thickness, there is a coalescence of the BKT transition endpoint and the isolated critical point into a single, unique tricritical point. In addition, when mimicking the behavior of thin films of 3He-4He mixtures, one obtains that the concentration of 3He atoms decreases from the outer layers to the inner layers of the film, meaning that the superfluid particles tend to locate in the bulk of the system.
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
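The fitting problem posed here, maximum-likelihood estimation of Lyman-Kutcher-Burman (LKB) NTCP parameters from binary toxicity outcomes, can be sketched with the standard probit form NTCP = Φ((gEUD − TD50)/(m·TD50)) and a coarse grid search over TD50 and m. The synthetic cohort, grid ranges, and the omission of the volume parameter n are assumptions; the authors' program additionally handles multiple endpoints and Monte Carlo uncertainty estimates:

```python
import math
import random

def phi(x):                               # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def neg_log_lik(td50, m, doses, tox):
    """Bernoulli negative log-likelihood of the LKB NTCP model."""
    nll = 0.0
    for d, y in zip(doses, tox):
        p = min(max(phi((d - td50) / (m * td50)), 1e-12), 1 - 1e-12)
        nll -= math.log(p) if y else math.log(1.0 - p)
    return nll

# synthetic cohort: gEUD values and toxicity flags from known parameters
rng = random.Random(1)
TD50_TRUE, M_TRUE = 60.0, 0.15
doses = [rng.uniform(30, 90) for _ in range(400)]
tox = [rng.random() < phi((d - TD50_TRUE) / (M_TRUE * TD50_TRUE)) for d in doses]

# coarse grid-search MLE over (TD50, m)
best = min((neg_log_lik(t, m, doses, tox), t, m)
           for t in [50 + 0.5 * k for k in range(41)]
           for m in [0.05 + 0.01 * k for k in range(31)])
print("TD50 =", best[1], " m =", best[2])
```

Profile-likelihood, bootstrap, or Bayesian posterior intervals, the three confidence-interval approaches compared in the paper, would all be built on top of this same likelihood function.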
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe, and also for evaluating the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis implemented in BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
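Hamiltonian Monte Carlo, the sampler being evaluated here, proposes distant moves by integrating Hamiltonian dynamics with a leapfrog scheme and correcting with a Metropolis step. A toy 1D version on a standard-normal target is sketched below; the real analysis runs in Stan rather than hand-rolled code, and the step size and path length here are illustrative:

```python
import math
import random

# Toy HMC: leapfrog integration of (q, p) followed by a Metropolis correction.
def hmc_sample(n_samples, eps=0.2, n_leapfrog=20, seed=0):
    rng = random.Random(seed)
    U = lambda q: 0.5 * q * q             # -log target (standard normal)
    grad_U = lambda q: q
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)           # resample momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog half step
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new
            p_new -= eps * grad_U(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)        # final half step
        dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
        if math.log(rng.random()) < -dH:          # Metropolis correction
            q = q_new
        samples.append(q)
    return samples

draws = hmc_sample(5000)
print(sum(draws) / len(draws),                    # mean ~ 0
      sum(x * x for x in draws) / len(draws))     # variance ~ 1
```

The gradient information is what lets HMC outrun random-walk Metropolis-Hastings in high dimensions, which is the motivation for the BAT-to-Stan comparison described in the abstract.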
Critical voids in exposure data and models lead risk assessors to rely on conservative assumptions. Risk assessors and managers need improved tools beyond the screening level analysis to address aggregate exposures to pesticides as required by the Food Quality Protection Act o...
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
A generalized theory of thin film growth
NASA Astrophysics Data System (ADS)
Du, Feng; Huang, Hanchen
2018-03-01
This paper reports a theory of thin film growth that is generalized for arbitrary incidence angle during physical vapor deposition in two dimensions. The accompanying kinetic Monte Carlo simulations serve as verification. A special theory already exists for thin film growth at zero incidence angle, and another theory exists for nanorod growth at glancing angle. The theory in this report serves as a bridge to describe the transition from thin film growth to nanorod growth. In particular, this theory gives two critical conditions in analytical form, the critical coverages Θ_I and Θ_II. The first critical condition defines the onset at which crystal growth or step dynamics stops following the wedding-cake model of thin film growth. The second critical condition defines the onset at which multiple-layer surface steps form to enable nanorod growth. Further, this theory also reveals a critical incidence angle, below which nanorod growth is impossible. The critical coverages, together with the critical incidence angle, define a phase diagram of thin film growth versus nanorod growth.
Badal, Andreu; Badano, Aldo
2009-11-01
It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.
Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G
2012-11-05
Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining an acceptable coverage, and so providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling
2012-01-01
Background Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining an acceptable coverage, and so providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base. PMID:23126469
NASA Astrophysics Data System (ADS)
Sexton, James C.
1990-08-01
The GF11 project at IBM's T. J. Watson Research Center is entering full production for QCD numerical calculations. This paper describes the GF11 hardware and system software, and discusses the first production program which has been developed to run on GF11. This program is a variation of the Cabibbo-Marinari pure gauge Monte Carlo program for SU(3) and is currently sustaining almost 6 gigaflops on 360 processors in GF11.
Xu, Dong; Zhang, Yang
2012-01-01
Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is described and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in 1/3 of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
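Replica-exchange Monte Carlo, the search engine named in this abstract, runs copies of the system at several temperatures and occasionally swaps neighboring replicas so that cold replicas can escape local minima. A toy sketch on a 1D double-well energy, standing in for the composite protein force field (temperatures and swap schedule are assumptions):

```python
import math
import random

def energy(x):
    return (x * x - 1.0) ** 2            # double well with minima at +/-1

# Replica-exchange MC: per-replica Metropolis moves plus neighbor swaps
# accepted with prob min(1, exp[(1/T_k - 1/T_{k+1}) (E_k - E_{k+1})]).
def rex_mc(temps=(0.05, 0.2, 0.8), n_steps=20000, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(-2, 2) for _ in temps]
    for step in range(n_steps):
        for k, T in enumerate(temps):    # local Metropolis move per replica
            prop = xs[k] + rng.gauss(0, 0.3)
            if math.log(rng.random()) < -(energy(prop) - energy(xs[k])) / T:
                xs[k] = prop
        if step % 10 == 0:               # attempt a neighbor swap
            k = rng.randrange(len(temps) - 1)
            d = (1 / temps[k] - 1 / temps[k + 1]) * (energy(xs[k]) - energy(xs[k + 1]))
            if math.log(rng.random()) < d:
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
    return xs

print(rex_mc())   # the coldest replica should sit near a minimum at +/-1
```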
Modeling of thin-film GaAs growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.
1981-01-01
Efforts to produce a Monte Carlo computer program for the analysis of crystal growth are briefly discussed. A literature survey was conducted of articles relating to the subject. A list of references reviewed is presented.
In silico FRET from simulated dye dynamics
NASA Astrophysics Data System (ADS)
Hoefling, Martin; Grubmüller, Helmut
2013-03-01
Single molecule fluorescence resonance energy transfer (smFRET) experiments probe molecular distances on the nanometer scale. In such experiments, distances are recorded from FRET transfer efficiencies via the Förster formula, E = 1/(1 + (R/R0)^6), where R is the dye separation and R0 the Förster radius. The energy transfer however also depends on the mutual orientation of the two dyes used as distance reporter. Since this information is typically inaccessible in FRET experiments, one has to rely on approximations, which reduce the accuracy of these distance measurements. A common approximation is an isotropic and uncorrelated dye orientation distribution. To assess the impact of such approximations, we present the algorithms and implementation of a computational toolkit for the simulation of smFRET on the basis of molecular dynamics (MD) trajectory ensembles. In this study, the dye orientation dynamics, which are used to determine dynamic FRET efficiencies, are extracted from MD simulations. In a subsequent step, photons and bursts are generated using a Monte Carlo algorithm. The application of the developed toolkit on a poly-proline system demonstrated good agreement between smFRET simulations and experimental results and therefore confirms our computational method. Furthermore, it enabled the identification of the structural basis of measured heterogeneity. The presented computational toolkit is written in Python, available as open-source, applicable to arbitrary systems and can easily be extended and adapted to further problems.
Catalogue identifier: AENV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv3; the bundled SIMD-friendly Mersenne twister implementation [1] is provided under the SFMT licence.
No. of lines in distributed program, including test data, etc.: 317880
No. of bytes in distributed program, including test data, etc.: 54774217
Distribution format: tar.gz
Programming language: Python, Cython, C (ANSI C99).
Computer: Any (see memory requirements).
Operating system: Any OS with a CPython distribution (e.g. Linux, MacOSX, Windows).
Has the code been vectorised or parallelized?: Yes; in Ref. [2], 4 CPU cores were used.
RAM: About 700MB per process for the simulation setup in Ref. [2].
Classification: 16.1, 16.7, 23.
External routines: Calculation of Rκ²-trajectories from GROMACS [3] MD trajectories requires the GromPy Python module described in Ref. [4] or a GROMACS 4.6 installation. The md2fret program uses a standard Python interpreter (CPython) v2.6+ and < v3.0 as well as the NumPy module. The analysis examples require the Matplotlib Python module.
Nature of problem: Simulation and interpretation of single molecule FRET experiments.
Solution method: Combination of force-field based molecular dynamics (MD) simulating the dye dynamics and Monte Carlo sampling to obtain photon statistics of FRET kinetics.
Additional comments: The distribution file for this program is over 50 Mbytes and therefore is not delivered directly when download or Email is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: A single run in Ref. [2] takes about 10 min on a Quad Core Intel Xeon CPU W3520 2.67GHz with 6GB physical RAM.
References:
[1] M. Saito, M. Matsumoto, SIMD-oriented fast Mersenne twister: a 128-bit pseudorandom number generator, in: A. Keller, S. Heinrich, H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer, Berlin, Heidelberg, 2008, pp. 607-622.
[2] M. Hoefling, N. Lima, D. Hänni, B. Schuler, C. A. M. Seidel, H. Grubmüller, Structural heterogeneity and quantitative FRET efficiency distributions of polyprolines through a hybrid atomistic simulation and Monte Carlo approach, PLoS ONE 6 (5) (2011) e19791.
[3] D. V. D. Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark, H. J. C. Berendsen, GROMACS: fast, flexible, and free, J Comput Chem 26 (16) (2005) 1701-1718.
[4] R. Pool, A. Feenstra, M. Hoefling, R. Schulz, J. C. Smith, J. Heringa, Enabling grand-canonical Monte Carlo: Extending the flexibility of GROMACS through the GromPy Python interface module, Journal of Chemical Theory and Computation 33 (12) (2012) 1207-1214.
ERIC Educational Resources Information Center
Heifetz, Louis J.
1998-01-01
Comments on "The Small ICF/MR Program: Dimensions of Quality and Cost" (Conroy), that found small Intermediate Care Facilities (ICF) for individuals with mental retardation are inferior to other community programs. Acknowledges that while some research problems exist, no important evidence against the findings has been provided. (CR)
Mastering the game of Go with deep neural networks and tree search
NASA Astrophysics Data System (ADS)
Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis
2016-01-01
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Stochastic series expansion simulation of the t-V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t-V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t-V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Structure of interfaces at phase coexistence. Theory and numerics
NASA Astrophysics Data System (ADS)
Delfino, Gesualdo; Selke, Walter; Squarcini, Alessio
2018-05-01
We compare results of the exact field theory of phase separation in two dimensions with Monte Carlo simulations for the q-state Potts model with boundary conditions producing an interfacial region separating two pure phases. We confirm in particular the theoretical predictions that below the critical temperature the surplus of non-boundary colors appears in drops along a single interface, while for q > 4 at the critical temperature two interfaces form, enclosing a macroscopic disordered layer. These qualitatively different structures of the interfacial region can be discriminated through a measurement at a single point for different system sizes.
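As a point of reference for such simulations, a minimal single-spin-flip Metropolis sketch of the 2D q-state Potts model is given below. Parameters are illustrative; the study above uses boundary conditions that force an interface, and production work would use cluster updates rather than local moves.

```python
import numpy as np

# Metropolis sampling of the 2D q-state Potts model
# (square lattice, periodic boundaries, J = k_B = 1).
rng = np.random.default_rng(1)
L, q, T = 32, 3, 0.9        # T_c = 1/ln(1 + sqrt(q)) ~ 0.995 for q = 3
spins = rng.integers(q, size=(L, L))

def local_energy(s, i, j):
    # -1 for each nearest neighbor in the same state.
    return -sum(s[i, j] == n for n in (s[(i+1) % L, j], s[(i-1) % L, j],
                                       s[i, (j+1) % L], s[i, (j-1) % L]))

for _ in range(300 * L * L):                 # ~300 sweeps
    i, j = rng.integers(L, size=2)
    old = spins[i, j]
    e_old = local_energy(spins, i, j)
    spins[i, j] = rng.integers(q)            # propose a random new state
    dE = local_energy(spins, i, j) - e_old
    if dE > 0 and rng.random() >= np.exp(-dE / T):
        spins[i, j] = old                    # reject the move

counts = np.bincount(spins.ravel(), minlength=q)
m = (q * counts.max() / spins.size - 1) / (q - 1)   # Potts order parameter
print(f"majority-state order parameter m = {m:.3f}")
```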
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenfield, Bryce A.
2009-12-01
A detailed instructional manual was created to guide criticality safety engineers through the process of designing a criticality alarm system (CAS) for Department of Energy (DOE) hazard class 1 and 2 facilities. Regulatory and technical requirements were both addressed. A list of design tasks and technical subtasks is thoroughly analyzed to provide concise direction for completing the analysis. An example of the application of the design methodology, the criticality alarm system developed for the Radioisotope Production Laboratory (RPL) of Richland, Washington, is also included. The analysis for RPL utilizes the Monte Carlo code MCNP5 for establishing detector coverage in the facility. Significant improvements to the existing CAS were made that increase the reliability, transparency, and coverage of the system.
NASA Astrophysics Data System (ADS)
de Sousa, J. Ricardo; de Albuquerque, Douglas F.
1997-02-01
By using two renormalization group (RG) approaches, mean field RG (MFRG) and effective field RG (EFRG), we study the critical properties of the classical XY and classical Heisenberg models on the simple cubic lattice. The methods are illustrated in their simplest approximation, in which small clusters with one (N′ = 1) and two (N = 2) spins are used. The thermal and magnetic critical exponents, Yt and Yh, and the critical parameter Kc are obtained numerically and compared with more accurate methods (Monte Carlo, series expansion and ε-expansion). The results presented in this work are in excellent agreement with these sophisticated methods. We have also shown that the exponent Yh does not depend on the symmetry n of the Hamiltonian; hence the universality criterion for this exponent is a function only of the dimension d.
Fermi gases with imaginary mass imbalance and the sign problem in Monte-Carlo calculations
NASA Astrophysics Data System (ADS)
Roscher, Dietrich; Braun, Jens; Chen, Jiunn-Wei; Drut, Joaquín E.
2014-05-01
Fermi gases in strongly coupled regimes are inherently challenging for many-body methods. Although progress has been made analytically, quantitative results require ab initio numerical approaches, such as Monte-Carlo (MC) calculations. However, mass-imbalanced and spin-imbalanced gases are not accessible to MC calculations due to the infamous sign problem. For finite spin imbalance, the problem can be circumvented using imaginary polarizations and analytic continuation, and large parts of the phase diagram then become accessible. We propose to apply this strategy to the mass-imbalanced case, which opens up the possibility to study the associated phase diagram with MC calculations. We perform a first mean-field analysis which suggests that zero-temperature studies, as well as detecting a potential (tri)critical point, are feasible.
Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murch, G.E.; Thorn, R.J.
1978-11-01
A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential energy distribution in a lattice gas at equilibrium as derived independently by Widom, and Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble, and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
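The Widom-type relationship referred to above is easy to sketch: in a fixed-composition lattice gas, the excess chemical potential follows from mu_ex = -kT ln< exp(-dU/kT) >, where dU is the interaction energy of a ghost particle inserted at a random empty site. The minimal example below, with illustrative parameters, equilibrates with particle-hole exchange moves (keeping N fixed, as in the petit canonical ensemble) and accumulates the insertion average; it is a sketch of the general technique, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
L, T, eps, x = 8, 2.0, 1.0, 0.3     # lattice size, temperature (k_B = 1), NN attraction, composition
occ = np.zeros((L, L, L), dtype=int)
occ.ravel()[rng.choice(L**3, int(x * L**3), replace=False)] = 1

def n_occ(c, s):
    i, j, k = s
    return (c[(i+1)%L,j,k] + c[(i-1)%L,j,k] + c[i,(j+1)%L,k]
            + c[i,(j-1)%L,k] + c[i,j,(k+1)%L] + c[i,j,(k-1)%L])

def kawasaki_sweep(c):
    # Particle-hole swaps conserve N (fixed composition).
    for _ in range(L**3):
        a = tuple(rng.integers(L, size=3)); b = tuple(rng.integers(L, size=3))
        if c[a] == c[b]:
            continue
        if c[a] == 0:
            a, b = b, a                  # ensure a is occupied, b empty
        c[a] = 0                         # remove at a, then insert at b
        dE = eps * n_occ(c, a) - eps * n_occ(c, b)
        c[b] = 1
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            c[a], c[b] = 1, 0            # reject: put the particle back

boltz = []
for sweep in range(500):
    kawasaki_sweep(occ)
    if sweep >= 100:                     # crude equilibration cutoff
        empty = np.argwhere(occ == 0)
        s = tuple(empty[rng.integers(len(empty))])
        dU = -eps * n_occ(occ, s)        # attraction lowers the insertion energy
        boltz.append(np.exp(-dU / T))

print("estimated mu_excess =", -T * np.log(np.mean(boltz)))
```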
Morales, Miguel A.; Pierleoni, Carlo; Schwegler, Eric; Ceperley, D. M.
2010-01-01
Using quantum simulation techniques based on either density functional theory or quantum Monte Carlo, we find clear evidence of a first-order transition in liquid hydrogen, between a low conductivity molecular state and a high conductivity atomic state. Using the temperature dependence of the discontinuity in the electronic conductivity, we estimate the critical point of the transition at temperatures near 2,000 K and pressures near 120 GPa. Furthermore, we have determined the melting curve of molecular hydrogen up to pressures of 200 GPa, finding a reentrant melting line. The melting line crosses the metalization line at 700 K and 220 GPa using density functional energetics and at 550 K and 290 GPa using quantum Monte Carlo energetics. PMID:20566888
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. The large amounts of simulation data generated are often difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code; the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
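A sketch of the kind of triage such a tool automates is given below: given a table of Monte Carlo runs (dispersed inputs plus a pass/fail flag), rank which inputs best separate failed from successful cases. Column names and the failure rule are invented for illustration and have no connection to the Orion data set.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 10_000
df = pd.DataFrame({
    "entry_angle": rng.normal(-12.0, 0.5, n),   # deg
    "wind_speed":  rng.normal(10.0, 8.0, n),    # m/s
    "mass_offset": rng.normal(0.0, 1.0, n),     # kg
})
# Synthetic failure criterion, for demonstration only.
df["failed"] = (df.entry_angle > -11.3) | (df.wind_speed > 28.0)

failed, ok = df[df.failed], df[~df.failed]
for col in ("entry_angle", "wind_speed", "mass_offset"):
    # Standardized mean difference: large magnitude flags a driver of failure.
    d = (failed[col].mean() - ok[col].mean()) / df[col].std()
    print(f"{col:>12}: separation = {d:+.2f}")
print(f"failure rate: {df.failed.mean():.1%}")
```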
Mars Microprobe Entry Analysis
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Mitcheltree, Robert A.; Cheatwood, F. McNeil
1998-01-01
The Mars Microprobe mission will provide the first opportunity for subsurface measurements, including water detection, near the south pole of Mars. In this paper, performance of the Microprobe aeroshell design is evaluated through development of a six-degree-of-freedom (6-DOF) aerodynamic database and flight dynamics simulation. Numerous mission uncertainties are quantified and a Monte Carlo analysis is performed to statistically assess mission performance. Results from this 6-DOF Monte Carlo simulation demonstrate that, in a majority of the cases (approximately 2-sigma), the penetrator impact conditions are within current design tolerances. Several trajectories are identified in which the current set of impact requirements is not satisfied. From these cases, critical design parameters are highlighted and additional system requirements are suggested. In particular, a relatively large angle-of-attack range near peak heating is identified.
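The structure of such a dispersion analysis can be sketched generically: sample the quantified uncertainties, propagate each sample through a flight-dynamics model, and check the impact conditions against requirements. In the sketch below, `simulate` is a stand-in placeholder (a linear toy model with invented coefficients), not the paper's 6-DOF simulation, and the requirement threshold is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(atm_density_mult, entry_fpa_deg, cg_offset_mm):
    # Placeholder mapping from dispersed inputs to an impact flight-path angle.
    return (-88.0 + 4.0 * (atm_density_mult - 1.0)
            + 0.8 * (entry_fpa_deg + 13.0)
            + 0.1 * cg_offset_mm + rng.normal(0.0, 0.5))

n = 20_000
impact_fpa = np.array([
    simulate(rng.normal(1.0, 0.1),      # atmosphere density multiplier
             rng.normal(-13.0, 0.3),    # entry flight-path angle, deg
             rng.normal(0.0, 2.0))      # center-of-gravity offset, mm
    for _ in range(n)
])

ok = impact_fpa <= -80.0                # illustrative impact-angle requirement
print(f"requirement met in {100 * ok.mean():.1f}% of cases")
```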
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
NASA Technical Reports Server (NTRS)
Horton, B. E.; Bowhill, S. A.
1971-01-01
This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Marck, S. C.
Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
Thermal algebraic-decay charge liquid driven by competing short-range Coulomb repulsion
NASA Astrophysics Data System (ADS)
Kaneko, Ryui; Nonomura, Yoshihiko; Kohno, Masanori
2018-05-01
We explore the possibility of a Berezinskii-Kosterlitz-Thouless-like critical phase for the charge degrees of freedom in the intermediate-temperature regime between the charge-ordered and disordered phases in two-dimensional systems with competing short-range Coulomb repulsion. As the simplest example, we investigate the extended Hubbard model with on-site and nearest-neighbor Coulomb interactions on a triangular lattice at half filling in the atomic limit by using a classical Monte Carlo method, and find a critical phase, characterized by algebraic decay of the charge correlation function, belonging to the universality class of the two-dimensional XY model with a Z6 anisotropy. Based on the results, we discuss possible conditions for the critical phase in materials.
ERIC Educational Resources Information Center
Smith, Stacey L.; Vannest, Kimberly J.; Davis, John L.
2011-01-01
The reliability of data is a critical issue in decision-making for practitioners in the school. Percent Agreement and Cohen's kappa are the two most widely reported indices of inter-rater reliability; however, a recent Monte Carlo study on the reliability of multi-category scales found other indices to be more trustworthy given the type of data…
Kharakoz, Dmitry P; Panchelyuga, Maria S; Tiktopulo, Elizaveta I; Shlyapnikova, Elena A
2007-12-01
The chain-ordering (melting) transition in a series of saturated diacylphosphatidylcholines (PCs) in aqueous dispersions has been studied experimentally (calorimetric and ultrasonic techniques) and theoretically (an Ising-like lattice model). The shape of the calorimetric curves was compared with the theoretical data and interpreted in terms of the lateral interactions and critical temperatures determined for each lipid studied. A critical chain length has been found (between 16 and 17 C-atoms per chain) which subdivides the PCs into two classes with different phase behavior. In shorter lipids, the transition takes place above their critical temperatures, meaning that it is an intrinsically continuous transition. In longer lipids, the transition occurs below their critical temperatures, meaning that it is intrinsically discontinuous (first-order). This conclusion was supported independently by ultrasonic relaxation measurements sensitive to density fluctuations. Interestingly, it is this chain length that is the most abundant among the saturated chains in biological membranes.
Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture
NASA Astrophysics Data System (ADS)
Potoff, Jeffrey J.; Panagiotopoulos, Athanassios Z.
1998-12-01
Monte Carlo simulations in the grand canonical ensemble were used to obtain liquid-vapor coexistence curves and critical points of the pure fluid and a binary mixture of Lennard-Jones particles. Critical parameters were obtained from mixed-field finite-size scaling analysis and subcritical coexistence data from histogram reweighting methods. The critical parameters of the untruncated Lennard-Jones potential were obtained as Tc*=1.3120±0.0007, ρc*=0.316±0.001 and pc*=0.1279±0.0006. Our results for the critical temperature and pressure are not in agreement with the recent study of Caillol [J. Chem. Phys. 109, 4885 (1998)] on a four-dimensional hypersphere. Mixture parameters were ɛ1=2ɛ2 and σ1=σ2, with Lorentz-Berthelot combining rules for the unlike-pair interactions. We determined the critical point at T*=1.0 and pressure-composition diagrams at three temperatures. Our results have much smaller statistical uncertainties relative to comparable Gibbs ensemble simulations.
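The histogram reweighting ingredient mentioned above can be sketched compactly: samples drawn at one inverse temperature beta0 are reassigned weights proportional to exp(-(beta - beta0) E) to estimate averages at a nearby beta. The example below uses synthetic energy samples and the single-histogram case only; the paper's mixed-field finite-size scaling analysis is a considerably more elaborate application of the same idea.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for sampled energies (real data would come from a
# grand canonical Monte Carlo run at beta0).
E_samples = rng.normal(-500.0, 20.0, 100_000)
beta0, beta = 1.0, 1.02

# w_i ∝ exp(-(beta - beta0) * E_i); subtract the max exponent for stability.
logw = -(beta - beta0) * E_samples
w = np.exp(logw - logw.max())
E_mean = np.sum(w * E_samples) / np.sum(w)
print(f"<E> at beta={beta}: {E_mean:.1f} (was {E_samples.mean():.1f} at beta0)")
```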
Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi
2016-11-08
We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.
NASA Astrophysics Data System (ADS)
Kondrashova, Daria; Valiullin, Rustem; Kärger, Jörg; Bunde, Armin
2017-07-01
Nanoporous silicon consisting of tubular pores embedded in a silicon matrix has found many technological applications and provides a useful model system for studying phase transitions under confinement. Recently, a model for mass transfer in these materials was elaborated [Kondrashova et al., Sci. Rep. 7, 40207 (2017)], which assumes that adjacent channels can be connected by "bridges" (occurring with probability pbridge), which allow diffusion perpendicular to the channels. Along the channels, diffusion can be slowed down by "necks", which occur with probability pneck. In this paper we use Monte Carlo simulations to study diffusion along the channels and perpendicular to them, as a function of pbridge and pneck, and find remarkable correlations between the diffusivities in the longitudinal and radial directions. For clarifying the diffusivity in the radial direction, which is governed by the concentration of bridges, we applied percolation theory. We determine analytically how the critical concentration of bridges depends on the size of the system and show that it approaches zero in the thermodynamic limit. Our analysis suggests that the critical properties of the model, including the diffusivity in the radial direction, are in the universality class of two-dimensional lattice percolation, which is confirmed by our numerical study.
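In the spirit of the two-dimensional lattice percolation class invoked above, a minimal Monte Carlo estimate of a percolation threshold is easy to sketch: occupy sites of an L x L square lattice with probability p and measure how often an occupied cluster spans top to bottom. The union-find bookkeeping below is generic, not the bridge-percolation model of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def spans(occ):
    L = occ.shape[0]
    parent = list(range(L * L))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            if i + 1 < L and occ[i + 1, j]:
                union(i * L + j, (i + 1) * L + j)
            if j + 1 < L and occ[i, j + 1]:
                union(i * L + j, i * L + j + 1)
    top = {find(parent, j) for j in range(L) if occ[0, j]}
    bottom = {find(parent, (L - 1) * L + j) for j in range(L) if occ[L - 1, j]}
    return bool(top & bottom)

L, trials = 32, 200
for p in (0.55, 0.59, 0.63):            # 2D site-percolation threshold is ~0.5927
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"p = {p}: spanning probability ~ {hits / trials:.2f}")
```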
Rikvold, Per Arne; Brown, Gregory; Miyashita, Seiji; ...
2016-02-16
Phase diagrams and hysteresis loops were obtained by Monte Carlo simulations and a mean-field method for a simplified model of a spin-crossover material with a two-step transition between the high-spin and low-spin states. This model is a mapping onto a square-lattice S = 1/2 Ising model with antiferromagnetic nearest-neighbor and ferromagnetic Husimi-Temperley (equivalent-neighbor) long-range interactions. Phase diagrams obtained by the two methods for weak and strong long-range interactions are found to be similar. However, for intermediate-strength long-range interactions, the Monte Carlo simulations show that tricritical points decompose into pairs of critical end points and mean-field critical points surrounded by horn-shaped regions of metastability. Hysteresis loops along paths traversing the horn regions are strongly reminiscent of thermal two-step transition loops with hysteresis, recently observed experimentally in several spin-crossover materials. As a result, we believe analogous phenomena should be observable in experiments and simulations for many systems that exhibit competition between local antiferromagnetic-like interactions and long-range ferromagnetic-like interactions caused by elastic distortions.
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
NASA Astrophysics Data System (ADS)
Chou, Shuo-Ju
2011-12-01
In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems providing critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise" or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies and the noise variables are the tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and in program development budget and schedule requirements. Unfortunately, the Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty during critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level metric employed by the TRA to describe the uncertainties of program technology elements provides only a qualitative, nondescript estimate of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision-makers with the ability to assess or measure the robustness of program requirements against such uncertainties. A literature review of techniques for forecasting technology performance and development uncertainties, and their subsequent impacts on capability, budget, and schedule requirements, led to the conclusion that coupling a probabilistic analysis technique such as Monte Carlo simulation with quantitative, parametric models of technology performance impact and of technology development time and cost requirements would allow the probabilities of meeting specific constraints on these requirements to be established. These probability-of-requirements-success metrics can then be used as a quantitative and probabilistic measure of program requirements robustness against technology uncertainties. Combined with a multi-objective genetic algorithm optimization process and a computer-based decision support system, critical information regarding requirements robustness against technology uncertainties can be captured and quantified for acquisition decision-makers. This results in a more informed and justifiable selection of program technologies during initial program definition, as well as in the formulation of program development and risk management strategies.
To meet the stated research objective, the ENhanced TEchnology Robustness Prediction and RISk Evaluation (ENTERPRISE) methodology was formulated to provide a structured and transparent process for integrating these enabling techniques into a probabilistic and quantitative assessment of acquisition program requirements robustness against technology performance and development uncertainties. In order to demonstrate the capabilities of the ENTERPRISE method and test the research hypotheses, a demonstration application of the method was performed on a notional program for acquiring the Carrier-based Suppression of Enemy Air Defenses (SEAD) capability using Unmanned Combat Aircraft Systems (UCAS) and their enabling technologies. The results of this implementation provided valuable insights regarding the benefits and inner workings of the methodology, as well as its limitations, which should be addressed in the future to narrow the gap between the current and desired states.
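The core probabilistic step can be sketched simply: propagate technology development uncertainties through parametric cost and schedule models with Monte Carlo sampling and report the probability of meeting program constraints. The distributions, element models, and limits below are illustrative assumptions, not values from the ENTERPRISE study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Triangular (low, mode, high) uncertainties for two technology elements.
dev_cost = (rng.triangular(40, 55, 95, n)        # element A cost, $M
            + rng.triangular(10, 18, 40, n))     # element B cost, $M
dev_time = np.maximum(rng.triangular(24, 30, 48, n),   # elements developed
                      rng.triangular(18, 36, 60, n))   # in parallel, months

budget, deadline = 90.0, 48.0
p_cost = np.mean(dev_cost <= budget)
p_time = np.mean(dev_time <= deadline)
p_both = np.mean((dev_cost <= budget) & (dev_time <= deadline))
print(f"P(cost ok) = {p_cost:.2f}, P(schedule ok) = {p_time:.2f}, P(both) = {p_both:.2f}")
```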
Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution
NASA Astrophysics Data System (ADS)
Buffard, Edwige; Gschwind, Régine; Makovicka, Libor; David, Céline
2006-09-01
Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution.
NASA Astrophysics Data System (ADS)
Gugsa, Solomon A.; Davies, Angela
2005-08-01
Characterizing an aspheric micro lens is critical for understanding its performance and providing feedback to manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using least-squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, in which a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, in which more of the surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement, and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties, and a Monte Carlo process is used to iterate the minimization. Eleven measurements were taken, and data are also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
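The best-fit-conic idea can be sketched in a few lines: fit the conic constant k of the standard conic sag z(r) = r^2 / (R (1 + sqrt(1 - (1+k) r^2 / R^2))) by least squares, then Monte Carlo the input parameters to obtain an uncertainty on k. All numbers below are invented for illustration, and only the radius of curvature is varied, not the full four-parameter distribution used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)

def sag(r, R, k):
    return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * r**2 / R**2)))

# Synthetic "measurement": a lens with R = 250 um, k = -0.7, plus 5 nm noise.
r = np.linspace(0, 120e-6, 200)
z_meas = sag(r, 250e-6, -0.7) + rng.normal(0, 5e-9, r.size)

def best_k(r, z, R):
    res = minimize_scalar(lambda k: np.sum((sag(r, R, k) - z)**2),
                          bounds=(-2.0, 0.5), method="bounded")
    return res.x

# Monte Carlo over the radius of curvature (assumed 0.5% uncertainty).
ks = [best_k(r, z_meas, rng.normal(250e-6, 1.25e-6)) for _ in range(500)]
print(f"best-fit conic k = {np.mean(ks):.3f} +/- {np.std(ks):.3f}")
```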
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of Weibull methods with samples containing very few failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented by computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
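The study's setup can be sketched directly: draw a small Weibull failure sample, censor it with uniformly distributed censoring times (the random-censoring model mentioned), and fit shape and scale by maximizing the censored log-likelihood. Sample sizes and parameters below are illustrative, not the SSME values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
shape_true, scale_true, n = 1.8, 100.0, 15

t_fail = scale_true * rng.weibull(shape_true, n)   # true failure times
t_cens = rng.uniform(0, 150.0, n)                  # random censoring times
t = np.minimum(t_fail, t_cens)
failed = t_fail <= t_cens                          # False => right-censored

def neg_loglik(params):
    b, eta = params                                # shape, scale
    if b <= 0 or eta <= 0:
        return np.inf
    z = t / eta
    # Failures contribute the log density; censored points only the survival term.
    ll = np.sum(failed * (np.log(b / eta) + (b - 1) * np.log(z))) - np.sum(z**b)
    return -ll

fit = minimize(neg_loglik, x0=[1.0, t.mean()], method="Nelder-Mead")
print(f"{failed.sum()} failures of {n}; MLE shape = {fit.x[0]:.2f}, scale = {fit.x[1]:.1f}")
```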
Essays on the economics and econometrics of human capital
NASA Astrophysics Data System (ADS)
Mosso, Stefano
This thesis is composed of three distinct chapters, related by their common theme: the economic analysis of the process of human capital formation. The first chapter distills and extends the recent research on the economics of human development and social mobility. It critically analyzes the literature on the role of early life conditions in shaping multiple life skills, with emphasis on the importance of critical and sensitive investment periods in influencing skill development. It develops economic models that rationalize the empirical evidence on treatment effects of social programs and on family influence. It investigates the empirical support of recent claims, made by part of the literature, on the relevance of credit constraints in limiting skill development. It shows that credit constraints are not a major force explaining differences in the amount of parental and self-investments in skills, and that untargeted income transfer policies to poor families do not significantly boost child outcomes. The second chapter compares the performance of maximum likelihood and simulated method of moments in estimating dynamic discrete choice models. It presents a structural model of education and shows how it can be used to estimate heterogeneous returns from schooling choices which account for their continuation values. Continuation values have a large impact on returns, but are ignored in the measures commonly used to assess the value of schooling choices. The estimates from the model are used to compute a synthetic dataset, which is used to assess the ability of maximum likelihood and simulated method of moments to recover the model parameters. It finally proposes a Monte Carlo exercise to gain confidence in the performance of a simulated method of moments algorithm. The last chapter proposes a method to assess long-run impacts on earnings of early interventions even in the absence of long-term data collection on earnings histories for program participants. It combines the methodological approaches of the literatures on program evaluation, data combination, and forecasting to develop estimators of the average treatment effects. This exercise allows a more complete cost-benefit evaluation of social programs, accounting for benefits over the whole life cycle.
Mostafa, Laoues; Rachid, Khelifi; Ahmed, Sidi Moussa
2016-08-01
Eye applicators with 90Sr/90Y and 106Ru/106Rh beta-ray sources are generally used in brachytherapy for the treatment of eye diseases such as uveal melanoma. Whenever radiation is used in treatment, dosimetry is essential, and knowledge of the exact dose distribution is critical to the outcome of the treatment. The Monte Carlo technique provides a powerful tool for calculating doses and dose distributions, and helps to predict and determine the doses from different shapes and types of eye applicators more accurately. The aim of this work was to use the Monte Carlo GATE platform to calculate the 3D dose distribution on a mathematical model of the human eye according to international recommendations. Mathematical models were developed for four ophthalmic applicators: two HDR 90Sr applicators (SIA.20 and SIA.6) and two LDR 106Ru applicators (a concave CCB model and a flat CCB model). In the present work, considering a heterogeneous eye phantom and the chosen tumor, the mean dose distributions obtained with GATE, evaluated according to international recommendations, show a discrepancy with respect to those specified by the manufacturers. Quality control of the dosimetric parameters shows that, contrary to the other applicators, the SIA.20 applicator is consistent with the recommendations. The GATE results show that the SIA.20 applicator performs best; namely, the doses delivered to critical structures were lower than those obtained for the other applicators, and the SIA.6 applicator, simulated with MCNPX, generates higher lens doses than those generated by GATE. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
NASA Astrophysics Data System (ADS)
España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M
2009-03-01
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
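A crude one-dimensional version of moment-matching scenario generation can be sketched as follows (this is not the paper's semi-infinite programming or column-generation algorithm): draw random candidate points, then solve a nonnegative least-squares problem for weights so that the weighted scenarios reproduce a prescribed set of moments. The sparse support that nonnegative least squares tends to return echoes the Tchakaloff-type bound mentioned above.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)

candidates = rng.normal(0.0, 1.0, 200)          # candidate scenario locations
target = np.array([1.0, 0.0, 1.0, 0.0, 3.0])    # moments 0..4 of N(0, 1)

# Row p of A holds candidate**p; we seek w >= 0 with A @ w ~= target.
A = np.vstack([candidates**p for p in range(5)])
w, resid = nnls(A, target)

kept = w > 1e-10
print(f"{kept.sum()} scenarios kept, moment residual = {resid:.2e}")
print("matched moments:", A[:, kept] @ w[kept])
```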
Abdul-Aziz, Mohd H; Abd Rahman, Azrin N; Mat-Nor, Mohd-Basri; Sulaiman, Helmi; Wallis, Steven C; Lipman, Jeffrey; Roberts, Jason A; Staatz, Christine E
2016-01-01
Doripenem has been recently introduced in Malaysia and is used for severe infections in the intensive care unit. However, limited data currently exist to guide optimal dosing in this scenario. We aimed to describe the population pharmacokinetics of doripenem in Malaysian critically ill patients with sepsis and use Monte Carlo dosing simulations to develop clinically relevant dosing guidelines for these patients. In this pharmacokinetic study, 12 critically ill adult patients with sepsis receiving 500 mg of doripenem every 8 h as a 1-hour infusion were enrolled. Serial blood samples were collected on 2 different days, and population pharmacokinetic analysis was performed using a nonlinear mixed-effects modeling approach. A two-compartment linear model with between-subject and between-occasion variability on clearance was adequate in describing the data. The typical volume of distribution and clearance of doripenem in this cohort were 0.47 liters/kg and 0.14 liters/kg/h, respectively. Doripenem clearance was significantly influenced by patients' creatinine clearance (CL(CR)), such that a 30-ml/min increase in the estimated CL(CR) would increase doripenem CL by 52%. Monte Carlo dosing simulations suggested that, for pathogens with a MIC of 8 mg/liter, a dose of 1,000 mg every 8 h as a 4-h infusion is optimal for patients with a CL(CR) of 30 to 100 ml/min, while a dose of 2,000 mg every 8 h as a 4-h infusion is best for patients manifesting a CL(CR) of >100 ml/min. Findings from this study suggest that, for doripenem usage in Malaysian critically ill patients, an alternative dosing approach may be meritorious, particularly when multidrug-resistant pathogens are involved. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
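The dosing-simulation step can be sketched with a deliberate simplification (a one-compartment model rather than the paper's two-compartment population model): sample clearance with between-subject variability and a CLCR dependence patterned on the values reported above, simulate steady-state 4-h infusions, and count how often a 40% fT>MIC target is attained. The reference CLCR, weight, variability magnitude, and target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n_pat, wt = 2_000, 70.0                        # patients, assumed body weight (kg)

clcr = rng.uniform(30, 100, n_pat)             # ml/min
# Typical CL 0.14 L/kg/h, +52% per 30 ml/min of CLCR above an assumed
# reference of 65, with ~30% lognormal between-subject variability.
cl = 0.14 * wt * (1 + 0.52 * (clcr - 65) / 30) * np.exp(rng.normal(0, 0.3, n_pat))
v = 0.47 * wt                                  # L
ke = cl / v

dose, tau, t_inf, mic = 1000.0, 8.0, 4.0, 8.0  # mg, h, h, mg/L
t = np.linspace(0, tau, 481)                   # one steady-state interval
r0 = dose / t_inf                              # infusion rate, mg/h

def conc(ke_i, cl_i):
    # Superpose repeated constant-rate infusions to reach steady state.
    c = np.zeros_like(t)
    for d in range(40):
        ts = t + d * tau                       # time since that dose started
        c += (r0 / cl_i) * (1 - np.exp(-ke_i * np.minimum(ts, t_inf))) \
             * np.exp(-ke_i * np.maximum(ts - t_inf, 0.0))
    return c

ft_above = np.array([np.mean(conc(k, c) > mic) for k, c in zip(ke, cl)])
print(f"P(40% fT>MIC attained) = {np.mean(ft_above >= 0.40):.2f}")
```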
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
Probabilistic analysis algorithm for UA slope software program.
DOT National Transportation Integrated Search
2013-12-01
A reliability-based computational algorithm for using a single row of equally spaced drilled shafts to stabilize an unstable slope has been developed in this research. The Monte Carlo simulation (MCS) technique was used in the previously develop...
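The MCS idea for slope reliability can be sketched with a standard textbook model (an infinite-slope factor of safety, not the UA Slope drilled-shaft formulation): sample the uncertain soil properties and report the probability that the factor of safety falls below 1. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 200_000

phi = np.radians(rng.normal(30.0, 3.0, n))     # friction angle
c = rng.lognormal(np.log(8.0), 0.3, n)         # cohesion, kPa
gamma, depth = 18.0, 4.0                       # unit weight kN/m^3, slip depth m
beta = np.radians(25.0)                        # slope angle

# Infinite-slope factor of safety (dry case).
fs = (c + gamma * depth * np.cos(beta)**2 * np.tan(phi)) \
     / (gamma * depth * np.sin(beta) * np.cos(beta))
print(f"P(failure) = P(FS < 1) = {np.mean(fs < 1):.4f}")
```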
Risk/benefit assessment of delayed action concept for rail inspection
DOT National Transportation Integrated Search
1999-02-01
A Monte Carlo simulation of certain aspects of rail inspection is presented. The simulation is used to investigate alternative practices in railroad rail inspection programs. Results are presented to compare the present practice of immediately repair...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, Andreu; Badano, Aldo
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement such an algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and avoids spending memory bandwidth on checking whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, for which the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA Data Parallel Primitives (CUDPP) library is used to perform the parallel reductions, sorts, and sums; the CURAND library is used to seed the linear congruential random number generators; and the OptiX ray tracing framework is used for geometry representation.
OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry, so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e., determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
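The remapping-vector idea described above can be illustrated on the CPU in a few lines (a schematic in NumPy, not WARP's CUDA implementation; the reaction keys are illustrative): sort (reaction, index) pairs each iteration so that threads working on the same reaction type stay contiguous, and let finished histories drop out of the active set.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 16
# Per-neutron reaction keys: scatter, fission, capture, or done (-1).
reaction = rng.choice([2, 18, 102, -1], size=n)

order = np.argsort(reaction, kind="stable")       # stand-in for the parallel radix sort
active = order[reaction[order] != -1]             # completed histories removed

# A kernel would now process `active` in contiguous same-reaction blocks,
# gathering neutron data through the remapping vector instead of moving it.
for r in np.unique(reaction[active]):
    block = active[reaction[active] == r]
    print(f"reaction {r:>4}: indices {block}")
```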
A journey from nuclear criticality methods to high energy density radflow experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbatsch, Todd James
Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy and they are as saturated with politics as a presidential election, but they sure are fun.
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use of the Software/Hardware Avionics Research Program (SHARP), developed to duplicate the automatic portion of the STOLAND simulator system on a general-purpose computer system (i.e., an IBM 360), are described. The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single-run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
2014-10-01
... the angles and dihedrals that are truly unique will be indicated by the user by editing NewAngleTypesDump and NewDihedralTypesDump. The program ... Atomistic Molecular Simulations ... Robert M Elder, Timothy W Sirk, and ... Antechamber program in Assisted Model Building with Energy Refinement (AMBER) Tools to assign partial charges (using the Austin Model 1 [AM1] bond charge correction)
Error analysis of Dobson spectrophotometer measurements of the total ozone content
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.
1975-01-01
A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct-sun measurements, and of the ozone amounts inferred from those measurements, to variations in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.
Splash evaluation of SRB designs
NASA Technical Reports Server (NTRS)
Counter, D. N.
1974-01-01
A technique is developed to optimize the shuttle solid rocket booster (SRB) design for water impact loads. The SRB is dropped by parachute and recovered at sea for reuse. Loads experienced at water impact are design critical. The probability of each water impact load is determined using a Monte Carlo technique and an aerodynamic analysis of the SRB parachute system. Meteorological effects are included and four configurations are evaluated.
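The failure-probability calculation described here can be sketched in a few lines of Python. The distributions and parameter values below are invented placeholders, not the study's data; the method is simply to sample the critical impact parameters and count how often the load exceeds the component's capability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo drop trajectories

# Hypothetical input distributions (illustrative values only):
# vertical impact velocity (m/s) set by the parachute system plus wind effects.
v_impact = rng.normal(loc=25.0, scale=3.0, size=n)

# Hypothetical component strength expressed as the impact velocity it tolerates,
# with unit-to-unit scatter in strength capability.
v_capability = rng.normal(loc=32.0, scale=2.5, size=n)

# A component fails on a given trajectory if the impact exceeds its capability.
p_fail = np.mean(v_impact > v_capability)
print(f"estimated failure probability: {p_fail:.4f}")
```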
Recalibration of indium foil for personnel screening in criticality accidents.
Takada, C; Tsujimura, N; Mikami, S
2011-03-01
At the Nuclear Fuel Cycle Engineering Laboratories of the Japan Atomic Energy Agency (JAEA), small pieces of indium foil incorporated into personal dosemeters have been used for personnel screening in criticality accidents. Irradiation tests of the badges were performed using the SILENE reactor to verify the calibration of the indium activation that had been made in the 1980s, and to recalibrate the badges for simulated criticalities of the kind most likely to occur in the solution process line. In addition, Monte Carlo calculations of the indium activation using the badge model were made to complement the spectral dependence. The results led to a screening level of 15 kcpm being determined, corresponding to a total dose of 0.25 Gy, which is also applicable to posterior-anterior exposure. The recalibration based on the latest study will provide a sounder basis for the screening procedure in the event of a criticality accident.
Driving a Superconductor to Insulator Transition with Random Gauge Fields.
Nguyen, H Q; Hollen, S M; Shainline, J; Xu, J M; Valles, J M
2016-11-30
Typically the disorder that alters the interference of particle waves to produce Anderson localization is potential scattering from randomly placed impurities. Here we show that disorder in the form of random gauge fields that act directly on particle phases can also drive localization. We present evidence of a superfluid to Bose-glass insulator transition at a critical level of this gauge field disorder in a nano-patterned array of amorphous Bi islands. This transition shows signs of metallic transport near the critical point, characterized by a resistance, indicative of a quantum phase transition. The critical disorder depends on interisland coupling, in agreement with recent quantum Monte Carlo simulations. We discuss how this disorder-tuned SIT differs from the common frustration-tuned SIT that also occurs in magnetic fields. Its discovery enables new high-fidelity comparisons between theoretical and experimental studies of disorder effects on quantum critical systems.
2017-01-01
A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups) with the costs of thermogenesis (by contributing heat). The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. In more general terms, huddling is proposed as an ideal system for investigating the interaction between self-organisation and natural selection empirically. PMID:28141809
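A toy version of the group-exchange algorithm can be written compactly. The sketch below is only a guess at the spirit of the model: the body-temperature function is invented and a Metropolis-style acceptance rule stands in for the paper's exact temperature-dependent exchange probability.

```python
import numpy as np

rng = np.random.default_rng(1)

n_animals, n_groups = 60, 6
group = rng.integers(0, n_groups, size=n_animals)

T_env = 10.0   # environment temperature (hypothetical units)
T_pref = 37.0  # preferred body temperature
k = 1.0        # noise scale of the exchange rule (assumed)

def body_temp(group_size, T_env):
    # Toy physiology: bigger huddles lose less heat to a cold environment.
    return T_env + (T_pref - T_env) * group_size / (group_size + 1.0)

for sweep in range(2000):
    i = rng.integers(n_animals)      # pick an animal
    new = rng.integers(n_groups)     # propose moving it to another group
    sizes = np.bincount(group, minlength=n_groups)
    dT_old = abs(T_pref - body_temp(sizes[group[i]], T_env))
    dT_new = abs(T_pref - body_temp(sizes[new] + 1, T_env))
    # Metropolis-style acceptance: exchanges that bring body temperature
    # closer to the preferred value are favored; noise allows departures.
    if rng.random() < np.exp((dT_old - dT_new) / k):
        group[i] = new

print("group sizes:", np.bincount(group, minlength=n_groups))
```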
NASA Astrophysics Data System (ADS)
Jabar, A.; Tahiri, N.; Bahmad, L.; Benyoussef, A.
2016-11-01
A bi-layer system consisting of layers of spins (7/2, 3) in a ferromagnetic dendrimer structure, separated by a non-magnetic spacer, is studied by Monte Carlo simulations. The effect of the RKKY interactions is investigated and discussed for such a system. It is shown that the magnetic properties in the two magnetic layers depend strongly on the thickness of the magnetic and non-magnetic layers. The total magnetizations and susceptibilities are studied as a function of the reduced temperature. The effect of the reduced exchange interactions as well as the reduced crystal field is outlined. On the other hand, the critical temperature is discussed as a function of the magnetic layer values. To complete this study we present and discuss the magnetic hysteresis cycles.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase Space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.
NASA Astrophysics Data System (ADS)
Gao, Wanbao; Raeside, David E.
1997-12-01
Dose distributions that result from treating a patient with orthovoltage beams are best determined with a treatment planning system that uses the Monte Carlo method, and such systems are not readily available. In the present work, the Monte Carlo method was used to develop a computer code for determining absorbed dose distributions in orthovoltage radiation therapy. The code was used in planning treatment of a patient with a neuroendocrine carcinoma of the maxillary sinus. Two lateral high-energy photon beams supplemented by an anterior orthovoltage photon beam were utilized in the treatment plan. For the clinical case and radiation beams considered, a reasonably uniform dose distribution
is achieved within the target volume, while the dose to the lens of each eye is 4-8% of the prescribed dose. Therefore, an orthovoltage photon beam, when properly filtered and optimally combined with megavoltage beams, can be effective in the treatment of cancers below the skin, provided that accurate treatment planning is carried out to establish with accuracy and precision the doses to critical structures.
Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard
2014-06-01
Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation and, if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.
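The bookkeeping problem motivating the paper is easy to see in a short sketch. The Python fragment below, with an invented stand-in for the TH solution, enumerates the explicit cell definitions that would otherwise be needed; even the small 3x3 verification case already requires 9 x 60 x 3 of them. It is illustrative only and does not emit actual MCNP input syntax.

```python
# Sketch of the input explosion with explicit geometry: every
# (pin, axial zone, region) combination needs its own cell with its own
# temperature (and coolant density). All names and values are invented.

n_pins, n_zones = 9, 60          # 3x3 lattice, 60 axial layers (as in the paper)
regions = ("fuel", "cladding", "coolant")

def th_result(pin, zone):
    # Stand-in for the thermal-hydraulics solution: temperature (K) and
    # coolant density (g/cm^3) varying along the axial direction.
    T = 550.0 + 200.0 * zone / n_zones
    rho = 0.74 - 0.30 * zone / n_zones
    return T, rho

cells = []
for pin in range(n_pins):
    for zone in range(n_zones):
        T, rho = th_result(pin, zone)
        for region in regions:
            density = rho if region == "coolant" else None
            cells.append((pin, zone, region, T, density))

print(len(cells), "cell definitions")   # 9 * 60 * 3 = 1620 even for this small case
```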
NASA Technical Reports Server (NTRS)
Wiscombe, W.
1999-01-01
The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of this; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo radiation models of shortwave radiation transfer in 3D clouds. The consequences of this are examined in two areas: the design of aircraft field programs to measure cloud absorptance, and the explanation of the famous "Landsat scale break" in measured radiance.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
Block voter model: Phase diagram and critical behavior
NASA Astrophysics Data System (ADS)
Sampaio-Filho, C. I. N.; Moreira, F. G. B.
2011-11-01
We introduce and study the block voter model with noise on two-dimensional square lattices using Monte Carlo simulations and finite-size scaling techniques. The model is defined by an outflow dynamics where a central set of N_PCS spins, here denoted persuasive cluster spins (PCS), tries to influence the opinion of their neighboring counterparts. We consider the collective behavior of the entire system with varying PCS size. When N_PCS > 2, the system exhibits an order-disorder phase transition at a critical noise parameter q_c which is a monotonically increasing function of the size of the persuasive cluster. We conclude that a larger PCS has more power of persuasion when compared to a smaller one. The resulting critical behavior appears to be Ising-like, independent of the range of interaction.
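One plausible reading of the outflow dynamics, sketched for a two-site persuasive cluster: if the cluster is unanimous, its boundary neighbors adopt the cluster opinion with probability 1 - q and the opposite opinion with probability q. This is an illustrative toy, not the authors' exact update rule.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 32
spins = rng.choice([-1, 1], size=(L, L))
q = 0.05         # noise parameter

for step in range(20000):
    i, j = rng.integers(L, size=2)
    cluster = [(i, j), (i, (j + 1) % L)]            # a 2-site PCS
    states = [spins[a, b] for a, b in cluster]
    if len(set(states)) == 1:                       # unanimous cluster persuades
        s = states[0]
        # Outflow step: boundary neighbors adopt the cluster opinion with
        # probability 1 - q, and the opposite opinion with probability q.
        for a, b in cluster:
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = (a + da) % L, (b + db) % L
                if (na, nb) not in cluster:
                    spins[na, nb] = s if rng.random() > q else -s

print(f"|magnetization| = {abs(spins.mean()):.3f}")
```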
Weak- versus strong-disorder superfluid—Bose glass transition in one dimension
NASA Astrophysics Data System (ADS)
Doggen, Elmer V. H.; Lemarié, Gabriel; Capponi, Sylvain; Laflorencie, Nicolas
2017-11-01
Using large-scale simulations based on matrix product state and quantum Monte Carlo techniques, we study the superfluid to Bose glass transition for one-dimensional attractive hard-core bosons at zero temperature, across the full regime from weak to strong disorder. As a function of interaction and disorder strength, we identify a Berezinskii-Kosterlitz-Thouless critical line with two different regimes. At small attraction, where the critical disorder is weak compared to the bandwidth, the critical Luttinger parameter K_c takes its universal Giamarchi-Schulz value K_c = 3/2. Conversely, a nonuniversal K_c > 3/2 emerges for stronger attraction, where weak-link physics is relevant. In this strong-disorder regime, the transition is characterized by self-similar power-law-distributed weak links with a continuously varying characteristic exponent α.
Robust criticality of an Ising model on rewired directed networks
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Gontarek, Krzysztof; Lipowska, Dorota
2015-06-01
We show that preferential rewiring, which is supposed to mimic the behavior of financial agents, changes a directed-network Ising ferromagnet with a single critical point into a model with robust critical behavior. For the nonrewired random graph version, due to a constant number of out-links for each site, we write a simple mean-field-like equation describing the behavior of magnetization; we argue that it is exact and support the claim with extensive Monte Carlo simulations. For the rewired version, this equation is obeyed only at low temperatures. At higher temperatures, rewiring leads to strong heterogeneities, which apparently invalidates mean-field arguments and induces large fluctuations and divergent susceptibility. Such behavior is traced back to the formation of a relatively small core of agents that influence the entire system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and the results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing time in 4D treatment planning, which requires Monte Carlo dose calculations for all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes selected for the simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant.
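The diminishing-return behavior reported here is what one expects when a fixed serial stage (e.g., dose reconstruction) is combined with a Monte Carlo stage that scales with node count. A minimal model, with invented coefficients:

```python
# Sketch of the diminishing-returns argument: total wall-clock time is a
# fixed serial part plus a Monte Carlo part that parallelizes across nodes.
# Coefficients below are invented for illustration.

def total_time(n_nodes, t_serial=600.0, t_mc_single_node=9000.0):
    """Wall-clock time (s) for the plan on n_nodes cloud nodes."""
    return t_serial + t_mc_single_node / n_nodes

for n in (1, 5, 10, 15, 30):
    print(n, "nodes:", round(total_time(n)), "s")
# Beyond roughly 10-15 nodes the serial part dominates and extra nodes buy little.
```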
Thermal gradients for the stabilization of a single domain wall in magnetic nanowires.
Mejía-López, J; Velásquez, E A; Mazo-Zuluaga, J; Altbir, D
2018-08-24
By means of Monte Carlo simulations we studied field-driven nucleation and propagation of transverse domain walls (DWs) in magnetic nanowires subjected to temperature gradients. The simulations identified critical thermal gradients that allow reversal processes driven by a single DW. The critical thermal gradients depend on external parameters such as temperature, magnetic field and wire length, and can be obtained experimentally through measurement of the mean velocity of the magnetization reversal as a function of the temperature gradient. Our results show that temperature gradients provide a high degree of control over DW propagation, which is of great importance for technological applications.
On the origin of the super-spreading events in the SARS epidemic
NASA Astrophysics Data System (ADS)
Fang, Haiping; Chen, Jixiu; Hu, Jun; Xu, Lisa X.
2004-10-01
"Super-spread events" (SSEs), which have been observed in Singapore, Hong Kong in China and many cities all over the world, usually have a large influence on the early course of an epidemic. Understanding these SSEs is critical to the containment of SARS. In this letter it is shown, on the basis of a simple spatially explicit Monte Carlo SEIR model, that the probability of SSEs remains high even when the virulence is equal for all infective individuals. Long latent periods play a critical role in the appearance of SSEs. Heterogeneity in the activities of infective cases can also increase this probability.
NASA Astrophysics Data System (ADS)
Krása, Antonín; Kochetkov, Anatoly; Baeten, Peter; Vittiglio, Guido; Wagemans, Jan; Bécares, Vicente
2017-09-01
VENUS-F is a fast, zero-power reactor with 30 wt% enriched metallic uranium fuel and solid lead as a coolant simulator. It serves as a mockup of the MYRRHA reactor core. This paper describes integral experiments performed in two critical VENUS-F core configurations (with and without a graphite reflector). Discrepancies between the experiments and Monte Carlo calculations (MCNP5) of keff, the fission rate spatial distribution and reactivity effects (lead void and fuel Doppler), depending on the nuclear data library used (JENDL-4.0, ENDF/B-VII.1, JEFF-3.1.2, 3.2, 3.3T2), are presented.
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badiev, M. K., E-mail: m-zagir@mail.ru; Murtazaev, A. K.; Ramazanov, M. K.
2016-10-15
The phase transitions (PTs) and critical properties of the antiferromagnetic Ising model on a layered (stacked) triangular lattice have been studied by the Monte Carlo method using a replica algorithm with allowance for next-nearest-neighbor interactions. The character of the PTs is analyzed using the histogram technique and the method of Binder cumulants. It is established that the transition from the disordered to the paramagnetic phase in the adopted model is a second-order PT. Static critical exponents of the heat capacity (α), susceptibility (γ), order parameter (β), and correlation radius (ν), and the Fisher exponent η are calculated using the finite-size scaling theory. It is shown that (i) the antiferromagnetic Ising model on a layered triangular lattice belongs to the XY universality class of critical behavior and (ii) allowance for the intralayer interactions of next-nearest neighbors in the adopted model leads to a change in the universality class of critical behavior.
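The Binder-cumulant method named here has a compact standard form, U = 1 - <m^4>/(3<m^2>^2), whose curves for different lattice sizes cross at the transition temperature. A minimal implementation, with synthetic input data:

```python
import numpy as np

def binder_cumulant(m_samples):
    """Fourth-order Binder cumulant U = 1 - <m^4> / (3 <m^2>^2).

    Curves of U versus temperature for different lattice sizes L cross at
    the critical temperature; this crossing is how the cumulant method
    locates phase transitions.
    """
    m = np.asarray(m_samples, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

# Example with synthetic magnetization samples (Gaussian, as in the
# high-temperature phase, where U -> 0 for a scalar order parameter):
rng = np.random.default_rng(4)
print(binder_cumulant(rng.normal(0.0, 1.0, size=100_000)))  # close to 0
```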
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from the central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm^3 resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 values are insensitive to errors in effective beam width and acrylic density. However, they are sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.
Risk Assessment Techniques. A Handbook for Program Management Personnel
1983-07-01
tion; not directly usable without further development. 37. Lieber, R.S., "New Approaches for Quantifying Risk and Determining Sharing Arrangements" ... must be provided. Prediction intervals around cost estimating relationships (CERs) or Monte Carlo simulations will be used as proper in quantifying ... risk." [emphasis supplied] Para 9.d. "The ISR will address the potential risk in the program office estimate by identifying 'risk' areas and their
NASA Astrophysics Data System (ADS)
Dance, David R.; McVey, Graham; Sandborg, Michael P.; Persliden, Jan; Carlsson, Gudrun A.
1999-05-01
A Monte Carlo program has been developed to model X-ray imaging systems. It incorporates an adult voxel phantom and includes anti-scatter grid, radiographic screen and film. The program can calculate contrast and noise for a series of anatomical details. The use of measured H and D curves allows the absolute calculation of the patient entrance air kerma for a given film optical density (or vice versa). Effective dose can also be estimated. In an initial validation, the program was used to predict the optical density for exposures with plastic slabs of various thicknesses. The agreement between measurement and calculation was on average within 5%. In a second validation, a comparison was made between computer simulations and measurements for chest and lumbar spine patient radiographs. The predictions of entrance air kerma mostly fell within the range of measured values (e.g. chest PA calculated 0.15 mGy, measured 0.12 - 0.17 mGy). Good agreement was also obtained for the calculated and measured contrasts for selected anatomical details and acceptable agreement for dynamic range. It is concluded that the program provides a realistic model of the patient and imaging system. It can thus form the basis of a detailed study and optimization of X-ray imaging systems.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
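The Poisson comparison underlying the likelihood can be sketched directly. The binned counts below are invented, and a real analysis would sum this log-likelihood over several observed distributions (period, magnetic field, flux, etc.):

```python
import numpy as np
from scipy.stats import poisson

def poisson_log_likelihood(observed_counts, simulated_counts):
    """Binned Poisson log-likelihood of observed pulsar counts given the
    counts predicted by one Monte Carlo population-synthesis run.

    With only a handful of detected gamma-ray pulsars per bin, Poisson
    statistics is the appropriate comparison, as the abstract notes.
    """
    obs = np.asarray(observed_counts)
    mu = np.clip(np.asarray(simulated_counts, dtype=float), 1e-10, None)
    return poisson.logpmf(obs, mu).sum()

# Hypothetical binned period distributions (illustrative numbers only):
observed = [0, 3, 7, 5, 1]
model = [0.4, 2.8, 6.1, 5.9, 1.3]
print(poisson_log_likelihood(observed, model))
```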
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice
2016-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two-pole caustic and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
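A schematic of what a photon-history loop of this kind computes, written in Python rather than MATLAB and much simpler than ScintSim1 itself: assuming perfectly reflective side walls, a photon emitted at depth z toward the receptor travels a path z/|mu| and survives bulk attenuation with probability exp(-path/att_length). Upward photons are assumed lost at the top face; all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

height = 10.0        # element thickness (mm), assumed
att_length = 30.0    # bulk attenuation length (mm), assumed
n_photons = 100_000

z = rng.uniform(0.0, height, n_photons)   # emission depth above the receptor
mu = rng.uniform(-1.0, 1.0, n_photons)    # direction cosine toward receptor

down = mu > 0                             # photons headed toward the receptor
path = z[down] / mu[down]                 # total path length to the receptor
survive = rng.random(path.size) < np.exp(-path / att_length)

print("light collection efficiency:", survive.sum() / n_photons)
```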
NASA Astrophysics Data System (ADS)
Mortuza, Md Firoz; Lepore, Luigi; Khedkar, Kalpana; Thangam, Saravanan; Nahar, Arifatun; Jamil, Hossen Mohammad; Bandi, Laxminarayan; Alam, Md Khorshed
2018-03-01
Characterization of a 90 kCi (3330 TBq), semi-industrial, cobalt-60 gamma irradiator was performed by commissioning dosimetry and in-situ dose mapping experiments with Ceric-cerous and Fricke dosimetry systems. Commissioning dosimetry was carried out to determine the distribution pattern of absorbed dose in the irradiation cell and products. To determine the maximum and minimum absorbed dose, overdose ratio and dwell time of the tote boxes, a homogeneous dummy product (rice husk) with a bulk density of 0.13 g/cm3 was used in the box positions of the irradiation chamber. The regions of minimum absorbed dose in the tote boxes were observed in the lower zones of the middle plane, and the maximum absorbed doses were found in the middle position of the front plane. Moreover, as part of the dose mapping, dose rates at the wall positions and some selected strategic positions were also measured in order to carry out multiple irradiation programs simultaneously, especially low-dose research irradiation programs. In most cases, Monte Carlo simulation data, using the Monte Carlo N-Particle eXtended code version MCNPX 2.7, were found to be in congruence with the experimental values obtained from Ceric-cerous and Fricke dosimetry; however, at positions in close proximity to the source, the dose rate variation between chemical dosimetry and MCNP was higher than at distant positions.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
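On a single branch, the fine-grained path sampling described here reduces to the standard Brownian-bridge construction: sample an unconditioned Brownian path from the value at one node and pin it to the value at the other. A sketch with illustrative parameters, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_brownian_bridge(x_start, x_end, t_branch, n_points, sigma2):
    """Sample a fine-grained Brownian trajectory along one branch,
    conditioned on the (log-rate) values at its two ends.

    The paper's generic resampling moves update such discretized paths
    along and across branches.
    """
    t = np.linspace(0.0, t_branch, n_points)
    # Unconditioned Brownian motion started at x_start...
    steps = rng.normal(0.0, np.sqrt(sigma2 * np.diff(t)))
    w = x_start + np.concatenate(([0.0], np.cumsum(steps)))
    # ...pinned to x_end by the standard bridge correction.
    return w + (t / t_branch) * (x_end - w[-1])

path = sample_brownian_bridge(x_start=-0.2, x_end=0.4,
                              t_branch=1.0, n_points=100, sigma2=0.5)
# e.g. a branchwise average substitution rate from the fine-grained path:
print("mean rate along branch:", np.exp(path).mean())
```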
Infrared Spectroscopy of Star Formation in Galactic and Extragalactic Regions
NASA Technical Reports Server (NTRS)
Smith, Howard A.; Hasan, Hashima (Technical Monitor)
2004-01-01
Last year we submitted and had accepted a paper entitled "The Far-Infrared Emission Line and Continuum Spectrum of the Seyfert Galaxy NGC 1068," by Spinoglio, L., Malkan, M., Smith, H.A., Gonzalez-Alfonso, E., and Fischer, J. This analysis was based on the SWAS Monte Carlo code modeling of the OH lines in galaxies observed by ISO. Since that meeting last spring, considerable effort has been put into improving the Monte Carlo code. A group of European astronomers, including Prof. Eduardo Gonzalez-Alfonso, had been performing Monte Carlo modeling of other molecules seen in ISO galaxies. We used portions of this grant to bring Prof. Gonzalez-Alfonso to Cambridge for an intensive working visit. A second major paper on the ISO IR spectroscopy of galaxies, "The Far Infrared Spectrum of Arp 220," Gonzalez-Alfonso, E., Smith, H., Fischer, J., and Cernicharo, J., is in press. Spitzer science development was the major component of this past year's research. This program supported the development of five Early Release Objects for Spitzer observations on which Dr. Smith was Principal Investigator or Co-Investigator, and another five proposals for GO time. The early release program is designed to rapidly present to the public and the scientific community some exciting results from Spitzer in the first months of its operation. The Spitzer instrument and science teams submitted proposals for ERO objects, and a competitive selection process narrowed these down to a small group with exciting science and realistic observational parameters. This grant supported Dr. Smith's participation in the ERO process, including developing science goals, identifying key objects for observation, and developing the detailed AORs (observing formulae) to be used by the instruments for mapping, integrating, etc. During this year Dr. Smith worked on writing up and publishing these early results. The attached bibliography includes six of Dr. Smith's articles. During this past year Dr. Smith also led or helped to develop proposals for ten Spitzer GO Programs, and three others. Appendix B lists the programs involved.
SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran
NASA Astrophysics Data System (ADS)
Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.
2008-03-01
We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more.
Program summary
Title of program: SMMP
Catalogue identifier: ADOJ_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
Programming language used: FORTRAN, Python
No. of lines in distributed program, including test data, etc.: 52 105
No. of bytes in distributed program, including test data, etc.: 599 150
Distribution format: tar.gz
Computer: Platform independent
Operating system: OS independent
RAM: 2 Mbytes
Classification: 3
Does the new version supersede the previous version?: Yes
Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins.
Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles.
Reasons for new version: API changes and increased functionality.
Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings.
Restrictions: The consumed CPU time increases with the size of the protein molecule.
Running time: Depends on the size of the simulated molecule.
A Hardware-Accelerated Quantum Monte Carlo framework (HAQMC) for N-body systems
NASA Astrophysics Data System (ADS)
Gothandaraman, Akila; Peterson, Gregory D.; Warren, G. Lee; Hinde, Robert J.; Harrison, Robert J.
2009-12-01
Interest in the study of structural and energetic properties of highly quantum clusters, such as inert gas clusters, has motivated the development of a hardware-accelerated framework for Quantum Monte Carlo simulations. In the Quantum Monte Carlo method, the properties of a system of atoms, such as the ground-state energies, are averaged over a number of iterations. Our framework is aimed at accelerating the computations in each iteration of the QMC application by offloading the calculation of properties, namely energy and trial wave function, onto reconfigurable hardware. This gives a user the capability to run simulations for a large number of iterations, thereby reducing the statistical uncertainty in the properties, and for larger clusters. This framework is designed to run on the Cray XD1 high performance reconfigurable computing platform, which exploits the coarse-grained parallelism of the processor along with the fine-grained parallelism of the reconfigurable computing devices available in the form of field-programmable gate arrays. In this paper, we illustrate the functioning of the framework, which can be used to calculate the energies for a model cluster of helium atoms. In addition, we present the capabilities of the framework that allow the user to vary the chemical identities of the simulated atoms.
Program summary
Program title: Hardware Accelerated Quantum Monte Carlo (HAQMC)
Catalogue identifier: AEEP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 691 537
No. of bytes in distributed program, including test data, etc.: 5 031 226
Distribution format: tar.gz
Programming language: C/C++ for the QMC application, VHDL and Xilinx 8.1 ISE/EDK tools for FPGA design and development
Computer: Cray XD1 consisting of a dual-core, dual-processor AMD Opteron 2.2 GHz with a Xilinx Virtex-4 (V4LX160) or Xilinx Virtex-II Pro (XC2VP50) FPGA per node. We use the compute node with the Xilinx Virtex-4 FPGA
Operating system: Red Hat Enterprise Linux OS
Has the code been vectorised or parallelized?: Yes
Classification: 6.1
Nature of problem: Quantum Monte Carlo is a practical method to solve the Schrödinger equation for large many-body systems and obtain the ground-state properties of such systems. This method involves the sampling of a number of configurations of atoms and averaging the properties of the configurations over a number of iterations. We are interested in applying the QMC method to obtain the energy and other properties of highly quantum clusters, such as inert gas clusters.
Solution method: The proposed framework provides a combined hardware-software approach, in which the QMC simulation is performed on the host processor, with the computationally intensive functions such as energy and trial wave function computations mapped onto the field-programmable gate array (FPGA) logic device attached as a co-processor to the host processor. We perform the QMC simulation for a number of iterations as in the case of our original software QMC approach, to reduce the statistical uncertainty of the results.
However, our proposed HAQMC framework accelerates each iteration of the simulation by significantly reducing the time taken to calculate the ground-state properties of the configurations of atoms, thereby accelerating the overall QMC simulation. We provide a generic interpolation framework that can be extended to study a variety of pure and doped atomic clusters, irrespective of the chemical identities of the atoms. For the FPGA implementation of the properties, we use a two-region approach to compute the properties accurately over the entire domain, and employ deep pipelines and fixed-point arithmetic for all our calculations, guaranteeing the accuracy required for our simulation.
Using Functional Languages and Declarative Programming to analyze ROOT data: LINQtoROOT
NASA Astrophysics Data System (ADS)
Watts, Gordon
2015-05-01
Modern high energy physics analysis is complex. It typically requires multiple passes over different datasets, and is often held together with a series of scripts and programs. For example, one has to first reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet-related variable can be made. This requires a pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables the analyser is really interested in. With most modern ROOT-based tools this requires separate analysis loops for each pass, and script files to glue the results of the two analysis loops together. A framework has been developed that uses the functional and declarative features of the C# language and its Language Integrated Query (LINQ) extensions to declare the analysis. The framework uses language tools to convert the analysis into C++ and runs ROOT or PROOF as a backend to get the results. This gives the analyser the full power of an object-oriented programming language to put together the analysis and at the same time the speed of C++ for the analysis loop. The tool allows one to incorporate C++ algorithms written for ROOT by others. A by-product of the design is the ability to cache results between runs, dramatically reducing the cost of adding one more plot, and also to keep a complete record associated with each plot for data preservation reasons. The code is mature enough to have been used in ATLAS analyses. The package is open source and available on the open source site CodePlex.
Vapor-liquid phase equilibria of water modelled by a Kim-Gordon potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maerzke, Katie A.; McGrath, M. J.; Kuo, I-F W.
2009-09-07
Gibbs ensemble Monte Carlo simulations were carried out to investigate the properties of a frozen-electron-density (or Kim-Gordon, KG) model of water along the vapor-liquid coexistence curve. Because of its theoretical basis, such a KG model provides for seamless coupling to Kohn-Sham density functional theory for use in mixed quantum mechanics/molecular mechanics (QM/MM) implementations. The Gibbs ensemble simulations indicate rather limited transferability of such a simple KG model to other state points. Specifically, a KG model that was parameterized by Barker and Sprik to the properties of liquid water at 300 K yields saturated vapor pressures and a critical temperature that are significantly under- and overestimated, respectively. We also present a comprehensive density functional theory study to assess the accuracy of two popular exchange-correlation functionals on the structure and density of liquid water at ambient conditions. This work was supported by the US Department of Energy Office of Basic Energy Science Chemical Sciences Program. Battelle operates Pacific Northwest National Laboratory for the US Department of Energy.
A screening level probabilistic ecological risk assessment of PAHs in sediments of San Francisco Bay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Febbo, E.J.; Arnold, W.R.; Biddinger, G.R.
1995-12-31
As part of the Regional Monitoring Program administered by the San Francisco Estuary Institute (SFEI), sediment samples were collected at 20 stations in San Francisco Bay and analyzed to determine concentrations of 43 PAHs. These data were obtained from SFEI and used to calculate the potential risk to aquatic organisms using probabilistic modeling and Monte Carlo statistical procedures. Sediment chemistry data were used in conjunction with a sediment equilibrium model, a bioconcentration model, biota-sediment accumulation factors, and critical body burden effects concentrations to assess potential risk to bivalves. Bivalves were the chosen receptors because they lack a well-developed enzymatic system for metabolizing PAHs. Thus, they more readily accumulate PAHs and represent a species at greater risk than other taxa, such as fish and crustaceans. PAHs considered in this study span a broad range of octanol-water partition coefficients. Results indicate that risk of non-polar narcotic effects from PAHs was low in the Northern Bay Area, but higher in the South Bay near the more urbanized sections of the drainage basin.
NOTE: Monte Carlo simulation of correction factors for IAEA TLD holders
NASA Astrophysics Data System (ADS)
Hultqvist, Martha; Fernández-Varea, José M.; Izewska, Joanna
2010-03-01
The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for 60Co γ-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
Critical Casimir effect for colloids close to chemically patterned substrates.
Tröndle, M; Kondrat, S; Gambassi, A; Harnau, L; Dietrich, S
2010-08-21
Colloids immersed in a critical or near-critical binary liquid mixture and close to a chemically patterned substrate are subject to normal and lateral critical Casimir forces of dominating strength. For a single colloid, we calculate these attractive or repulsive forces and the corresponding critical Casimir potentials within mean-field theory. Within this approach we also discuss the quality of the Derjaguin approximation and apply it to Monte Carlo simulation data available for the system under study. We find that the range of validity of the Derjaguin approximation is rather large and that it fails only for surface structures which are very small compared to the geometric mean of the size of the colloid and its distance from the substrate. For certain chemical structures of the substrate, the critical Casimir force acting on the colloid can change sign as a function of the distance between the particle and the substrate; this provides a mechanism for stable levitation at a certain distance which can be strongly tuned by temperature, i.e., with a sensitivity of more than 200 nm/K.
Critical frontier of the triangular Ising antiferromagnet in a field
NASA Astrophysics Data System (ADS)
Qian, Xiaofeng; Wegewijs, Maarten; Blöte, Henk W.
2004-03-01
We study the critical line of the triangular Ising antiferromagnet in an external magnetic field by means of a finite-size analysis of results obtained by transfer-matrix and Monte Carlo techniques. We compare the shape of the critical line with the predictions of two different theoretical scenarios. Both scenarios, while plausible, involve assumptions. The first scenario is based on the generalization of the model to a vertex model, and the assumption that the exact analytic form of the critical manifold of this vertex model is determined by the zeroes of an O(2) gauge-invariant polynomial in the vertex weights. However, it is not possible to fit the coefficients of such polynomials of orders up to 10 so as to reproduce the numerical data for the critical points. The second theoretical prediction is based on the assumption that a renormalization mapping exists of the Ising model onto the Coulomb gas, and on an analysis of the resulting renormalization equations. It leads to a shape of the critical line that is inconsistent with the first prediction, but consistent with the numerical data.
Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...
2015-12-30
Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Critical Assembly experiment were performed as the validation reference for the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross section library release, namely the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared to the experimental values and Serpent.
Nanoparticle Contrast Agents for Enhanced Microwave Imaging and Thermal Treatment of Breast Cancer
2010-10-01
continue to increase in step with decreasing critical dimensions, electrodynamic effects directly influence high-frequency device performance, and...computational burden is significant. The Cellular Monte Carlo (CMC) method, originally developed by Kometer et al. [50], was designed to reduce this...combination of a full-wave FDTD solver with a device simulator based upon a stochastic transport kernel is conceptually straightforward, but the
Multilayer adsorption of C2H4 and CF4 on graphite: Grand Canonical Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Abdelatif, H.; Drir, M.
2016-11-01
We study the phase transitions in adsorbed multilayers by Grand Canonical Monte Carlo (GCMC) simulations of the lattice-gas model. The focus is on ethylene (C2H4) and tetrafluoromethane (CF4) on a homogeneous graphite surface. Earlier simulations of these systems investigated structural properties, dynamical behaviors of adsorbed films, and thermodynamic quantities such as the isosteric heat. The main purpose of this study is to characterize the adsorbed multilayers through evaluation of the layering behavior, the wetting phenomena and the critical temperatures. The isotherms obtained for temperatures from 50 K to 170 K reproduce a number of interesting features observed experimentally: (i) we observe a larger number of layers than in previous simulations; (ii) a finite number of layers at saturated pressure is found at low temperatures; (iii) the isotherms present vertical steps typical of layer-by-layer growth, and at higher temperatures these distinct layers tend to disappear, signifying that the film thickness increases continuously; (iv) a thin-film to thick-film transition near the triple point temperature is noticed. In addition to this qualitative description, quantitative information is determined, including temperatures and relative pressures of layer formation, layer-critical-point temperatures and phase diagrams. Comparing the two systems, ethylene/graphite and tetrafluoromethane/graphite, we observe qualitatively similar behavior.
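A minimal Grand Canonical Monte Carlo move for a lattice-gas film, in the spirit of (but far simpler than) the model used here: single-site insertions and deletions accepted with the grand-canonical Metropolis rule, with an extra substrate attraction in the first layer. All parameters are in reduced units and are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

L, H = 8, 6                      # lateral size and number of layers
eps, eps_sub = 1.0, 2.0          # adsorbate-adsorbate and substrate energies (assumed)
mu, T = -2.5, 0.8                # chemical potential and temperature (assumed)
occ = np.zeros((L, L, H), dtype=int)   # site occupancies above the substrate

def local_energy(x, y, z):
    """Binding energy of a particle at (x, y, z) given its neighbors."""
    e = 0.0
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        zz = z + dz
        if 0 <= zz < H:
            e -= eps * occ[(x + dx) % L, (y + dy) % L, zz]
    if z == 0:
        e -= eps_sub                  # substrate attraction, first layer only
    return e

for step in range(200_000):
    x, y, z = rng.integers(L), rng.integers(L), rng.integers(H)
    if occ[x, y, z] == 0:             # attempt insertion (dN = +1)
        dE = local_energy(x, y, z)
        if rng.random() < np.exp(-(dE - mu) / T):
            occ[x, y, z] = 1
    else:                             # attempt deletion (dN = -1)
        dE = -local_energy(x, y, z)
        if rng.random() < np.exp(-(dE + mu) / T):
            occ[x, y, z] = 0

print("coverage per layer:", occ.sum(axis=(0, 1)) / L**2)
```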
Development of a Research Reactor Protocol for Neutron Multiplication Measurements
Arthur, Jennifer Ann; Bahran, Rian Mustafa; Hutchinson, Jesson D.; ...
2018-03-20
A new series of subcritical measurements has been conducted at the zero-power Walthousen Reactor Critical Facility (RCF) at Rensselaer Polytechnic Institute (RPI) using a 3He neutron multiplicity detector. The Critical and Subcritical 0-Power Experiment at Rensselaer (CaSPER) campaign establishes a protocol for advanced subcritical neutron multiplication measurements involving research reactors for validation of neutron multiplication inference techniques, Monte Carlo codes, and associated nuclear data. There has been increased attention and expanded efforts related to subcritical measurements and analyses, and this work provides yet another data set at known reactivity states that can be used in the validation of state-of-the-art Monte Carlo computer simulation tools. The diverse (mass, spatial, spectral) subcritical measurement configurations have been analyzed to produce parameters of interest such as singles rates, doubles rates, and leakage multiplication. MCNP®6.2 was used to simulate the experiment and the resulting simulated data have been compared to the measured results. Comparison of the simulated and measured observables (singles rates, doubles rates, and leakage multiplication) shows good agreement. This work builds upon the previous years of collaborative subcritical experiments and outlines a protocol for future subcritical neutron multiplication inference and subcriticality monitoring measurements on pool-type reactor systems.
Antibiotic Dosing in Continuous Renal Replacement Therapy.
Shaw, Alexander R; Mueller, Bruce A
2017-07-01
Appropriate antibiotic dosing is critical to improve outcomes in critically ill patients with sepsis. The addition of continuous renal replacement therapy makes achieving appropriate antibiotic dosing more difficult. The lack of continuous renal replacement therapy standardization results in treatment variability between patients and may influence whether appropriate antibiotic exposure is achieved. The aim of this study was to determine whether the continuous renal replacement therapy effluent flow rate affects attainment of appropriate antibiotic concentrations when conventional continuous renal replacement therapy antibiotic doses are used. This study used Monte Carlo simulations to evaluate the effect of effluent flow rate variance on pharmacodynamic target attainment for cefepime, ceftazidime, levofloxacin, meropenem, piperacillin, and tazobactam. Published demographic and pharmacokinetic parameters for each antibiotic were used to develop a pharmacokinetic model. Monte Carlo simulations of 5000 patients were evaluated for each antibiotic dosing regimen at the extremes of the Kidney Disease: Improving Global Outcomes guideline-recommended effluent flow rates (20 and 35 mL/kg/h). The probability of target attainment was calculated using antibiotic-specific pharmacodynamic targets assessed over the first 72 hours of therapy. Most conventional published antibiotic dosing recommendations, except for levofloxacin, reach acceptable probability of target attainment rates when effluent rates of 20 or 35 mL/kg/h are used.
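The structure of such a pharmacodynamic Monte Carlo simulation can be sketched in a few lines (Python; the one-compartment model, dosing regimen, and all parameter values below are invented for illustration and are not the antibiotic-specific parameters used in the study):

```python
import numpy as np

def pta(n_pat=5000, dose=2000.0, tau=8.0, mic=8.0, target=0.60, seed=2):
    """Probability of target attainment: fraction of virtual patients whose
    concentration stays above the MIC for at least `target` of the first
    72 h. One-compartment IV bolus model; all values are illustrative."""
    rng = np.random.default_rng(seed)
    cl = rng.lognormal(np.log(4.0), 0.30, n_pat)    # clearance, L/h
    v = rng.lognormal(np.log(30.0), 0.25, n_pat)    # volume, L
    ke = cl / v                                     # elimination rate, 1/h
    t = np.linspace(0.0, 72.0, 721)                 # 0.1 h time grid
    conc = np.zeros((n_pat, t.size))
    for t_dose in np.arange(0.0, 72.0, tau):        # a bolus every tau hours
        dt = np.clip(t - t_dose, 0.0, None)         # time since this dose
        prof = (dose / v)[:, None] * np.exp(-np.outer(ke, dt))
        conc += np.where(t >= t_dose, prof, 0.0)
    f_above = (conc > mic).mean(axis=1)             # fraction of time > MIC
    return float((f_above >= target).mean())

print(pta())   # PTA for one regimen; repeat per drug and effluent setting
```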
Transmutation of uranium and thorium in the particle field of the Quinta sub-critical assembly
NASA Astrophysics Data System (ADS)
Hashemi-Nezhad, S. R.; Asquith, N. L.; Voronko, V. A.; Sotnikov, V. V.; Zhadan, Alina; Zhuk, I. V.; Potapenko, A.; Husak, Krystsina; Chilap, V.; Adam, J.; Baldin, A.; Berlev, A.; Furman, W.; Kadykov, M.; Khushvaktov, J.; Kudashkin, I.; Mar'in, I.; Paraipan, M.; Pronskih, V.; Solnyshkin, A.; Tyutyunnikov, S.
2018-03-01
The fission rates of natural uranium and thorium were measured in the particle field of Quinta, a 512 kg natural uranium target-blanket sub-critical assembly. The Quinta assembly was irradiated with deuterons of energy 4 GeV from the Nuclotron accelerator of the Joint Institute for Nuclear Research (JINR), Dubna, Russia. Fission rates of uranium and thorium were measured using gamma-ray spectroscopy and fission track techniques. The production rate of 239Np was also measured. The experimental results were compared with Monte Carlo predictions obtained with the MCNPX 2.7 code employing the physics and fission-evaporation models INCL4-ABLA, CEM03.03 and LAQGSM03.03. Some of the neutronic characteristics of Quinta are compared with those of the "Energy plus Transmutation (EpT)" subcritical assembly, which is composed of a lead target and a natU blanket. This comparison clearly demonstrates the importance of the target material and of the neutron moderator and reflector types for the performance of a spallation-neutron-driven subcritical system. As the dimensions of Quinta are very close to those of an optimal multi-rod uranium target, the experimental and Monte Carlo results presented in this paper provide insight into the particle field within a uranium target, as well as into Accelerator Driven Systems in general.
Nie, Yifan; Liang, Chaoping; Cha, Pil-Ryung; Colombo, Luigi; Wallace, Robert M; Cho, Kyeongjae
2017-06-07
Controlled growth of crystalline solids is critical for device applications, and atomistic modeling methods have been developed for bulk crystalline solids. The kinetic Monte Carlo (KMC) simulation method provides detailed atomic-scale processes during solid growth over realistic time scales, but its application to the growth modeling of van der Waals (vdW) heterostructures has not yet been developed. Specifically, the growth of single-layered transition metal dichalcogenides (TMDs) currently faces tremendous challenges, and a detailed understanding based on KMC simulations would provide critical guidance to enable controlled growth of vdW heterostructures. In this work, a KMC simulation method is developed for growth modeling of the vdW epitaxy of TMDs. The method incorporates the full set of material parameters relevant to bottom-up TMD synthesis: metal and chalcogen adsorption/desorption/diffusion on the substrate and on the grown TMD surface, TMD stacking sequence, chalcogen/metal ratio, flake-edge diffusion and vacancy diffusion. The KMC processes result in multiple kinetic behaviors associated with various growth behaviors observed in experiments. The different phenomena observed during the vdW epitaxy process are analysed in terms of complex competitions among multiple kinetic processes. The KMC method is used to investigate and predict growth mechanisms, providing qualitative suggestions to guide experimental studies.
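At the heart of any such simulation is a rejection-free KMC event loop of the kind sketched below (Python; the event catalogue and rates are toy placeholders, adsorption and desorption on a row of sites, not the TMD parameter set developed in the paper):

```python
import numpy as np

def kmc_run(rates, apply_event, t_end, seed=3):
    """Rejection-free (BKL/Gillespie-type) kinetic Monte Carlo loop:
    pick an event with probability proportional to its rate, then
    advance the clock by an exponentially distributed waiting time."""
    rng = np.random.default_rng(seed)
    t = 0.0
    while t < t_end:
        r = rates()                       # current event rates (1/s)
        r_tot = r.sum()
        if r_tot == 0.0:                  # nothing can happen: frozen
            break
        k = np.searchsorted(np.cumsum(r), rng.random() * r_tot, side="right")
        apply_event(k)
        t += rng.exponential(1.0 / r_tot)
    return t

# Toy event catalogue: adsorption/desorption on N lattice sites.
rng_ev = np.random.default_rng(33)
occ = np.zeros(100, dtype=bool)

def rates():
    return np.array([1.0 * (~occ).sum(),   # event 0: adsorb on an empty site
                     0.2 * occ.sum()])     # event 1: desorb from a filled site

def apply_event(k):
    pool = np.flatnonzero(~occ) if k == 0 else np.flatnonzero(occ)
    occ[rng_ev.choice(pool)] = (k == 0)

kmc_run(rates, apply_event, t_end=50.0)
print("coverage:", occ.mean())
```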
Simulation of the Interactions Between Gamma-Rays and Detectors Using BSIMUL
NASA Technical Reports Server (NTRS)
Haywood, S. E.; Rester, A. C., Jr.
1996-01-01
Progress made during 1995 on the Monte Carlo gamma-ray spectrum simulation program BSIMUL is discussed. Several features have been added, including the ability to model shields that are tapered cylinders. Several simulations were made of the Near Earth Asteroid Rendezvous detector.
A Piagetian Learning Cycle for Introductory Chemical Kinetics.
ERIC Educational Resources Information Center
Batt, Russell H.
1980-01-01
Described is a Piagetian learning cycle based on Monte Carlo modeling of several simple reaction mechanisms. Included are descriptions of the learning cycle phases (exploration, invention, and discovery) and four BASIC-PLUS computer programs to be used in the explanation of chemically reacting systems. (Author/DS)
Stan : A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
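As a usage illustration (not taken from the paper), the snippet below drives a minimal Stan model from Python, assuming the PyStan 2.x interface (`pystan.StanModel` / `.sampling`; PyStan 3 changed this API). The model and data are toy examples:

```python
import pystan  # assumes the PyStan 2.x interface; PyStan 3 differs

# Minimal Stan program: Bayesian inference for a normal mean and scale.
model_code = """
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);       // weakly informative priors
  sigma ~ cauchy(0, 5);
  y ~ normal(mu, sigma);    // likelihood
}
"""

data = {"N": 5, "y": [1.2, 0.7, 2.1, 1.5, 0.9]}      # toy data
sm = pystan.StanModel(model_code=model_code)          # compile
fit = sm.sampling(data=data, iter=2000, chains=4)     # NUTS (HMC) sampling
print(fit)                                            # posterior summary
```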
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. a graphite furnace) because of the complexity of the geometry, heating, molecular interactions, etc. The intense computational time needed to accurately model ETA often limited its effective implementation to supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed that can be used under Windows or DOS. With this program, basic parameters such as furnace dimensions, sample placement and furnace heating, and kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the dependence of the absorbance profile on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to allow comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT
NASA Astrophysics Data System (ADS)
Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing
2017-04-01
The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high-intensity D-T fusion neutron generator (HINEG), using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes), is investigated in this study. The Monte Carlo code SuperMC is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron flux of the proposed model is 5.73 × 10⁹ n/cm²·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is shown to be capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells of any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models of correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta
2016-03-01
This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose-area product (DAP) measured during each examination. This allows dosimetric information to be collected for each patient and the associated risks to be evaluated without resorting to in vivo dosimetry. The dose calculation was performed for 79 procedures with the Monte Carlo simulator PCXMC (a PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), using the real geometrical and dosimetric irradiation conditions automatically extracted from the DICOM headers. The DAP measurements were also validated using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed, with an R² of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained.
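A stripped-down version of the header-driven workflow might look like the following sketch (Python; pydicom is assumed available, the DICOM keyword used for the dose-area product is an assumption that varies by vendor and modality, and the fitted numbers are invented; they only mirror the kind of linear DAP-to-effective-dose fit reported):

```python
import numpy as np
import pydicom  # assumed available for reading DICOM headers

def collect_dap(paths):
    """Pull per-exposure DAP and tube voltage from DICOM headers.
    The keyword below, tag (0018,115E), is an assumption; a real study
    must match the vendor's header layout (often an RDSR, not images)."""
    out = []
    for p in paths:
        ds = pydicom.dcmread(p, stop_before_pixels=True)
        dap = float(ds.get("ImageAndFluoroscopyAreaDoseProduct", 0.0))
        kvp = float(ds.get("KVP", 0.0))
        out.append((dap, kvp))
    return out

# Invented per-procedure results: DAP (Gy*cm^2) vs Monte Carlo effective
# dose (mSv); the linear fit mirrors the kind of correlation reported.
dap = np.array([12.0, 30.5, 48.0, 75.2, 90.1])
e_dose = np.array([2.1, 5.3, 8.0, 13.1, 15.8])
slope, intercept = np.polyfit(dap, e_dose, 1)
r2 = np.corrcoef(dap, e_dose)[0, 1] ** 2
print(f"E = {slope:.3f}*DAP + {intercept:.2f}  (R^2 = {r2:.3f})")
```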
Highlights of the SEASAT-SASS program - A review
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1983-01-01
Some important concepts of the SEASAT-SASS program are described and some of the decisions made during the program as to methods for relating wind to backscatter are discussed. The radar scatterometer design is analyzed along with the model function, which is an empirical relationship between the backscatter value and the wind speed, wind direction, and incidence angle of the radar beam with the sea surface. The results of Monte Carlo studies of mesoscale turbulence and of studies of wind stress on the sea surface involving SASS are reviewed.
LACIE performance predictor final operational capability program description, volume 3
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is the portion of the LACIE Performance Predictor (LPP) that simulates sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
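In spirit, such an error model runs the aggregation pipeline repeatedly over random draws from the input error distributions, as in this minimal sketch (Python; the strata, error magnitudes, and aggregation step are invented placeholders, not LEM's actual logic):

```python
import numpy as np

def mc_production_estimate(n_trials=10000, seed=4):
    """Monte Carlo aggregation: sample per-stratum area and yield errors,
    aggregate production = sum(area * yield), and report the mean and a
    90% interval. Stratum values and error CVs are illustrative."""
    rng = np.random.default_rng(seed)
    area = np.array([1.8e6, 2.4e6, 0.9e6])      # hectares per stratum
    yld = np.array([2.1, 2.6, 1.9])             # tonnes per hectare
    area_cv, yld_cv = 0.08, 0.12                # relative error magnitudes
    a = area * rng.normal(1.0, area_cv, size=(n_trials, 3))
    y = yld * rng.normal(1.0, yld_cv, size=(n_trials, 3))
    prod = (a * y).sum(axis=1)                  # total production per trial
    lo, hi = np.percentile(prod, [5, 95])
    return prod.mean(), (lo, hi)

mean, (lo, hi) = mc_production_estimate()
print(f"production: {mean:.3e} t, 90% interval [{lo:.3e}, {hi:.3e}]")
```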
Poster - 40: Treatment Verification of a 3D-printed Eye Phantom for Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunning, Chelsea; Lindsay, Clay; Unick, Nick
Purpose: Ocular melanoma is a form of eye cancer that is often treated using proton therapy. The benefit of the steep proton dose gradient can only be leveraged with accurate patient eye alignment. A treatment-planning program was written to generate plans for a 3D-printed anatomical eye phantom, which was then irradiated to demonstrate the feasibility of verifying in vivo dosimetry for proton therapy using PET imaging. Methods: A 3D CAD eye model with critical organs was designed and voxelized into the Monte Carlo transport code FLUKA. Proton dose and PET isotope production were simulated for a treatment plan of a test tumour, generated by a 2D treatment-planning program developed using NumPy and proton range tables. Next, a plastic eye phantom was 3D-printed from the CAD model, irradiated at the TRIUMF Proton Therapy facility, and imaged using a PET scanner. Results: The treatment-planning program's prediction of the range setting and modulator wheel was verified in FLUKA to treat the tumour with at least 90% dose coverage in both tissue and plastic. An axial distribution of the PET isotopes was simulated in FLUKA and converted to PET scan counts. Meanwhile, the 3D-printed eye phantom successfully yielded a PET signal. Conclusions: The 2D treatment-planning program can predict the parameters required to sufficiently treat an eye tumour, which was experimentally verified using commercial 3D-printing hardware to manufacture eye phantoms. Comparison between the simulated and measured PET isotope distributions could provide a more realistic test of eye alignment, and a variation of the method using radiographic film is being developed.
NASA Technical Reports Server (NTRS)
Wang, J.; Biasca, R.; Liewer, P. C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper we move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In our model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfer among the ion beam, neutral cloud, and plasma waves are the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped-charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point, due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location.
Computational and theoretical studies of globular proteins
NASA Astrophysics Data System (ADS)
Pagan, Daniel L.
Protein crystallization is often achieved in experiment through a trial-and-error approach. To date, there exists a dearth of theoretical understanding of the initial conditions necessary to promote crystallization. While a better understanding of crystallization will help to create good crystals suitable for structure analysis, it may also allow us to prevent the onset of certain diseases. The core of this thesis is to model and, ultimately, understand the phase behavior of protein particles in solution. Toward this goal, we calculate the fluid-fluid coexistence curve in the vicinity of the metastable critical point of the modified Lennard-Jones potential, where it has been shown that nucleation is increased by many orders of magnitude. We use finite-size scaling techniques and grand canonical Monte Carlo simulation methods. This has allowed us to pinpoint the critical point and subcritical region with high accuracy, in spite of the critical fluctuations that hinder sampling with other Monte Carlo techniques. We also model the phase behavior of the gamma-crystallins, mutations of which have been linked to genetic cataracts. The complete phase behavior of the square-well potential at the attraction ranges lambda = 1.15 and lambda = 1.25 is calculated and compared with that of the gammaII-crystallin. The role of solvent is also important in the crystallization process and affects the phase behavior of proteins in solution. We study a model that accounts for the contribution of the solvent free energy to the free energy of globular proteins, which allows us to describe phase behavior that includes the solvent.
NASA Technical Reports Server (NTRS)
Stalnaker, Dale K.
1993-01-01
ACARA (Availability, Cost, and Resource Allocation) is a computer program which analyzes system availability, lifecycle cost (LCC), and resupply scheduling using Monte Carlo analysis to simulate component failure and replacement. This manual was written to: (1) explain how to prepare and enter input data for use in ACARA; (2) explain the user interface, menus, input screens, and input tables; (3) explain the algorithms used in the program; and (4) explain each table and chart in the output.
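The core simulation idea (sample component failures, apply replacement downtime, accumulate uptime statistics) can be sketched as follows (Python; the failure and repair parameters are invented, and ACARA's actual algorithms also handle resupply scheduling and cost, which are omitted here):

```python
import numpy as np

def availability(mtbf=800.0, repair=24.0, mission=8760.0,
                 n_trials=20000, seed=5):
    """Monte Carlo availability of one component over a mission:
    exponential times to failure (mean `mtbf` hours) and a fixed
    `repair` downtime per failure. Returns the mean uptime fraction."""
    rng = np.random.default_rng(seed)
    up = np.empty(n_trials)
    for k in range(n_trials):
        t = uptime = 0.0
        while t < mission:
            ttf = rng.exponential(mtbf)          # time to next failure
            uptime += min(ttf, mission - t)      # clip at end of mission
            t += ttf + repair                    # downtime while replacing
        up[k] = uptime / mission
    return up.mean()

print(f"availability: {availability():.4f}")
```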
2011 Annual Criticality Safety Program Performance Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrea Hoffman
The 2011 review of the INL Criticality Safety Program has determined that the program is robust and effective. The review was prepared for, and fulfills Contract Data Requirements List (CDRL) item H.20, 'Annual Criticality Safety Program performance summary that includes the status of assessments, issues, corrective actions, infractions, requirements management, training, and programmatic support.' This performance summary addresses the status of these important elements of the INL Criticality Safety Program. Assessments - Assessments in 2011 were planned and scheduled. The scheduled assessments included a Criticality Safety Program Effectiveness Review, Criticality Control Area Inspections, a Protection of Controlled Unclassified Information Inspection, an Assessment of Criticality Safety SQA, and this management assessment of the Criticality Safety Program. All of the assessments were completed with the exception of the 'Effectiveness Review' for SSPSF, which was delayed due to emerging work. Although minor issues were identified in the assessments, no issues or combination of issues indicated that the INL Criticality Safety Program was ineffective. The identification of issues demonstrates the importance of an assessment program to the overall health and effectiveness of the INL Criticality Safety Program. Issues and Corrective Actions - There are relatively few criticality safety related issues in the Laboratory ICAMS system. Most were identified by Criticality Safety Program assessments. No issues indicate ineffectiveness in the INL Criticality Safety Program. All of the issues are being worked and there are no imminent criticality concerns. Infractions - There was one criticality safety related violation in 2011. On January 18, 2011, it was discovered that a fuel plate bundle in the Nuclear Materials Inspection and Storage (NMIS) facility exceeded the fissionable mass limit, resulting in a technical safety requirement (TSR) violation. The TSR limits fuel plate bundles to 1085 grams U-235, which is the maximum loading of an ATR fuel element. The overloaded fuel plate bundle contained 1097 grams U-235 and was assembled under an 1100 gram U-235 limit in 1982. In 2003, the limit was reduced to 1085 grams citing a new criticality safety evaluation for ATR fuel elements. The fuel plate bundle inventories were not checked for compliance prior to implementing the reduced limit. A subsequent review of the NMIS inventory did not identify further violations. Requirements Management - The INL Criticality Safety Program is organized and well documented. The source requirements for the INL Criticality Safety Program are from 10 CFR 830.204, DOE Order 420.1B, Chapter III, 'Nuclear Criticality Safety,' ANSI/ANS 8-series Industry Standards, and DOE Standards. These source requirements are documented in LRD-18001, 'INL Criticality Safety Program Requirements Manual.' The majority of the criticality safety source requirements are contained in DOE Order 420.1B because it invokes all of the ANSI/ANS 8-Series Standards. DOE Order 420.1B also invokes several DOE Standards, including DOE-STD-3007, 'Guidelines for Preparing Criticality Safety Evaluations at Department of Energy Non-Reactor Nuclear Facilities.' DOE Order 420.1B contains requirements for DOE 'Heads of Field Elements' to approve the criticality safety program and specific elements of the program, namely, the qualification of criticality staff and the method for preparing criticality safety evaluations.
This was accomplished by the approval of SAR-400, 'INL Standardized Nuclear Safety Basis Manual,' Chapter 6, 'Prevention of Inadvertent Criticality.' Chapter 6 of SAR-400 contains sufficient detail and/or reference to the specific DOE and contractor documents that adequately describe the INL Criticality Safety Program per the elements specified in DOE Order 420.1B. The Safety Evaluation Report for SAR-400 specifically recognizes that the approval of SAR-400 approves the INL Criticality Safety Program. No new source requirements were released in 2011. A revision to LRD-18001 is planned for 2012 to clarify design requirements for criticality alarms. Training - Criticality Safety Engineering has developed and provides training for many employee positions, including fissionable material handlers, facility managers, criticality safety officers, firefighters, and criticality safety engineers. Criticality safety training at the INL is a program strength. A revision to the training module developed in 2010 to supplement MFC certified fissionable material handler (operator) training was prepared and presented in August of 2011. This training, 'Applied Science of Criticality Safety,' builds upon existing training and gives operators a better understanding of how their criticality controls are derived. Improvements to 00INL189, 'INL Criticality Safety Principles,' are planned for 2012 to strengthen fissionable material handler training.
Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice
NASA Astrophysics Data System (ADS)
Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank
2018-03-01
We use large scale quantum Monte Carlo simulations to study an extended Hubbard model of hard-core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP¹ gauge theory describing the phase transition at 1/3 filling. Utilizing large scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.
Vortex Loops at the Superfluid Lambda Transition: An Exact Theory?
NASA Technical Reports Server (NTRS)
Williams, Gary A.
2003-01-01
A vortex-loop theory of the superfluid lambda transition has been developed over the last decade, with many results in agreement with experiments. It is a very simple theory, consisting of just three basic equations. When it was first proposed, the main uncertainty in the theory was the use of Flory scaling to find the fractal dimension of the random-walking vortex loops. Recent developments in high-resolution Monte Carlo simulations have now made it possible to verify the accuracy of this Flory-scaling assumption. Although the loop theory is not yet rigorously proven to be exact, the Monte Carlo results show at the least that it is an extremely good approximation. Recent loop calculations of the critical Casimir effect in helium films in the superfluid phase T < Tc are compared with similar perturbative RG calculations in the normal phase T > Tc; the two calculations are found to match very nicely right at Tc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model the actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
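Of the propagation techniques mentioned, two-phase Monte Carlo sampling separates epistemic from aleatory uncertainty by nesting sampling loops, as in this generic sketch (Python; the toy payoff model and distributions are placeholders, not the paper's cyber-system model):

```python
import numpy as np

def two_phase_mc(n_outer=200, n_inner=1000, seed=6):
    """Two-phase Monte Carlo: the outer loop samples epistemic (poorly
    known) parameters, the inner loop samples aleatory variability given
    those parameters. The spread of the inner-loop means across outer
    draws reflects epistemic uncertainty. The payoff model is a toy."""
    rng = np.random.default_rng(seed)
    means = np.empty(n_outer)
    for i in range(n_outer):
        # epistemic: attacker skill known only to lie in an interval
        skill = rng.uniform(0.2, 0.8)
        # aleatory: per-attempt stochastic payoff given that skill
        payoff = rng.binomial(1, skill, n_inner) * rng.gamma(2.0, 5.0, n_inner)
        means[i] = payoff.mean()
    return np.percentile(means, [5, 50, 95])   # epistemic uncertainty band

print(two_phase_mc())
```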
Magnetic properties of checkerboard lattice: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.; Hamedoun, M.; Benyoussef, A.
2017-12-01
The magnetic properties of a ferrimagnetic mixed-spin Ising model on the checkerboard lattice are studied using Monte Carlo simulations. The variation of the total magnetization and magnetic susceptibility with the crystal field has been established. We obtain a transition from an ordered to a disordered phase at a critical value of the physical variables. The reduced transition temperature is obtained for different exchange interactions. Multiple magnetic hysteresis cycles are established for the checkerboard lattice. The ferrimagnetic mixed-spin Ising model on the checkerboard lattice is very interesting from the experimental point of view: mixed-spin systems have many technological applications, for example in opto-electronics, memory devices, nanomedicine and nano-biological systems. The obtained results show that the crystal field induces long-range spin-spin correlations even below the reduced transition temperature.
NASA Astrophysics Data System (ADS)
Motlagh, H. Nakhaei; Rezaei, G.
2018-01-01
Monte Carlo simulation is used to study the magnetic properties of mixed-spin (3/2, 1) disordered binary alloys on simple cubic, hexagonal and amorphous magnetic ultra-thin films of 18 × 18 × 2 atoms. To this end, in a first approximation the exchange coupling between the spins is taken as a constant, and in a second the Ruderman-Kittel-Kasuya-Yosida (RKKY) model is used. The effects of concentration, structure, exchange interaction, single-ion anisotropy and film size on the magnetic properties of disordered ferromagnetic and ferrimagnetic binary alloys are investigated. Our results indicate that the spontaneous magnetization and critical temperatures of rare-earth/3d-transition-metal binary alloys are affected by these parameters. It is also found that in the ferrimagnetic state a compensation temperature (Tcom) and a magnetic rearrangement temperature (TR) appear for some concentrations.
Structure and dynamics of Ebola virus matrix protein VP40 by a coarse-grained Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Pandey, Ras; Farmer, Barry
Ebola virus matrix protein VP40 (consisting of 326 residues) plays a critical role in viral assembly and in functions such as regulation of viral transcription, packaging, and budding of mature virions into the plasma membrane of infected cells. How VP40 goes through structural evolution during the viral life cycle remains an open question. Using a coarse-grained Monte Carlo simulation, we investigate the structural evolution of VP40 as a function of temperature with the input of a knowledge-based residue-residue interaction. A number of local and global physical quantities (e.g. mobility profile, contact map, radius of gyration, structure factor) are analyzed in our large-scale simulations. Our preliminary data show that the structure of the protein evolves through different states with well-defined morphologies, which can be identified and quantified via a detailed analysis of the structure factor.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated in the SCALE system developed by Oak Ridge that solves the transport equation by the Monte Carlo method. It is used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for the random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
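The reproducibility device, assigning each particle history its own deterministically derived seed so that results do not depend on how histories are distributed over processors, can be illustrated with modern tooling (Python with NumPy's SeedSequence; a present-day analogue, not the original PVM implementation):

```python
import numpy as np

def track_history(seed_seq):
    """Toy particle 'history': reproducible given its own child seed."""
    rng = np.random.default_rng(seed_seq)
    return rng.exponential(1.0)          # stand-in for a tracked tally

def generation(n_histories=1000, root_seed=7):
    # One independent child seed per history, so the tallies are identical
    # no matter how the histories are split across processors or ordered.
    seeds = np.random.SeedSequence(root_seed).spawn(n_histories)
    return float(np.mean([track_history(s) for s in seeds]))

print(generation())   # same result on 1 or N workers, in any order
```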
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground-rule waivers. Furthermore, dense design often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground-rule checks. We apply this methodology to the SRAM word line contact, and the result is a layout that has balanced wafer-fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.
Adjoint Fokker-Planck equation and runaway electron dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chang; Brennan, Dylan P.; Bhattacharjee, Amitava
2016-01-15
The adjoint Fokker-Planck equation method is applied to study the runaway probability function and the expected slowing-down time for highly relativistic runaway electrons, including the loss of energy due to synchrotron radiation. In direct correspondence to Monte Carlo simulation methods, the runaway probability function has a smooth transition across the runaway separatrix, which can be attributed to the effect of the pitch-angle scattering term in the kinetic equation. However, for the same numerical accuracy, the adjoint method is more efficient than the Monte Carlo method. The expected slowing-down time gives a novel method to estimate the runaway current decay time in experiments. A new result from this work is that the decay rate of high-energy electrons is very slow when E is close to the critical electric field. This effect contributes further to a hysteresis previously found in the runaway electron population.
Monte Carlo modelling of Schottky diode for rectenna simulation
NASA Astrophysics Data System (ADS)
Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.
2017-09-01
Before designing a detector circuit, extraction of the Schottky diode's electrical parameters is a critical step. This article is based on a Monte Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image-force effect and tunneling. The weights of the tunneling and thermionic currents are quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime from the I-V curve are sufficient. However, the non-linear series resistance extracted from a small-signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
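Extracting bias-independent forward-regime values amounts to a semilog fit of the ideal-diode law I = Is*exp(V/(n*kT/q)), where the -1 term is negligible for V >> kT/q, as in this sketch (Python; the synthetic data and constants are illustrative, not measurements from the article):

```python
import numpy as np

KT_Q = 0.02585                    # thermal voltage kT/q at 300 K, in volts

# Synthetic forward I-V data for a diode with n = 1.15, Is = 1e-12 A.
v = np.linspace(0.15, 0.45, 31)
i = 1e-12 * np.exp(v / (1.15 * KT_Q))

# ln(I) = ln(Is) + V/(n*kT/q): the slope gives n, the intercept gives Is.
slope, intercept = np.polyfit(v, np.log(i), 1)
n_fit = 1.0 / (slope * KT_Q)
is_fit = np.exp(intercept)
print(f"n = {n_fit:.3f}, Is = {is_fit:.2e} A")
```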
NASA Astrophysics Data System (ADS)
Czarnik, Piotr; Dziarmaga, Jacek; Oleś, Andrzej M.
2017-07-01
The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied to a model suffering from the notorious quantum Monte Carlo sign problem—the orbital eg model with spatially highly anisotropic orbital interactions. Coarse-graining of the tensor network along the inverse temperature β yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension D—limiting the amount of entanglement—is a natural refinement parameter. Increasing D, we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature Tc, provide a numerically exact estimate of Tc, and give critical exponents within 1% of the 2D Ising universality class.
Conformal perturbation of off-critical correlators in the 3D Ising universality class
NASA Astrophysics Data System (ADS)
Caselle, M.; Costagliola, G.; Magnoli, N.
2016-07-01
Thanks to the impressive progress of conformal bootstrap methods, we now have very precise estimates of both scaling dimensions and operator product expansion coefficients for several 3D universality classes. We show how to use this information to obtain similarly precise estimates for off-critical correlators using conformal perturbation. We discuss in particular the ⟨σ(r)σ(0)⟩, ⟨ε(r)ε(0)⟩ and ⟨σ(r)ε(0)⟩ two-point functions in the high- and low-temperature regimes of the 3D Ising model and evaluate the leading and next-to-leading terms in the s = t·r^(Δt) expansion, where t is the reduced temperature. Our results for ⟨σ(r)σ(0)⟩ agree both with Monte Carlo simulations and with a set of experimental estimates of the critical scattering function.
NASA Astrophysics Data System (ADS)
Liu, R. M.; Zhuo, W. Z.; Chen, J.; Qin, M. H.; Zeng, M.; Lu, X. B.; Gao, X. S.; Liu, J.-M.
2017-07-01
We study the thermal phase transition of the fourfold degenerate phases (the plaquette and single-stripe states) in the two-dimensional frustrated Ising model on the Shastry-Sutherland lattice using Monte Carlo simulations. The critical Ashkin-Teller-like behavior is identified both in the plaquette phase region and the single-stripe phase region. The four-state Potts critical end points differentiating the continuous transitions from the first-order ones are estimated based on finite-size-scaling analyses. Furthermore, a similar behavior of the transition to the fourfold single-stripe phase is also observed in the anisotropic triangular Ising model. Thus, this work clearly demonstrates that the transitions to the fourfold degenerate states of two-dimensional Ising antiferromagnets exhibit similar transition behavior.
Phase Behavior of Patchy Spheroidal Fluids.
NASA Astrophysics Data System (ADS)
Carpency, Thienbao
We employ Gibbs-ensemble Monte Carlo computer simulation to assess the impact of shape anisotropy and particle interaction anisotropy on the phase behavior of a colloidal (or, by extension, protein) fluid comprising patchy ellipsoidal particles, with an emphasis on critical behavior. More specifically, we obtain the fluid-fluid equilibrium phase diagram of hard prolate ellipsoids having Kern-Frenkel surface patches under a variety of conditions and study the critical behavior of these fluids as a function of particle shape parameters. It is found that the dependence of the critical temperature on aspect ratio for particles having the same volume can be described approximately in terms of patch solid angles. In addition, ordering in the fluid that is associated with particle elongation is also found to be an important factor in dictating phase behavior. This work was supported by the G. Harold & Leila Y. Mathers Foundation.
Cross-platform validation and analysis environment for particle physics
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.
2017-11-01
A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response of structural quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis, with fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation, with excellent accuracy.
Non-equilibrium relaxation in a stochastic lattice Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Chen, Sheng; Täuber, Uwe C.
2016-04-01
We employ Monte Carlo simulations to study a stochastic Lotka-Volterra model on a two-dimensional square lattice with periodic boundary conditions. If the (local) prey carrying capacity is finite, there exists an extinction threshold for the predator population that separates a stable active two-species coexistence phase from an inactive state wherein only prey survive. Holding all other rates fixed, we investigate the non-equilibrium relaxation of the predator density in the vicinity of the critical predation rate. As expected, we observe critical slowing-down, i.e., a power law dependence of the relaxation time on the predation rate, and algebraic decay of the predator density at the extinction critical point. The numerically determined critical exponents are in accord with the established values of the directed percolation universality class. Following a sudden predation rate change to its critical value, one finds critical aging for the predator density autocorrelation function that is also governed by universal scaling exponents. This aging scaling signature of the active-to-absorbing state phase transition emerges at significantly earlier times than the stationary critical power laws, and could thus serve as an advanced indicator of the (predator) population’s proximity to its extinction threshold.
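A minimal stochastic lattice Lotka-Volterra update of the kind simulated here is sketched below (Python; the rates and update conventions are generic placeholders; the paper's model has its own carrying-capacity and rate definitions):

```python
import numpy as np

EMPTY, PREY, PRED = 0, 1, 2
NBRS = ((0, 1), (0, -1), (1, 0), (-1, 0))

def sweep(lat, rng, sigma=0.5, lam=0.4, mu=0.1):
    """One Monte Carlo sweep of a stochastic lattice Lotka-Volterra model
    with at most one individual per site (finite local carrying capacity).
    Placeholder rates: prey birth sigma, predation lam, predator death mu."""
    L = lat.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        di, dj = NBRS[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        s, t = lat[i, j], lat[ni, nj]
        if s == PRED and rng.random() < mu:
            lat[i, j] = EMPTY                    # predator death
        elif s == PRED and t == PREY and rng.random() < lam:
            lat[ni, nj] = PRED                   # predation with offspring
        elif s == PREY and t == EMPTY and rng.random() < sigma:
            lat[ni, nj] = PREY                   # prey reproduction

rng = np.random.default_rng(8)
lat = rng.integers(0, 3, size=(64, 64))          # random initial state
for _ in range(200):
    sweep(lat, rng)
print("predator density:", (lat == PRED).mean())
```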
Fermion-induced quantum critical points.
Li, Zi-Xiang; Jiang, Yi-Fan; Jian, Shao-Kai; Yao, Hong
2017-08-22
A unified theory of quantum critical points beyond the conventional Landau-Ginzburg-Wilson paradigm remains unknown. According to the Landau cubic criterion, phase transitions should be first-order when cubic terms of the order parameters are allowed by symmetry in the Landau-Ginzburg free energy. Here, from renormalization group analysis, we show that second-order quantum phase transitions can occur at such putatively first-order transitions in interacting two-dimensional Dirac semimetals. As this type of Landau-forbidden quantum critical point is induced by gapless fermions, we call them fermion-induced quantum critical points. We further introduce a microscopic model of SU(N) fermions on the honeycomb lattice featuring a transition between Dirac semimetals and Kekule valence bond solids. Remarkably, our large-scale sign-problem-free Majorana quantum Monte Carlo simulations show convincing evidence of fermion-induced quantum critical points for N = 2, 3, 4, 5 and 6, consistent with the renormalization group analysis. We finally discuss possible experimental realizations of fermion-induced quantum critical points in graphene and graphene-like materials. Quantum phase transitions are governed by Landau-Ginzburg theory and the exceptions are rare. Here, Li et al. propose a type of Landau-forbidden quantum critical point induced by gapless fermions in two-dimensional Dirac semimetals.
Casino physics in the classroom
NASA Astrophysics Data System (ADS)
Whitney, Charles A.
1986-12-01
This article describes a seminar on the elements of probability and random processes that is computer centered and focuses on Monte Carlo simulations of processes such as coin flips, random walks on a lattice, and the behavior of photons and atoms in a gas. Representative computer programs are also described.
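In the same classroom spirit, a few lines of Python reproduce two of the demonstrations mentioned, coin-flip statistics and a lattice random walk (the article's programs targeted 1986-era hardware; this is a modern stand-in):

```python
import numpy as np

rng = np.random.default_rng(9)

# Coin flips: the fraction of heads converges to 1/2 (law of large numbers).
flips = rng.integers(0, 2, size=10000)
print("fraction heads:", flips.mean())

# 2D lattice random walk: RMS displacement after N steps grows like sqrt(N).
steps = rng.choice([-1, 1], size=(1000, 500, 2))   # 1000 walkers, 500 steps
ends = steps.sum(axis=1)                           # end positions (x, y)
print("RMS displacement:", np.sqrt((ends ** 2).sum(axis=1).mean()))
```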
77 FR 42341 - Proposal Review Panel for Chemistry; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... NATIONAL SCIENCE FOUNDATION Proposal Review Panel for Chemistry; Notice of Meeting In accordance... announces the following meeting: Name: ChemMatCARS Site Visit, 2011 Awardees by NSF Division of Chemistry.... Carlos Murillo, Program Director, Division of Chemistry, Room 1055, National Science Foundation, 4201...
NASA Technical Reports Server (NTRS)
1992-01-01
Summary charts of the following topics are presented: the Percentage of Critical Questions in Constrained and Robust Programs; the Executive Committee and AMAC Disposition of Critical Questions for Constrained and Robust Programs; and the Requirements for Ground-based Research and Flight Platforms for Constrained and Robust Programs. Data Tables are also presented and cover the following: critical questions from all Life Sciences Division Discipline Science Plans; critical questions listed by category and criticality; all critical questions which require ground-based research; critical questions that would utilize spacelabs listed by category and criticality; critical questions that would utilize Space Station Freedom (SSF) listed by category and criticality; critical questions that would utilize the SSF Centrifuge; facility listed by category and criticality; critical questions that would utilize a Moon base listed by category and criticality; critical questions that would utilize robotic missions listed by category and criticality; critical questions that would utilize free flyers listed by category and criticality; and critical questions by deliverables.
Phase diagram and criticality of the two-dimensional prisoner's dilemma model
NASA Astrophysics Data System (ADS)
Santos, M.; Ferreira, A. L.; Figueiredo, W.
2017-07-01
The stationary states of the prisoner's dilemma model are studied on a square lattice, taking into account the role of a noise parameter in the decision-making process. Only first-neighbor players—defectors and cooperators—are considered in each step of the game. Through Monte Carlo simulations we determined the phase diagrams of the model in the plane of noise versus temptation to defect, for a large range of values of the noise parameter. We observe three phases: cooperator and defector absorbing phases, and a coexistence phase between them. The phase transitions, as well as the critical exponents associated with them, were determined using both static and dynamical scaling laws.
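Noise typically enters such models through a Fermi-like imitation rule: player x adopts neighbor y's strategy with probability 1/(1 + exp((P_x - P_y)/K)), where K is the noise parameter. A minimal sketch follows (Python; the payoff convention and parameter values are illustrative and not necessarily those of the paper):

```python
import numpy as np

def pd_sweep(strat, rng, b=1.05, K=0.1):
    """One update sweep of the spatial prisoner's dilemma on a square
    lattice. Strategies: 1 = cooperate, 0 = defect. Weak-dilemma payoffs:
    mutual cooperation 1, temptation b, all others 0 (illustrative)."""
    L = strat.shape[0]

    def payoff(i, j):
        s, total = strat[i, j], 0.0
        for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            t = strat[(i + di) % L, (j + dj) % L]
            total += 1.0 if s and t else (b if (not s) and t else 0.0)
        return total

    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        di, dj = ((0, 1), (0, -1), (1, 0), (-1, 0))[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        # Fermi rule: imitate the neighbor, with noise K
        p = 1.0 / (1.0 + np.exp((payoff(i, j) - payoff(ni, nj)) / K))
        if rng.random() < p:
            strat[i, j] = strat[ni, nj]

rng = np.random.default_rng(10)
strat = rng.integers(0, 2, size=(64, 64))
for _ in range(100):
    pd_sweep(strat, rng)
print("cooperator density:", strat.mean())
```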
Local Directed Percolation Probability in Two Dimensions
NASA Astrophysics Data System (ADS)
Inui, Norio; Konno, Norio; Komatsu, Genichi; Kameoka, Koichi
1998-01-01
Using the series expansion method and Monte Carlo simulation, we study the directed percolation probability on the square lattice V_n^0 = {(x, y) ∈ Z²: x + y even, 0 ≤ y ≤ n, −y ≤ x ≤ y}. We calculate the local percolation probability P_n^l, defined as the connection probability between the origin and the site (0, n). The critical behavior of P_∞^l is clearly different from that of the global percolation probability P_∞^g, which is characterized by a critical exponent β_g. An analysis based on Padé approximants shows β_l = 2β_g. In addition, we find that the series expansion of P_2n^l can be expressed as a function of P_n^g.
Quantum-Noise-Limited Sensitivity-Enhancement of a Passive Optical Cavity by a Fast-Light Medium
NASA Technical Reports Server (NTRS)
Smith, David D.; Luckay, H. A.; Chang, Hongrok; Myneni, Krishna
2016-01-01
We demonstrate, for a passive optical cavity containing an intracavity dispersive atomic medium, that the increase in scale factor near the critical anomalous dispersion is not cancelled by mode broadening or attenuation, resulting in an overall increase in the predicted quantum-noise-limited sensitivity. Enhancements of over two orders of magnitude are measured in the scale factor, which translates to greater than an order-of-magnitude enhancement in the predicted quantum-noise-limited measurement precision, by temperature-tuning a low-pressure vapor of non-interacting atoms in a low-finesse cavity close to the critical anomalous dispersion condition. The predicted enhancement in sensitivity is confirmed through Monte Carlo numerical simulations.
Nonequilibrium surface growth in a hybrid inorganic-organic system
NASA Astrophysics Data System (ADS)
Kleppmann, Nicola; Klapp, Sabine H. L.
2016-12-01
Using kinetic Monte Carlo simulations, we show that molecular morphologies found in nonequilibrium growth can be strongly different from those at equilibrium. We study the prototypical hybrid inorganic-organic system 6P on ZnO(101̄0) during thin-film adsorption, and find a wealth of phenomena, including reentrant growth, a critical adsorption rate, and observables that are nonmonotonic with the adsorption rate. We identify the transition from lying to standing molecules with a critical cluster size and discuss the competition of time scales during growth in terms of a rate-equation approach. Our results form a basis for understanding and predicting collective orientational ordering during growth in hybrid material systems.
Critical Care Nurses' Reasons for Poor Attendance at a Continuous Professional Development Program.
Viljoen, Myra; Coetzee, Isabel; Heyns, Tanya
2016-12-01
Society demands competent and safe health care, which obligates professionals to deliver quality patient care using current knowledge and skills. Participation in continuous professional development programs is a way to ensure quality nursing care. Despite the importance of continuous professional development, however, critical care nurse practitioners' attendance rates at these programs are low. This study explored critical care nurses' reasons for their unsatisfactory attendance at a continuous professional development program. A nominal group technique was used as a consensus method to involve the critical care nurses and provide them the opportunity to reflect on their experiences and challenges related to the current continuous professional development program for the critical care units. Participants were 14 critical care nurses from 3 critical care units in 1 private hospital. The consensus was that the central theme relating to the unsatisfactory attendance at the continuous professional development program was attitude. In order of importance, the 4 contributing priorities influencing attitude were communication, continuous professional development, time constraints, and financial implications. Attitude toward attending a continuous professional development program can be changed if critical care nurses are aware of the program's importance and are involved in the planning and implementation of a program that focuses on the nurses' individual learning needs. ©2016 American Association of Critical-Care Nurses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Pelletier, C; Lee, C
Purpose: Organ doses for Hodgkin's lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurements in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of the phase space data. We imported a precontoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported into the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in the water phantom was within 2% for all field sizes. The mean organ doses of the heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with a midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for the patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin's lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
NASA Astrophysics Data System (ADS)
Cortés, Joaquin; Valencia, Eliana
1997-07-01
Monte Carlo experiments are used to investigate the adsorption of argon on a heterogeneous solid with a periodic distribution of surface energy. A study is made of how the ratio of the adsorbate molecule's diameter to the distance between the sites of maximum surface energy affects the critical temperature, the observed phase changes, and the commensurability of the surface phase structure determined in the simulation.
SU-F-T-281: Monte Carlo Investigation of Sources of Dosimetric Discrepancies with 2D Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afifi, M; Deiab, N; El-Farrash, A
2016-06-15
Purpose: Intensity modulated radiation therapy (IMRT) poses a number of challenges for properly measuring commissioning data and quality assurance (QA). Understanding the limitations and use of dosimeters to measure these dose distributions is critical to safe IMRT implementation. In this work, we used Monte Carlo simulations to investigate the possible sources of discrepancy between our measurements with a 2D array system and our dose calculations using our treatment planning system (TPS). Material and Methods: The MCBEAM and MCSIM Monte Carlo codes were used for treatment head simulation and phantom dose calculation. Accurate modeling of a 6 MV beam from a Varian Trilogy machine was verified by comparing simulated and measured percentage depth doses and profiles. The dose distribution inside the 2D array was calculated using Monte Carlo simulations and our TPS. Cross profiles for different field sizes were then compared with actual measurements for zero and 90° gantry angle setups. Through this analysis and comparison, we tried to determine the differences and quantify a possible angular calibration factor. Results: Minimal discrepancies were seen in the comparison between the simulated and the measured profiles for the zero gantry angle at all studied field sizes (4×4 cm², 10×10 cm², 15×15 cm², and 20×20 cm²). Discrepancies between our measurements and calculations increased dramatically for the cross beam profiles at the 90° gantry angle. This can be ascribed mainly to the different attenuation caused by the layer of electronics at the base behind the ion chambers in the 2D array; the degree of attenuation varies with the angle of beam incidence. Correction factors were implemented to correct the errors. Conclusion: Monte Carlo modeling of 2D arrays and the derivation of angular dependence correction factors will allow for improved accuracy of the device for IMRT QA.
NASA Astrophysics Data System (ADS)
Mirić, J.; Bošnjaković, D.; Simonović, I.; Petrović, Z. Lj; Dujko, S.
2016-12-01
Electron attachment often imposes practical difficulties in Monte Carlo simulations, particularly under conditions of extensive losses of seed electrons. In this paper, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The two procedures are implemented in our Monte Carlo code with the aim of analyzing electron transport processes and attachment-induced phenomena in sulfur hexafluoride (SF6) and trifluoroiodomethane (CF3I). Though calculations have been performed over the entire range of reduced electric fields E/n0 (where n0 is the gas number density) for which experimental data are available, the emphasis is placed on the analysis below critical (electric gas breakdown) fields and under conditions when transport properties are greatly affected by electron attachment. The present calculations of electron transport data for SF6 and CF3I at low E/n0 take into account the full extent of the influence of electron attachment and spatially selective electron losses along the profile of the electron swarm, and attempt to produce data that may be used to model this range of conditions. The results of the Monte Carlo simulations are compared to those predicted by the publicly available two-term Boltzmann solver BOLSIG+. A multitude of kinetic phenomena in electron transport has been observed and discussed using physical arguments. In particular, we discuss two important phenomena: (1) the reduction of the mean energy with increasing E/n0 for electrons in SF6, and (2) the occurrence of negative differential conductivity (NDC) in the bulk drift velocity only, for electrons in both SF6 and CF3I. The electron energy distribution function, spatial variations of the rate coefficient for electron attachment and average energy, as well as the spatial profile of the swarm, are calculated and used to understand these phenomena.
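The discrete rescaling idea can be illustrated with a toy swarm (a hedged sketch, not the authors' code; the attachment probability and drift parameters are arbitrary): whenever attachment depletes the swarm below half of its initial size, every surviving electron is duplicated and the common statistical weight is halved, so averages stay unbiased while the sample size remains workable.

import random

def run_swarm(n0=1000, steps=2000, p_attach=0.002, rng=random.Random(7)):
    electrons = [0.0] * n0          # toy state: position along the field
    weight = 1.0                    # common statistical weight per electron
    for _ in range(steps):
        # toy transport step; attachment removes electrons at random
        electrons = [x + rng.gauss(1.0, 0.5) for x in electrons
                     if rng.random() > p_attach]
        if not electrons:
            break
        if len(electrons) < n0 // 2:        # discrete rescaling event
            electrons = electrons + electrons
            weight *= 0.5
    mean_pos = sum(electrons) / len(electrons)
    physical_population = weight * len(electrons)
    return mean_pos, physical_population

print(run_swarm())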
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1988-04-01
This document provides a discussion of the development of the FORTRAN Monte Carlo program SCINFUL (for scintillator full response), a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The program may also be used to compute angle-integrated spectra of charged particles (p, d, t, 3He, and α) following neutron interactions with 12C. Extensive comparisons with a variety of experimental data are given. There is generally good overall agreement (<10% differences) of results from SCINFUL calculations with measured detector responses, i.e., N(E_r) vs E_r, where E_r is the response pulse height; calculations reproduce measured detector responses with an accuracy which, at least partly, depends upon how well the experimental configuration is known. For E_n < 16 MeV and for E_r > 15% of the maximum pulse-height response, calculated spectra are within ±5% of experiment on the average. For E_n up to 50 MeV, similarly good agreement with experiment is obtained for E_r > 30% of maximum response. For E_n up to 75 MeV the calculated shape of the response agrees with measurements, but the calculations underpredict the measured response by up to 30%. 65 refs., 64 figs., 3 tabs.
The Wang-Landau Sampling Algorithm
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-03-01
Over the past several decades Monte Carlo simulations[1] have evolved into a powerful tool for the study of wide-ranging problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles including systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc., as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
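A compact sketch of the algorithm for the 2D Ising model follows (illustrative only; the lattice size, flatness criterion and f-reduction schedule are conventional choices, not those of any particular paper). The walker accepts a spin flip with probability min(1, g(E)/g(E')), updates ln g(E) after every move until the energy histogram is approximately flat, then refines the modification factor f:

import math, random

L = 8
rng = random.Random(0)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy():
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i+1) % L][j] + spins[i][(j+1) % L])
    return e

lng, hist = {}, {}          # running estimate of ln g(E) and visit histogram
E, lnf = energy(), 1.0
while lnf > 1e-4:           # reduce lnf by half, i.e. f -> sqrt(f)
    hist.clear()
    for _ in range(200000):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i+1) % L][j] + spins[(i-1) % L][j] +
              spins[i][(j+1) % L] + spins[i][(j-1) % L])
        Enew = E + 2 * spins[i][j] * nb
        # accept with min(1, g(E)/g(Enew)) to flatten the energy histogram
        if lng.get(E, 0.0) - lng.get(Enew, 0.0) > math.log(rng.random()):
            spins[i][j] *= -1
            E = Enew
        lng[E] = lng.get(E, 0.0) + lnf
        hist[E] = hist.get(E, 0) + 1
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        lnf /= 2.0          # histogram flat enough: refine the factor
print(sorted(lng.items())[:5])   # relative ln g(E) at the lowest energies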
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, E.; Tegen, S.
2009-08-01
Job generation has been a part of the national dialogue surrounding energy policy and renewable energy (RE) for many years. RE advocates tout the ability of renewable energy to support new job opportunities in rural America and the manufacturing sector. Others argue that spending on renewable energy is an inefficient allocation of resources and can result in job losses in the broader economy. The report, Study of the Effects on Employment of Public Aid to Renewable Energy Sources, from King Juan Carlos University in Spain, is one recent addition to this debate. This report asserts that, on average, every renewable energy job in Spain 'destroyed' 2.2 jobs in the broader Spanish economy. The authors also apply this ratio to the U.S. context to estimate expected job losses from renewable energy development and policy in the United States. This memo discusses fundamental and technical limitations of the analysis by King Juan Carlos University and notes critical assumptions implicit in the ultimate conclusions of their work. The memo also includes a review of traditional employment impact analyses that rely on accepted, peer-reviewed methodologies, and it highlights specific variables that can significantly influence the results of traditional employment impact analysis.
NASA Astrophysics Data System (ADS)
Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio
2018-05-01
The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for the 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimations. Owing to the ease of implementing GAMOS simulations, case-specific geometries and nuclides can be adopted, and results can be obtained in less than about ten minutes of computation time on a common workstation.
Perceptions of the use of critical thinking teaching methods.
Kowalczyk, Nina; Hackworth, Ruth; Case-Smith, Jane
2012-01-01
To identify the perceived level of competence in teaching and assessing critical thinking skills and the difficulties facing radiologic science program directors in implementing student-centered teaching methods. A total of 692 program directors received an invitation to complete an electronic survey soliciting information regarding the importance of critical thinking skills, their confidence in applying teaching methods and assessing student performance, and perceived obstacles. Statistical analysis included descriptive data, correlation coefficients, and ANOVA. Responses were received from 317 participants, indicating that program directors perceive critical thinking to be an essential element in the education of the student; however, they identified several areas for improvement. A high correlation was identified between the program directors' perceived level of skill and their confidence in critical thinking, and between their perceived level of skill and ability to assess the students' critical thinking. Key barriers to implementing critical thinking teaching strategies were identified. Program directors value the importance of implementing critical thinking teaching methods and perceive a need for professional development in critical thinking educational methods. Regardless of the type of educational institution in which the academic program is located, the level of education held by the program director was a significant factor regarding perceived confidence in the ability to model critical thinking skills and the ability to assess student critical thinking skills.
Monte Carlo calculations of k_Q, the beam quality conversion factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B. R.; Rogers, D. W. O.
2010-11-15
Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities, and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or manufacturers' user's manuals. The dose to air in the chamber is calculated using the EGSnrc user code egs_chamber using 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy, and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality, expressed as %dd(10)_x and TPR^20_10, are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum of one standard deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated, and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross section for photons in water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values. Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more accurate than previously used values. For small ionization chambers with central electrodes composed of high-Z materials, the effect of the central electrode is much larger than that for the aluminum electrodes in Farmer chambers.
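The "simple equation" referred to is presumably the standard Monte Carlo route to k_Q: with W/e taken as independent of beam quality, the factor reduces to a ratio of ratios of the two calculated doses,

k_Q = [D_w / D_air]_Q / [D_w / D_air]_{60Co},

where D_w is the dose to the small water voxel and D_air the dose to the air cavity of the modeled chamber, each computed at beam quality Q and at the 60Co reference quality.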
An Introduction to Computational Physics
NASA Astrophysics Data System (ADS)
Pang, Tao
2010-07-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Parental GCA testing: how many crosses per parent?
G.R. Johnson
1998-01-01
The impact of increasing the number of crosses per parent (k) on the efficiency of roguing seed orchards (backwards selection, i.e., reselection of parents) was examined by using Monte Carlo simulation. Efficiencies were examined in light of advanced-generation Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) tree improvement programs where...
A Comparative Study of Exact versus Propensity Matching Techniques Using Monte Carlo Simulation
ERIC Educational Resources Information Center
Itang'ata, Mukaria J. J.
2013-01-01
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Universal Scaling in the Fan of an Unconventional Quantum Critical Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melko, Roger G; Kaul, Ribhu
2008-01-01
We present the results of extensive finite-temperature quantum Monte Carlo simulations on an SU(2)-symmetric, S = 1/2 quantum antiferromagnet with a frustrating four-spin interaction -- the so-called 'JQ' model [Sandvik, Phys. Rev. Lett. 98, 227202 (2007)]. Our simulations, which are unbiased, free of the sign problem and carried out on lattice sizes containing in excess of 1.6×10^4 spins, indicate that Néel order is destroyed through a continuous quantum transition at a critical value of the frustrating interaction. At larger values of this coupling the paramagnetic state obtained has valence-bond solid order. The scaling behavior in the 'quantum critical fan' above the putative critical point confirms a z = 1 quantum phase transition that is not in the conventional O(3) universality class. Our results are consistent with the predictions of the 'deconfined quantum criticality' scenario.
Sustainable Cost Models for mHealth at Scale: Modeling Program Data from m4RH Tanzania.
Mangone, Emily R; Agarwal, Smisha; L'Engle, Kelly; Lasway, Christine; Zan, Trinity; van Beijma, Hajo; Orkis, Jennifer; Karam, Robert
2016-01-01
There is increasing evidence that mobile phone health interventions ("mHealth") can improve health behaviors and outcomes and are critically important in low-resource, low-access settings. However, the majority of mHealth programs in developing countries fail to reach scale. One reason may be the challenge of developing financially sustainable programs. The goal of this paper is to explore strategies for mHealth program sustainability and develop cost-recovery models for program implementers using 2014 operational program data from Mobile for Reproductive Health (m4RH), a national text-message (SMS) based health communication service in Tanzania. We delineated 2014 m4RH program costs and considered three strategies for cost-recovery for the m4RH program: user pay-for-service, SMS cost reduction, and strategic partnerships. These inputs were used to develop four different cost-recovery scenarios. The four scenarios leveraged strategic partnerships to reduce per-SMS program costs and create per-SMS program revenue and varied the structure for user financial contribution. Finally, we conducted break-even and uncertainty analyses to evaluate the costs and revenues of these models at the 2014 user volume (125,320) and at any possible break-even volume. In three of four scenarios, costs exceeded revenue by $94,596, $34,443, and $84,571 at the 2014 user volume. However, these costs represented large reductions (54%, 83%, and 58%, respectively) from the 2014 program cost of $203,475. Scenario four, in which the lowest per-SMS rate ($0.01 per SMS) was negotiated and users paid for all m4RH SMS sent or received, achieved a $5,660 profit at the 2014 user volume. A Monte Carlo uncertainty analysis demonstrated that break-even points were driven by user volume rather than variations in program costs. These results reveal that breaking even was only probable when all SMS costs were transferred to users and the lowest per-SMS cost was negotiated with telecom partners. While this strategy was sustainable for the implementer, a central concern is that health information may not reach those who are too poor to pay, limiting the program's reach and impact. Incorporating strategies presented here may make mHealth programs more appealing to funders and investors but need further consideration to balance sustainability, scale, and impact.
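A back-of-envelope rendering of the break-even logic (the 2014 user volume, the $203,475 program cost and the $0.01 negotiated per-SMS rate come from the text above; the fixed-cost share, messages per user and user price are hypothetical placeholders):

import random

FIXED_COST = 60000.0        # hypothetical annual fixed cost (USD)
SMS_PER_USER = 25           # hypothetical messages sent + received per user
RATE_PER_SMS = 0.01         # negotiated per-SMS cost (from the abstract)
USER_PRICE_PER_SMS = 0.015  # hypothetical price charged to users

def profit(users):
    cost = FIXED_COST + users * SMS_PER_USER * RATE_PER_SMS
    revenue = users * SMS_PER_USER * USER_PRICE_PER_SMS
    return revenue - cost

# Monte Carlo uncertainty analysis: vary fixed costs, find break-even volume
rng = random.Random(42)
margin = SMS_PER_USER * (USER_PRICE_PER_SMS - RATE_PER_SMS)
breakevens = sorted(FIXED_COST * rng.uniform(0.8, 1.2) / margin
                    for _ in range(10000))
print("median break-even users:", int(breakevens[len(breakevens) // 2]))
print("profit at the 2014 volume of 125,320 users: $%.0f" % profit(125320))

As in the paper's analysis, the break-even point is governed almost entirely by user volume: the per-user margin is fixed, so cost variations only shift the break-even volume proportionally.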
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes, which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
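The sampling idea behind such correlations can be caricatured in a few lines (a hedged sketch: the linear surrogate and the sensitivity numbers are hypothetical, whereas Sampler perturbs nuclear data and material specifications and runs full transport calculations): perturb parameters shared between experiments, recompute k-eff for each, and correlate the results.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
shared = rng.normal(0.0, 1.0, n)            # shared uncertainty, e.g. enrichment
indiv = rng.normal(0.0, 1.0, (n, 3))        # experiment-specific uncertainties
s_shared = np.array([120.0, 110.0, 40.0])   # hypothetical sensitivities (pcm/sigma)
s_indiv = np.array([80.0, 80.0, 80.0])
keff = 1.0 + 1e-5 * (np.outer(shared, s_shared) + indiv * s_indiv)
print(np.corrcoef(keff, rowvar=False).round(2))

Experiments dominated by the shared parameter come out strongly correlated; treating them as independent in a validation suite would overstate the effective number of benchmarks.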
MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Changho; Yang, Won Sik
This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes for generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved at the ultrafine (2082) or hyperfine (~400 000) group level. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using benchmark problems for various fast critical experiments, including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and BFS experiments; the Monju start-up core; and the Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed well with MCNP5 (Monte Carlo N-Particle 5) or VIM Monte Carlo solutions within 200 pcm, and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast-neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
NASA Astrophysics Data System (ADS)
Sublet, Jean-Christophe
2008-02-01
ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this event, the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special purpose files, together with updates to the processing tools and significant progress in computer power, today allow much better and leaner integration of Monte Carlo codes and pointwise libraries, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deeply thermal solutions and assemblies, and covers cases with a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that while the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces better overall results, they also clearly suggest that improvements are still needed. This is true in particular in light water reactor applications for the thermal and epithermal plutonium data of all libraries, and for the fast uranium data of JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics, medicine and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. In examining the impact of the different nuclear data at the library as well as the isotopic level, one cannot help noticing the importance of, and differences between, the compensating effects that result from their use. Library differences are still important, but tend to diminish due to the ever-increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.
Kaddoura, Mahmoud A
2010-09-01
It is essential for nurses to develop critical thinking skills to ensure their ability to provide safe and effective care to patients with complex and variable needs in ever-changing clinical environments. To date, very few studies have been conducted to examine how nursing orientation programs develop the critical thinking skills of novice critical care nurses. Strikingly, no research studies could be found about the American Association of Critical-Care Nurses Essentials of Critical Care Orientation (ECCO) program and, specifically, its effect on the development of nurses' critical thinking skills. This study explored the perceptions of new graduate nurses regarding factors that helped to develop their critical thinking skills throughout their 6-month orientation program in the intensive care unit. A convenience (non-probability) sample of eight new graduates was selected from a hospital that used the ECCO program. Data were collected with demographic questionnaires and semi-structured interviews. An exploratory qualitative research method with content analysis was used to analyze the data. The study findings showed that new graduate nurses perceived that they developed critical thinking skills that improved throughout the orientation period, although there were some challenges in the ECCO program. This study provides data that could influence the development and implementation of future nursing orientation programs. Copyright 2010, SLACK Incorporated.
Program Models A Laser Beam Focused In An Aerosol Spray
NASA Technical Reports Server (NTRS)
Barton, J. P.
1996-01-01
A Monte Carlo analysis is performed on packets of light. The program FLSPRY (Analysis of Laser Beam Focused Within Aerosol Spray) was developed for theoretical analysis of the propagation of a laser pulse optically focused within an aerosol spray. It was applied, for example, to analyze a laser ignition arrangement in which a focused laser pulse is used to ignite a liquid aerosol fuel spray. Scattering and absorption of the laser light by individual aerosol droplets are evaluated by use of electromagnetic Lorenz-Mie theory. Written in FORTRAN 77 for both UNIX-based computers and DEC VAX-series computers: VAX version of program (LEW-16051); UNIX version (LEW-16065).
NASA Technical Reports Server (NTRS)
Jones, W. V.
1973-01-01
Modifications to the basic computer program for performing the simulations are reported. The major changes include: (1) extension of the calculations to include the development of cascades initiated by heavy nuclei, (2) improved treatment of the nuclear disintegrations which occur during the interactions of hadrons in heavy absorbers, (3) incorporation of accurate multi-pion final-state cross sections for various interactions at accelerator energies, (4) restructuring of the program logic so that calculations can be made for sandwich-type detectors, and (5) logic modifications related to execution of the program.
Nematic phase in the CE-regime of colossal magnetoresistive manganites
NASA Astrophysics Data System (ADS)
Ochoa, Emily; Sen, Cengiz; Dagotto, Elbio; Lamar/UTK Collaboration
We report nematic phase tendencies around the first-order CE transition in the two-orbital double exchange model with Jahn-Teller phonons at electronic density n = 0.5. Starting with a random state at high temperatures, we employ a careful cool-down method using a Monte Carlo algorithm. We then monitor the spin structure factor S(q) of the CE phase as a function of temperature. Near the critical temperature, S(q) grows with decreasing temperature for both right- and left-ordered CE ladders, followed by a spontaneous symmetry breaking into one or the other as the critical temperature is reached. Below the critical temperature a pure CE state with staggered charge order is obtained. Our results are similar to those observed in pnictides in earlier studies. This work was supported by the Lamar University Office of Undergraduate Research and the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.
Three-state Potts model on non-local directed small-world lattices
NASA Astrophysics Data System (ADS)
Ferraz, Carlos Handrey Araujo; Lima, José Luiz Sousa
2017-10-01
In this paper, we study non-local directed Small-World (NLDSW) disorder effects in the three-state Potts model, as a way to capture the essential features shared by real complex systems in which non-locality effects play an important role. Using Monte Carlo techniques and finite-size scaling analysis, we estimate the infinite-lattice critical temperatures and the leading critical exponents in this model. In particular, we investigate the first- to second-order phase transition crossover when NLDSW links are inserted. A cluster-flip algorithm was used to reduce the critical slowing-down effect in our simulations. We find that for NLDSW disorder densities p
The nature of the continuous non-equilibrium phase transition of Axelrod's model
NASA Astrophysics Data System (ADS)
Peres, Lucas R.; Fontanari, José F.
2015-09-01
Axelrod's model in the square lattice with nearest-neighbor interactions exhibits culturally homogeneous as well as culturally fragmented absorbing configurations. In the case in which the agents are characterized by F = 2 cultural features and each feature assumes k states drawn from a Poisson distribution of parameter q, these regimes are separated by a continuous transition at q_c = 3.10 ± 0.02. Using Monte Carlo simulations and finite-size scaling we show that the mean density of cultural domains μ is an order parameter of the model that vanishes as μ ∼ (q - q_c)^β with β = 0.67 ± 0.01 at the critical point. In addition, for the correlation length critical exponent we find ν = 1.63 ± 0.04 and for Fisher's exponent, τ = 1.76 ± 0.01. This set of critical exponents places the continuous phase transition of Axelrod's model apart from the known universality classes of non-equilibrium lattice models.
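For readers unfamiliar with the model, a small simulation sketch follows (illustrative only: the lattice size and move budget are far below what the finite-size scaling analysis above requires, and the run is truncated rather than taken to absorption). Each step picks an agent and a neighbor, lets them interact with probability equal to their cultural overlap, and the order parameter is the density of connected same-culture domains:

import math, random

def axelrod(L=20, F=2, q=3.1, steps=400000, rng=random.Random(3)):
    def poisson(lam):                  # Knuth's method, fine for small lam
        thresh, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= thresh:
                return k
            k += 1
    culture = [[poisson(q) for _ in range(F)] for _ in range(L * L)]
    def neighbors(i):
        x, y = i % L, i // L
        return [((x+1) % L) + y*L, ((x-1) % L) + y*L,
                x + ((y+1) % L)*L, x + ((y-1) % L)*L]
    for _ in range(steps):
        i = rng.randrange(L * L)
        j = rng.choice(neighbors(i))
        same = sum(culture[i][f] == culture[j][f] for f in range(F))
        diff = [f for f in range(F) if culture[i][f] != culture[j][f]]
        if diff and rng.random() < same / F:     # interact w.p. overlap
            f = rng.choice(diff)
            culture[i][f] = culture[j][f]
    seen, domains = set(), 0                     # flood fill for domains
    for i in range(L * L):
        if i in seen:
            continue
        domains, stack = domains + 1, [i]
        while stack:
            k = stack.pop()
            if k not in seen:
                seen.add(k)
                stack.extend(m for m in neighbors(k)
                             if culture[m] == culture[k])
    return domains / (L * L)                     # order parameter mu

print(axelrod())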
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology; the recent gains, however, are due to the emergence of multi-core high-performance computers. Parallel computing is therefore a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
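As a loose analogy in Python (not PHITS itself, and the "history" below is a toy), the structure that both MPI and OpenMP exploit is the independence of Monte Carlo histories: give each worker its own seed and a share of the histories, then reduce the partial tallies.

import random
from multiprocessing import Pool

def tally(args):
    seed, n = args
    rng = random.Random(seed)
    # toy history: does a photon survive 5 mean free paths without colliding?
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > 5.0)

if __name__ == "__main__":
    n_hist, n_workers = 1_000_000, 4
    jobs = [(seed, n_hist // n_workers) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(tally, jobs))      # reduction of partial tallies
    print("transmission probability:", total / n_hist)   # ~ exp(-5)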
Cross-platform validation and analysis environment for particle physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.
A multi-platform validation and analysis framework for public Monte Carlo simulations of high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.
Martinovich, Viviana
2016-01-01
This text is part of a series of interviews that seek to explore diverse editing and publication experiences and the similar difficulties Latin American journals face, in order to begin to encounter contextualized solutions that articulate previously isolated efforts. In this interview, carried out in July 2015 at the Instituto de Salud Colectiva [Institute of Collective Health] of the Universidad Nacional de Lanús, Carlos Augusto Monteiro speaks to us about funding, work processes, technological innovations, and establishing teams and roles. He analyzes the importance of Latin American journals as a platform for spreading research relevant to national agendas, and the connection between journal performance, the quality of graduate training programs, and research quality.
NASA Astrophysics Data System (ADS)
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.
2017-12-01
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
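The coupling approximation can be caricatured as follows (a schematic sketch, not the authors' implementation; the fixed point charges, units and toy MM system are hypothetical): the QM density is frozen into effective charges, so the energy change of an MM move needs only cheap Coulomb sums, and the Metropolis test for environment moves avoids any new QM calculation.

import math, random

rng = random.Random(0)
qm_charges = [(0.4, (0.0, 0.0, 0.0)), (-0.4, (1.0, 0.0, 0.0))]  # frozen QM charges
beta = 1.0 / 0.593          # 1/kT in kcal/mol at ~300 K

def elec(mm_charge, pos):
    # perturbative estimate: MM site interacting with the frozen QM density
    e = 0.0
    for q, r in qm_charges:
        d = max(math.dist(pos, r), 0.8)   # crude cutoff keeps the toy stable
        e += 332.06 * mm_charge * q / d   # Coulomb, kcal/mol with e and Angstrom
    return e

charge, pos = 0.3, [3.0, 0.0, 0.0]        # one MM site
for _ in range(10000):
    new = [x + rng.uniform(-0.2, 0.2) for x in pos]
    dE = elec(charge, new) - elec(charge, pos)   # no QM recomputation here
    if dE < 0 or rng.random() < math.exp(-beta * dE):
        pos = new
print("final MM position:", [round(x, 2) for x in pos])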
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte Carlo techniques was developed for the analysis of the activation results, which cover a wide range of neutron energies. It was implemented in a graphical user interface program called GanUnfold. The experimental neutron spectra are compared to Monte Carlo simulations performed using the PHITS and FLUKA codes.
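A SAND-II-style iteration is compact enough to sketch (hedged: the response matrix, group structure and "measured" activities below are synthetic, and GanUnfold additionally wraps the iteration in Monte Carlo perturbations of the inputs). Each pass multiplies the trial spectrum by a weighted geometric mean of measured-to-calculated activity ratios:

import numpy as np

rng = np.random.default_rng(0)
n_grp, n_foil = 40, 6
E = np.linspace(0.5, 20.0, n_grp)                 # MeV group centers
resp = rng.uniform(0.0, 1.0, (n_foil, n_grp))     # synthetic response matrix
true_flux = np.exp(-E / 4.0)                      # synthetic spectrum
A_meas = resp @ true_flux                         # synthetic foil activities

flux = np.ones(n_grp)                             # flat initial guess
for _ in range(200):
    A_calc = resp @ flux
    W = resp * flux / A_calc[:, None]             # contribution weights
    corr = (W * np.log(A_meas / A_calc)[:, None]).sum(0) / W.sum(0)
    flux *= np.exp(corr)                          # multiplicative update
print("activity residual:", float(np.max(np.abs(resp @ flux / A_meas - 1.0))))

With only a handful of foils the problem is underdetermined: the activities are reproduced essentially exactly, while the spectrum itself is constrained only where the responses overlap, which is why the paper validates against time-of-flight data and transport simulations.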
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing, where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art, production-level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code onto the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Carlos Castillo-Chavez: a century ahead.
Schatz, James
2013-01-01
When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration, my immediate reaction was por supuesto que sí! (of course!). Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable, and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996.
[Prevalence of breastfeeding in the city of São Carlos, São Paulo
Montrone, V G; Arantes, C I
2000-01-01
OBJECTIVE: To verify the prevalence of breastfeeding in the city of São Carlos. METHOD: For the collection and treatment of data regarding the prevalence of breastfeeding we used the LACMAT 3.3 program. During the National Vaccination Day (August 15, 1998), 3,326 persons responsible for children 2 years old or younger were interviewed. RESULTS: It was verified that 52.4% of the children under a month old were exclusively breastfed. Out of 532 children under 4 months old, 73.3% were being breastfed: 37.8% exclusively breastfed and 17.3% under predominant breastfeeding. It was observed that 31.7% of the children under 4 months received some other form of nourishment, such as fruit and mush, and by the fifth month this percentage rose to 62.3%. CONCLUSIONS: The results of this study demonstrate that the situation of breastfeeding in São Carlos is far from what the WHO recommends, thus confirming the need to implement actions of promotion, protection, and support of breastfeeding in the public health services of the municipality.
Kang, Yibin; Pan, Qiuhui; Wang, Xueting; He, Mingfeng
2016-01-01
In this paper, we investigate the five-species Jungle game in the framework of evolutionary game theory. We address the coexistence and biodiversity of the system using mean-field theory and Monte Carlo simulations. We find that the inhibition from the bottom-level species to the top-level species can be a critical factor affecting biodiversity, no matter how the population is distributed, whether homogeneously well mixed or structured. We also find that predators' different preferences for food affect species' coexistence.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
Ozone measurement systems improvements studies
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.
1974-01-01
Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.
SOLPOL: A Solar Polarimeter for Hard X-Rays and Gamma-Rays
NASA Technical Reports Server (NTRS)
McConnell, Michael L.
1999-01-01
The goal of this project was to continue the development of a hard X-ray polarimeter for studying solar flares. In earlier work (funded by a previous SR&T grant), we had already achieved several goals, including the following: 1) development of a means of producing a polarized radiation source in the lab that could be used for prototype development; 2) demonstration of the basic Compton scatter polarimeter concept using a simple laboratory setup; 3) use of the laboratory results to verify our Monte Carlo simulations; and 4) investigation of various detector technologies that could be incorporated into the polarimeter design. For the current one-year program, we wanted to fabricate and test a laboratory science model based on our SOLPOL (Solar Polarimeter) design. The long-term goal of this effort is to develop and test a prototype design that could be used to study flare emissions from either a balloon- or space-borne platform. The current program has achieved its goal of fabricating and testing a science model of the SOLPOL design, although additional testing of the design (and detailed comparison with Monte Carlo simulations) is still desired. This one-year program was extended by six months (no-cost extension) to cover the summer of 1999, when undergraduate student support was available to complete some of the laboratory testing.
FlexAID: Revisiting Docking on Non-Native-Complex Structures.
Gaudreault, Francis; Najmanovich, Rafael J
2015-07-27
Small-molecule protein docking is an essential tool in drug design and for understanding molecular recognition. In the present work we introduce FlexAID, a small-molecule docking algorithm that accounts for target side-chain flexibility and utilizes a soft scoring function based on surface complementarity, i.e. one that is not highly dependent on specific geometric criteria. The pairwise energy parameters were derived from a large dataset of true positive poses and negative decoys from the PDBbind database through an iterative process using Monte Carlo simulations. The prediction of binding poses is tested using the widely used Astex dataset as well as the HAP2 dataset, while performance in virtual screening is evaluated using a subset of the DUD dataset. We compare FlexAID to AutoDock Vina, FlexX, and rDock in an extensive number of scenarios to understand the strengths and limitations of the different programs, as well as to reported results for Glide, GOLD, and DOCK6 where applicable. The most relevant among these scenarios is that of docking on flexible non-native-complex structures where, as is the case in reality, the target conformation in the bound form is not known a priori. We demonstrate that FlexAID, unlike other programs, is robust against increasing structural variability. FlexAID obtains sampling success equivalent to that of GOLD and performs better than AutoDock Vina or FlexX in all scenarios against non-native-complex structures. FlexAID is better than rDock when there is at least one critical side-chain movement required upon ligand binding. In virtual screening, FlexAID results are lower on average than those of AutoDock Vina and rDock. The higher accuracy in flexible targets where critical movements are required, the intuitive PyMOL-integrated graphical user interface, and free source code as well as precompiled executables for Windows, Linux, and Mac OS make FlexAID a welcome addition to the arsenal of existing small-molecule protein docking methods.
The Department of Energy Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Felty, James R.
2005-05-01
This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.
An Introduction to Computational Physics - 2nd Edition
NASA Astrophysics Data System (ADS)
Pang, Tao
2006-01-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Assessing Disease Class-Specific Diagnostic Ability: A Practical Adaptive Test Approach.
ERIC Educational Resources Information Center
Papa, Frank J.; Schumacker, Randall E.
Measures of the robustness of disease class-specific diagnostic concepts could play a central role in training programs designed to assure the development of diagnostic competence. In the pilot study, the authors used disease/sign-symptom conditional probability estimates, Monte Carlo procedures, and artificial intelligence (AI) tools to create…
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
This article is part of a discussion on Monte Carlo methods and outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
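As a hedged illustration of the technique described above (the article itself uses Visual Basic), the following Python sketch rewrites a definite integral as an expectation over a uniform random variable: for X uniform on [a, b], E[f(X)] = (1/(b-a)) ∫ f(x) dx, so the interval length times the sample mean of f estimates the integral.

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Monte Carlo estimate of the integral of f over [a, b].

    For X uniform on [a, b], E[f(X)] = (1/(b - a)) * integral(f),
    so (b - a) times the sample mean of f(X) estimates the integral.
    """
    total = 0.0
    for _ in range(n):
        total += f(random.uniform(a, b))
    return (b - a) * total / n

# Example: integral of x^2 on [0, 1] is 1/3.
print(mc_integrate(lambda x: x * x, 0.0, 1.0))
```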
NASA Astrophysics Data System (ADS)
Carpenter, Matthew H.; Jernigan, J. G.
2007-05-01
We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters proportional to the number of photons, therefore a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte-Carlo" method, requires a new approach for quantifying errors as compared to common analysis methods for fitting models of low dimensionality. This presentation will explore the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. This periodic state change is accomplished by periodic movement of specific reactor parts, such as control rods or reflector sections. Such a reactor is difficult to analyze with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
A measurement of global event shape distributions in the hadronic decays of the Z0
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, L. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Lupu, N.; Ma, J.; MacBeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, P. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; van den Plas, D.; Vandalen, G. J.; Vasseur, G.; Virtue, C. 
J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.
1990-12-01
We present measurements of global event shape distributions in the hadronic decays of the Z0. The data sample, corresponding to an integrated luminosity of about 1.3 pb-1, was collected with the OPAL detector at LEP. Most of the experimental distributions we present are unfolded for the finite acceptance and resolution of the OPAL detector. Through comparison with our unfolded data, we tune the parameter values of several Monte Carlo computer programs which simulate perturbative QCD and the hadronization of partons. Jetset version 7.2, Herwig version 3.4 and Ariadne version 3.1 all provide good descriptions of the experimental distributions. With the parameter values adjusted at the Z0 energy, they in addition describe lower-energy data. A complete second-order matrix element Monte Carlo program with a modified perturbation scale is also compared to our 91 GeV data and its parameter values are adjusted. We obtained an unfolded value for the mean charged multiplicity of 21.28±0.04±0.84, where the first error is statistical and the second is systematic.
NASA Astrophysics Data System (ADS)
Jawad, Enas A.
2018-05-01
In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of the 'environment friendly' gas trifluoroiodomethane (CF3I) with the noble gases (argon, helium, krypton, neon and xenon). The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL) and the density-normalized mobility (μN). The impact of CF3I in the noble gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient and the density-normalized mobility. The calculated results agree well with the experimental results.
A Monte Carlo model for the gardening of the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
The processes of movement and turnover of the lunar regolith are described by a Monte Carlo model. Movement of material by the direct cratering process is the dominant mode, but slumping is also included for angles exceeding the static angle of repose. Using a group of interrelated computer programs, a large number of properties are calculated, including topography, formation of layers, depth of the disturbed layer, nuclear-track distributions, and cosmogenic nuclides. In the most complex program, the history of a 36-point square array is followed for times up to 400 million years. The histories generated are complex and exhibit great variety. Because a crater covers much less area than its ejecta blanket, the height change at a test point tends to exhibit periods of slow accumulation followed by sudden excavation. In general, the agreement with experiment and observation seems good, but two areas of disagreement stand out. First, the calculated surface is rougher than that observed. Second, the observed bombardment ages, of the order of 400 million years, are shorter than expected (by perhaps a factor of 5).
Two-dimensional Ising model on random lattices with constant coordination number
NASA Astrophysics Data System (ADS)
Schrauth, Manuel; Richter, Julian A. J.; Portela, Jefferson S. E.
2018-02-01
We study the two-dimensional Ising model on networks with quenched topological (connectivity) disorder. In particular, we construct random lattices of constant coordination number and perform large-scale Monte Carlo simulations in order to obtain critical exponents using finite-size scaling relations. We find disorder-dependent effective critical exponents, similar to diluted models, thus showing no clear universal behavior. Considering the very recent results for the two-dimensional Ising model on proximity graphs and the coordination number correlation analysis suggested by Barghathi and Vojta [Phys. Rev. Lett. 113, 120602 (2014), 10.1103/PhysRevLett.113.120602], our results indicate that the planarity and connectedness of the lattice play an important role in deciding whether the phase transition is stable against quenched topological disorder.
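For reference, a minimal Metropolis Monte Carlo sketch in Python for the clean two-dimensional Ising model on a regular square lattice (fixed coordination number 4); the study above replaces this lattice with random lattices of constant coordination, and critical exponents would be extracted from finite-size scaling of observables accumulated over such sweeps. This is a generic sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep of the 2D Ising model with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Four nearest neighbours (fixed coordination 4 here; the paper's
        # random lattices keep the coordination constant but randomize links).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

L = 16
spins = rng.choice(np.array([-1, 1]), size=(L, L))
for _ in range(1000):
    metropolis_sweep(spins, beta=0.44)   # near the clean critical coupling
print("magnetization per site:", abs(spins.mean()))
```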
NASA Astrophysics Data System (ADS)
Jabar, A.; Masrour, R.
2017-12-01
In this paper, we study the effects of the Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions and of the magnetic layers on the bilayer transitions of a spin-5/2 Blume-Capel model formed by two magnetic blocks separated by a non-magnetic spacer of finite thickness. The thermalization of the magnetization is shown for several system sizes. We have shown that the magnetic order in the two magnetic blocks depends on the thickness of the magnetic layer. In the total magnetization profiles, the susceptibility peaks correspond to the reduced critical temperature. This critical temperature is displaced towards higher temperatures when the number of magnetic layers increases. In addition, we have discussed and interpreted the behavior of the magnetic hysteresis loops.
Modeling the Crystallization of Proteins
NASA Astrophysics Data System (ADS)
Liu, Hongjun; Kumar, Sanat; Garde, Shekhar
2007-03-01
We have used molecular dynamics and Monte Carlo simulations to understand the pathway to protein crystallization. We find that models which ignore the patchy nature of protein-protein interactions only crystallize inside the metastable gas-liquid coexistence region. In this regime they crystallize through the formation of a critical nucleus. In contrast, when patchiness is introduced we find that there is no need to be inside this metastable gas-liquid boundary. Rather, crystallization occurs through an intermediate which is composed of disordered aggregates. These are formed by patchy interactions. Further, there appears to be no need for the formation of a critical nucleus. Thus the pathways for crystallization are strongly controlled by the nature of protein-protein interactions, in good agreement with current experiments.
Quantum critical charge response from higher derivatives in holography
NASA Astrophysics Data System (ADS)
Witczak-Krempa, William
2014-04-01
We extend the range of possibilities for the charge response in the quantum critical regime in 2 + 1D using holography, and compare them with field theory and recent quantum Monte Carlo results. We show that a family of (infinitely many) higher derivative terms in the gravitational bulk leads to behavior far richer than what was previously obtained. For example, we prove that the conductivity becomes unbounded, undermining previously obtained constraints. We further find a nontrivial and infinite set of theories that have a self-dual conductivity. Particle-vortex or S duality plays a key role; notably, it maps theories with a finite number of bulk terms to ones with an infinite number. Many properties, such as sum rules and stability conditions, are proven.
NASA Astrophysics Data System (ADS)
Yezli, M.; Bekhechi, S.; Hontinfinde, F.; EZ-Zahraouy, H.
2016-04-01
Two nonperturbative methods, Monte Carlo simulation (MC) and transfer-matrix finite-size-scaling calculations (TMFSS), have been used to study the phase transition of the spin-3/2 Blume-Emery-Griffiths model (BEG) with quadrupolar and antiferromagnetic next-nearest-neighbor exchange interactions. Ground-state and finite-temperature phase diagrams are obtained by means of these two methods. New degenerate phases are found, and only second-order phase transitions occur for all values of the interaction parameters. No sign of the intermediate phase is found with either method. Critical exponents are also obtained from TMFSS calculations. Ising criticality and nonuniversal behavior are observed depending on the strength of the second-neighbor interaction.
Entropy production in a Glauber–Ising irreversible model with dynamical competition
NASA Astrophysics Data System (ADS)
Barbosa, Oscar A.; Tomé, Tânia
2018-06-01
An out-of-equilibrium Glauber–Ising model, evolving in accordance with an irreversible and stochastic Markovian dynamics, is analyzed in order to improve our understanding of critical behavior and phase transitions in nonequilibrium systems. To this end, a lattice model ruled by the competition between two Glauber dynamics acting on interlaced square lattices is proposed. Previous results have shown how the entropy production provides information about irreversibility and criticality. Mean-field approximations and Monte Carlo simulations were used in the analysis. The results obtained here show a continuous phase transition, reflected in the entropy production as a logarithmic divergence of its derivative, which suggests a universality class shared with the irreversible models invariant under the symmetry operations of the Ising model.
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement in a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
Knotts, Thomas A.
2017-01-01
Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
The origin of the criticality in meme popularity distribution on complex networks.
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-03-24
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition-induced criticality model. From direct Monte Carlo simulations and the exact mapping onto the position-dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping onto the PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that this balance is sustained for relatively small innovation rates. Therefore, innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks.
The origin of the criticality in meme popularity distribution on complex networks
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-01-01
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition-induced criticality model. From direct Monte Carlo simulations and the exact mapping onto the position-dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping onto the PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that this balance is sustained for relatively small innovation rates. Therefore, innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks. PMID:27009399
The origin of the criticality in meme popularity distribution on complex networks
NASA Astrophysics Data System (ADS)
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-03-01
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition-induced criticality model. From direct Monte Carlo simulations and the exact mapping onto the position-dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping onto the PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that this balance is sustained for relatively small innovation rates. Therefore, innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks.
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure for the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction. Catalogue identifier: AEMB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 16 482 No. of bytes in distributed program, including test data, etc.: 213 438 Distribution format: tar.gz Programming language: Python Computer: i386, x86-64 Operating system: Linux, Mac OSX, Windows RAM: 2+ Gigabytes Classification: 7 External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/) Nature of problem: Dynamics of open quantum systems. Solution method: Numerical solutions to Lindblad master equation or Monte Carlo wave function method. Restrictions: Problems must meet the criteria for using the master equation in Lindblad form. Running time: A few seconds up to several tens of minutes, depending on size of underlying Hilbert space.
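A minimal usage sketch of the framework described above, assuming QuTiP and its dependencies are installed; the system (a damped cavity mode) and all numerical parameters are illustrative, not taken from the paper.

```python
import numpy as np
from qutip import basis, destroy, mcsolve

N = 10                       # Fock-space truncation
a = destroy(N)               # annihilation operator of a damped cavity mode
H = a.dag() * a              # oscillator Hamiltonian (units of hbar * omega)
psi0 = basis(N, 5)           # initial state with 5 photons
c_ops = [np.sqrt(0.1) * a]   # collapse operator: photon loss at rate 0.1
tlist = np.linspace(0.0, 50.0, 100)

# Monte Carlo wave-function solver; the photon-number expectation value
# is averaged over quantum trajectories.
result = mcsolve(H, psi0, tlist, c_ops, e_ops=[a.dag() * a], ntraj=250)
print("final mean photon number:", result.expect[0][-1])
```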
NASA Astrophysics Data System (ADS)
Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.
2013-10-01
Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time, and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated the program's ability to generate fast and realistic simulated biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that by combining a realistic simulated biological study ([11C]Raclopride here) involving different condition groups, simulation also allows one to assess and optimize the data correction, reconstruction and data processing flow as a whole, specifically for each biological study, which is our ultimate intent.
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NP) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
DPEMC: A Monte Carlo for double diffraction
NASA Astrophysics Data System (ADS)
Boonekamp, M.; Kúcs, T.
2005-05-01
We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw, to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summaryTitle of the program:DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required:<25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur. Phys. J. C 23 (2002) 311]. This program implements some of the more significant ones, enabling the simulation of central particle production through color singlet exchange between interacting protons or antiprotons. Method of solution: The Monte Carlo method is used to simulate all elementary 2→2 and 2→1 processes available in HERWIG. The color singlet exchanges implemented in DPEMC are implemented as functions reweighting the photon flux already present in HERWIG. Restriction on the complexity of the problem: The program relying extensively on HERWIG, the limitations are the same as in [G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour, L. Stanco, Comput. Phys. Comm. 67 (1992) 465; G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. Seymour, B. Webber, JHEP 0101 (2001) 010]. Typical running time: Approximate times on a 800 MHz Pentium III: 5-20 min per 10 000 unweighted events, depending on the process under consideration.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
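The idea behind the Sourcerer method can be illustrated with a toy power iteration on a fission-matrix model: a cheap coarse "deterministic" eigenvector supplies the starting source, cutting the number of inactive cycles needed to converge the source shape. The Python sketch below is purely illustrative; the matrix, the coarsening, and the convergence test are invented stand-ins, not Shift or Denovo code.

```python
import numpy as np

# Toy fission-matrix model: F[i, j] ~ probability that a fission neutron born
# in region j causes a fission in region i. Illustrative numbers only.
n = 50
x = np.arange(n)
F = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)   # short-range coupling

def inactive_cycles(source, tol=1e-8):
    """Power-iterate the fission source; count cycles until the shape settles."""
    s = source / np.linalg.norm(source)
    for cycle in range(1, 100_000):
        s_new = F @ s
        s_new /= np.linalg.norm(s_new)
        if np.linalg.norm(s_new - s) < tol:
            return cycle
        s = s_new

flat = np.ones(n)                         # conventional flat starting source
# "Deterministic" guess: dominant eigenvector of a coarsened operator,
# standing in for the SPN/SN solve that the Sourcerer method performs.
w, v = np.linalg.eigh(F[::5, ::5])
guess = np.repeat(np.abs(v[:, np.argmax(w)]), 5)

print("cycles from flat source:        ", inactive_cycles(flat))
print("cycles from deterministic guess:", inactive_cycles(guess))
```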
Can Condensing Organic Aerosols Lead to Less Cloud Particles?
NASA Astrophysics Data System (ADS)
Gao, C. Y.; Tsigaridis, K.; Bauer, S.
2017-12-01
We examined the impact of condensing organic aerosols on activated cloud number concentration in a new aerosol microphysics box model, MATRIX-VBS. The model includes the volatility-basis set (VBS) framework in an aerosol microphysical scheme, MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), that resolves aerosol mass and number concentrations and aerosol mixing state. Preliminary results show that by including the condensation of organic aerosols, the new model (MATRIX-VBS) produces fewer activated particles than the original model (MATRIX), which treats organic aerosols as non-volatile. Parameters such as aerosol chemical composition, mass and number concentrations, and particle sizes, which affect activated cloud number concentration, are thoroughly evaluated via a suite of Monte Carlo simulations. The Monte Carlo simulations also provide information on which climate-relevant parameters play a critical role in aerosol evolution in the atmosphere. This study also helps simplify the newly developed box model, which will soon be implemented in the global model GISS ModelE as a module.
Benchmark solution for the Spencer-Lewis equation of electron transport theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations, primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, Sn methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.
Monte Carlo study of magnetization reversal in the model of a hard/soft magnetic bilayer
NASA Astrophysics Data System (ADS)
Taaev, T. A.; Khizriev, K. Sh.; Murtazaev, A. K.
2017-06-01
Magnetization reversal in the model of a hard/soft magnetic bilayer under the action of an external magnetic field has been investigated by the Monte Carlo method. Calculations have been performed for three systems: (i) the model without a soft-magnetic layer (hard-magnetic layer only), (ii) the model with a soft-magnetic layer 25 atomic layers thick (predominantly exchange-coupled system), and (iii) the model with a soft-magnetic layer 50 atomic layers thick (weak exchange coupling). The effect of the soft-magnetic phase on the magnetization reversal of the magnetic bilayer and on the formation of a 1D spin spring in the magnetic bilayer has been demonstrated. An inflection detected on the arch of the hysteresis loop only for the system with weak exchange coupling is completely determined by the behavior of the soft layer in the external magnetic field. The critical fields of magnetization reversal decrease with increasing thickness of the soft phase.
Carlacci, Louis; Millard, Charles B; Olson, Mark A
2004-10-01
The X-ray crystal structure of the reaction product of acetylcholinesterase (AChE) with the inhibitor diisopropylphosphorofluoridate (DFP) showed significant structural displacement in a loop segment of residues 287-290. To understand this conformational selection, a Monte Carlo (MC) simulation study was performed of the energy landscape for the loop segment. A computational strategy was applied by using a combined simulated annealing and room temperature Metropolis sampling approach with solvent polarization modeled by a generalized Born (GB) approximation. Results from thermal annealing reveal a landscape topology of broader basin opening and greater distribution of energies for the displaced loop conformation, while the ensemble average of conformations at 298 K favored a shift in populations toward the native by a free-energy difference in good agreement with the estimated experimental value. Residue motions along a reaction profile of loop conformational reorganization are proposed where Arg-289 is critical in determining electrostatic effects of solvent interaction versus Coulombic charging.
Monte Carlo modeling the phase diagram of magnets with the Dzyaloshinskii - Moriya interaction
NASA Astrophysics Data System (ADS)
Belemuk, A. M.; Stishov, S. M.
2017-11-01
We use classical Monte Carlo calculations to model the high-pressure behavior of the phase transition in helical magnets. We vary the values of the exchange interaction constant J and the Dzyaloshinskii-Moriya interaction constant D, which is equivalent to changing spin-spin distances, as occurs in real systems under pressure. The system under study is self-similar at D/J = constant, and its properties are defined by the single variable J/T, where T is temperature. The existence of the first-order phase transition critically depends on the ratio D/J. A variation of J strongly affects the phase transition temperature and the width of the fluctuation region (the "hump"), as follows from the system's self-similarity. The high-pressure behavior of the spin system depends on the evolution of the interaction constants J and D on compression. Our calculations are relevant to the high-pressure phase diagrams of the helical magnets MnSi and Cu2OSeO3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huš, Matej; Urbic, Tomaz, E-mail: tomaz.urbic@fkkt.uni-lj.si; Munaò, Gianmarco
Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
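A hedged sketch of the communication-aggregation idea in Python with mpi4py: rather than sending one message per boundary event, events are buffered per neighbor rank and exchanged in a single message. This is a generic illustration under a 1D domain decomposition, not the paper's implementation or SPPARKS code; the event payloads are invented.

```python
from collections import defaultdict
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Suppose each KMC step produces boundary events destined for neighbour ranks
# in a 1D domain decomposition. Instead of one message per event (naive),
# all events for a neighbour are buffered and sent as a single message.
neighbours = [d for d in (rank - 1, rank + 1) if 0 <= d < size]
events = [(d, ("site_update", site)) for d in neighbours for site in range(3)]

buffers = defaultdict(list)
for dest, payload in events:
    buffers[dest].append(payload)       # aggregate rather than send immediately

reqs = [comm.isend(buffers[d], dest=d, tag=0) for d in neighbours]
for src in neighbours:                  # symmetric neighbourhood exchange
    batch = comm.recv(source=src, tag=0)
    # ...apply the batched boundary updates from `batch` locally...
for r in reqs:
    r.wait()
```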
Dynamical traps in Wang-Landau sampling of continuous systems: Mechanism and solution
NASA Astrophysics Data System (ADS)
Koh, Yang Wei; Sim, Adelene Y. L.; Lee, Hwee Kuan
2015-08-01
We study the mechanism behind dynamical trappings experienced during Wang-Landau sampling of continuous systems reported by several authors. Trapping is caused by the random walker coming close to a local energy extremum, although the mechanism is different from that of the critical slowing-down encountered in conventional molecular dynamics or Monte Carlo simulations. When trapped, the random walker misses an entire stage, or even several stages, of Wang-Landau modification factor reduction, leading to inadequate sampling of the configuration space and a rough density of states, even though the modification factor has been reduced to very small values. Trapping depends on the specific system, the choice of energy bins, and the Monte Carlo step size, making it highly unpredictable. A general, simple, and effective solution is proposed in which the configurations of multiple parallel Wang-Landau trajectories are interswapped to prevent trapping. We also explain why swapping frees the random walker from such traps. The efficacy of the proposed algorithm is demonstrated.
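For reference, a minimal Wang-Landau sketch in Python for a discrete system (a one-dimensional Ising ring), showing the modification-factor (ln f) reduction schedule and flatness check referred to above; the remedy proposed in the paper would run several such walkers in parallel and periodically swap their configurations. All parameters are illustrative.

```python
import math
import random

# Minimal Wang-Landau sketch for a 1D Ising ring (discrete energy levels).
N = 20
spins = [1] * N
E = -N                                    # energy of the all-up configuration
lng = {}                                  # running estimate of ln g(E)
hist = {}                                 # visit histogram for flatness check
lnf = 1.0                                 # modification factor, ln f

while lnf > 1e-4:
    for _ in range(200 * N):
        i = random.randrange(N)
        dE = 2 * spins[i] * (spins[(i - 1) % N] + spins[(i + 1) % N])
        # Accept the flip with probability min(1, g(E) / g(E + dE)).
        if random.random() < math.exp(min(0.0, lng.get(E, 0.0)
                                          - lng.get(E + dE, 0.0))):
            spins[i] *= -1
            E += dE
        lng[E] = lng.get(E, 0.0) + lnf    # update ln g at the current energy
        hist[E] = hist.get(E, 0) + 1
    # Crude flatness criterion: every visited level near the mean visit count.
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        lnf /= 2.0                        # flatness reached: reduce ln f
        hist = {}

print("visited energy levels:", sorted(lng))
```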
A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma
NASA Technical Reports Server (NTRS)
Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.
2004-01-01
A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
NASA Astrophysics Data System (ADS)
Bi, Jiang-lin; Wang, Wei; Li, Qi
2017-07-01
In this paper, the effects of next-nearest-neighbor exchange couplings on the magnetic and thermal properties of the ferrimagnetic mixed-spin (2, 5/2) Ising model on a 3D honeycomb lattice have been investigated by the use of Monte Carlo simulation. In particular, the influences of the exchange couplings (Ja, Jb, Jan) and the single-ion anisotropy (Da) on the phase diagrams, the total magnetization, the sublattice magnetization, the total susceptibility, the internal energy and the specific heat have been discussed in detail. The results clearly show that the system can exhibit critical and compensation behavior when the next-nearest-neighbor exchange coupling is included. A great variety of magnetization curves, such as N-, Q-, P- and L-types, has been discovered, owing to the competition between the exchange coupling and the temperature. Our results are in excellent agreement with other theoretical and experimental works.
Yield modeling of acoustic charge transport transversal filters
NASA Technical Reports Server (NTRS)
Kenney, J. S.; May, G. S.; Hunt, W. D.
1995-01-01
This paper presents a yield model for acoustic charge transport transversal filters. This model differs from previous IC yield models in that it does not assume that individual failures of the nondestructive sensing taps necessarily cause a device failure. A redundancy in the number of taps included in the design is explained. Poisson statistics are used to describe the tap failures, weighted over a uniform defect density distribution. A representative design example is presented. The minimum number of taps needed to realize the filter is calculated, and tap weights for various numbers of redundant taps are calculated. The critical area for device failure is calculated for each level of redundancy. Yield is predicted for a range of defect densities and redundancies. To verify the model, a Monte Carlo simulation is performed on an equivalent circuit model of the device. The results of the yield model are then compared to the Monte Carlo simulation. Better than 95% agreement was obtained for the Poisson model with redundant taps ranging from 30% to 150% over the minimum.
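A hedged numerical sketch of the yield model's structure in Python: tap failures are Poisson with mean proportional to defect density and critical area, a device survives if failures do not exceed its tap redundancy, and the yield is averaged over a uniform defect-density distribution, with a direct Monte Carlo check. All numbers (critical area, tap counts, defect-density range) are invented for illustration and are not the paper's values.

```python
import numpy as np
from scipy.stats import poisson

A_tap = 0.005        # critical area per tap, cm^2 (illustrative value)
n_min = 100          # minimum taps needed to realize the filter
D_max = 10.0         # defects/cm^2; defect density taken uniform on [0, D_max]

def device_yield(n_taps):
    """Yield = P(tap failures <= redundancy), averaged over defect density."""
    D = np.linspace(0.0, D_max, 1001)
    lam = D * A_tap * n_taps                  # Poisson mean failure count
    return np.trapz(poisson.cdf(n_taps - n_min, lam), D) / D_max

for n_taps in (100, 110, 130):
    print(f"{n_taps} taps: predicted yield = {device_yield(n_taps):.3f}")

# Direct Monte Carlo check for the 110-tap (10% redundancy) design.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, D_max, size=200_000)
fails = rng.poisson(D * A_tap * 110)
print("MC yield, 110 taps:", (fails <= 10).mean())
```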
Monte Carlo Study of Magnetic Properties of Mixed Spins in a Fullerene X 30 Y 30-Like Structure
NASA Astrophysics Data System (ADS)
Mhirech, A.; Aouini, S.; Alaoui-Ismaili, A.; Bahmad, L.
2018-03-01
In this work, inspired by fullerene-C60 structures, we study the mixed X_{30}Y_{30} fullerene-like structure and investigate its magnetic properties. In such a structure, the carbon atoms are assumed to be replaced by magnetic atoms having spin moments σ = 1/2 and S = 1. Firstly, we elaborate the ground-state phase diagrams in different physical parameter planes. In a second stage, we investigate the effects of the exchange coupling interactions in the absence or presence of both external magnetic and crystal fields. Using the Monte Carlo method, we carried out a study of the system's magnetic properties and thermal behavior for the ferromagnetic case. It is found that the critical temperature increases with increasing exchange coupling interactions. The coercive magnetic field also increases with increasing exchange coupling interactions, but decreases with increasing reduced temperature.
FLUKA Monte Carlo simulations and benchmark measurements for the LHC beam loss monitors
NASA Astrophysics Data System (ADS)
Sarchiapone, L.; Brugger, M.; Dehning, B.; Kramer, D.; Stockner, M.; Vlachoudis, V.
2007-10-01
One of the crucial elements in terms of machine protection for CERN's Large Hadron Collider (LHC) is its beam loss monitoring (BLM) system. On-line loss measurements must prevent the superconducting magnets from quenching and protect the machine components from damage due to unforeseen critical beam losses. In order to ensure the BLM system's design quality, detailed FLUKA Monte Carlo simulations were performed for the betatron collimation insertion in the final design phase of the LHC. In addition, benchmark measurements were carried out with LHC-type BLMs installed at the CERN-EU high-energy Reference Field facility (CERF). This paper presents results of FLUKA calculations performed for BLMs installed in the collimation region, compares the results of the CERF measurement with FLUKA simulations and evaluates related uncertainties. This, together with the fact that the CERF source spectra at the respective BLM locations are comparable with those at the LHC, allows assessing the sensitivity of the performed LHC design studies.
NASA Astrophysics Data System (ADS)
Laoues, M.; Khelifi, R.; Moussa, A. S.
2015-01-01
Strontium-90 eye applicators are beta-ray emitters with relatively high energy (maximum energy about 2.28 MeV and average energy about 0.9 MeV). These applicators come in different shapes and dimensions and are used for the treatment of eye diseases. Whenever radiation is used in treatment, dosimetry is essential, and knowledge of the exact dose distribution is critical to decision-making about the outcome of the treatment. The main aim of our study is to simulate the dosimetry of the SIA.20 eye applicator with the Monte Carlo GATE 6.1 platform and to compare the calculated results with those measured with EBT2 films. GATE and EBT2 were used to quantify the surface and depth dose rates, the relative dose profile and the dosimetric parameters in accordance with international recommendations. Calculated and measured results are in good agreement, and they are consistent with the ICRU and NCS recommendations.
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insight are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable's uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
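As an illustration of the variance-based sensitivity analysis mentioned above, the Python sketch below estimates first-order indices S_i = Var(E[Y|X_i]) / Var(Y) by binning Monte Carlo samples on each input; the model is a toy stand-in, not the entry, descent, and landing simulation itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x1, x2, x3):
    """Toy stand-in for a simulation output (e.g., landing miss distance)."""
    return x1 ** 2 + 0.5 * x2 + 0.1 * x1 * x3

n = 200_000
X = rng.uniform(-1.0, 1.0, size=(n, 3))
Y = model(X[:, 0], X[:, 1], X[:, 2])

def first_order_index(xi, y, bins=50):
    """S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning on X_i."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(X[:, i], Y):.3f}")
```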
Dynamic Conformations of Nucleosome Arrays in Solution from Small-Angle X-ray Scattering
NASA Astrophysics Data System (ADS)
Howell, Steven C.
Chromatin conformation and dynamics remain unsolved despite the critical role of chromatin in fundamental genetic functions such as transcription, replication, and repair. At the molecular level, chromatin can be viewed as a linear array of nucleosomes, each consisting of 147 base pairs (bp) of double-stranded DNA (dsDNA) wrapped around a protein core and connected by 10 to 90 bp of linker dsDNA. Using small-angle X-ray scattering (SAXS), we investigated how the conformations of model nucleosome arrays in solution are modulated by ionic conditions as well as by linker histone proteins. To facilitate ensemble modeling of these SAXS measurements, we developed a simulation method that treats coarse-grained DNA as a Markov chain, then explores possible DNA conformations using Metropolis Monte Carlo (MC) sampling. This algorithm extends the functionality of SASSIE, a program used to model intrinsically disordered biological molecules, adding to the previous methods for simulating proteins, carbohydrates, and single-stranded DNA. Our SAXS measurements of various nucleosome arrays, together with the MC-generated models, provide valuable solution structure information identifying specific differences from the structure of crystallized arrays.
WaferOptics® mass volume production and reliability
NASA Astrophysics Data System (ADS)
Wolterink, E.; Demeyer, K.
2010-05-01
The Anteryon WaferOptics® technology platform combines imaging optics designs, materials and metrologies with wafer-level Semicon & MEMS production methods. WaferOptics® first required completely new system engineering. This system closes the loop between application requirement specifications, Anteryon product specifications, Monte Carlo analysis, process windows, process controls and supply reject criteria. For the Anteryon product Integrated Lens Stack (ILS), new design rules, test methods and control systems were assessed, implemented, validated and released to customers for mass production. This includes novel reflowable materials, the mastering process, replication, bonding, dicing, assembly, metrology, reliability programs and quality assurance systems. Many Design of Experiments studies were performed to assess correlations between optical performance parameters and machine settings of all process steps. Lens metrologies such as FFL, BFL and MTF were adapted for wafer-level production, and wafer mapping was introduced for yield management. Test methods for screening and validating suitable optical materials were designed. Critical failure modes such as delamination and popcorning were assessed and modeled with FEM. Anteryon successfully managed to integrate the different technologies, starting from single prototypes and scaling to high-yield mass volume production. These parallel efforts resulted in a steep yield increase from 30% to over 90% in an 8-month period.
Pycnonuclear reaction rates for binary ionic mixtures
NASA Technical Reports Server (NTRS)
Ichimaru, S.; Ogata, S.; Van Horn, H. M.
1992-01-01
Through a combination of compositional scaling arguments and examinations of Monte Carlo simulation results for the interparticle separations in binary-ionic mixture (BIM) solids, we have derived parameterized expressions for the BIM pycnonuclear rates as generalizations of those in one-component solids obtained previously by Salpeter and Van Horn and by Ogata et al. We have thereby discovered a catalyzing effect of the heavier elements, which enhances the rates of reactions among the lighter elements when the charge ratio exceeds a critical value of approximately 2.3.
Kang, Yibin; Pan, Qiuhui; Wang, Xueting; He, Mingfeng
2016-01-01
In this paper, we investigate the five-species Jungle game in the framework of evolutionary game theory. We address the coexistence and biodiversity of the system using mean-field theory and Monte Carlo simulations. We find that the inhibition exerted by the bottom-level species on the top-level species can be a critical factor affecting biodiversity, no matter how the population is distributed, whether homogeneously well mixed or structured. We also find that predators' different preferences for food affect species' coexistence. PMID:27332995
Integration of OpenMC methods into MAMMOTH and Serpent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie; DeHart, Mark; Tumulak, Aaron
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is highly toxic and flammable, as well as corrosive; consequently, there was a conflict as to whether a relief device should be used. Data were not available on the impact of relief devices on release probabilities, or on the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and statistically through an analysis of variance. The analysis determined which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
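The combination of event trees with Monte Carlo sampling over unknown parameters can be sketched roughly as follows; all ranges, rates, and branch structures here are invented placeholders, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Unknown inputs sampled over assumed ranges (all values hypothetical)
p_release_relief   = rng.uniform(0.01, 0.10, n)  # release prob., tank WITH relief device
p_release_norelief = rng.uniform(0.02, 0.20, n)  # release prob., tank WITHOUT relief device
p_ignite           = rng.uniform(0.05, 0.50, n)  # prob. that a release ignites

p_accident = 1e-4  # accidents per shipment (placeholder)

# Event-tree outcomes: accident -> release -> (fire | toxic exposure)
risk_fire_relief  = p_accident * p_release_relief * p_ignite
risk_toxic_relief = p_accident * p_release_relief * (1 - p_ignite)
risk_fire_norel   = p_accident * p_release_norelief * p_ignite

# Crude sensitivity measure: correlation of each uncertain input with fire risk
for name, x in [("release prob (relief)", p_release_relief),
                ("ignition prob", p_ignite)]:
    r = np.corrcoef(x, risk_fire_relief)[0, 1]
    print(f"{name:25s} corr with fire risk: {r:+.2f}")
print("mean fire risk, relief vs none:",
      risk_fire_relief.mean(), risk_fire_norel.mean())
```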
Elegent—An elastic event generator
NASA Astrophysics Data System (ADS)
Kašpar, J.
2014-03-01
Although elastic scattering of nucleons may look like a simple process, it presents a long-lasting challenge for theory. Because a hard energy scale is missing, perturbative QCD cannot be applied; instead, many phenomenological/theoretical models have emerged. In this paper we present a unified implementation of some of the most prominent models in a C++ library, extended to account for effects of the electromagnetic interaction. The library is complemented with a number of utilities, for instance programs to sample many distributions of interest in four-momentum transfer squared t, impact parameter b, and collision energy √s. These distributions at ISR, Spp̄S, RHIC, Tevatron, and LHC energies are available for download from the project web site, both as ROOT files and as PDF figures providing comparisons among the models. The package also includes a tool for Monte Carlo generation of elastic scattering events, which can easily be embedded in any other program framework. Catalogue identifier: AERT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 10551 No. of bytes in distributed program, including test data, etc.: 126316 Distribution format: tar.gz Programming language: C++. Computer: Any in principle, tested on x86-64 architecture. Operating system: Any in principle, tested on GNU/Linux. RAM: Strongly depends on the task, but typically below 20 MB Classification: 11.6. External routines: ROOT, HepMC Nature of problem: Monte Carlo simulation of elastic nucleon-nucleon collisions Solution method: Implementation of some of the most prominent phenomenological/theoretical models, providing a cumulative distribution function that is used for random event generation. Running time: Strongly depends on the task, but typically below 1 h.
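The solution method named in the program summary, random event generation from a cumulative distribution function, can be illustrated with a toy diffraction-cone model, dσ/d|t| ∝ exp(-B|t|), whose CDF inverts analytically. This is a sketch with placeholder values for B and the |t| range, not code from the Elegent library:

```python
import numpy as np

rng = np.random.default_rng(2)

B = 20.0      # diffractive slope in GeV^-2 (placeholder value)
t_max = 2.5   # generate |t| in [0, t_max] GeV^2

# Toy model: dsigma/d|t| ~ exp(-B|t|).  Its normalized CDF on [0, t_max] is
# F(t) = (1 - exp(-B t)) / (1 - exp(-B t_max)), inverted analytically below.
def sample_t(n):
    u = rng.random(n)
    return -np.log(1.0 - u * (1.0 - np.exp(-B * t_max))) / B

t = sample_t(1_000_000)
# check: for B*t_max >> 1 the sample mean should approach 1/B
print("mean |t| =", t.mean(), " vs 1/B =", 1.0 / B)
```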
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in two frameworks: one in which the interaction vertex between the Higgs boson and the Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which electroweak symmetry breaking is manifestly linearly realized and the resulting operators arise at dimension six in an effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher-order pieces. Our results are implemented in the NLO Monte Carlo program MCFM and interfaced to shower Monte Carlos through the POWHEG BOX framework.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios involving complex geometry conditions. Another EGSnrc-based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
Air shower simulation for background estimation in muon tomography of volcanoes
NASA Astrophysics Data System (ADS)
Béné, S.; Boivin, P.; Busato, E.; Cârloganu, C.; Combaret, C.; Dupieux, P.; Fehr, F.; Gay, P.; Labazuy, P.; Laktineh, I.; Lénat, J.-F.; Miallier, D.; Mirabito, L.; Niess, V.; Portal, A.; Vulpescu, B.
2013-01-01
One of the main sources of background for the radiography of volcanoes using atmospheric muons comes from the accidental coincidences produced in the muon telescopes by charged particles belonging to the air shower generated by the primary cosmic ray. In order to quantify this background effect, Monte Carlo simulations of the showers and of the detector are developed by the TOMUVOL collaboration. As a first step, the atmospheric showers were simulated and investigated using two Monte Carlo packages, CORSIKA and GEANT4. We compared the results provided by the two programs for the muonic component of vertical proton-induced showers at three energies: 1, 10 and 100 TeV. We found that the spatial distribution and energy spectrum of the muons were in good agreement for the two codes.
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus. PMID:18466597
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which the Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project, and a roadmap to its successful completion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamberto, M; Chen, H; Huang, K
2015-06-15
Purpose: To characterize the Cyberknife (CK) robotic system's dosimetric accuracy in delivering MultiPlan's Monte Carlo dose calculations, using EBT3 radiochromic film inserted in a thorax phantom. Methods: The CIRS XSight Lung Tracking (XLT) phantom (model 10823) was used in this study, with custom-cut EBT3 film inserted in the horizontal (coronal) plane inside the lung-tissue-equivalent phantom. CK MultiPlan v3.5.3 with the Monte Carlo dose calculation algorithm (1.5 mm grid size, 2% statistical uncertainty) was used to calculate a clinical plan for a 25-mm lung tumor lesion, as contoured by the physician, which was then imported onto the XLT phantom CT. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0-800 cGy. The test films (n=3) were irradiated using 325 cGy to the prescription point. Films were scanned 48 hours after irradiation using an Epson V700 scanner (48-bit color scan, extracted red channel only, 96 dpi). Percent absolute dose and relative isodose distribution differences relative to the planned dose were quantified using an in-house QA software program. MultiPlan's Monte Carlo dose calculation was validated using radiochromic film (EBT3) dosimetry and gamma index criteria of 3%/3mm and 2%/2mm for absolute dose and relative isodose distribution comparisons. Results: EBT3 film measurements of the patient plans calculated with Monte Carlo in MultiPlan resulted in an absolute dose passing rate of 99.6±0.4% for the gamma index at 3%/3mm with 10% dose threshold, and 95.6±4.4% at 2%/2mm with 10% threshold. The measured central-axis absolute dose was within 1.2% (329.0±2.5 cGy) of the Monte Carlo planned dose (325.0±6.5 cGy) for the same point. Conclusion: MultiPlan's Monte Carlo dose calculation was validated using EBT3 film absolute dosimetry for delivery in a heterogeneous thorax phantom.
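The gamma-index comparison used above can be illustrated with a minimal one-dimensional global gamma implementation; this is a sketch on toy dose profiles, not the in-house QA software mentioned in the abstract:

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma index; dose_tol is fractional, dist_tol in mm.
    x holds the positions (mm) of both profiles on a common grid."""
    d_norm = dose_tol * dose_ref.max()          # global dose criterion
    gamma = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - dr) / d_norm          # dose-difference term
        dx = (x - xr) / dist_tol                # distance-to-agreement term
        gamma[i] = np.sqrt(dx**2 + dd**2).min()
    return gamma

# toy profiles: a slightly shifted, rescaled Gaussian "measurement" vs reference
x = np.linspace(-30, 30, 241)                   # mm
ref = 300 * np.exp(-x**2 / (2 * 8.0**2))        # cGy
meas = 300 * np.exp(-(x - 0.8)**2 / (2 * 8.0**2)) * 1.01
g = gamma_1d(meas, ref, x)
print("pass rate (gamma<=1):", 100 * np.mean(g <= 1), "%")
```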
SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies
ERIC Educational Resources Information Center
Yurdugul, Halil
2009-01-01
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
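The core simulation loop such a tool performs can be sketched as follows: simulate item scores under an assumed measurement model, compute coefficient alpha per replicate, and read a confidence interval off the Monte Carlo distribution. The sample sizes, loadings, and replicate counts below are placeholders, not SIMREL's defaults:

```python
import numpy as np

rng = np.random.default_rng(3)

def cronbach_alpha(scores):
    """Coefficient alpha for scores of shape (n_subjects, k_items)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulate k congeneric items: item = loading * trait + noise (toy model)
def simulate(n=200, k=10, loading=0.7):
    trait = rng.normal(size=(n, 1))
    return loading * trait + np.sqrt(1 - loading**2) * rng.normal(size=(n, k))

alphas = np.array([cronbach_alpha(simulate()) for _ in range(2000)])
lo, hi = np.percentile(alphas, [2.5, 97.5])
print(f"alpha = {alphas.mean():.3f}, 95% MC interval = [{lo:.3f}, {hi:.3f}]")
```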
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees... programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
Critical conditions of polymer adsorption and chromatography on non-porous substrates.
Cimino, Richard T; Rasmussen, Christopher J; Brun, Yefim; Neimark, Alexander V
2016-07-15
We present a novel thermodynamic theory and Monte Carlo simulation model for adsorption of macromolecules to solid surfaces that is applied for calculating the chain partition during separation on chromatographic columns packed with non-porous particles. We show that similarly to polymer separation on porous substrates, it is possible to attain three chromatographic modes: size exclusion chromatography at very weak or no adsorption, liquid adsorption chromatography when adsorption effects prevail, and liquid chromatography at critical conditions that occurs at the critical point of adsorption. The main attention is paid to the analysis of the critical conditions, at which the retention is chain length independent. The theoretical results are verified with specially designed experiments on isocratic separation of linear polystyrenes on a column packed with non-porous particles at various solvent compositions. Without invoking any adjustable parameters related to the column and particle geometry, we describe quantitatively the observed transition between the size exclusion and adsorption separation regimes upon the variation of solvent composition, with the intermediate mode occurring at a well-defined critical point of adsorption. A relationship is established between the experimental solvent composition and the effective adsorption potential used in model simulations. Copyright © 2016 Elsevier Inc. All rights reserved.
Impact of nuclear data uncertainty on safety calculations for spent nuclear fuel geological disposal
NASA Astrophysics Data System (ADS)
Herrero, J. J.; Rochman, D.; Leray, O.; Vasiliev, A.; Pecchia, M.; Ferroukhi, H.; Caruso, S.
2017-09-01
In the design of a spent nuclear fuel disposal system, one necessary condition is to show that the configuration remains subcritical at the time of emplacement and also during long periods covering up to 1,000,000 years. In the context of criticality safety applying burn-up credit, k-eff eigenvalue calculations are affected by nuclear data uncertainty mainly in the burn-up calculations simulating reactor operation and in the criticality calculation for the disposal canister loaded with the spent fuel assemblies. The impact of nuclear data uncertainty should be included in the k-eff estimation to ensure safety. Estimates of the uncertainty in the discharge compositions from the CASMO5 burn-up calculation phase are employed in the final MCNP6 criticality computations for the intact canister configuration; in between, SERPENT2 is employed to obtain the spent fuel composition along the decay periods. In this paper, nuclear data uncertainty was propagated by Monte Carlo sampling in the burn-up, decay, and criticality calculation phases, and representative values for fuel operated in a Swiss PWR plant are presented as an estimate of its impact.
Spin chirality and polarised neutron scattering
NASA Astrophysics Data System (ADS)
Plakhty, V. P.; Maleyev, S. V.; Kulda, J.; Visser, E. D.; Wosnitza, J.; Moskvin, E. V.; Brückel, Th.; Kremer, R. K.
2001-03-01
Possibilities of polarised neutrons in studies of chiral criticality are discussed. The critical exponents β_C of the average chirality below T_N, as well as φ_C = β_C + γ_C and, therefore, γ_C of the chiral susceptibility above T_N, are determined for an XY triangular lattice antiferromagnet (TLA), CsMnBr3: β_C = 0.44(2), γ_C = 0.84(7). The critical behaviour of the chirality that orders at T_N with a relative precision of 5×10^-4 proves that the phase transition belongs to a new chiral universality class. For the TLA CsNiCl3 (S=1) we found in the XY region (B = 3 T) φ_C = 1.24(7), in agreement with the Monte Carlo value φ_C = 1.22(6) for the chiral universality class. In the easy-axis region at B = 1 T, φ_C = 0.54(4), and the Haldane excitations are observed in the polarisation-dependent inelastic cross section above T_N. The helimagnet holmium exhibits a different chiral criticality, with φ_C = 1.56(5), essentially higher than for TLAs.
SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venencia, C; Garrigo, E; Cardenas, J
2014-06-01
Purpose: The purpose of this work was to quantify the dosimetric impact of using the Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were created using 9 fields with iPlan v4.5 (BrainLAB) in dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40Gy in 5 fractions, every other day. Dose calculation is done by pencil beam (2mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95%=40Gy and D98%>39.2Gy; Rectum V20Gy<50%, V32Gy<20%, V36Gy<10% and V40Gy<5%; Bladder V20Gy<40% and V40Gy<10%; femoral heads V16Gy<5%; penile bulb V25Gy<3cc; urethra and the overlap region between PTV and PRV Rectum Dmax<42Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences in PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, failing the D98% criterion (>39.2Gy), and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads, and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV Rectum showed dose increases in all plans: the average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. Dose increases in the critical urethra region and in the overlap between PTV and PRV Rectum could have clinical consequences, which need to be studied. The use of the Monte Carlo dose calculation algorithm is limited because inverse planning dose optimization uses only pencil beam.
Critical Thinking and Disposition Toward Critical Thinking Among Physical Therapy Students.
Domenech, Manuel A; Watkins, Phillip
2015-01-01
Students who enter a physical therapist (PT) entry-level program with weak critical thinking skills may not be prepared to benefit from the educational training program or successfully engage in the future as a competent healthcare provider. Therefore, assessing PT students' entry-level critical thinking skills and/or disposition toward critical thinking may be beneficial to identifying students with poor, fair, or good critical thinking ability as one of the criteria used in the admissions process into a professional program. First-year students (n=71) from the Doctor of Physical Therapy (DPT) program at Texas Tech University Health Sciences Center completed the California Critical Thinking Skills Test (CCTST), the California Critical Thinking Dispositions Inventory (CCTDI), and demographic survey during orientation to the DPT program. Three students were lost from the CCTST (n=68), and none lost from the CCTDI (n=71). Analysis indicated that the majority of students had a positive disposition toward critical thinking, yet the overall CCTST suggested that these students were somewhat below the national average. Also, individuals taking math and science prerequisites at the community-college level tended to have lower overall CCTST scores. The entering DPT class demonstrated moderate or middle range scores in critical thinking and disposition toward critical thinking. This result does not indicate, but might suggest, the potential for learning challenges. Assessing critical thinking skills as part of the admissions process may prove advantageous.
Instability of the Present LEO Satellite Populations
NASA Technical Reports Server (NTRS)
Liou, Jer-Chyi; Johnson, Nicholas L.
2006-01-01
Several studies conducted during 1991-2001 demonstrated, with some assumed launch rates, the future unintended growth potential of the Earth satellite population, resulting from random, accidental collisions among resident space objects. In some low Earth orbit (LEO) altitude regimes where the number density of satellites is above a critical spatial density, the production rate of new breakup debris due to collisions would exceed the loss of objects due to orbital decay. A new study has been conducted in the Orbital Debris Program Office at the NASA Lyndon B. Johnson Space Center, using higher fidelity models to evaluate the current debris environment. The study assumed no satellites were launched after December 2005. A total of 150 Monte Carlo runs were carried out and analyzed. Each Monte Carlo run simulated the current debris environment and projected it 200 years into the future. The results indicate that the LEO debris environment has reached a point such that even if no further space launches were conducted, the Earth satellite population would remain relatively constant for only the next 50 years or so. Beyond that, the debris population would begin to increase noticeably, due to the production of collisional debris. Detailed analysis shows that this growth is primarily driven by high collision activities around 900 to 1000 km altitude - the region which has a very high concentration of debris at present. In reality, the satellite population growth in LEO will undoubtedly be worse than this study indicates, since spacecraft and their orbital stages will continue to be launched into space. Postmission disposal of vehicles (e.g., limiting postmission orbital lifetimes to less than 25 years) will help, but will be insufficient to constrain the Earth satellite population. To preserve better the near-Earth environment for future space activities, it might be necessary to remove existing large and massive objects from regions where high collision activities are expected.
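The feedback mechanism described above, collision production competing with orbital decay, can be caricatured with a toy birth-death Monte Carlo; every constant below is an illustrative placeholder, not a value from the NASA study:

```python
import numpy as np

rng = np.random.default_rng(4)

def project(n0=10_000, years=200, runs=150,
            k_coll=4e-9,       # collision rate constant per object pair per year (toy)
            frags=100,         # new trackable fragments per collision (toy)
            decay=0.002):      # annual fraction lost to orbital decay (toy)
    hist = np.empty((runs, years + 1))
    for r in range(runs):
        n = float(n0)
        hist[r, 0] = n
        for t in range(1, years + 1):
            n_coll = rng.poisson(k_coll * n * (n - 1) / 2)  # random collisions
            n += frags * n_coll - decay * n                 # fragment gains vs decay losses
            hist[r, t] = n
    return hist

h = project()
print("median population, year 50 :", np.median(h[:, 50]))
print("median population, year 200:", np.median(h[:, 200]))
```

Because the collision source term grows quadratically in the population while decay removes objects only linearly, there is an unstable critical population above which the simulated environment grows without further launches, which is the qualitative point of the study.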
Methodology For Evaluation Of Technology Impacts In Space Electric Power Systems
NASA Technical Reports Server (NTRS)
Holda, Julie
2004-01-01
The Analysis and Management branch of the Power and Propulsion Office at NASA Glenn Research Center is responsible for performing complex analyses of the space power and in-space propulsion products developed by GRC. This work quantifies the benefits of the advanced technologies to support ongoing advocacy efforts. The Power and Propulsion Office is committed to understanding how advancements in space technologies could benefit future NASA missions, and it supports many diverse projects and missions throughout NASA as well as industry and academia. The area of work we are concentrating on is space technology investment strategies. Our goal is to develop a Monte Carlo-based tool to investigate technology impacts in space electric power systems. The framework is being developed at this stage, and will be used to set up a computer simulation of a space electric power system (EPS). The outcome is expected to be a probabilistic assessment of critical technologies and potential development issues. We are developing methods for integrating existing spreadsheet-based tools into the simulation tool. Also, work is being done on defining interface protocols to enable rapid integration of future tools. The first task was to evaluate Monte Carlo-based simulation programs for statistical modeling of the EPS model: I learned and evaluated Palisade's @Risk and Risk Optimizer software and utilized their capabilities for the EPS model. I also looked at similar software packages (JMP, SPSS, Crystal Ball, VenSim, Analytica) available from other suppliers and evaluated them. The second task was to develop the framework for the tool, in which we had to define technology characteristics using weighting factors and probability distributions; we also had to define the simulation space and add hard and soft constraints to the model. The third task is to incorporate preliminary cost factors into the model. A final task is developing a cross-platform solution for this framework.
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Impurities near an antiferromagnetic-singlet quantum critical point
Mendes-Santos, T.; Costa, N. C.; Batrouni, G.; ...
2017-02-15
Heavy-fermion systems and other strongly correlated electron materials often exhibit a competition between antiferromagnetic (AF) and singlet ground states. We examine the effect of impurities in the vicinity of such an AF-singlet quantum critical point (QCP), through an appropriately defined “impurity susceptibility” χimp, using exact quantum Monte Carlo simulations. Our key finding is a connection within a single calculational framework between AF domains induced on the singlet side of the transition and the behavior of the nuclear magnetic resonance (NMR) relaxation rate 1/T1. Furthermore, we show that local NMR measurements provide a diagnostic for the location of the QCP, which agrees remarkably well with the vanishing of the AF order parameter and large values of χimp.
Full Core TREAT Kinetics Demonstration Using Rattlesnake/BISON Coupling Within MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; DeHart, Mark D.; Gleicher, Frederick N.
2015-08-01
This report summarizes key aspects of research in evaluation of modeling needs for TREAT transient simulation. Using a measured TREAT critical measurement and a transient for a small, experimentally simplified core, Rattlesnake and MAMMOTH simulations are performed building from simple infinite media to a full core model. Cross section processing methods are evaluated, various homogenization approaches are assessed, and the neutronic behavior of the core is studied to determine key modeling aspects. The simulation of the minimum critical core with the diffusion solver shows very good agreement with the reference Monte Carlo simulation and the experiment. The full core transient simulation with thermal feedback shows a significantly lower power peak compared to the documented experimental measurement, which is not unexpected in the early stages of model development.
Neutron flux and power in RTP core-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis
The PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the reactor parameter calculations for RTP, focusing on the application of the developed 3D reactor model for criticality calculation and on analysis of the power and neutron flux distributions of the TRIGA core. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core, with literally no physical approximation. The consistency and accuracy of the developed RTP MCNP model were established by comparing calculations to the available experimental results and to TRIGLAV code calculations.
Large decrease of fluctuations for supercooled water in hydrophobic nanoconfinement.
Strekalova, Elena G; Mazza, Marco G; Stanley, H Eugene; Franzese, Giancarlo
2011-04-08
Using Monte Carlo simulations, we study a coarse-grained model of a water layer confined in a fixed disordered matrix of hydrophobic nanoparticles at different particle concentrations c. For c=0, we find a first-order liquid-liquid phase transition (LLPT) ending in one critical point at low pressure P. For c>0, our simulations are consistent with a LLPT line ending in two critical points at low and high P. For c=25%, at high P and low temperature, we find a dramatic decrease of compressibility, thermal expansion coefficient, and specific heat. Surprisingly, the effect is present also for c as low as 2.4%. We conclude that even a small presence of hydrophobic nanoparticles can drastically suppress thermodynamic fluctuations, making the detection of the LLPT more difficult.
NASA Technical Reports Server (NTRS)
Thompson, Anne M.; Stewart, Richard W.
1991-01-01
Random photochemical reaction rates are employed in a 1D photochemical model to examine uncertainties in tropospheric concentrations and thereby determine critical kinetic processes and significant correlations. Monte Carlo computations are used to simulate different chemical environments and their related imprecisions. The most critical processes are the primary photodissociation of O3 (which initiates ozone destruction) and NO2 (which initiates ozone formation), and the OH/methane reaction is significant. Several correlations and anticorrelations between species are discussed, and the ozone/transient OH correlation is examined in detail. One important result of the modeling is that estimates of global OH are generally about 25 percent uncertain, limiting the precision of photochemical models. Techniques for reducing the imprecision are discussed which emphasize the use of species and radical species measurements.
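The Monte Carlo uncertainty propagation described above can be sketched with a toy steady-state OH budget: sample uncertain rates lognormally and examine the spread of the result. The rate values, uncertainty factors, and fixed concentrations below are rough placeholders, not the study's 1D-model inputs:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

def lognorm(median, factor):
    """Sample a rate with a given 1-sigma multiplicative uncertainty factor."""
    return median * np.exp(rng.normal(0, np.log(factor), n))

# Toy steady-state OH budget (all numbers illustrative; cm^3-molecule-s units)
P_OH  = lognorm(1.0e6, 1.4)    # OH production term, dominated by O3 photolysis
k_ch4 = lognorm(6.4e-15, 1.2)  # OH + CH4 rate constant near 298 K
k_co  = lognorm(2.4e-13, 1.2)  # OH + CO rate constant
CH4, CO = 4.3e13, 2.5e12       # fixed concentrations (toy)

# steady state: production balanced by loss to CH4 and CO
OH = P_OH / (k_ch4 * CH4 + k_co * CO)
rel = OH.std() / OH.mean()
print(f"mean OH = {OH.mean():.2e} cm^-3, relative uncertainty = {rel:.0%}")
```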
Solid-propellant rocket motor ballistic performance variation analyses
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1975-01-01
Results are presented of research aimed at improving the assessment of off-nominal internal ballistic performance, including tailoff and thrust imbalance, of two large solid rocket motors (SRMs) firing in parallel. Previous analyses using the Monte Carlo technique were refined to permit evaluation of the effects of radial and circumferential propellant temperature gradients, and sample evaluations of these effects are presented. A separate theoretical investigation of the effect of strain rate on propellant burning rate indicates that thermoelastic coupling may cause substantial variations in burning rate during highly transient operating conditions. The Monte Carlo approach was also modified to permit evaluation of the performance effects of variation in the characteristics between lots of propellants and other materials, which allows the variabilities for the total SRM population to be determined. A sample case shows, however, that the effect of these between-lot variations on thrust imbalances within pairs of SRMs is minor in comparison to the effect of the within-lot variations. The revised Monte Carlo and design analysis computer programs are presented, along with instructions (including format requirements) for preparation of input data and illustrative examples.
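The finding that between-lot variation largely cancels in the pair imbalance can be demonstrated in a few lines of Monte Carlo; the variability magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n_pairs = 10_000

# Burning-rate variability as fractions of nominal (placeholder magnitudes)
sigma_lot    = 0.010   # between-lot variation (shared by both motors of a pair)
sigma_within = 0.005   # within-lot variation (independent per motor)

lot = rng.normal(0, sigma_lot, n_pairs)                # common lot effect
m1 = 1 + lot + rng.normal(0, sigma_within, n_pairs)    # motor 1 rate factor
m2 = 1 + lot + rng.normal(0, sigma_within, n_pairs)    # motor 2 rate factor

# Thrust roughly tracks burning rate, so the pair imbalance is driven only
# by the within-lot part: the shared lot effect cancels in m1 - m2.
imbalance = np.abs(m1 - m2)
print("mean |imbalance|               :", imbalance.mean())
print("theory sqrt(2/pi)*sqrt(2)*s_w  :",
      np.sqrt(2 / np.pi) * np.sqrt(2) * sigma_within)
```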
A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers
NASA Astrophysics Data System (ADS)
Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.
Multi-objective optimization is an area of decision making concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and the energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and the energy gap) for a 6-block organic polymer. Our objective functions were constructed from high-throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high-throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
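Independent of the DFT-derived objectives, the core of any such study is a nondominated (Pareto) filter over sampled candidates. Below is a minimal sketch with invented stand-in objectives for a 6-block composition vector, not the authors' methods:

```python
import numpy as np

rng = np.random.default_rng(7)

def pareto_mask(obj):
    """obj: (n, m) objectives to MAXIMIZE; True where a row is nondominated."""
    n = len(obj)
    mask = np.ones(n, dtype=bool)
    for j in range(n):
        # a dominator is at least as good in all objectives, better in one
        dominators = np.all(obj >= obj[j], axis=1) & np.any(obj > obj[j], axis=1)
        if dominators.any():
            mask[j] = False
    return mask

# Toy stand-ins for two conflicting objectives (dielectric constant vs. gap):
x = rng.random((2000, 6))             # random 6-block "compositions"
eps = x.mean(axis=1)                  # pretend dielectric constant rises with x
gap = 1.0 - (x**2).mean(axis=1)       # pretend band gap falls as x grows
obj = np.column_stack([eps, gap])

front = obj[pareto_mask(obj)]
print(f"{len(front)} nondominated candidates out of {len(obj)}")
```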
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1988-03-01
This document provides a complete listing of the FORTRAN program SCINFUL, a program designed to provide the calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The incident design neutron energy range is 0.1 to 80 MeV. Preparation of input to the program is discussed, as are important features of the output. Also included is a FORTRAN listing of a subsidiary, user-interactive program applicable to the output of SCINFUL. This program, named SCINSPEC, reformats the output of SCINFUL into a standard spectrum form involving either equal light-unit or equal proton-energy intervals. Examples of input to this program and corresponding output are given.
Aoun, Bachir
2016-05-05
A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast, and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (python, cython, C and C++ when performance is needed), and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.
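For contrast with fullrmc's smart-move approach, the traditional RMC scheme it generalizes can be sketched in a few lines: randomly displace one atom and accept the move by a Metropolis-like rule on the chi-squared misfit to target data. The box, histogram, and tolerance below are toys, and the "experimental" histogram is a random stand-in, not real data:

```python
import numpy as np

rng = np.random.default_rng(8)

L, n = 10.0, 64                       # box size and atom count (toy)
pos = rng.random((n, 3)) * L
bins = np.linspace(0.5, 5.0, 46)

def pdf_hist(p):
    """Histogram of pair distances with the minimum-image convention."""
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, 1)]
    return np.histogram(r, bins)[0].astype(float)

target = pdf_hist(rng.random((n, 3)) * L)   # stand-in "experimental" histogram
sigma2 = 25.0                               # chi^2 tolerance (toy)

chi2 = ((pdf_hist(pos) - target)**2).sum() / sigma2
for step in range(5000):
    i = rng.integers(n)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0, 0.3, 3)) % L   # random displacement
    chi2_new = ((pdf_hist(trial) - target)**2).sum() / sigma2
    # accept improvements always, worsenings with a Metropolis-like rule
    if chi2_new < chi2 or rng.random() < np.exp(-(chi2_new - chi2) / 2):
        pos, chi2 = trial, chi2_new

print("final chi^2:", chi2)
```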
MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON
Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, along with the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as establishing the accuracy of our method in a deforming anatomy. This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.
NASA Astrophysics Data System (ADS)
Andreou, M.; Lagopati, N.; Lyra, M.
2011-09-01
Optimum treatment planning for patients suffering from painful skeletal metastases requires accurate calculation of the absorbed dose in metastatic lesions and in critical organs, such as red marrow. Delivering high doses to tumor cells while limiting the radiation dose to normal tissue is the key to successful palliation treatment. The aim of this study is to compare the dosimetric calculations obtained by Monte Carlo (MC) simulation and by the MIRDOSE model in therapeutic schemes for skeletal metastatic lesions with Rhenium-186 (Sn)-HEDP and Samarium-153-EDTMP. A bolus injection of 1295 MBq (35 mCi) Re-186-HEDP was infused in 11 patients with multiple skeletal metastases. The administered dose for the 8 patients who received Sm-153 was 1 mCi/kg. Planar scintigraphic images for the two groups of patients were obtained 24 h, 48 h, and 72 h post injection with an Elscint Apex SPX gamma camera. The images were processed using ROI quantitative methods to determine residence times and radionuclide uptakes. Dosimetric calculations were performed from the patient-specific scintigraphic data with the MIRDOSE3 code of MIRD. Also, MCNPX was employed, simulating the distribution of the radioisotope in the ROI and calculating the absorbed doses in the metastatic lesion and in critical organs. Summarizing, there is good agreement between the results derived from the two pathways, the patient-specific and the mathematical, with a deviation of less than 9% for planar scintigraphic data compared to MC, for both radiopharmaceuticals.
Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
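Of the metrics listed, the Geweke diagnostic is easy to state compactly: compare the mean of an early window of the tally history against a late window. The sketch below uses a naive variance estimate (ignoring autocorrelation, unlike the full spectral-density version) and synthetic tally traces, not KENO output:

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke convergence diagnostic: compare means of the early and late
    parts of a tally/chain history; |z| >~ 2 suggests nonconvergence.
    Variances are estimated naively, i.e. autocorrelation is ignored."""
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                           + b.var(ddof=1) / len(b))

rng = np.random.default_rng(9)
stationary = rng.normal(1.0, 0.01, 5000)              # converged k-eff-like tally
drifting = stationary + np.linspace(0.05, 0.0, 5000)  # unconverged source trace
print("z (stationary):", geweke_z(stationary))
print("z (drifting)  :", geweke_z(drifting))
```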
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time-dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. RENEW gives closer estimates of performance than steady-state average calculations, since it uses a time-dependent approach and depicts more factors affecting ORU failure and repair. RENEW gives both average and time-dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
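The Monte Carlo core of such a simulation can be sketched as repeated sampling of failure and repair times over the mission period; the MTBF, logistics delay, and mission length below are placeholders, not Space Station Freedom data:

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_failures(mtbf=8760.0, repair_delay=720.0,
                      mission_hours=10 * 8760.0, runs=5000):
    """Count ORU failures per run: exponential times-to-failure plus a fixed
    logistics delay before the replacement unit is operating again (toy)."""
    counts = np.empty(runs, dtype=int)
    for r in range(runs):
        t, nfail = 0.0, 0
        while True:
            t += rng.exponential(mtbf)      # next failure
            if t > mission_hours:
                break
            nfail += 1
            t += repair_delay               # unit down until resupply/repair
        counts[r] = nfail
    return counts

c = simulate_failures()
print("mean failures over mission:", c.mean())
print("95th percentile demand    :", np.percentile(c, 95))
```

The distribution of per-run counts, rather than only its mean, is what distinguishes this kind of time-dependent demand estimate from a steady-state average.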
6 CFR 29.4 - Protected Critical Infrastructure Information Program administration.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Protected Critical Infrastructure Information Management System (PCIIMS). The PCII Program Manager shall... be known as the “Protected Critical Infrastructure Information Management System” (PCIIMS), to record... 6 Domestic Security 1 2014-01-01 2014-01-01 false Protected Critical Infrastructure Information...
6 CFR 29.4 - Protected Critical Infrastructure Information Program administration.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Protected Critical Infrastructure Information Management System (PCIIMS). The PCII Program Manager shall... be known as the “Protected Critical Infrastructure Information Management System” (PCIIMS), to record... 6 Domestic Security 1 2011-01-01 2011-01-01 false Protected Critical Infrastructure Information...
6 CFR 29.4 - Protected Critical Infrastructure Information Program administration.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Protected Critical Infrastructure Information Management System (PCIIMS). The PCII Program Manager shall... be known as the “Protected Critical Infrastructure Information Management System” (PCIIMS), to record... 6 Domestic Security 1 2010-01-01 2010-01-01 false Protected Critical Infrastructure Information...
6 CFR 29.4 - Protected Critical Infrastructure Information Program administration.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Protected Critical Infrastructure Information Management System (PCIIMS). The PCII Program Manager shall... be known as the “Protected Critical Infrastructure Information Management System” (PCIIMS), to record... 6 Domestic Security 1 2012-01-01 2012-01-01 false Protected Critical Infrastructure Information...
6 CFR 29.4 - Protected Critical Infrastructure Information Program administration.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Protected Critical Infrastructure Information Management System (PCIIMS). The PCII Program Manager shall... be known as the “Protected Critical Infrastructure Information Management System” (PCIIMS), to record... 6 Domestic Security 1 2013-01-01 2013-01-01 false Protected Critical Infrastructure Information...
Computerized fracture critical and specialized bridge inspection program with NDE applications
NASA Astrophysics Data System (ADS)
Fish, Philip E.
1998-03-01
The Wisconsin Department of Transportation implemented a Fracture Critical & Specialized Inspection Program in 1987. The program has a strong emphasis on nondestructive testing (NDT). It is also completely computerized, using laptop computers to gather field data, digital cameras for pictures, and testing equipment with download features. Final inspection reports with detailed information can be delivered within days of the inspection. The program requires an experienced inspection team and qualified personnel: individuals performing testing must be certified ASNT (American Society for Nondestructive Testing) Level III and must be Certified Weld Inspectors (American Welding Society). Several critical steps have been developed to assure that each inspection identifies all possible deficiencies on a fracture critical or unique bridge. They include: review of all existing plans and maintenance history; identification of fracture critical members; identification of critical connection details, welds, and fatigue-prone details; development of a visual and NDE inspection plan; field inspection procedures; and a detailed formal report. The program has found several bridges with critical fatigue conditions, which have resulted in replacement or major rehabilitation. In addition, remote monitoring systems have been installed on structures with serious cracking to monitor for changing conditions.
Spreading dynamics of forget-remember mechanism
NASA Astrophysics Data System (ADS)
Deng, Shengfeng; Li, Wei
2017-04-01
We study extensively the forget-remember mechanism (FRM) for message spreading, originally introduced in Eur. Phys. J. B 62, 247 (2008), 10.1140/epjb/e2008-00139-4. The freedom to specify the forget-remember functions governing the FRM can enrich the spreading dynamics to a very large extent. The master equation is derived for describing the FRM dynamics. By applying mean-field techniques, we show how the steady states can be reached under certain conditions, in good agreement with Monte Carlo simulations. The distributions of forget and remember times can be given explicitly when the forget-remember functions take linear or exponential forms, which might shed some light on understanding the temporal nature of diseases like flu. For the time-dependent FRM there is an epidemic threshold related to the FRM parameters. We have proven that the mean-field critical transmissibility for the SIS model and the critical transmissibility for the SIR model are, respectively, the lower and upper bounds of the critical transmissibility for the FRM model.
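A rough well-mixed simulation of the FRM idea is sketched below, using constant forget and remember probabilities (a special case of the paper's general forget-remember functions); the population size, transmissibility, and rates are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(11)

N, lam = 10_000, 0.3               # population and per-contact transmissibility (toy)
p_forget, p_remember = 0.2, 0.05   # constant forget/remember rates (special
                                   # case of general forget-remember functions)

# states: 0 = ignorant, 1 = spreader, 2 = dormant (knows the message but forgot)
state = np.zeros(N, dtype=int)
state[:10] = 1                     # seed a few initial spreaders

for t in range(500):
    s = np.flatnonzero(state == 1)
    # each spreader contacts one random individual; ignorants become spreaders
    contacts = rng.integers(0, N, len(s))
    hit = contacts[rng.random(len(s)) < lam]
    state[hit[state[hit] == 0]] = 1
    # forget-remember dynamics: spreaders go dormant, dormants may reactivate
    forget = (state == 1) & (rng.random(N) < p_forget)
    remember = (state == 2) & (rng.random(N) < p_remember)
    state[forget], state[remember] = 2, 1

print("final fraction who ever knew the message:", np.mean(state > 0))
```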
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-09-01
Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].
Gualdrini, G; Bedogni, R; Fantuzzi, E; Mariotti, F
2004-01-01
The present paper summarises the activity carried out at the ENEA Radiation Protection Institute to update the methodologies employed for the evaluation of the neutron and photon dose to exposed workers in case of a criticality accident, in the framework of the 'International Intercomparison of Criticality Accident Dosimetry Systems' (Silène reactor, IRSN-CEA-Valduc, June 2002). The evaluation of the neutron spectra and the neutron dosimetric quantities relies on activation detectors and on unfolding algorithms; thermoluminescent detectors are employed for the gamma dose measurement. The work is aimed at accurately characterising the measurement system and, at the same time, testing the algorithms. Useful spectral information was included, based on Monte Carlo simulations, to take into account the potential accident scenarios of practical interest. Throughout this intercomparison exercise, particular attention was devoted to the 'traceability' of all the experimental and computational parameters, aimed at easy treatment by the user.
Phase transition in a spatial Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Szabó, György; Czárán, Tamás
2001-06-01
Spatial evolution is investigated in a simulated system of nine competing and mutating bacterium strains, which mimics the biochemical war among bacteria capable of producing at most two different bacteriocins (toxins). Random sequential dynamics on a square lattice is governed by very symmetrical transition rules for neighborhood invasions of sensitive strains by killers, killers by resistants, and resistants by sensitives. The community of the nine possible toxicity/resistance types undergoes a critical phase transition as the uniform transmutation rate between the types decreases below a critical value P_c, above which all nine types of strains coexist with equal frequencies. Passing the critical mutation rate from above, the system collapses into one of three topologically identical (degenerate) states, each consisting of three strain types. Each of the three possible final states occurs with equal probability, and all three maintain themselves in a self-organizing polydomain structure via cyclic invasions. Our Monte Carlo simulations support that this symmetry-breaking transition belongs to the universality class of the three-state Potts model.
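The invasion-plus-mutation dynamics can be sketched with a reduced three-strain cyclic (rock-paper-scissors-like) stand-in for the nine-type model; the lattice size, mutation rate, and step count are placeholders:

```python
import numpy as np

rng = np.random.default_rng(12)

L, P_mut, steps = 64, 0.001, 500_000
lat = rng.integers(0, 3, (L, L))   # three cyclically invading strains
dirs = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])

def beats(a, b):
    """Cyclic dominance: 0 invades 1, 1 invades 2, 2 invades 0."""
    return (a - b) % 3 == 2

for _ in range(steps):
    x, y = rng.integers(L, size=2)          # random sequential site update
    if rng.random() < P_mut:                # uniform transmutation
        lat[x, y] = rng.integers(3)
        continue
    dx, dy = dirs[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L     # random periodic neighbor
    if beats(lat[x, y], lat[nx, ny]):       # neighborhood invasion
        lat[nx, ny] = lat[x, y]

counts = np.bincount(lat.ravel(), minlength=3)
print("strain frequencies:", counts / counts.sum())
```

Sweeping P_mut in such a model and recording whether all strains persist is the basic numerical experiment behind locating the critical mutation rate.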